LLM Partner Theory
A growing body of theory on treating large language models as partners rather than tools. Each chapter develops one architectural primitive of the overall framework.
- 01. AI as Cognitive Extension
  Why a large language model is better modeled as an extension module to a human mind than as a tool sitting outside it, and what design consequences follow.
- 02. Role Separation Architecture
  Why a single general-purpose assistant is the wrong shape for serious partner systems, and how to partition responsibilities across multiple specialized roles.
- 03. Memory Substrate
  Why a long-running partnership with an LLM needs a persistent store outside the model, and what shape that store should take.
- 04. Service Topology
  Decomposing a self-hosted LLM partner system into independently replaceable services — memory, portal, and workers — without falling into microservice theater.
- 05. Decision Pipeline
  How to structure an algorithmic trading or allocation system so that LLM consultation has somewhere safe to plug in — without letting the model anywhere near execution.