


LLM Partner Theory

A growing body of theoretical work on treating large language models as partners rather than tools. Each chapter develops one architectural primitive of the overall framework.

  1. AI as Cognitive Extension

     Why a large language model is better modeled as an extension module to a human mind than as a tool sitting outside of it, and what design consequences follow.

  2. Role Separation Architecture

     Why a single general-purpose assistant is the wrong shape for serious partner systems, and how to partition responsibilities across multiple specialized roles.

  3. Memory Substrate

     Why a long-running partnership with an LLM needs a persistent store outside the model, and what shape that store should take.

  4. Service Topology

     Decomposing a self-hosted LLM partner system into independently replaceable services — memory, portal, and workers — without falling into microservice theater.

  5. Decision Pipeline

     How to structure an algorithmic trading or allocation system so that LLM consultation has somewhere safe to plug in — without letting the model anywhere near execution.

© 2026 nemuilemon.
