Tags: ai, knowledge-management, topic, wiki
# LLM Wiki
## Summary

The LLM Wiki pattern replaces stateless RAG retrieval with a persistent, compounding knowledge base of interlinked markdown files maintained by an LLM. Knowledge is compiled once at ingest time and kept current, rather than re-derived on every query.
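A minimal sketch of this ingest-time compilation, assuming one markdown file per page under a hypothetical `wiki/` root (the names and file layout below are illustrative, not from any referenced implementation):

```python
from pathlib import Path

WIKI = Path("wiki")  # hypothetical wiki root; one markdown file per page

def ingest(source_text: str, page_name: str) -> None:
    """Compile a source into a persistent page once, at ingest time.

    In the real pattern an LLM merges new facts into the existing page;
    plain appending here only illustrates the persistence.
    """
    WIKI.mkdir(exist_ok=True)
    page = WIKI / f"{page_name}.md"
    existing = page.read_text() if page.exists() else ""
    page.write_text(existing + source_text + "\n")

def query(page_name: str) -> str:
    """Serve the already-compiled page; nothing is re-derived per query."""
    return (WIKI / f"{page_name}.md").read_text()
```

The point of the sketch is the asymmetry: `ingest` does the expensive work once, while `query` is a cheap read of compiled state.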
| Concept | Summary |
|---|---|
| Llm Wiki Pattern | Core pattern: persistent wiki instead of stateless RAG |
| Rag Vs Llm Wiki | Comparison table and tradeoffs between approaches |
| Knowledge Compounding | Each source enriches existing pages, not just adds documents |
| Three Layer Architecture | Raw Sources, Wiki (LLM-maintained), Schema (governance) |
| Ingest Pipeline | 5-step: Resolve → Route → Synthesize → Embed → Update |
| Query Pipeline | RAG over compiled wiki pages instead of raw chunks |
| Lint Operation | Health checks: orphans, broken links, contradictions |
| Schema Pagespec | Page universe definition for routing and governance |
| Query Templates | 6 categories: synthesis, gap-finding, debate, output, health, personal |
| Claude Code Hooks | Session lifecycle hooks for capturing conversation knowledge |
| Hot Cache | ~500-char cache of most recent context for quick access |
| Compiler Analogy | Knowledge processing maps to software compilation |
| Index And Log | Content catalog + chronological record for navigation |
## Key Entities

| Entity | Role |
|---|---|
| Andrej Karpathy | Pattern originator; ~100 articles / ~500k words in his wiki |
| Plaban Nayak | Python implementation author; 24 query templates |
| Teachers Tech | Beginner setup guide with Japan trip demo |
| Cole Medin | Internal-data adaptation for Claude Code memory |
| Nate Herk | Real wiki demos; reported 95% token reduction |
## Key Sources
## What's Missing

- Multi-user/team wiki patterns (beyond personal use)
- Automated search infrastructure beyond index.md (for >500 pages)
- Embedding-based semantic search integration
- Dataview/Obsidian integration patterns