Knowledge Compounding

Summary

The principle that knowledge in an LLM Wiki accumulates and deepens over time, with each new source enriching existing pages rather than sitting alongside them as an isolated document.

Core Principle

The compounding principle is the central value proposition of the LLM Wiki pattern: every ingest makes the entire wiki richer, not just adds one more document.

When a new source is added:

  1. It doesn't just create new pages; it enriches pages already there
  2. Existing pages gain new information, new cross-references, new nuance
  3. Contradictions with prior knowledge are flagged
  4. Answers to future queries become deeper because the underlying pages are richer
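The steps above can be sketched in code. This is a toy model, not the actual LLM Wiki implementation: the page store, the `facts`/`links` fields, and the string-negation contradiction check are all illustrative assumptions standing in for LLM-driven review.

```python
def ingest(pages: dict[str, dict], source: dict) -> list[str]:
    """Merge facts from a new source into existing pages.

    pages  -- hypothetical store: page title -> {"facts": set, "links": set}
    source -- hypothetical shape: {"title": str,
                                   "facts": {page_title: set_of_facts}}
    Returns flagged contradictions (here, a fact whose literal negation
    is already recorded -- a toy stand-in for LLM contradiction review).
    """
    flags = []
    for title, new_facts in source["facts"].items():
        # Enrich an existing page if present, otherwise create one (step 1)
        page = pages.setdefault(title, {"facts": set(), "links": set()})
        for fact in new_facts:
            negation = f"NOT {fact}"
            if negation in page["facts"]:  # flag contradictions (step 3)
                flags.append(f"{title}: '{fact}' contradicts '{negation}'")
            page["facts"].add(fact)  # gain new information (step 2)
        # Other pages touched by the same source become cross-references
        page["links"].update(t for t in source["facts"] if t != title)
    return flags
```

Note that the same call both creates the new page and enriches the existing one, which is the whole point: no source lands as an isolated document.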

The Compounding Loop

The --save mechanism on queries completes the compounding loop: when you ask a question and the answer represents new, valuable knowledge, it can be filed back as a new wiki page. Future sessions benefit immediately. The wiki knows the answer too.
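A minimal sketch of what the `--save` step might do, assuming the wiki is a directory of Markdown pages. The `save_answer` function, the slug scheme, and the page layout are illustrative assumptions, not the actual tool's API.

```python
from pathlib import Path


def save_answer(wiki_dir: Path, title: str, answer: str) -> Path:
    """File a query answer back into the wiki as a new page.

    Hypothetical sketch: slugifies the title and writes a Markdown
    page, so the next session can read the answer like any other page.
    """
    slug = title.lower().replace(" ", "-")
    page = wiki_dir / f"{slug}.md"
    page.write_text(f"# {title}\n\n{answer}\n", encoding="utf-8")
    return page
```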

Scale Effects

  • At 10 pages: Answers basic questions
  • At 50 pages: Starts synthesizing across ideas you never explicitly connected
  • At 100+ pages: Can answer questions where the answer doesn't exist in any single source — the answer lives in the relationships between pages

Cross-referential density grows as the wiki matures. A page on "attention mechanism" that had two outgoing links might now have five. The wiki exhibits knowledge graph behavior in plain text — no graph database required.
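Cross-referential density can be measured directly from the plain text. The sketch below assumes `[[target]]` / `[[target|label]]` wiki-link syntax, which is an assumption; the actual link format may differ.

```python
import re

# Matches [[target]] and [[target|display label]], capturing the target
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")


def outgoing_links(page_text: str) -> set[str]:
    """Return the set of link targets a page points at."""
    return set(WIKI_LINK.findall(page_text))
```

Running this over every page yields an adjacency list, i.e. the knowledge graph, recovered from plain text files with no graph database involved.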

Analogy: Compound Interest

Like financial compound interest, the returns are small at first and accelerate over time. The first few sources feel like you're just organizing notes. After dozens of sources, the wiki starts producing insights you never explicitly put in, because the LLM can see patterns across the entire compiled knowledge base.
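The analogy can be made concrete with the standard compound-interest formula, value = principal × (1 + rate)^periods:

```python
def compound(principal: float, rate: float, periods: int) -> float:
    """Value after compounding: principal * (1 + rate) ** periods."""
    return principal * (1 + rate) ** periods
```

One period of growth barely moves the total, while the same rate sustained over many periods multiplies it several times over, which mirrors the slow-then-accelerating payoff of early versus mature wikis.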

Contrast with RAG

In RAG systems, knowledge never compounds: every query starts from zero against the same static chunks. The system is perpetually amnesiac, capable but unable to grow. The same 50 papers chunked in RAG produce the same quality answers on day one and day one thousand. The same 50 papers ingested into an LLM Wiki produce progressively deeper, more cross-referenced answers as the wiki matures.

See Also