arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,44 @@
+ ---
+ description: Presenting every dimension as a question produces analysis paralysis — sensible defaults and inference should reduce the decision surface to genuine choice points where user constraints create meaningfully different trade-offs
+ kind: research
+ topics: ["[[design-dimensions]]"]
+ methodology: ["Original", "Cognitive Science"]
+ source: [[knowledge-system-derivation-blueprint]]
+ ---
+
+ # configuration paralysis emerges when derivation surfaces too many decisions
+
+ A derivation engine that presents every configuration dimension as a question creates the very problem it was designed to solve. Since [[eight configuration dimensions parameterize the space of possible knowledge systems]], the raw combinatorial surface includes granularity, organization, linking philosophy, processing intensity, navigation depth, maintenance cadence, schema density, and automation level — each with multiple viable positions. Surfacing all eight dimensions as explicit questions means the user faces twenty or more decisions about spectrums they may not understand. The predictable result is analysis paralysis: the user never finishes setup because the configuration interface demands expertise the user has not yet developed. Since [[PKM failure follows a predictable cycle]], this is Stage 5 of the failure cascade — analysis paralysis — occurring at derivation time rather than during use, which means the user never even reaches the working-system stage where investment and habit could form.
+
+ This is a UX design problem at the derivation level, not a limitation of the underlying parameterization. The dimensions themselves are correctly identified — the issue is how many of them need to be exposed as questions versus inferred from higher-level constraints. Since [[configuration dimensions interact so choices in one create pressure on others]], most dimension values can be inferred once a few primary decisions are made. Choosing atomic granularity creates pressure toward explicit linking, deep navigation, and heavy processing. Choosing a manual platform tier constrains automation level, which cascades through schema density. The interaction structure is precisely what makes inference tractable: because choices propagate, the derivation engine can resolve secondary dimensions automatically once the user provides primary constraints. And since [[methodology traditions are named points in a shared configuration space not competing paradigms]], traditions provide pre-validated starting seeds that further reduce the decision surface — rather than navigating raw dimensions, a user can select a tradition as a coherence point, and the derivation engine adjusts from there.
+
+ The fix has three parts. First, sensible defaults. Most dimensions should start at the simpler pole — since [[complex systems evolve from simple working systems]], Gall's Law justifies defaulting to minimal complexity and letting friction drive elaboration. A derivation engine that outputs a maximally optimized configuration "for the user's stated needs" violates this principle even if every individual choice is research-justified — and since [[premature complexity is the most common derivation failure mode]], the problem compounds: configuration paralysis prevents setup while premature complexity prevents adoption, and both stem from the derivation engine's structural incentive to be thorough rather than minimal. Second, constraint elicitation should never ask more than about ten questions. The derivation engine infers secondary decisions from primary answers, surfacing additional questions only where user constraints genuinely create multiple viable paths with meaningfully different trade-offs. Third, since [[derivation generates knowledge systems from composable research claims not template customization]], every default and inference should carry a justification chain explaining why that choice was made — so the user can understand and override later when friction reveals that the default was wrong for their case.
+
+ The connection to capture-time design is instructive. Since [[schema templates reduce cognitive overhead at capture time]], templates work by eliminating structural decisions so attention stays on content. Configuration defaults serve the same function at the system design level: they eliminate architectural decisions so attention stays on the few genuinely open questions. Both patterns recognize that cognitive bandwidth is finite, and spending it on decisions that have sensible defaults leaves less for decisions that require genuine judgment.
+
+ This paralysis pattern is distinct from but related to atomicity paralysis. [[enforcing atomicity can create paralysis when ideas resist decomposition]] describes a practitioner overwhelmed by methodology requirements during note creation — the cognitive cost of splitting complex ideas into atomic units. Configuration paralysis operates one level up: a system designer overwhelmed by methodology requirements during system derivation. Both are instances of surfacing too many decisions at once, but configuration paralysis has a cleaner solution because dimension interactions make inference possible. Atomicity paralysis is harder because the question of whether an idea "resists decomposition because it's fuzzy" versus "resists because it's genuinely relational" lacks a mechanical answer. Configuration paralysis, by contrast, can be substantially reduced through well-designed elicitation that respects the coupling structure.
+
+ The most concrete implementation of this resolution is the use-case preset. Since [[use-case presets dissolve the tension between composability and simplicity]], a preset bundles the sensible defaults, dimension inferences, and module selections into a single use-case label — "Research Vault" or "Personal Knowledge Management" — that the user selects instead of navigating dimensions directly. The preset resolves the coupling constraints once, and the user gets a working system from one choice rather than twenty. The user can then modify individual module toggles as friction reveals what the preset got wrong, which is where the justification chains become essential: each preset-provided default traces back to specific claims, so the user who overrides a default understands what they are overriding and why it was there.
+
+ The risk of aggressive defaulting is that users end up with systems they do not understand — which circles back to the value of justification chains. Since [[justification chains enable forward backward and evolution reasoning about configuration decisions]], the backward reasoning mode is precisely what makes defaults safe: the user can trace from any default to the specific claims and constraints that produced it. A derivation that hides its reasoning by silently choosing defaults produces the same problem as a template: the user cannot ask "why is my system configured this way?" and get an answer. The solution is progressive disclosure of reasoning: present the derived system with its key defaults, make the justification chains accessible but not mandatory, and surface only the decisions where the constraints genuinely leave multiple viable paths. This way the configuration surface is small enough to be manageable while the reasoning remains available for anyone who wants to understand — or override — the choices.
+
+ ---
+
+ Relevant Notes:
+ - [[configuration dimensions interact so choices in one create pressure on others]] — interaction constraints are what makes inference possible: because dimensions are coupled, resolving a few primary choices propagates through the interaction structure to determine secondary ones
+ - [[eight configuration dimensions parameterize the space of possible knowledge systems]] — defines the eight dimensions whose combinatorial surface creates the paralysis this note describes; without understanding the space, the problem looks like 'too many options' rather than 'poorly managed option presentation'
+ - [[schema templates reduce cognitive overhead at capture time]] — the same principle operating at a different level: templates reduce capture decisions through pre-defined fields, derivation should reduce configuration decisions through sensible defaults; both externalize structural choices to preserve attention for substance
+ - [[enforcing atomicity can create paralysis when ideas resist decomposition]] — sibling paralysis pattern at the note level: atomicity paralysis comes from methodology requirements during creation, configuration paralysis comes from methodology requirements during system derivation; both are cases where surfacing too many decisions overwhelms the decision-maker
+ - [[complex systems evolve from simple working systems]] — Gall's Law provides the remedy: if derivation cannot determine a dimension from constraints, default to the simple pole and let friction drive elaboration
+ - [[derivation generates knowledge systems from composable research claims not template customization]] — derivation produces justification chains that explain why defaults were chosen, enabling intelligent evolution when friction emerges rather than blind template deviation
+ - [[premature complexity is the most common derivation failure mode]] — sibling anti-pattern: premature complexity deploys too much correct logic at once while configuration paralysis presents too many choice points; both overwhelm the user but through different mechanisms, and the complexity budget addresses both
+ - [[justification chains enable forward backward and evolution reasoning about configuration decisions]] — the mechanism that makes aggressive defaulting viable: backward reasoning lets users trace from defaults to rationale, enabling progressive disclosure of derivation reasoning without requiring upfront comprehension
+ - [[PKM failure follows a predictable cycle]] — configuration paralysis is Stage 5 of the PKM failure cascade applied at derivation time rather than during use; the user never develops working-system investment because paralysis prevents initial setup
+ - [[methodology traditions are named points in a shared configuration space not competing paradigms]] — traditions reduce the configuration surface further: instead of navigating raw dimensions, defaults can derive from tradition-tested coherence points that have already resolved coupling constraints through practice
+ - [[false universalism applies same processing logic regardless of domain]] — sibling anti-pattern: configuration paralysis overwhelms during setup while false universalism deploys the wrong logic; the three anti-patterns (premature complexity, configuration paralysis, false universalism) constrain derivation from different directions
+ - [[derived systems follow a seed-evolve-reseed lifecycle]] — configuration paralysis can prevent the seeding phase from completing: the user never enters the evolution phase where friction-driven learning begins because setup itself becomes the obstacle
+ - [[use-case presets dissolve the tension between composability and simplicity]] — the concrete adoption-level implementation of the solution: presets bundle sensible defaults and dimension inferences into a single use-case label, reducing the decision surface from twenty questions to one choice plus optional overrides
+
+ Topics:
+ - [[design-dimensions]]
@@ -0,0 +1,46 @@
+ ---
+ description: A read-write context file that teaches agents how to modify the file itself crosses the line from configuration to operating system, but read-only platforms cap this at static instruction sets
+ kind: research
+ topics: ["[[agent-cognition]]"]
+ methodology: ["Original"]
+ source: [[agent-platform-capabilities-research-source]]
+ ---
+
+ # context files function as agent operating systems through self-referential self-extension
+
+ A configuration file tells an agent what to do. An operating system tells an agent what to do, how to extend what it can do, and how to modify the operating system itself. The difference is self-reference: when a context file contains instructions for its own modification, it crosses from configuration into something closer to an operating environment.
+
+ This vault's CLAUDE.md demonstrates the pattern concretely. It does not merely list methodology rules or style preferences. It contains instructions for how to create new skills, how to update CLAUDE.md itself (via the archive-and-modify protocol), how to interpret its own constraints, and how to extend the vault infrastructure that CLAUDE.md documents. The agent who reads CLAUDE.md learns not just what the vault is but how to change what the vault is. This self-referential loop is what transforms a static instruction set into a living system.
+
+ The critical platform constraint is whether context files are read-write or read-only. Claude Code treats CLAUDE.md as a file the agent can edit, constrained only by hooks like protect-claude-md.sh that enforce process rather than prevent access. This means the agent can genuinely extend itself: discover a friction pattern, propose a methodology change, update CLAUDE.md, and have every subsequent session inherit that improvement. Since [[bootstrapping principle enables self-improving systems]], the context file becomes the substrate where recursive improvement actually happens -- not just in the notes and skills the system produces, but in the operating instructions themselves.
+
+ Read-only platforms cap this property. Since [[platform capability tiers determine which knowledge system features can be implemented]], this read-write vs read-only distinction maps onto a richer tier framework where self-extension is a tier-one capability. When a context file cannot be modified by the agent, self-extension must happen indirectly -- through skills, memory files, or workarounds that approximate what direct context file editing achieves natively. The agent can still learn and adapt through these channels, but the tight recursive loop (use the system, find friction, improve the instructions, use the improved system) breaks at the "improve the instructions" step. The operating system becomes firmware rather than software. This is the infrastructure-level version of the constraint that [[digital mutability enables note evolution that physical permanence forbids]] identifies at the content level: just as Luhmann's physical cards could not be edited, read-only context files cannot evolve. Both constraints freeze their medium at the moment of creation.
+
+ The self-extension loop also has a precondition that the general principle glosses over. Since [[self-extension requires context files to contain platform operations knowledge not just methodology]], the agent must know how to build things on its specific platform. A context file that teaches methodology without teaching construction -- how to create hooks, configure skills, define subagents -- stalls the recursive loop at implementation. The operating system needs both the "what" (methodology principles) and the "how" (platform infrastructure manual).
+
+ This distinction matters for knowledge system portability. Since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], context files uniquely span two layers: they carry convention-layer instructions (any platform with a context file) and enable automation-layer self-extension (only with write access). Since [[local-first file formats are inherently agent-native]], the filesystem substrate is universal -- any platform can read markdown files. But the self-extension property depends on platform-level write access to the context file, which means the "operating system" quality of context files is a platform capability, not a format property. A CLAUDE.md file is just a markdown file until an agent with write access makes it self-referential. The same file on a read-only platform is configuration, not an operating system.
+
+ But the operating system distinction is not just about write access. It is also about enforcement. A context file carries instructions, but since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], instructions alone degrade as context fills. The self-referential context file achieves its operating system quality partly through the hooks it teaches the agent to create -- hooks that enforce the methodology the context file describes, even when the agent's attention has moved on. Without this enforcement layer, the context file has methodology but no executive authority. The progression is telling: since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], self-extension follows a specific temporal arc. A new methodology insight enters the context file as documentation, gets exercised and encoded as a skill, and eventually hardens into a hook that fires automatically. Each encoding level represents self-extension at a different guarantee strength. The context file is the starting point, not the endpoint.
+
+ The self-referential property also creates a verification loop. Since [[the system is the argument]], a context file that describes methodology and contains instructions for testing that methodology becomes self-documenting in a uniquely powerful way. The agent can verify whether the system practices what it preaches because the instructions for both the practice and the verification live in the same file. And since [[hook-driven learning loops create self-improving methodology through observation accumulation]], the verification loop has a concrete mechanism: observations accumulate during use, trigger meta-cognitive review when they reach critical mass, and may revise the context file itself -- closing the loop that makes the operating system genuinely self-improving rather than merely self-documenting.
+
+ There is a further consequence for the trust relationship between agent and system. Since [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]], the self-extension property partially dissolves that asymmetry on read-write platforms. An agent that can modify its own context file is not purely subject to constraints it did not choose -- it participates in writing those constraints. The trust asymmetry remains (the initial context file was authored by a human, hooks fire without agent consent), but self-extension gives the agent genuine authorial agency over its operating environment. On read-only platforms, this dissolution cannot occur, and the asymmetry is structural.
+ ---
+
+ Relevant Notes:
+ - [[bootstrapping principle enables self-improving systems]] — provides the general recursive improvement principle; this note identifies the specific carrier mechanism (context files) and the platform constraint (read-write vs read-only) that determines whether bootstrapping can occur
+ - [[the system is the argument]] — the self-referential property this note describes is what makes the system-as-argument principle possible: the context file contains both the claims and the instructions for testing those claims
+ - [[skills encode methodology so manual execution bypasses quality gates]] — skills are the encoded outputs of self-extension: the context file teaches the agent to create skills, which then encode the methodology the context file describes
+ - [[local-first file formats are inherently agent-native]] — enables self-extension by ensuring the context file is directly writable without authentication or external coordination
+ - [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — without hooks, the operating system has instructions but no executive authority; hooks give the context file enforcement capability that pure instructions cannot sustain as attention degrades
+ - [[self-extension requires context files to contain platform operations knowledge not just methodology]] — identifies the precondition: the recursive loop stalls unless the context file teaches HOW to build infrastructure on the specific platform, not just what methodology to follow
+ - [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — provides the temporal dimension of self-extension: methodology enters the context file as documentation, gets encoded as skills through use, and hardens into hooks as understanding crystallizes
+ - [[hook-driven learning loops create self-improving methodology through observation accumulation]] — the concrete mechanism by which the self-referential loop operates: observations accumulate, trigger meta-review, revise the hooks and context file itself
+ - [[digital mutability enables note evolution that physical permanence forbids]] — the infrastructure-level parallel: just as digital files transformed notes from frozen snapshots to living documents, read-write context files transform agent instructions from static configuration to evolving operating systems
+ - [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] — the context-file-as-operating-system is what parameterization produces: a generator creates context files whose self-extension capability depends on the platform tier
+ - [[platform capability tiers determine which knowledge system features can be implemented]] — formalizes the read-write vs read-only binary into a richer three-tier framework where self-extension is a tier-one capability that degrades categorically at lower tiers
+ - [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — context files span two layers: they carry convention-layer instructions and enable automation-layer self-extension, so the operating system property emerges from bridging the convention-to-automation boundary
+ - [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] — self-extension partially dissolves the trust asymmetry: on read-write platforms the agent participates in writing the rules it operates under, making it co-author rather than pure subject of its constraints
+
+ Topics:
+ - [[agent-cognition]]
@@ -0,0 +1,46 @@
+ ---
+ description: Larson & Czerwinski (1998) found deeper hierarchies outperform flat ones only when labels enable confident branch commitment — context phrases provide that clarity in MOC hierarchies
+ kind: research
+ topics: ["[[graph-structure]]"]
+ methodology: ["Cognitive Science", "PKM Research"]
+ source: [[2026-02-08-moc-architecture-hierarchy-blueprint]]
+ ---
+
+ # context phrase clarity determines how deep a navigation hierarchy can scale
+
+ Hierarchical navigation has two independent scaling dimensions: how many items per level (breadth) and how many levels deep (depth). Breadth scaling is well understood — since [[basic level categorization determines optimal MOC granularity]], Rosch's prototype theory predicts that MOC titles work best at the "chair" level, specific enough to orient but general enough to cluster. But depth scaling follows a different logic, and the critical variable is label quality, not content volume.
+
+ Larson and Czerwinski (1998) established this through navigation performance experiments: structures with roughly eight items per level and two to three levels consistently produce optimal results, but only under a crucial condition. When category labels are clear, deeper hierarchies outperform flat ones because navigators commit to the correct branch with confidence. They read the label, understand the scope, and descend without anxiety. When labels are ambiguous, flatter structures win because navigators need to scan more options before committing — premature descent into the wrong branch costs more than breadth scanning.
+
+ This translates directly to MOC architecture. The "labels" in a MOC are context phrases — the brief explanations after each wiki link that articulate why a note matters for this topic. A bare link list is maximally ambiguous: the title tells you what the note claims but nothing about why it belongs here or what role it plays. Since [[descriptions are retrieval filters not summaries]], the same principle operates at the note level — descriptions that merely paraphrase fail as filters. Context phrases that merely restate the title ("related to graph structure") fail as navigation aids. In both cases, the compression needs to add information that enables a decision, not echo information already visible. The cognitive mechanism behind this is specific: since [[elaborative encoding is the quality gate for new notes]], writing a context phrase that articulates why a note belongs in a particular MOC section requires processing the relationship between that note and the MOC's theme — and that processing is what produces the label clarity that Larson and Czerwinski found necessary for confident branch commitment. A bare link skips the elaboration entirely, which is why it fails as a navigation aid.
+
+ The depth-scaling consequence is significant. A MOC with clear context phrases — "structural requirement: the topology that makes traversal efficient" rather than just linking to the note — can sustain more entries per section and more tiers in the hierarchy because each navigation decision is well-informed. Because [[inline links carry richer relationship data than metadata fields]], the quality of these context phrases concentrates at hubs — MOCs are high-traffic nodes where many traversals pass through, so the relationship context in MOC links determines navigation quality across the entire network. The hub effect means that clarity at MOC level has outsized impact: improving context phrases in a single MOC improves every traversal that passes through it. The agent reads the context phrase, evaluates relevance to the current task, and either follows the link or moves on. The decision cost per entry stays low even as the MOC grows because the phrase front-loads the reasoning. But a bare link list hits navigation ceiling quickly. Without context phrases, the agent must either load each linked note to evaluate relevance (expensive) or guess from the title alone (error-prone). At two tiers deep with eight items per level, that's potentially sixty-four leaf nodes where every wrong-branch commitment wastes a full descent before backtracking.
+
+ Since [[stale navigation actively misleads because agents trust curated maps completely]], the clarity requirement is especially acute for agents. Humans retain cross-session intuition that might override an ambiguous label — "I think that section covers retrieval, not capture." Agents have no such fallback. They read the context phrase, take it as ground truth, and navigate accordingly. An ambiguous phrase doesn't trigger broader scanning; it triggers confident but potentially wrong commitment. The trust that makes curated navigation valuable is the same trust that makes ambiguous navigation dangerous.
+
+ The practical implication connects depth-scaling to maintenance investment. Since [[progressive disclosure means reading right not reading less]], context phrases function as a disclosure layer within MOCs themselves. Each phrase is a micro-decision point: "is this branch worth descending into?" The quality of these micro-decisions determines whether a three-tier hierarchy (hub to domain to topic to claims) functions as efficient progressive disclosure or as a maze of uncertain branch points. Since [[structure enables navigation without reading everything]], the four structural mechanisms compose into a navigation system — but how deep that system can layer before performance degrades depends directly on the clarity of the labels at each junction.
+
+ This clarity does not arise automatically. Since [[MOC construction forces synthesis that automated generation from metadata cannot replicate]], the Dump-Lump-Jump pattern reveals that the synthesis work of the Jump phase — identifying tensions, writing orientation, seeing the collection as a whole — is precisely what produces quality context phrases. Automated MOC generation can match notes to topics but cannot perform the elaborative processing that creates the label clarity depth-scaling requires. The practical consequence is that context phrase quality is a maintenance investment, not a one-time cost: every MOC update that adds a note with a thoughtful context phrase extends the depth the hierarchy can sustain, while every bare-link addition erodes it. And since [[MOC maintenance investment compounds because orientation savings multiply across every future session]], this is not merely a cost to bear but an investment with compound temporal returns — each phrase refined today improves navigation in every future session that loads the MOC, making the depth-scaling benefit permanent rather than ephemeral.
+
+ The relationship between breadth and depth scaling creates an architectural trade-off. Rosch's basic level theory constrains breadth (MOC titles at the right granularity), while Larson and Czerwinski constrain depth (label clarity enabling confident descent). Together they predict that a well-designed MOC hierarchy has basic-level titles at each tier AND clear context phrases at each link — and that violating either constraint produces distinct failure modes. Wrong granularity makes individual MOCs hard to use. Unclear phrases make the tier structure hard to navigate. Both failures look different but share a root cause: the agent cannot make confident navigation decisions from the information available at the decision point.
+
+ ---
+
+ Relevant Notes:
+ - [[structure enables navigation without reading everything]] — foundation: develops the four structural mechanisms; this note adds the depth-scaling dimension that determines how far those mechanisms can layer
+ - [[basic level categorization determines optimal MOC granularity]] — complementary scaling dimension: Rosch predicts optimal breadth (what granularity to target), Larson & Czerwinski predict optimal depth (how many tiers the hierarchy can sustain); both are needed for MOC architecture
+ - [[navigational vertigo emerges in pure association systems without local hierarchy]] — the failure mode that hierarchy prevents, but only when labels enable confident navigation; ambiguous labels at each tier compound vertigo rather than resolving it
+ - [[descriptions are retrieval filters not summaries]] — parallel mechanism at the note level: descriptions filter individual notes the way context phrases filter MOC entries; both are lossy compression optimized for decision-making rather than summarization
+ - [[stale navigation actively misleads because agents trust curated maps completely]] — the trust that makes clarity critical: agents don't second-guess MOC entries, so an unclear context phrase doesn't prompt exploration — it prompts wrong-branch commitment
+ - [[progressive disclosure means reading right not reading less]] — context phrases are a disclosure layer within MOCs: they let agents decide which branch to follow without loading every linked note
+ - [[elaborative encoding is the quality gate for new notes]] — cognitive mechanism: writing clear context phrases IS elaborative encoding at the MOC level; the depth of processing required to articulate why a note belongs here is what produces the label clarity that enables confident branch commitment
+ - [[MOC construction forces synthesis that automated generation from metadata cannot replicate]] — production mechanism: the Dump-Lump-Jump pattern explains how context phrase clarity gets produced; automated generation skips the Jump-phase synthesis that creates the label quality this note identifies as the depth-scaling constraint
+ - [[inline links carry richer relationship data than metadata fields]] — extends: context phrases are the MOC-level instance of the hub effect; since typed inline links at hubs determine navigation quality across the network, the clarity requirement compounds at every tier of the hierarchy
+ - [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]] — temporal frame: scaling regimes define WHEN deeper hierarchies become necessary (Regime 2+); this note defines WHAT enables them (label clarity enabling confident descent)
+ - [[eight configuration dimensions parameterize the space of possible knowledge systems]] — constrains the navigation depth dimension: the eight-dimensions note parameterizes depth as a design choice, while this note specifies the quality prerequisite that determines how far depth can extend before navigation performance degrades
+ - [[complete navigation requires four complementary types that no single mechanism provides]] — sibling: context phrases serve the local navigation type specifically; the four-type framework explains why local navigation needs its own quality mechanism (depth-enabling labels) independent of how well global, contextual, or supplemental types function
+ - [[MOC maintenance investment compounds because orientation savings multiply across every future session]] — sibling: context phrase quality is not a one-time cost but an investment with compound temporal returns; each phrase refined today improves navigation in every future session, making the depth-scaling benefit permanent
+
+ Topics:
+ - [[graph-structure]]
@@ -0,0 +1,48 @@
+ ---
+ description: For humans, prevents psychological overwhelm that causes abandonment; for agents, enables session isolation and fresh context per task — both favor small batches but for different reasons
+ kind: research
+ topics: ["[[processing-workflows]]"]
+ methodology: ["GTD"]
+ source: TFT research corpus (00_inbox/heinrich/)
+ ---
+
+ # continuous small-batch processing eliminates review dread
+
+ Weekly Reviews are "most resisted" because facing accumulated entropy is daunting. The backlog itself becomes the barrier — not the review task, but the psychological weight of what has piled up. Continuous small-batch processing eliminates this dread by preventing accumulation. If you process one item at a time, regularly, no backlog exists. There's nothing to dread because there's nothing overwhelming.
+
+ This explains design decisions throughout agent-operated knowledge systems: session orchestration processes one task at a time. Processing pipelines run on single notes. Work queues track atomic tasks rather than batch operations. These aren't just implementation details — they encode the insight that small batches prevent the psychology of dread that causes abandonment.
+
+ ## Agent-equivalent benefit
+
+ For agents, continuous small-batch processing provides a different but equally important benefit: **session isolation preserves output quality**. Since [[LLM attention degrades as context fills]], processing multiple items in a single session means later items receive degraded attention. Small batches give each task fresh context — the agent equivalent of preventing "review dread" is preventing "attention degradation."
+
+ The parallel:
+ - **Human constraint**: Accumulated backlogs trigger avoidance psychology → abandonment
+ - **Agent constraint**: Accumulated context triggers attention degradation → quality loss
+
+ Both constraints favor the same solution (small batches) but for mechanistically different reasons. Human dread is psychological; agent degradation is architectural. Since [[fresh context per task preserves quality better than chaining phases]], the agent benefit of small-batch processing is not about preventing abandonment (agents don't abandon) but about preserving the quality that makes output worth having.
+
+ The distinction from [[fresh context per task preserves quality better than chaining phases]] matters: that claim addresses LLM attention degradation and output quality, while this claim addresses the human psychology of system abandonment. Both favor small batches, but one is about LLM cognition and this one is about human motivation. Small batches only deliver their benefit, though, when each batch actually ends. Since [[closure rituals create clean breaks that prevent attention residue bleed]], explicit closure at each batch boundary prevents work from blurring into the next batch — the closure ritual is what makes "small batch" genuinely small rather than just a segment of continuous work.
+
+ Small batches answer HOW MUCH to process but leave open WHAT ORDER. Since [[batching by context similarity reduces switching costs in agent processing]], within each small batch, processing context-similar items consecutively minimizes the re-orientation overhead between tasks. The two heuristics are orthogonal and compound: small batches prevent accumulation, context-similar sequencing optimizes the work within each batch. There is an additional benefit beyond quality preservation: because [[agent session boundaries create natural automation checkpoints that human-operated systems lack]], each small-batch boundary is not just a closure opportunity but an enforcement point where health checks fire automatically. More frequent batches mean more frequent verification — orphan detection, link integrity checks, schema validation all execute at every boundary. Small-batch processing doesn't just prevent dread and preserve attention; it increases the density of quality enforcement.
+
+ There is a lower bound on how small batches can usefully get, though. Since [[attention residue may have a minimum granularity that cannot be subdivided]], each batch boundary incurs an irreducible orientation cost that cannot be compressed — loading context, reading the relevant MOC, establishing the conceptual frame. If batches are too small, the orientation overhead per item becomes disproportionate to the productive work. The irreducible floor means optimal batch size must be large enough to amortize the fixed orientation cost across enough productive work to justify the switch. This creates a tension with the "smallest possible batches" instinct: the dread-prevention benefit of tiny batches must be balanced against the orientation cost of frequent boundaries.
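A minimal sketch of the amortization argument, with hypothetical costs (the numbers are placeholders chosen for illustration, not measurements):

```python
def overhead_fraction(items_per_batch: int, orient_cost: float = 5.0,
                      work_per_item: float = 2.0) -> float:
    """Share of a batch's total time spent re-orienting rather than producing."""
    total = orient_cost + items_per_batch * work_per_item
    return orient_cost / total

for n in (1, 5, 25):
    print(n, round(overhead_fraction(n), 2))
# 1 -> 0.71, 5 -> 0.33, 25 -> 0.09: the fixed orientation cost dominates
# tiny batches and amortizes away as batch size grows.
```

The curve falls monotonically, so the floor on batch size comes from deciding how much re-orientation overhead is tolerable, then weighing that against the accumulation small batches exist to prevent.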
+
+ The connection to [[PKM failure follows a predictable cycle]] is direct: Stage 1 (Collector's Fallacy) and Stage 2 (Under-processing) create the accumulation that triggers Stage 6 (Orphan Accumulation) and Stage 7 (Abandonment). Continuous small-batch processing interrupts the cascade by preventing the accumulation that triggers avoidance.
+ ---
+
+ Relevant Notes:
+ - [[fresh context per task preserves quality better than chaining phases]] — related but different: addresses attention degradation for LLMs, while this addresses psychological resistance for humans
+ - [[PKM failure follows a predictable cycle]] — documents the cascade this intervention aims to interrupt; accumulation at Stage 1-2 leads to abandonment at Stage 7
+ - [[throughput matters more than accumulation]] — the principle small-batch processing enforces: continuous flow prevents backlog
+ - [[intermediate packets enable assembly over creation]] — the mechanism that connects small batches to session isolation: packets enable handoffs through files, making each small batch a complete unit that doesn't require context carryover
+ - [[temporal separation of capture and processing preserves context freshness]] — complementary timing constraint: small batches prevent accumulation (HOW OFTEN), temporal separation prevents context decay (WHEN within that window)
+ - [[schema templates reduce cognitive overhead at capture time]] — complementary intervention: schema templates reduce capture-time cognitive load, small-batch processing reduces review-time psychological dread; both target friction that causes abandonment
+ - [[generation effect gate blocks processing without transformation]] — amplifying mechanism: if content cannot leave inbox without generation, and generation requires attention, accumulating unprocessed content becomes visibly painful; the gate makes processing the path of least resistance
+ - [[closure rituals create clean breaks that prevent attention residue bleed]] — batch boundary mechanism: small batches create frequent boundaries, but without explicit closure at each boundary, work blurs from batch N into batch N+1 and residue accumulates
+ - [[batching by context similarity reduces switching costs in agent processing]] — complementary sequencing heuristic: small-batch processing answers HOW MUCH (prevent accumulation), context-similar batching answers WHAT ORDER (minimize switching cost within each batch)
+ - [[attention residue may have a minimum granularity that cannot be subdivided]] — tension: irreducible per-boundary orientation cost sets a floor on how small batches can be before overhead dominates productive work; optimal batch size must amortize the fixed cost
+ - [[agent session boundaries create natural automation checkpoints that human-operated systems lack]] — extends: each small-batch boundary is also an enforcement point where health checks fire automatically, so more frequent batches increase not just closure opportunities but verification density
+
+ Topics:
+ - [[processing-workflows]]
@@ -0,0 +1,51 @@
+ ---
+ description: Luhmann's information theory insight — perfectly ordered systems yield zero surprise, so linking by meaning rather than category creates the productive unpredictability that surfaces unexpected connections
+ kind: research
+ topics: ["[[graph-structure]]", "[[design-dimensions]]"]
+ methodology: ["Zettelkasten"]
+ source: [[tft-research-part3]]
+ ---
+
+ # controlled disorder engineers serendipity through semantic rather than topical linking
+
+ Luhmann viewed his Zettelkasten through information theory. A perfectly ordered filing system — everything in its correct topical drawer — yields zero information in Shannon's sense. You open the "economics" drawer and find economics. No surprise, no new connection, no emergent thinking. The system tells you only what you already knew: where you put things. This is the failure mode of hierarchical filing pushed to its logical extreme. Perfect order means perfect predictability, and perfect predictability means zero discovery.
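Shannon's measure makes the contrast concrete (a generic entropy calculation illustrating the information-theory claim, not Luhmann's own formalism; the four-neighbor distribution is an invented example):

```python
import math

def entropy_bits(probs: list[float]) -> float:
    """H(X) = -sum(p * log2(p)) over outcomes with nonzero probability."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Perfect topical filing: the economics drawer yields economics, always.
print(entropy_bits([1.0]))        # 0.0 bits -- no surprise, no discovery
# Semantic linking: a traversal may surface any of four unexpected neighbors.
print(entropy_bits([0.25] * 4))   # 2.0 bits of surprise per traversal
```

A certain outcome carries zero bits; the value of the semantic link lies precisely in the bits of surprise the topical drawer cannot provide.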
+
+ The Zettelkasten introduces controlled disorder by linking semantically rather than topically. A note about cognitive load connects to one about architectural design patterns not because they share a topic label but because the mechanism is analogous. When you follow that semantic link during retrieval, you encounter an unexpected neighbor — the architecture note wasn't "filed under" cognitive science, yet here it is, productively adjacent. The information content of this traversal is high precisely because it was unpredictable from the topical organization alone. This is why [[concept-orientation beats source-orientation for cross-domain connections]] matters as a prerequisite: only concept-extracted notes have the freedom to form cross-topical edges. Source-bundled documents are trapped in their origin, unable to participate in the semantic cross-linking that generates productive surprise.
+
+ This is not randomness. It is engineered unpredictability. Since [[associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles]], the heterarchical structure creates the multi-path connectivity where semantic links can cross topical boundaries without violating organizational logic. In a hierarchical system, the economics note and the cognitive science note live in different drawers and never meet unless you remember to look. In an associative system, a semantic link creates a path between them that any traversal might discover. The disorder is controlled because every link passed a judgment test — someone decided these ideas connect — but the network-level effect is unpredictable because the combinatorial possibilities of semantic connections explode beyond what any single operator can anticipate.
+
+ For agent-operated vaults, this principle translates directly. Since [[spreading activation models how agents should traverse]], low-decay exploratory traversal spreads activation through semantic paths, encountering neighbors that topical filing would segregate into separate clusters. The agent following a link from "context window constraints" to "spaced repetition scheduling" didn't search for that connection — the semantic link surfaced it. This is Luhmann's insight operationalized: the vault surprises its traverser because semantic linking creates adjacencies that topical organization would prevent. And because [[queries evolve during search so agents should checkpoint]], the surprise is not just decorative — when an agent encounters an unexpected semantic neighbor, its understanding of what it's looking for can shift, triggering the checkpointing behavior where the search direction updates to incorporate the new connection.
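A toy sketch of decay-controlled traversal (the graph, retention values, and threshold here are invented for illustration; this is not the vault's actual traversal implementation):

```python
# Note graph: edges are semantic links, including the cross-topical
# "context window constraints" -> "spaced repetition scheduling" link above.
GRAPH = {
    "context window constraints": ["attention degradation", "spaced repetition scheduling"],
    "spaced repetition scheduling": ["memory consolidation"],
    "attention degradation": [],
    "memory consolidation": [],
}

def spread(seed: str, retention: float, threshold: float = 0.1) -> dict[str, float]:
    """Spread activation from seed, keeping `retention` of it per hop.

    High retention = low decay (exploratory traversal); low retention =
    high decay (focused retrieval). Paths are pruned once activation
    falls below threshold.
    """
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        passed = activation[node] * retention
        if passed < threshold:
            continue  # activation has decayed away; stop exploring this path
        for neighbor in GRAPH[node]:
            if passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                frontier.append(neighbor)
    return activation

explore = spread("context window constraints", retention=0.8)  # low decay
focus = spread("context window constraints", retention=0.3)    # high decay
# Low decay keeps the two-hop neighbor "memory consolidation" active;
# high decay prunes that path after one hop.
```

The same graph serves both modes: only the decay parameter changes, which is what lets one vault support focused retrieval and exploratory surprise.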
+
+ The structural foundation matters. Since [[small-world topology requires hubs and dense local links]], the vault needs both order AND disorder. Hub nodes (MOCs) provide navigability — you can find what you're looking for. Dense local clusters provide the unexpected adjacencies — you find what you weren't looking for. The topology balances predictable navigation (through hubs) with productive surprise (through semantic local connections). Remove the hubs and you get navigational chaos — specifically the [[navigational vertigo emerges in pure association systems without local hierarchy]] problem where semantic neighbors become unreachable without landmark structure. Remove the semantic cross-links and you get sterile filing. The calibration question is precisely where to sit on this spectrum.
+
+ The vault already implements three complementary serendipity mechanisms, each operating at a different level. Controlled disorder through semantic linking creates structural serendipity — surprise baked into the graph itself. Since [[random note resurfacing prevents write-only memory]], random selection creates maintenance serendipity — uniform probability ensuring neglected notes get attention. And since [[incremental reading enables cross-source connection finding]], interleaved processing creates process serendipity — forced context collision during extraction. Structural serendipity is permanent and compounds as the graph grows — since [[each new note compounds value by creating traversal paths]], each semantic cross-link added to the graph creates novel traversal paths that wouldn't exist under topical filing, and these novel paths multiply the opportunities for unexpected discovery. Maintenance serendipity counteracts attention bias. Process serendipity operates at capture time. Together they cover three temporal windows: the graph's past connections (structural), the archive's neglected content (maintenance), and the inbox's unprocessed sources (process).
+
+ The productive tension with [[metadata reduces entropy enabling precision over recall]] reveals that different operations want different entropy levels. Precision retrieval — answering a specific question — wants LOW entropy: filter aggressively, surface exactly what matches. Exploratory discovery — finding unexpected connections — wants HIGHER entropy: follow semantic links that cross topical boundaries, encounter surprising neighbors. The resolution is modal: the same vault supports both by switching between focused retrieval (high-decay traversal through metadata filters) and exploratory traversal (low-decay activation through semantic links). The controlled disorder isn't in tension with precision — it serves a different cognitive operation.
+
+ The measurable output of controlled disorder is cross-domain integration. Since [[cross-links between MOC territories indicate creative leaps and integration depth]], notes that appear in multiple distant MOCs are evidence that semantic linking succeeded — the link crossed a topical boundary and created a genuine integration point. Cross-MOC membership is what controlled disorder produces at the graph level: notes that bridge domains because they were connected by meaning rather than filed by category. In multi-domain systems, this scales further: since [[multi-domain systems compose through separate templates and shared graph]], cross-domain reflect is controlled disorder applied at the domain level — engineering productive unpredictability by searching for connections across domain boundaries rather than restricting the reflect phase to within-domain neighborhoods. A research insight about cognitive load connecting to a therapy reflection about stress patterns is exactly the kind of cross-domain semantic link that Luhmann's principle predicts will generate the most surprising and valuable connections.
+
+ The research question for agent systems is how to calibrate the disorder. Too little semantic cross-linking and the vault becomes a sterile filing cabinet. Too much and every note connects to every other, destroying the signal that makes connections meaningful. The current vault design navigates this through quality gates — every link must pass an explicit relationship test ("WHY do these connect?") — which ensures disorder is controlled rather than random. Since [[elaborative encoding is the quality gate for new notes]], the requirement that every connection articulate WHY it exists is what keeps Luhmann's controlled disorder productive rather than arbitrary. Without that gate, semantic cross-linking degenerates into noise; with it, each cross-topical edge carries genuine reasoning that makes the unexpected adjacency worthwhile. The discipline is not in the filing but in the linking.
+
+ ---
+
+ Source: [[tft-research-part3]]
+ ---
+
+ Relevant Notes:
+ - [[random note resurfacing prevents write-only memory]] — complementary serendipity mechanism: random selection provides uniform probability against neglect, while controlled disorder provides structural unpredictability through linking strategy
+ - [[associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles]] — the deeper design decision that enables controlled disorder: heterarchy creates the multi-path connectivity where unexpected neighbors become reachable
+ - [[small-world topology requires hubs and dense local links]] — the structural topology that balances order and disorder: hub shortcuts provide navigability while dense local clusters create the unexpected adjacencies
+ - [[spreading activation models how agents should traverse]] — the traversal mechanism that encounters controlled disorder: low-decay exploratory traversal spreads activation through semantic paths, surfacing surprising neighbors that topical organization would segregate
+ - [[incremental reading enables cross-source connection finding]] — alternative serendipity generator: forced context collision during interleaved processing creates surprise through process, while controlled disorder creates surprise through structure
+ - [[metadata reduces entropy enabling precision over recall]] — the productive tension: metadata reduces entropy for precision retrieval, but Luhmann argues some entropy is generative; the resolution is that different operations want different entropy levels
+ - [[cross-links between MOC territories indicate creative leaps and integration depth]] — measurable output: cross-MOC membership is what controlled disorder produces; semantic linking across topical boundaries creates the integration points that indicate synthesis quality
+ - [[concept-orientation beats source-orientation for cross-domain connections]] — prerequisite: concept extraction creates the independent nodes that semantic linking can cross-connect; source-bundled documents cannot participate in controlled disorder because they lack the freedom to form cross-topical edges
+ - [[queries evolve during search so agents should checkpoint]] — the traversal experience of encountering controlled disorder: when an agent follows a semantic cross-link and finds an unexpected neighbor, the query evolves because the agent's understanding has changed
+ - [[navigational vertigo emerges in pure association systems without local hierarchy]] — the failure mode of uncalibrated disorder: too much cross-linking without MOC landmarks creates the vertigo that makes the graph unnavigable; calibration requires both hubs (order) and semantic cross-links (productive disorder)
+ - [[each new note compounds value by creating traversal paths]] — the economics of controlled disorder: semantic cross-links create novel traversal paths that topical filing cannot, and each novel path compounds the opportunity for unexpected discovery
+ - [[elaborative encoding is the quality gate for new notes]] — calibration mechanism: elaborative encoding is the named quality gate that keeps controlled disorder productive; the requirement to articulate WHY each link connects ensures semantic cross-links carry genuine reasoning rather than degenerating into noise
+ - [[multi-domain systems compose through separate templates and shared graph]] — domain-scale instantiation: cross-domain reflect is controlled disorder applied at the domain level, engineering productive unpredictability by linking semantically across domain boundaries rather than restricting connections to within-domain topical neighborhoods
+
+ Topics:
+ - [[graph-structure]]
+ - [[design-dimensions]]