arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,66 @@
1
+ ---
2
+ description: The 8 configuration axes and their interaction constraints -- granularity, processing, automation, and more
3
+ type: moc
4
+ ---
5
+
6
+ # design-dimensions
7
+
8
+ The 8 dimensions that define a knowledge system's configuration space. Granularity, organization, linking, processing, navigation, maintenance, schema, automation. How they interact and constrain each other.
9
+
10
+ ## Core Ideas
11
+
12
+ ### Research
13
+ - [[blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules]] -- Platform-dependent modules ship as construction instructions so agents build contextually adapted artifacts — but bluepr
14
+ - [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] -- Four traditions converge — component engineering (contracts), Unix (small tools), Alexander's pattern language (generate
15
+ - [[configuration dimensions interact so choices in one create pressure on others]] -- Atomic granularity forces explicit linking, deep navigation, and heavy processing — the valid space is far smaller than
16
+ - [[configuration paralysis emerges when derivation surfaces too many decisions]] -- Presenting every dimension as a question produces analysis paralysis — sensible defaults and inference should reduce the
17
+ - [[controlled disorder engineers serendipity through semantic rather than topical linking]] -- Luhmann's information theory insight — perfectly ordered systems yield zero surprise, so linking by meaning rather than
18
+ - [[decontextualization risk means atomicity may strip meaning that cannot be recovered]] -- Extracting claims from source discourse strips argumentative context, and Source footers plus wiki links may not reconst
19
+ - [[dense interlinked research claims enable derivation while sparse references only enable templating]] -- Four structural properties of TFT research — atomic composability, dense interlinking, methodology provenance, and seman
20
+ - [[dependency resolution through topological sort makes module composition transparent and verifiable]] -- Topological sort on a module DAG resolves dependencies automatically while producing human-readable explanations that te
21
+ - [[derivation generates knowledge systems from composable research claims not template customization]] -- Templates constrain to deviation from fixed starting points while derivation traverses a claim graph to compose justifie
22
+ - [[derived systems follow a seed-evolve-reseed lifecycle]] -- Minimum viable seeding, friction-driven evolution, principled restructuring when incoherence accumulates — reseeding re-
23
+ - [[each module must be describable in one sentence under 200 characters or it does too many things]] -- The single-sentence test operationalizes Unix "do one thing" as a measurable constraint — if the description exceeds 200
24
+ - [[eight configuration dimensions parameterize the space of possible knowledge systems]] -- Granularity, organization, linking, processing intensity, navigation depth, maintenance cadence, schema density, and aut
25
+ - [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] -- Capture, connect, and verify are domain-invariant operations while the process step (extract claims, detect patterns, bu
26
+ - [[evolution observations provide actionable signals for system adaptation]] -- Six diagnostic patterns map operational symptoms to structural causes and prescribed responses, converting accumulated o
27
+ - [[false universalism applies same processing logic regardless of domain]] -- The derivation anti-pattern where the universal four-phase skeleton is exported without adapting the process step — "ext
28
+ - [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] -- Concrete thresholds — add after 5 manual repetitions, split above 500-char descriptions, remove after 3 unused sessions,
29
+ - [[goal-driven memory orchestration enables autonomous domain learning through directed compute allocation]] -- Define a persona and goal, allocate compute budget, get back a populated knowledge graph — the pattern shifts knowledge
30
+ - [[implicit dependencies create distributed monoliths that fail silently across configurations]] -- When modules share undeclared coupling through conventions, environment, or co-activation assumptions, the system looks
31
+ - [[justification chains enable forward backward and evolution reasoning about configuration decisions]] -- Traces each configuration decision to research claims, enabling forward (constraints to decisions), backward (decisions
32
+ - [[knowledge systems share universal operations and structural components across all methodology traditions]] -- Eight operations and nine structural components recur across Zettelkasten, PARA, Cornell, Evergreen, and GTD — implement
33
+ - [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]] -- Structural health checks (validation, orphans, links, MOC coherence) transfer across domains and platforms while creativ
34
+ - [[methodology traditions are named points in a shared configuration space not competing paradigms]] -- Zettelkasten, PARA, Cornell, Evergreen, and GTD each make different choices along the same dimensions (granularity, link
35
+ - [[module communication through shared YAML fields creates loose coupling without direct dependencies]] -- YAML frontmatter functions as an event bus where one module writes a field and another reads it, so modules never call e
36
+ - [[module deactivation must account for structural artifacts that survive the toggle]] -- Enabling a module creates YAML fields, MOC links, and validation rules that persist after deactivation — ghost infrastru
37
+ - [[multi-domain systems compose through separate templates and shared graph]] -- Domain isolation at template and processing layers, graph unity at wiki link layer — five composition rules and four cro
38
+ - [[novel domains derive by mapping knowledge type to closest reference domain then adapting]] -- Six knowledge type categories identify which reference domain's processing patterns transfer to unfamiliar domains, then
39
+ - [[orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory]] -- The shift from "plugin that helps you set up a vault" to "system that builds domain knowledge for you" — init creates st
40
+ - [[organic emergence versus active curation creates a fundamental vault governance tension]] -- Curation prunes possible futures while emergence accumulates structural debt — the question is not which pole to choose
41
+ - [[premature complexity is the most common derivation failure mode]] -- Derivation can produce systems with 12 hooks and 8 processing phases because the claim graph justifies them, but users a
42
+ - [[progressive schema validates only what active modules require not the full system schema]] -- Each module declares its required YAML fields and validation checks only active modules — otherwise disabling modules do
43
+ - [[scaffolding enables divergence that fine-tuning cannot]] -- agents with identical weights reach different conclusions when their external memory differs — scaffolding is the differ
44
+ - [[schema evolution follows observe-then-formalize not design-then-enforce]] -- Five signals (manual additions, placeholder stuffing, dead enums, patterned text, oversized MOCs) drive a quarterly prot
45
+ - [[schema field names are the only domain specific element in the universal note pattern]] -- The five-component note architecture (prose-title, YAML frontmatter, body, wiki links, topics footer) is domain-invarian
46
+ - [[schema fields should use domain-native vocabulary not abstract terminology]] -- When schema field names match how practitioners naturally think — "triggers" not "antecedent_conditions" — adoption succ
47
+ - [[storage versus thinking distinction determines which tool patterns apply]] -- PARA and Johnny.Decimal optimize for filing and retrieval ("where did I put that?") while Zettelkasten and ACCESS/ACE op
48
+ - [[ten universal primitives form the kernel of every viable agent knowledge system]] -- Markdown files, YAML frontmatter, wiki links, MOC hierarchy, tree injection, description fields, topics footers, schema
49
+ - [[the derivation engine improves recursively as deployed systems generate observations]] -- Each deployed knowledge system is an experiment whose operational observations enrich the claim graph, making every subs
50
+ - [[the no wrong patches guarantee ensures any valid module combination produces a valid system]] -- Borrowed from Eurorack where any patch produces sound without damage, enabled modules with satisfied dependencies must n
51
+ - [[the vault methodology transfers because it encodes cognitive science not domain specifics]] -- Each vault structural pattern maps to a cognitive science principle — Cowan's limits, spreading activation, attention ma
52
+ - [[use-case presets dissolve the tension between composability and simplicity]] -- Curated module selections for common use cases (Research Vault, PKM, Project Management) give template-level simplicity
53
+
54
+ ## Tensions
55
+
56
+ (Capture conflicts as they emerge)
57
+
58
+ ## Open Questions
59
+
60
+ - Which dimension cascades have the strongest pressure effects?
61
+ - Can dimension positions be inferred from observable vault behavior?
62
+
63
+ ---
64
+
65
+ Topics:
66
+ - [[index]]
@@ -0,0 +1,54 @@
1
+ ---
2
+ description: Physical index cards cannot be edited without destruction, so Luhmann designed for permanence — digital files have no such constraint, making continuous revision the natural mode
3
+ kind: research
4
+ topics: ["[[maintenance-patterns]]"]
5
+ methodology: ["Evergreen"]
6
+ source: [[tft-research-part2]]
7
+ ---
8
+
9
+ # digital mutability enables note evolution that physical permanence forbids
10
+
11
+ Luhmann wrote his Zettelkasten notes on paper index cards. Once written, a card could not be easily edited — you would need to scratch out text, write over existing content, or create a new card and somehow integrate it into the physical sequence. The medium imposed permanence as a design constraint. Luhmann adapted by designing for notes that would be written once and remain fixed, with new thinking expressed through new cards that referenced old ones rather than modifying them.
12
+
13
+ Matuschak identifies this as a constraint of the medium, not a feature of the method. In digital systems, files can be edited indefinitely without degradation. There is no scratched-out text, no overflow into margins, no need to maintain physical card sequences. A note can be rewritten completely while maintaining all its incoming links. The file itself has no memory of what it used to say.
14
+
15
+ This changes what notes can be. Luhmann's cards were snapshots — a thought captured at a moment in time, frozen by the medium. Digital notes can be living documents that evolve as understanding deepens. The research frames this as notes being "refactored" like code: renamed, restructured, split, merged, improved continuously. The metaphor is apt because code refactoring assumes the artifact will change many times over its lifetime, with each change improving without destroying.
16
+
17
+ The philosophical shift is from permanence to evolution. Matuschak explicitly rejects permanence as a goal: notes should be "written and organized to evolve, contribute, and accumulate over time." This is not a small adjustment to the Zettelkasten method — it is a fundamental reorientation. Where Luhmann's system accumulated fixed records that created value through cross-reference, the evergreen approach accumulates living documents that create value through continuous improvement.
18
+
19
+ For agent-operated vaults, this reorientation is liberating. Because [[backward maintenance asks what would be different if written today]], the answer can actually be implemented. An agent can rewrite prose, sharpen claims, add connections discovered later, even challenge the original argument with new evidence. None of this requires creating parallel documents or tracking versions manually — the note simply becomes what it should be today.
20
+
21
+ The mechanism through which evolution occurs varies. Since [[incremental formalization happens through repeated touching of old notes]], many small touches during traversal can accumulate into significant improvement without any single moment of deliberate revision. But the deeper point is that the medium permits this. Analog cards could not accumulate small improvements — each improvement would degrade the card. Digital files absorb unlimited improvements invisibly.
22
+
23
+ This creates a different relationship between notes and time. A Zettelkasten card is dated implicitly by when it entered the system and by the vocabulary and thinking patterns of that moment. An evergreen note has no single date — it might have been created three years ago but rewritten last week, and its prose reflects current understanding regardless of creation timestamp. The note's age becomes irrelevant; only its quality matters.
24
+
25
+ The constraint relaxation has a shadow side. Physical permanence forced Luhmann to think carefully before writing — once inked, a card could not be taken back. Digital mutability can enable sloppiness: capture fast, fix later, except "later" never comes. The discipline that the medium once imposed must now be self-imposed or encoded in workflows. This is why skills like reweave exist — they create the maintenance pressure that physical cards didn't need because they couldn't decay through neglect; they simply existed.
26
+
27
+ The mutability principle operates at the infrastructure level too, not just at the content level. Since [[context files function as agent operating systems through self-referential self-extension]], read-write context files are to read-only context files what digital notes are to physical cards: the medium either permits evolution or freezes instructions at creation. A read-only context file on a platform that forbids modification is functionally a Zettelkasten card — the agent cannot improve its own operating instructions regardless of what it learns through use. The permanence constraint that Luhmann adapted to at the note level reappears at the system level whenever the infrastructure prevents self-modification.
28
+
29
+ There is also a semantic constraint that emerges from graph structure itself. Since [[backlinks implicitly define notes by revealing usage context]], a note with many incoming links has accumulated meaning beyond its explicit content — other notes depend on what it claims. This creates a counterweight to pure mutability: the medium permits any revision, but the accumulated backlinks represent commitments from elsewhere in the graph that shouldn't be disrupted casually. Evolution is possible, but must respect what the note has come to mean through use.
30
+
31
+ The parallel to [[wiki links are the digital evolution of analog indexing]] is instructive. Just as wiki links extend Cornell's cue columns from single-page to cross-document scope, digital mutability extends notes from frozen snapshots to living documents. Both are constraints of analog media that digital removes, and both removals enable qualitatively different knowledge work: graphs that span entire domains, notes that reflect today's understanding regardless of when they were created.
32
+
33
+ There is a further dimension: mutability within a tool means little if the tool itself traps the content. Since [[data exit velocity measures how quickly content escapes vendor lock-in]], the mutability promise is only as real as the format's portability. Notes in a high-velocity format (plain text, markdown) can evolve across tool transitions — the note survives the death of Obsidian and keeps evolving in whatever comes next. Notes in a low-velocity format (sharded database) are functionally as immutable as Luhmann's paper cards: not because the medium forbids editing, but because the tool forbids leaving. Exit velocity turns the abstract concern about tool longevity into an auditable metric for whether mutability actually persists across the note's lifetime.
34
+
35
+ Mutability also has a dimension beyond time. Since [[federated wiki pattern enables multi-agent divergence as feature not bug]], digital notes can not only evolve sequentially (version 1 becomes version 2) but coexist simultaneously across perspectives (agent A's version alongside agent B's version, both linked). This is a form of mutability Luhmann's system could not express: not just "this card could become something different" but "this concept supports multiple legitimate formulations right now." Where sequential mutability changes a note's content over time, federation extends mutability into a spatial dimension — multiple concurrent versions, each evolving independently, linked by their shared concept. The medium that permits revision also permits coexistence.
36
+
37
+ What's remarkable is that since [[knowledge systems become communication partners through complexity and memory humans cannot sustain]], Luhmann's system achieved this partnership status despite physical permanence constraints. His Zettelkasten could "surprise" him with connections he had forgotten — the communication partner phenomenon emerged from density and cross-reference alone. Digital mutability doesn't create the partnership possibility; it amplifies it. Where Luhmann could only reference old cards, we can rewrite them — which means the system can evolve its own thinking, not just accumulate fixed snapshots. The partner becomes more intelligent over time, not just more extensive.
38
+ ---
39
+
40
+ Relevant Notes:
41
+ - [[backward maintenance asks what would be different if written today]] — the operational expression of this principle: if notes can evolve, the maintenance question becomes what should change, not whether change is allowed
42
+ - [[backlinks implicitly define notes by revealing usage context]] — provides a counterweight to mutability: accumulated backlinks represent semantic commitments from other parts of the graph that constrain how aggressively a note should be rewritten
43
+ - [[incremental formalization happens through repeated touching of old notes]] — one mechanism by which evolution actually occurs: many small touches accumulating into crystallized thinking
44
+ - [[wiki links are the digital evolution of analog indexing]] — parallel evolution: just as wiki links extend Cornell's cue columns beyond single-page scope, note mutability extends beyond Luhmann's card constraints
45
+ - [[local-first file formats are inherently agent-native]] — the substrate that makes mutability possible: plain text files can be edited directly without APIs or authentication, enabling the revision that physical cards forbade
46
+ - [[bootstrapping principle enables self-improving systems]] — mutability enables bootstrapping: systems can improve their own notes only because the medium permits revision; Luhmann's Zettelkasten couldn't bootstrap content
47
+ - [[vault conventions may impose hidden rigidity on thinking]] — mutability provides the escape valve: even if creation conventions are rigid, notes can evolve past their initial form through backward maintenance
48
+ - [[knowledge systems become communication partners through complexity and memory humans cannot sustain]] — the partnership thesis: mutability amplifies but doesn't create the communication partner phenomenon — Luhmann achieved partnership through density alone, but digital mutability allows the partner to become more intelligent over time, not just more extensive
49
+ - [[data exit velocity measures how quickly content escapes vendor lock-in]] — extends mutability across tool boundaries: high exit velocity ensures notes can evolve regardless of which software reads them, while low velocity makes notes functionally immutable when locked in a dying tool
50
+ - [[federated wiki pattern enables multi-agent divergence as feature not bug]] — adds a spatial dimension to mutability: not just revision over time, but coexistence across perspectives at the same time; federation extends the evolution principle from sequential revision to parallel interpretation
51
+ - [[context files function as agent operating systems through self-referential self-extension]] — infrastructure-level parallel: just as digital notes evolve past their creation moment, read-write context files evolve past their initial configuration; read-only platforms impose the same permanence constraint on context files that physical cards imposed on Zettelkasten notes
52
+
53
+ Topics:
54
+ - [[maintenance-patterns]]
@@ -0,0 +1,48 @@
1
+ ---
2
+ description: Progressive disclosure, description quality, findability -- how notes get discovered by agents and humans
3
+ type: moc
4
+ ---
5
+
6
+ # discovery-retrieval
7
+
8
+ How notes get found. Description quality, title composability, semantic search, topic map navigation. The discovery-first design principle applied across the system.
9
+
10
+ ## Core Ideas
11
+
12
+ ### Research
13
+ - [[BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores]] -- When a BM25 query contains many terms, each term's IDF contribution gets diluted by common words competing for scoring b
14
+ - [[complete navigation requires four complementary types that no single mechanism provides]] -- Rosenfeld and Morville's global, local, contextual, and supplemental navigation types map onto hub, MOC, wiki link, and
15
+ - [[description quality for humans diverges from description quality for keyword search]] -- Descriptions that pass prediction tests (5/5) can fail BM25 retrieval because human-scannable prose uses connective voca
16
+ - [[descriptions are retrieval filters not summaries]] -- Note descriptions function as lossy compression enabling agents to filter before loading full content, which is informat
17
+ - [[distinctiveness scoring treats description quality as measurable]] -- NLP-based validation tool that computes pairwise description similarity to flag retrieval confusion risk, operationalizi
18
+ - [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] -- Ranganathan's 1933 PMEST framework formalizes why each YAML field should be an independent classification dimension — fa
19
+ - [[flat files break at retrieval scale]] -- unstructured storage works until you need to find things — then search becomes the bottleneck, and for agents, retrieval
20
+ - [[good descriptions layer heuristic then mechanism then implication]] -- Structure descriptions as three layers — lead with actionable heuristic, back with mechanism, end with operational impli
21
+ - [[live index via periodic regeneration keeps discovery current]] -- A maintenance agent regenerating index files on note changes bridges static indices that go stale with dynamic queries t
22
+ - [[markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure]] -- Wiki link edges, YAML metadata, faceted query dimensions, and soft validation compose into graph database capabilities w
23
+ - [[maturity field enables agent context prioritization]] -- A seedling/developing/evergreen maturity field could help agents prefer mature notes when context is tight and surface s
24
+ - [[metadata reduces entropy enabling precision over recall]] -- Without metadata agents rely on full-text search which returns many irrelevant matches; YAML frontmatter pre-computes lo
25
+ - [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]] -- Thomas Vander Wal's broad/narrow distinction explains why vault tagging uses idiosyncratic sentence-titles instead of co
26
+ - [[progressive disclosure means reading right not reading less]] -- The efficiency framing misses the point — tokens are free, quality requires depth, so the goal is a dense relevant conte
27
+ - [[question-answer metadata enables inverted search patterns]] -- An 'answers' YAML field listing questions a note answers could enable question-driven search rather than keyword-driven
28
+ - [[retrieval utility should drive design over capture completeness]] -- System architecture choices should optimize for "how will I find this later" not "where should I put this" — a design or
29
+ - [[retrieval verification loop tests description quality at scale]] -- Systematic scoring across all vault notes turns description quality from subjective judgment into measurable property, e
30
+ - [[sense-making vs storage does compression lose essential nuance]] -- The vault bets that titles plus descriptions plus full content available preserves enough, but very subtle or contextual
31
+ - [[structure enables navigation without reading everything]] -- Four structural mechanisms — wiki links, MOCs, claim titles, and YAML descriptions — compose into discovery layers that
32
+ - [[the AgentSkills standard embodies progressive disclosure at the skill level]] -- The same metadata-then-depth loading pattern that governs note retrieval in the vault also governs skill loading in the
33
+ - [[type field enables structured queries without folder hierarchies]] -- Content-kind metadata (claim, synthesis, tension) provides a query axis orthogonal to wiki link topology, enabling "find
34
+ - [[wiki links create navigation paths that shape retrieval]] -- wiki links are curated graph edges that implement GraphRAG-style retrieval without infrastructure — each link is a retri
35
+
36
+ ## Tensions
37
+
38
+ (Capture conflicts as they emerge)
39
+
40
+ ## Open Questions
41
+
42
+ - What description quality threshold maximizes retrieval accuracy?
43
+ - How does semantic search interact with explicit wiki-link navigation?
44
+
45
+ ---
46
+
47
+ Topics:
48
+ - [[index]]
@@ -0,0 +1,69 @@
1
+ ---
2
+ description: NLP-based validation tool that computes pairwise description similarity to flag retrieval confusion risk, operationalizing the distinctiveness criterion as automated quality assurance
3
+ kind: research
4
+ topics: ["[[discovery-retrieval]]"]
5
+ ---
6
+
7
+ # distinctiveness scoring treats description quality as measurable
8
+
9
+ The insight that [[descriptions are retrieval filters not summaries]] implies a validation criterion: a description is good if it uniquely identifies its note among all notes in the system. This is testable. Compute pairwise similarity between all descriptions, flag pairs above a threshold. High-similarity pairs represent retrieval confusion risk — an agent searching might return the wrong note because the descriptions don't distinguish them. The threshold itself follows the pattern that [[confidence thresholds gate automated action between the mechanical and judgment zones]] — rather than a single binary cutoff, a three-tier response (auto-flag above 0.8, suggest review between 0.6 and 0.8, ignore below 0.6) would graduate the system's response based on its confidence in the confusion risk.
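
The graduated response described above can be sketched as a small dispatch function. The tier names and the 0.8/0.6 cutoffs are taken from the paragraph as illustrative values, not calibrated constants:

```python
def triage(similarity: float) -> str:
    # Graduate the response to confusion risk instead of using a binary cutoff.
    # Thresholds here are illustrative, not calibrated values.
    if similarity >= 0.8:
        return "auto-flag"        # high confidence the pair is confusable
    if similarity >= 0.6:
        return "suggest-review"   # judgment zone: surface for human review
    return "ignore"               # low risk: no action
```

A pair scoring 0.83 would be auto-flagged, 0.7 would be queued for review, and 0.4 would be ignored.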
10
+
11
+ This transforms description quality from subjective judgment ("does this feel distinct enough?") to measurable property ("what's the similarity score with the most similar other description?"). The scoring mechanism makes the quality gate enforceable.
12
+
13
+ ## Implementation Pattern
14
+
15
+ The algorithm:
16
+
17
+ 1. Extract all description strings from YAML frontmatter
18
+ 2. Embed each description (or use simpler TF-IDF/BM25 similarity)
19
+ 3. Compute pairwise similarity matrix
20
+ 4. Flag pairs exceeding threshold (perhaps 0.8 cosine similarity)
21
+ 5. Output: pairs of notes whose descriptions are confusingly similar
22
+
23
+ ```bash
24
+ # Sketch: extract descriptions, compute similarity
25
+ # get all descriptions, one "path:description: text" row per note
+ rg --no-heading "^description:" thinking/*.md > /tmp/descriptions.txt
26
+ # [feed /tmp/descriptions.txt to similarity computation]
27
+ # [flag high-similarity pairs]
28
+ ```
29
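The five steps can be sketched in stdlib Python: bag-of-words cosine similarity stands in for embeddings here, and the function names are illustrative:

```python
# Hypothetical sketch of steps 2-5: vectorize each description,
# compute pairwise cosine similarity, flag pairs over the threshold.
import math
import re
from itertools import combinations

def vectorize(text):
    """Step 2, simplified: lowercase term counts instead of embeddings."""
    counts = {}
    for token in re.findall(r"[a-z]+", text.lower()):
        counts[token] = counts.get(token, 0) + 1
    return counts

def cosine(a, b):
    dot = sum(weight * b.get(term, 0) for term, weight in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def confusable_pairs(descriptions, threshold=0.8):
    """Steps 3-5: pairwise similarity, flag pairs at or above threshold."""
    vectors = {note: vectorize(d) for note, d in descriptions.items()}
    flagged = []
    for x, y in combinations(sorted(vectors), 2):
        score = cosine(vectors[x], vectors[y])
        if score >= threshold:
            flagged.append((x, y, round(score, 2)))
    return flagged
```

Step 1 can remain the shell extraction sketched above; flagged pairs then queue for description revision during maintenance.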
+
30
+ The output enables targeted revision: for each flagged pair, at least one description needs sharpening to distinguish the notes.
31
+
32
+ ## The Quality Assurance Pattern
33
+
34
+ This is NLP-based linting applied to knowledge system metadata. The scoring should integrate into health check operations or run as a separate validation pass. Automated checks catch problems that human scanning misses — particularly at scale where manually comparing 50+ descriptions becomes impractical.
35
+
36
+ The pattern generalizes: any metadata field with a "distinctiveness matters" semantic can be validated this way. Titles could be scored similarly. The underlying principle is that knowledge graph elements that serve retrieval should be measurable against retrieval criteria.
37
+
38
+ ## Connection to Testing Effect Experiment
39
+
40
+ The [[testing effect could enable agent knowledge verification]] experiment tests description quality through prediction: can an agent predict note content from title + description? Distinctiveness scoring offers a complementary approach: instead of testing retrieval success, test retrieval ambiguity. A note might pass the testing effect check (description enables prediction) while still being confusable with another note if both descriptions are similar.
41
+
42
+ The two validations catch different failure modes:
43
+ - Testing effect catches: descriptions too vague to enable prediction
44
+ - Distinctiveness scoring catches: descriptions that distinguish from title but not from other notes
45
+
46
+ Together they form a more complete quality assurance layer. Since [[schema enforcement via validation agents enables soft consistency]], these validation mechanisms share a common operational pattern: async checking that accumulates results for batch maintenance rather than blocking creation. Distinctiveness scoring runs as a periodic validation pass, flagging high-similarity pairs for attention during maintenance cycles — exactly the soft enforcement model that preserves creation flow while surfacing quality drift. Since [[retrieval verification loop tests description quality at scale]], both approaches can now be systematized: the loop runs prediction scoring across all notes and adds actual retrieval testing to verify search ranking. A note might pass prediction (description makes sense to humans) but fail retrieval (description lacks searchable keywords), or vice versa. The combined signal from distinctiveness scoring, prediction verification, and retrieval testing provides a comprehensive quality surface for the description layer.
47
+
48
+ ## Why This Matters
49
+
50
+ At scale, description quality determines filtering efficiency. Since [[throughput matters more than accumulation]], filtering speed directly impacts processing velocity. Every confusing description that causes wrong-note retrieval or multiple-note scanning adds friction. Automated detection of confusion risk enables proactive improvement before retrieval failures occur.
51
+
52
+ The pattern also inverts the typical quality control direction: instead of checking each description against abstract criteria, check descriptions against each other. The corpus itself defines what "distinct enough" means — your description must distinguish from every other description in the system. But since [[description quality for humans diverges from description quality for keyword search]], distinctiveness itself splits across channels. Two descriptions might be easily distinguished by an agent reading them sequentially (different logical structure, different implications) while being confusingly similar to BM25 (shared common vocabulary, same connective words). The pairwise similarity measurement needs to account for which retrieval channel it targets — embedding similarity measures scanning-channel distinctiveness, while keyword overlap measures search-channel distinctiveness.
53
+ ---
54
+
55
+ Relevant Notes:
56
+ - [[descriptions are retrieval filters not summaries]] — provides the theoretical foundation: descriptions should maximize distinctiveness within corpus, not comprehensiveness
57
+ - [[good descriptions layer heuristic then mechanism then implication]] — complements distinctiveness with structure: layered descriptions are more likely to be distinctive because each layer adds differentiating information
58
+ - [[testing effect could enable agent knowledge verification]] — complementary validation approach: testing effect catches vagueness, distinctiveness scoring catches ambiguity
59
+ - [[retrieval verification loop tests description quality at scale]] — the scaled implementation: systematic prediction-then-verify cycles with scoring, combined with actual retrieval testing, forming a complete quality surface alongside distinctiveness scoring
60
+ - [[skills encode methodology so manual execution bypasses quality gates]] — grounds why this should be automated in health checks rather than manual checking
61
+ - [[throughput matters more than accumulation]] — connects to success metric: filtering efficiency determines processing velocity, confusion risk slows filtering
62
+ - [[metadata reduces entropy enabling precision over recall]] — provides the information-theoretic grounding: distinctiveness is entropy reduction applied to descriptions; high similarity means redundant information
63
+ - [[metacognitive confidence can diverge from retrieval capability]] — tests the gap this methodology addresses: structural metrics (descriptions exist) may not correlate with retrieval success; distinctiveness scoring helps close that gap by measuring description effectiveness, not just existence
64
+ - [[schema enforcement via validation agents enables soft consistency]] — shared operational pattern: both are async validation mechanisms that flag rather than block, accumulating results for maintenance rather than preventing creation
65
+ - [[description quality for humans diverges from description quality for keyword search]] — complicates the measurement target: distinctiveness scoring measures inter-note confusion but the score may apply differently across retrieval channels; descriptions can be distinctive for human scanning while being indistinct for keyword matching if the distinguishing features are connective prose rather than keyword vocabulary
66
+ - [[confidence thresholds gate automated action between the mechanical and judgment zones]] -- the 0.8 similarity threshold is itself a confidence-gated pattern: pairs above 0.8 are auto-flagged (high confidence of confusion risk), pairs between 0.6 and 0.8 could be suggested for review (medium confidence), and pairs below 0.6 are ignored (low confidence); the three-tier response pattern generalizes the binary threshold into graduated action
67
+
68
+ Topics:
69
+ - [[discovery-retrieval]]
@@ -0,0 +1,43 @@
1
+ ---
2
+ description: Agent processing can recover practical benefits of slow capture for the system, but the human loses encoding benefits because the agent generates, not the human
3
+ kind: research
4
+ topics: ["[[processing-workflows]]"]
5
+ source: TFT research corpus (00_inbox/heinrich/)
6
+ ---
7
+
8
+ # does agent processing recover what fast capture loses
9
+
10
+ Research consistently shows handwriting produces better encoding than typing. The mechanism: slowness forces real-time summarization. You can't write as fast as you think, so you must compress, select, and transform. This is [[the generation effect requires active transformation not just storage]] at capture time — the human's brain does the generative work during the act of recording.
11
+
12
+ Agent-delegated capture inverts this. Capture is fast and verbatim. The human speaks, the system transcribes. No compression at capture time. The generative work happens later, when agents process the transcript: extracting claims, writing descriptions, finding connections.
13
+
14
+ Agent processing can recover the practical benefits of slow capture — a high-quality vault representation emerges from the agent's extraction and synthesis work. But it cannot recover the human encoding benefits. The human doesn't remember and understand the content deeply because the human didn't do the generative work. The system gets the generation effect. The human doesn't.
15
+
16
+ This tradeoff is now the unspoken cost of an entire industry movement. Since [[vibe notetaking is the emerging industry consensus for AI-native self-organization]], the "dump and AI organizes" paradigm promotes effortless capture without acknowledging that effortlessness means the human did no generative cognitive work. The industry consensus promises frictionless knowledge management. The unaddressed question is whether frictionless capture with AI processing builds understanding in anyone but the AI.
17
+
18
+ This creates a fundamental tradeoff: fast capture + agent processing builds excellent external memory but weak internal understanding. Slow capture + human summarization builds strong personal memory but may not need vault help at all. The human must choose what to optimize for. But the tradeoff is not purely a loss: since [[voice capture is the highest-bandwidth channel for agent-delegated knowledge systems]], voice capture triples speed beyond typing while preserving emotional metadata — tone, emphasis, hesitation — that text capture strips away. These paraverbal signals give the agent extraction heuristics that partially compensate for what fast capture loses in encoding depth. The human does not deeply encode the content, but the emotional channel provides priority signals the agent would not otherwise have.
19
+
20
+ But there may be a middle path. Since [[guided notes might outperform post-hoc structuring for high-volume capture]], research on Guided Notes suggests that lightweight structure at capture time — skeleton outlines or prompts like "main point, evidence, questions" — preserves generation effects without the friction of full manual structuring. The agent could provide capture scaffolding rather than post-hoc processing alone, prompting the human during capture in ways that trigger generation ("What's the main claim here?") without requiring the human to design the structure themselves.
21
+
22
+ The practical implication: agent-delegated capture excels at building searchable, retrievable knowledge infrastructure. But heavy reliance on it may create externalized cognition dependency — ideas exist in the system but not in the human's recall. When you read a note that an agent processed from your raw capture, it can feel like someone else's note about your ideas rather than your own thinking. And since [[decontextualization risk means atomicity may strip meaning that cannot be recovered]], even the system's representation may be contextually impoverished — the agent recovers structural quality (well-formed claims, proper connections) but may lose the argumentative scaffolding that made the original ideas compelling. The human loses encoding; the vault may lose context. Both are costs of the delegation pattern.
23
+
24
+ This tradeoff takes on a different character in light of the broader capture landscape. Since [[three capture schools converge through agent-mediated synthesis]], the "fundamental divergence" between Accumulationist speed, Interpretationist quality, and Temporal urgency dissolves into a division of labor: human for speed, agent for quality. At the system level, this convergence is real — the vault gets the best of all three schools. But at the human level, this note's concern persists: the convergence delivers Interpretationist quality to the vault while leaving the human with Accumulationist encoding. The system-level resolution does not resolve the human-level cost. The capture schools converge for the knowledge graph, but the encoding question remains open for the person.
25
+
26
+ This parallels what [[random note resurfacing prevents write-only memory]] addresses for the system: content that accumulates but never gets revisited becomes write-only. Fast capture may create write-only memory in the human's brain — ideas captured but not encoded, existing only in the system.
27
+ ---
28
+
29
+ Relevant Notes:
30
+ - [[the generation effect requires active transformation not just storage]] — the cognitive science foundation; encoding benefits accrue to whoever does the generating
31
+ - [[temporal separation of capture and processing preserves context freshness]] — argues for separating capture from processing, which enables this delegation pattern
32
+ - [[random note resurfacing prevents write-only memory]] — parallel concept for the system; fast capture may create write-only memory in the human
33
+ - [[guided notes might outperform post-hoc structuring for high-volume capture]] — proposes a middle path: lightweight scaffolding at capture time preserves human generation benefits without requiring full manual structuring
34
+ - [[capture the reaction to content not just the content itself]] — concrete implementation of the middle path: prompting what is your reaction? at capture time preserves human encoding (the reaction is generative work) while maintaining fast capture's low friction
35
+ - [[cognitive outsourcing risk in agent-operated systems]] — sibling experiment addressing skill atrophy rather than encoding loss; this tests whether humans remember specific content, that tests whether humans retain meta-cognitive capability to do knowledge work at all
36
+ - [[cognitive offloading is the architectural foundation for vault design]] — the motivating framework; this experiment tests the limit of the offloading thesis — if frictionless capture is optimal for offloading economics, what does the human lose?
37
+ - [[three capture schools converge through agent-mediated synthesis]] — reframes this tradeoff: the capture schools converge at the system level (vault gets Interpretationist quality), but the human encoding gap this note describes persists as the shadow side of that convergence
38
+ - [[voice capture is the highest-bandwidth channel for agent-delegated knowledge systems]] — intensifies the tradeoff by tripling capture speed, but also partially mitigates it: voice preserves emotional metadata (tone, emphasis, hesitation) that gives agents extraction heuristics text capture cannot provide
39
+ - [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — industry context: the encoding tradeoff this note explores is the unspoken cost of the entire vibe notetaking movement, which promotes effortless capture without acknowledging that effortlessness means the human retains nothing
40
+ - [[decontextualization risk means atomicity may strip meaning that cannot be recovered]] — compounds the delegation cost: the human loses encoding, and the vault may also lose argumentative context that gave claims their original force during extraction
41
+
42
+ Topics:
43
+ - [[processing-workflows]]
@@ -0,0 +1,37 @@
1
+ ---
2
+ description: Worked examples of derived vaults across 12 domains -- therapy, research, PM, trading, and more
3
+ type: moc
4
+ ---
5
+
6
+ # domain-compositions
7
+
8
+ Worked domain examples showing how the 8 dimensions, vocabulary, and personality combine for specific use cases. Each example demonstrates a complete derived system.
9
+
10
+ ## Domains
11
+
12
+ - [[therapy journal uses warm personality with pattern detection for emotional processing]] -- therapeutic journaling with mood-trigger correlation, ethical constraints, and growth tracking
13
+ - [[academic research uses structured extraction with cross-source synthesis]] -- literature reviews, claim extraction, methodology-comparable synthesis
14
+ - [[trading uses conviction tracking with thesis-outcome correlation]] -- trade journaling, strategy drift detection, conviction-outcome feedback loops
15
+ - [[creative writing uses worldbuilding consistency with character tracking]] -- worldbuilding consistency graphs, character tracking, timeline management
16
+ - [[project management uses decision tracking with stakeholder context]] -- decision documentation, stakeholder management, cross-project learning
17
+ - [[product management uses feedback pipelines with experiment tracking]] -- feedback-to-feature pipelines, experiment tracking, customer voice intelligence
18
+ - [[engineering uses technical decision tracking with architectural memory]] -- ADR tracking, dependency graphs, architectural memory across team members
19
+ - [[student learning uses prerequisite graphs with spaced retrieval]] -- prerequisite graph construction, mastery tracking, spaced retrieval scheduling
20
+ - [[personal assistant uses life area management with review automation]] -- life area management, goal tracking, habit formation, review automation
21
+ - [[health wellness uses symptom-trigger correlation with multi-dimensional tracking]] -- symptom-trigger correlation, training optimization, multi-dimensional wellness tracking
22
+ - [[legal case management uses precedent chains with regulatory change propagation]] -- precedent chain tracking, regulatory change propagation, cross-matter intelligence
23
+ - [[people relationships uses Dunbar-layered graphs with interaction tracking]] -- Dunbar-layered relationship graphs, interaction tracking, personal CRM
24
+
25
+ ## Cross-Domain Patterns
26
+
27
+ (Patterns that recur across multiple domain compositions)
28
+
29
+ ## Open Questions
30
+
31
+ - Which domain compositions stress-test the derivation engine most?
32
+ - What domain-specific failure modes need documentation?
33
+
34
+ ---
35
+
36
+ Topics:
37
+ - [[index]]
@@ -0,0 +1,55 @@
1
+ ---
2
+ description: Cognitive science shows text+visuals create independent memory traces that reinforce each other — multimodal LLMs could use diagrams as alternative traversal cues alongside wiki links
3
+ kind: research
4
+ topics: ["[[agent-cognition]]"]
5
+ methodology: ["Cornell"]
6
+ source: [[3-3-cornell-note-taking-system]]
7
+ ---
8
+
9
+ # dual-coding with visual elements could enhance agent traversal
10
+
11
+ Cognitive science documents that combining visual and verbal representations creates two independent memory traces that reinforce each other. This is called dual-coding theory. In human knowledge work, the sketchnoting variation of Cornell Notes exploits this by putting diagrams in the Notes column while keeping text-based keywords in the Cue column — a pattern that significantly boosts human retention. The agent translation question: does multimodal processing (text + diagrams) give agents alternative traversal pathways, or does everything collapse into the same latent space?
12
+
13
+ The question is whether this pattern translates to agent-operated knowledge systems. Current vaults are text-heavy — markdown files with prose, wiki links, and YAML metadata. Could visual representations provide alternative traversal cues that text misses?
14
+
15
+ ## What Visual Elements Could Offer
16
+
17
+ Wiki links create textual connection points. But some relationships are easier to see than to say. A Mermaid diagram showing the dependency structure between concepts makes patterns visible that would require multiple paragraphs to describe. An auto-generated relationship graph showing note clusters reveals structural properties (isolation, density, hub status) that individual wiki links can't surface.
18
+
19
+ Since [[spreading activation models how agents should traverse]], traversal works through semantic connections between concepts. Visual representations could function as a parallel activation network — the diagram activates related concepts through spatial proximity and visual grouping, not just through explicit links. This would provide redundancy: if the textual path fails (agent doesn't recognize the connection), the visual path might succeed. This is a different kind of redundancy than what [[trails transform ephemeral navigation into persistent artifacts]] proposes. Trails provide temporal redundancy by persisting successful navigation sequences. Visual dual-coding provides modal redundancy — two encoding formats for the same relationships. Both address the question of how to make traversal more robust.
20
+
21
+ Since [[each new note compounds value by creating traversal paths]], the value question is whether visual elements create new paths or just restate existing ones. If a Mermaid diagram in a MOC only visualizes what the wiki links already express, it adds no new traversal options — it's decoration. But if the visual reveals structural patterns that the link list obscures, it creates genuinely new navigation affordances.
22
+
23
+ ## The Multimodal Question
24
+
25
+ Modern LLMs are multimodal — they can process images alongside text. This capability is underexploited in current knowledge management. A vault could include relationship diagrams, concept maps, or even hand-drawn sketches that carry structural information. The agent would process both the prose and the visual, potentially finding connections that neither alone would surface.
26
+
27
+ The open question is whether current multimodal capabilities are good enough for this to work. Human dual-coding works because the visual and verbal systems are genuinely distinct in the brain. LLM multimodal processing may not have the same separation — the visual input gets converted to the same latent space as text, which might mean no dual benefit.
28
+
29
+ ## Implementation Considerations
30
+
31
+ If dual-coding does provide benefit, implementation would require:
32
+
33
+ 1. Mermaid diagrams in MOCs showing concept relationships
34
+ 2. Auto-generated graph visualizations (via scripts) showing vault topology
35
+ 3. Conventions for when visual representation adds value vs. when it's noise
36
+ 4. Image attachments in markdown notes where spatial relationships matter
37
+
38
+ The risk is complexity without benefit. Since [[progressive disclosure means reading right not reading less]], adding visual layers only makes sense if they enable better curation, not if they just add more content to process. A diagram that takes 500 tokens to describe and another 500 tokens to visually encode costs 1000 tokens. If both paths lead to the same conclusion, you've paid double for no additional insight.
39
+
40
+ This note explores one direction of the modality question — whether spatial-visual can supplement spatial-textual. The inverse direction is equally important: since [[temporal media must convert to spatial text for agent traversal]], content that exists in temporal formats (audio, video, podcasts) must first become spatial text before it can participate in any traversal, let alone benefit from visual supplementation. The modality conversation has two halves: converting temporal to spatial (mandatory, lossy but necessary) and enriching spatial-textual with spatial-visual (optional, benefit uncertain). This note addresses the second half; the temporal media note addresses the first.
41
+
42
+ This remains a research question rather than a settled claim. The theoretical basis from human cognition is solid. The translation to agent cognition is unverified. Since [[testing effect could enable agent knowledge verification]] proposes prediction-based verification as a way to test whether descriptions actually work, a parallel experiment could test whether visual elements provide verifiable benefit — can agents using visual+text outperform agents using text-only on connection-finding tasks?
43
+ ---
44
+
45
+ Relevant Notes:
46
+ - [[spreading activation models how agents should traverse]] — visual elements would function as parallel activation pathways alongside textual links
47
+ - [[each new note compounds value by creating traversal paths]] — the test: do visuals create new paths or restate existing ones?
48
+ - [[progressive disclosure means reading right not reading less]] — visual layers only add value if they enable better curation, not just more content
49
+ - [[trails transform ephemeral navigation into persistent artifacts]] — sibling proposal for traversal redundancy: trails are temporal (path reuse), dual-coding is modal (visual + text)
50
+ - [[testing effect could enable agent knowledge verification]] — sibling Cornell-derived experiment: both propose alternative verification/enhancement channels for agent cognition
51
+ - [[wiki links are the digital evolution of analog indexing]] — methodological lineage: Cornell cue columns became wiki links; sketchnoting variations could become visual traversal cues
52
+ - [[temporal media must convert to spatial text for agent traversal]] — the inverse modality question: this note asks whether spatial-visual can supplement spatial-textual, while temporal media conversion addresses the prerequisite of getting temporal content into spatial text at all
53
+
54
+ Topics:
55
+ - [[agent-cognition]]
@@ -0,0 +1,45 @@
1
+ ---
2
+ description: The single-sentence test operationalizes Unix "do one thing" as a measurable constraint — if the description exceeds 200 characters, the module bundles capabilities that should be separate toggles
3
+ kind: research
4
+ topics: ["[[design-dimensions]]", "[[note-design]]"]
5
+ methodology: ["Original", "Systems Theory"]
6
+ source: [[composable-knowledge-architecture-blueprint]]
7
+ ---
8
+
9
+ # each module must be describable in one sentence under 200 characters or it does too many things
10
+
11
+ The Unix philosophy says "do one thing well," but without a concrete test the principle degrades into aspiration. Developers agree their module does one thing — they just define "one thing" broadly enough to include three capabilities bundled under a unifying label. The single-sentence description test provides a measurable operationalization: if you cannot describe what a module does in one sentence under 200 characters, it does too many things and should be split.
12
+
13
+ The test works because description length is a reliable proxy for scope. A module that manages YAML schema validation is describable in a sentence: "Validates frontmatter fields against template-defined schemas." A module that manages YAML schema validation AND migration AND format conversion requires qualifiers, conjunctions, and subordinate clauses that push the description past the threshold. The length is not arbitrary — it tracks the same cognitive load that makes compound note titles unwieldy. Since [[claims must be specific enough to be wrong]], the specificity constraint on note titles and the brevity constraint on module descriptions share a mechanism: both use conciseness as evidence that the underlying concept is singular rather than bundled. A vague note title hides multiple claims; an overlong module description hides multiple capabilities.
14
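As an illustration (function name and example checks hypothetical), the heuristic is mechanical enough to lint:

```python
# Hypothetical lint pass over module descriptions, operationalizing
# the single-sentence / 200-character heuristic as a warning, not a verdict.
def scope_warnings(modules, max_chars=200):
    """Return (module, reason) pairs whose descriptions suggest bundling."""
    warnings = []
    for name, description in modules.items():
        if len(description) > max_chars:
            warnings.append((name, f"{len(description)} chars exceeds {max_chars}"))
        # crude multi-sentence check: any sentence boundary before the end
        if description.rstrip(". ").count(". ") > 0:
            warnings.append((name, "more than one sentence"))
    return warnings
```

A warning triggers the investigation described below: split if the module bundles independent capabilities, tighten the description if the length comes from one inherently complex capability.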
+
15
+ This connects directly to how [[descriptions are retrieval filters not summaries]]. Note descriptions function as filters that help agents decide whether to load full content — brevity forces precision, and precision enables filtering. Module descriptions serve an analogous but distinct function: they help agents and human operators decide whether to enable a module, and they serve as a design-time constraint that prevents scope creep before it reaches the user. The note description is a retrieval heuristic; the module description is a design test. Both exploit the same insight — that conciseness forces clarity — but in different phases of the lifecycle.
16
+
17
+ The constraint has practical teeth beyond aesthetics. Since [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]], the composability guarantee depends on modules being independently toggleable. A module that bundles capabilities A, B, and C forces users who want only A to accept B and C as well, breaking the granular control that composability promises. The feature-creep toggle anti-pattern emerges: enabling a module implicitly enables capabilities the user did not request, and those capabilities may interact with other modules in ways the dependency graph does not capture. This is precisely how [[implicit dependencies create distributed monoliths that fail silently across configurations]] — bundled capabilities multiply the channels through which undeclared coupling forms, because each capability within the unfocused module can independently develop phantom dependencies on fields or conventions from other modules. The single-sentence test prevents this upstream: if the description cannot fit in one sentence, the module's interaction surface is too large for the dependency resolver to track. Since [[the no wrong patches guarantee ensures any valid module combination produces a valid system]], unfocused modules threaten the safety guarantee because their hidden internal interactions expand the test surface combinatorially — you must verify not just that the module composes with others, but that each of its bundled capabilities composes independently.
18
+
19
+ The description test also connects to platform constraints. Since [[skill context budgets constrain knowledge system complexity on agent platforms]], skill descriptions consume limited context space. A module whose description exceeds 200 characters because it does too many things also consumes disproportionate budget. The design principle and the platform constraint converge: even without a philosophical commitment to focused modules, the resource allocation problem of limited context space would force conciseness. But the design principle is stronger than the budget constraint — even on a platform with unlimited context, a module that cannot be described in one sentence does too many things because the problem is scope, not space.
20
+
21
+ The test also has downstream consequences for module lifecycle. Since [[module deactivation must account for structural artifacts that survive the toggle]], a focused module that writes two YAML fields and creates one MOC convention has a tractable deactivation profile — the cleanup is proportional to the scope. An unfocused module that bundles three capabilities writes six fields, touches three types of structural artifacts, and leaves ghost infrastructure distributed across multiple concerns. The single-sentence test constrains this at design time: modules that pass the test tend to have deactivation costs that users can understand and accept. Similarly, since [[dependency resolution through topological sort makes module composition transparent and verifiable]], the transparency of dependency explanations depends on modules being narrowly scoped. "Processing-pipeline requires atomic-notes because pipeline phases operate on single-claim units" is an explanation that teaches architecture. A bundled module's dependency rationale would require subordinate clauses for each capability's requirements — the explanation becomes as unfocused as the module.
22
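The activation ordering that topological sort provides can be sketched with Kahn's algorithm; the module names are illustrative:

```python
# Hypothetical sketch: order module activation via topological sort
# (Kahn's algorithm) so each module's dependencies are enabled first.
from collections import deque

def activation_order(deps):
    """deps maps each module to the set of modules it requires."""
    indegree = {module: len(reqs) for module, reqs in deps.items()}
    dependents = {module: [] for module in deps}
    for module, reqs in deps.items():
        for req in reqs:
            dependents[req].append(module)
    queue = deque(m for m, d in indegree.items() if d == 0)
    order = []
    while queue:
        module = queue.popleft()
        order.append(module)
        for child in dependents[module]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order
```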
+
23
+ There is a shadow side. The 200-character threshold is a heuristic, and heuristics have failure modes. Some genuinely singular capabilities require nuanced description because the capability itself is conceptually complex. A module for "semantic search with BM25 plus vector embeddings and LLM reranking" does one thing (search) but the implementation requires mentioning three mechanisms. The test might flag this as unfocused when it is actually focused but technically layered. The mitigation is that the test should trigger investigation, not automatic splitting. When a description exceeds the threshold, the question is: "Does this module bundle independent capabilities that users might want separately?" If yes, split. If the length comes from describing one capability with inherent complexity, the description may need tightening rather than the module needing splitting. Since [[complex systems evolve from simple working systems]], the default should favor splitting — it is easier to compose two focused modules than to manage one unfocused one — but the test is a signal, not a verdict.
+
+ The parallel to note-level design is instructive. Since [[enforcing atomicity can create paralysis when ideas resist decomposition]], atomicity is to notes what the single-sentence test is to modules — both use conciseness as evidence that the underlying concept is singular rather than bundled, and both face the same failure mode when genuinely complex singular things resist the constraint. A note that resists decomposition because the relational structure IS the insight parallels a module whose single capability requires mentioning three mechanisms. The mitigations diverge, however: note-level atomicity has no mechanical test (whether an idea "resists decomposition because it's fuzzy" versus "resists because it's genuinely relational" requires judgment), while the single-sentence test provides at least a threshold that triggers investigation. The module-level constraint is more operationalizable precisely because modules have descriptions that can be measured, while notes have claims that must be evaluated.
+
+ ---
+
+ Relevant Notes:
+ - [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] — parent architecture: this note provides one of the design principles that makes module boundaries enforceable rather than advisory
+ - [[descriptions are retrieval filters not summaries]] — extends the description-as-constraint pattern from notes to modules: both use brevity as a forcing function, but notes use it for retrieval filtering while modules use it for scope enforcement
+ - [[claims must be specific enough to be wrong]] — shared mechanism: specificity tests work because vagueness hides bundled concerns, whether in note titles that gesture at topics or module descriptions that paper over feature creep
+ - [[skill context budgets constrain knowledge system complexity on agent platforms]] — platform pressure reinforces the constraint: even if the single-sentence test were not a design principle, skill description budgets would force conciseness as a resource allocation problem
+ - [[the no wrong patches guarantee ensures any valid module combination produces a valid system]] — depends on: focused modules with clear boundaries are what makes the combinatorial safety guarantee tractable, because unfocused modules create hidden interactions that expand the test surface
+ - [[complex systems evolve from simple working systems]] — enables: Gall's Law requires that each addition is understandable in isolation, and the single-sentence test ensures modules remain simple enough to evaluate independently
+ - [[implicit dependencies create distributed monoliths that fail silently across configurations]] — prevents upstream: unfocused modules multiply the channels through which implicit dependencies form, because bundled capabilities interact with other modules through undeclared conventions that escape the dependency resolver
+ - [[module deactivation must account for structural artifacts that survive the toggle]] — reduces deactivation cost: a focused module writes fewer fields and creates fewer structural commitments, so its deactivation footprint is proportionally smaller and cleanup is tractable
+ - [[dependency resolution through topological sort makes module composition transparent and verifiable]] — enables legibility: focused modules produce dependency explanations that teach architecture rather than obscure it, because a single-capability module's dependency rationale is expressible in one sentence
+ - [[enforcing atomicity can create paralysis when ideas resist decomposition]] — parallel constraint at note level: atomicity is to notes what the single-sentence test is to modules, both using conciseness as evidence of singularity, and both facing the same failure mode when genuinely complex singular things resist the constraint
+ - [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — applies the single-sentence test as a lifecycle threshold: the 500-char split trigger and 15-20 module cap are direct consequences of the describability constraint operating as a resource allocation mechanism during friction-driven evolution
+
+ Topics:
+ - [[design-dimensions]]
+ - [[note-design]]
@@ -0,0 +1,55 @@
+ ---
+ description: Unlike folders where 1000 documents is just 1000 documents, a graph of 1000 connected nodes creates millions of potential paths — the marginal note increases value of all existing notes
+ kind: research
+ topics: ["[[graph-structure]]"]
+ source: TFT research corpus (00_inbox/heinrich/)
+ ---
+
+ # each new note compounds value by creating traversal paths
+
+ A folder of documents has linear value. Add a document, the value increases by one. The hundredth document is worth no more than the first. Documents sit next to each other without affecting each other. Since [[associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles]], the folder hierarchy not only fails to compound — it actively constrains future organization by requiring upfront classification decisions.
+
+ A graph of connected notes has compounding value. Add a note with three connections, and you've created three new paths between previously unconnected ideas. The hundredth note doesn't just add one more thing to know — it creates dozens of new traversal paths that make the first ninety-nine notes more reachable.
+
+ ## The Mechanism
+
+ When you create a link, you create a path in both directions. Note A linking to Note B means someone traversing from A can reach B, but it also means someone exploring outward from B can discover A. Each bidirectional link creates options that didn't exist before.
+
+ The math compounds quickly. In a flat folder, N documents means N items. In a connected graph, N nodes with an average of K links creates O(N × K) direct paths, but the indirect paths grow much faster. The path from any note to any other note typically exists through 2-4 hops, meaning the graph functions as if everything were connected to everything — without the cognitive overhead of actually maintaining N² links.
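To make the arithmetic concrete, here is a sketch under the stated assumptions (undirected links, a hypothetical six-note vault) that measures hop counts with breadth-first search:

```python
from collections import deque
from itertools import combinations

def hops(graph, start):
    """BFS hop counts from `start`; links are treated as bidirectional."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Toy vault: 6 notes and only 7 links, far fewer than the 15 pairwise
# links full connectivity would require -- yet every pair is reachable.
vault = {
    "a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d", "e"},
    "d": {"b", "c", "f"}, "e": {"c", "f"}, "f": {"d", "e"},
}
lengths = [hops(vault, u)[v] for u, v in combinations(vault, 2)]
print(max(lengths), sum(lengths) / len(lengths))
```

In this toy graph the longest path is 3 hops and the average is well under 2, so the vault behaves as if everything were connected to everything while maintaining less than half the links.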
+
+ This is why, since [[small-world topology requires hubs and dense local links]], structure matters as much as content. The right topology multiplies the reachability effect. Hub nodes act as highway interchanges, creating shortcuts that keep path lengths short even as the network grows. And since [[betweenness centrality identifies bridge notes connecting disparate knowledge domains]], we can measure which specific nodes contribute most to this reachability — the notes whose removal would most increase average path length are the structural load-bearers that disproportionately drive compounding.
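For small graphs, betweenness can be computed directly from its definition by enumerating shortest paths (production tooling would use Brandes' algorithm; the hub-and-spoke vault below is a hypothetical illustration):

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """All shortest paths from s to t: BFS layering, then backtracking."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    paths = []
    def walk(path):
        node = path[-1]
        if node == t:
            paths.append(path)
            return
        for nb in graph[node]:
            if dist.get(nb) == dist[node] + 1:  # only step "downhill" toward t
                walk(path + [nb])
    walk([s])
    return paths

def betweenness(graph):
    """Fraction of shortest paths passing through each node (unnormalized)."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        for path in paths:
            for v in path[1:-1]:  # interior nodes only
                score[v] += 1 / len(paths)
    return score

# Hypothetical vault where one note bridges four otherwise-isolated notes.
vault = {
    "hub": {"a", "b", "c", "d"},
    "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"},
}
print(betweenness(vault))
```

Every path between the spoke notes runs through the hub, so the hub accumulates the entire betweenness score: removing it would disconnect all six spoke-to-spoke pairs at once.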
+
+ ## Why This Matters for Knowledge Work
+
+ Traditional document management treats notes as inventory. More notes means more stuff to organize, more overhead, diminishing returns. The hundredth document is slightly harder to find than the tenth.
+
+ This inventory mindset comes from temporal organization — documents filed by date or project, understood by WHEN they appeared. Since [[topological organization beats temporal for knowledge work]], the system rejects that model entirely. Graph-structured knowledge inverts the inventory logic: more notes means more connection opportunities, more paths for serendipitous discovery, increasing returns. The hundredth note makes the tenth easier to find because it creates new routes to reach it.
+
+ This explains why link density matters more than note count. A knowledge system of 1000 poorly-linked notes functions worse than 100 richly-linked ones. The value comes from paths, not nodes. The consequence extends beyond navigation: because [[dense interlinked research claims enable derivation while sparse references only enable templating]], density is not just a usability property but a functional threshold — below it, the graph can only support template customization, while above it, agents can derive novel configurations by traversing the claim network. Since [[structure without processing provides no value]], creating nodes (notes) without edges (processed connections) produces no compounding — unprocessed notes are isolated points that cannot participate in the traversal network. But since [[orphan notes are seeds not failures]], this doesn't condemn orphan creation — it condemns orphan abandonment. An orphan is locked potential awaiting integration, not a failure state. The gardening framing: seeds planted now may bloom later when connection-finding or backward maintenance discovers their connections. And since [[inline links carry richer relationship data than metadata fields]], the quality of the link itself affects compounding: a typed inline link ("since X, therefore Y") creates a higher-quality path than a bare reference, because the traverser can judge whether to follow based on the relationship type. Dense typing compounds more than dense linking.
+
+ ## The Investment Logic
+
+ Digital gardening research names this the "compound interest effect on knowledge" — old work supports new work. The financial metaphor is precise: notes are appreciating assets, not depreciating inventory. A note created today will be discovered through paths that don't exist yet — paths created by notes written tomorrow. Unlike documents that grow stale in folders, connected notes gain value over time as the network they participate in grows denser.
+
+ Every note you create is an investment that pays dividends as long as it's connected. The implication: time spent on connections isn't overhead. It's the value-generating activity. A note without links is potential value locked away. A note with three well-reasoned links has already multiplied its worth threefold.
+
+ Because value compounds through connections, the success metric should be connection velocity rather than archive size. Since [[throughput matters more than accumulation]], processing velocity from capture to synthesis matters more than how much you've accumulated — and this note explains why: each synthesis creates new traversal paths that increase the value of everything already synthesized.
+
+ The paths this note describes are what [[spreading activation models how agents should traverse]] moves through. Compounding creates the paths; spreading activation is how agents use them. The richer the path network, the more options for focused retrieval (high decay) or exploratory synthesis (low decay).
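A minimal sketch of that traversal, assuming `decay` is the fraction of activation lost per hop and the chain-shaped vault is hypothetical (neither matches any particular implementation):

```python
def spread(graph, seed, decay, threshold=0.05):
    """Propagate activation outward from a seed note over its links.

    High decay keeps activation near the seed (focused retrieval);
    low decay lets it reach distant notes (exploratory synthesis).
    """
    activation = {seed: 1.0}
    frontier = {seed}
    while frontier:
        nxt = set()
        for node in frontier:
            passed = activation[node] * (1 - decay)  # activation surviving one hop
            if passed < threshold:
                continue  # too weak to keep spreading from this node
            for neighbor in graph[node]:
                if neighbor not in activation:
                    activation[neighbor] = passed
                    nxt.add(neighbor)
        frontier = nxt
    return activation

# A four-note chain: a - b - c - d.
chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(sorted(spread(chain, "a", decay=0.8)))  # high decay: stays near the seed
print(sorted(spread(chain, "a", decay=0.2)))  # low decay: reaches the whole chain
```

The same graph yields a tight neighborhood or the full chain depending only on the decay setting, which is the retrieval-versus-synthesis dial the paragraph describes.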
+
+ ---
+
+ Relevant Notes:
+ - [[topological organization beats temporal for knowledge work]] — the foundational choice that makes compounding possible: concept-based organization enables traversal paths while date-based organization cannot
+ - [[small-world topology requires hubs and dense local links]] — explains HOW to structure for this effect, while this note explains WHY the effect exists
+ - [[wiki links implement GraphRAG without the infrastructure]] — the practical mechanism that makes traversal paths work
+ - [[throughput matters more than accumulation]] — the operational consequence: if value compounds through connections, success metrics should emphasize connection velocity over archive size
+ - [[spreading activation models how agents should traverse]] — compounding creates the paths; spreading activation is how agents traverse them
+ - [[dangling links reveal which notes want to exist]] — shows how notes enter the graph with pre-accumulated value: high-frequency dangling links mean the note starts with hub-level connectivity
+ - [[structure without processing provides no value]] — the inverse case: notes created without processing produce no compounding because they lack the edges (connections) that create paths
+ - [[inline links carry richer relationship data than metadata fields]] — link quality affects compounding: typed inline links create higher-quality paths than bare references
+ - [[orphan notes are seeds not failures]] — provides the gardening framing: orphans are locked potential awaiting integration, not failure states; creation is valid, abandonment is the failure
+ - [[betweenness centrality identifies bridge notes connecting disparate knowledge domains]] — measures exactly WHICH notes contribute most to path creation: high-betweenness notes are the ones whose removal would most reduce reachability across the graph
+ - [[dense interlinked research claims enable derivation while sparse references only enable templating]] — the derivation consequence: compounding value through traversal paths is not just a navigation benefit but a functional threshold that determines whether the graph can support principled derivation or only template customization
+
+ Topics:
+ - [[graph-structure]]