arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
package/methodology/three capture schools converge through agent-mediated synthesis.md
@@ -0,0 +1,55 @@
+ ---
+ description: Accumulationist speed, Interpretationist quality, and Temporal context preservation stop being tradeoffs when agent processing handles the generation that human capture skips — the "fundamental divergence" dissolves into a division of labor
+ kind: research
+ topics: ["[[processing-workflows]]", "[[agent-cognition]]"]
+ methodology: ["Capture Design", "Zettelkasten", "Evergreen"]
+ source: [[tft-research-part3]]
+ ---
+
+ # three capture schools converge through agent-mediated synthesis
+
+ PKM research identifies three capture philosophies as a "fundamental divergence" — schools of thought that make incompatible assumptions about what capture should optimize for.
+
+ The Accumulationist school (Progressive Summarization, Readwise) prioritizes speed and fidelity. Capture everything, highlight generously, process later. The risk is the Collector's Fallacy: accumulating without understanding, building a library that feels productive but produces no synthesis. The Interpretationist school (Zettelkasten, Smart Notes) prioritizes transformation at capture time. Rewrite in your own words, generate connections immediately, never store a quote without your reaction. The risk is friction: interpretation is slow, and the friction prevents capturing thoughts that arrive faster than you can process them. The Temporal school (Daily Notes, Roam) prioritizes context preservation through chronological ordering. Everything lands on today's page, linked but not filed, organized by when rather than what. The risk is retrieval complexity: without spatial filing, finding things depends entirely on link quality and search.
+
+ These schools diverge because they make different assumptions about who does the processing and when. Accumulationists defer processing entirely. Interpretationists process at capture time. Temporalists replace processing with linking. Each optimizes one dimension while sacrificing others — speed, quality, or context.
+
+ Agent operation dissolves this divergence by splitting capture and processing across different actors. The human captures with Accumulationist speed: zero-friction dumps, voice memos, rapid highlights. No interpretation required at capture time — and since [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], this speed is not merely convenient but cognitively necessary, because every uncaptured thought drains working memory until externalized. The agent then processes with Interpretationist quality: extracting claims, writing descriptions, generating connections, producing the synthesis that Zettelkasten demands. Since [[the generation effect requires active transformation not just storage]], the agent's processing is genuine generation — producing descriptions, articulated connections, and synthesis claims that didn't exist in the source — not mere reorganization. The convergence only works because the agent does real interpretive work, not because it automates filing.
+
+ The Temporal school's contribution survives in a different form. Daily Notes preserve capture context through chronological ordering. In the agent-mediated model, context preservation happens through temporal processing urgency instead: since [[temporal separation of capture and processing preserves context freshness]], processing must follow capture within hours before Ebbinghaus decay erodes the context that makes interpretation possible. The temporal dimension shifts from an organizational principle (file by date) to a processing constraint (process before context fades). The Temporal school's insight — that when matters — is preserved even though its mechanism (Daily Notes) is replaced.
+
+ But this convergence has a shadow side. As [[does agent processing recover what fast capture loses]] asks, the system gets Interpretationist quality while the human gets Accumulationist encoding — which is to say, minimal encoding. The generation effect benefits whoever generates, and if agents do all the generating, the human's understanding may be shallow even as the vault's representation is deep. Given [[cognitive outsourcing risk in agent-operated systems]], this is not just encoding loss but potential skill atrophy — the human may lose not only specific memories but the meta-cognitive ability to structure ideas and recognize connections. The convergence is real at the system level but potentially hollow at the human level. This is why [[capture the reaction to content not just the content itself]] matters: prompting for reactions at capture time preserves a sliver of Interpretationist encoding within Accumulationist speed. "What's your reaction?" costs seconds but generates a human-side cognitive hook that pure agent processing cannot reconstruct.
+
+ The convergence is not just theoretical — since [[vibe notetaking is the emerging industry consensus for AI-native self-organization]], the "dump and AI organizes" pattern is converging across tools from Notion AI to Mem to Supermemory. But most implementations stop at embedding-based linking rather than genuine agent-curated synthesis. The industry validates the capture convergence while mostly failing the processing convergence: tools implement Accumulationist speed without Interpretationist quality, producing searchable archives rather than traversable knowledge graphs. The differentiation lives in whether the agent generates reasoned connections or merely clusters by surface similarity.
+
+ The convergence also depends on what the agent actually does with accumulated content. Since [[concept-orientation beats source-orientation for cross-domain connections]], the agent must extract concepts from sources rather than merely filing them. Accumulationist capture without concept extraction just produces well-organized hoarding — the first stage in what [[PKM failure follows a predictable cycle]] documents as a 7-stage cascade from collection to abandonment. The agent's interpretive pass must transform accumulated material into concept nodes that can participate in cross-domain synthesis — otherwise the convergence delivers Accumulationist speed without Interpretationist value, which is exactly the Collector's Fallacy dressed in automation.
+
+ There is a practical mechanism that bridges Accumulationist speed with some Interpretationist structure even before agent processing begins: since [[schema templates reduce cognitive overhead at capture time]], pre-defined fields can prompt minimal generative acts during capture without imposing the full friction of manual rewriting. Fill "key claim" and "my reaction" rather than restructuring the entire passage. This is not full Interpretationist processing, but it seeds the agent's later work with human-generated anchors that improve extraction quality.
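+
+ A minimal sketch of what seeding such a template could look like; the path and field names are illustrative assumptions, not the package's actual template:
+
+ ```bash
+ # Sketch: write a capture template whose fields prompt small
+ # generative acts without Interpretationist friction.
+ # "templates/capture.md" and the field names are hypothetical.
+ mkdir -p templates
+ cat > templates/capture.md <<'EOF'
+ ---
+ kind: capture
+ source:
+ key-claim:    # one sentence, in your own words
+ my-reaction:  # "What's your reaction?" costs seconds, not minutes
+ ---
+ EOF
+ ```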
+
+ The deeper claim is that the "fundamental divergence" was an artifact of assuming a single actor. Because [[methodology traditions are named points in a shared configuration space not competing paradigms]], the capture schools are also configuration choices along the processing-intensity dimension — Accumulationist at the light end, Interpretationist at the heavy end, Temporal using time as a substitute for both. Framing them as configurations rather than competing philosophies makes mixing them natural rather than contradictory. When one person must capture AND process, speed and quality genuinely trade off. When capture and processing are distributed — human for speed, agent for quality — the tradeoff dissolves into a division of labor. The schools converge not because one wins but because agent mediation lets you take the best of each: Accumulationist speed, Interpretationist depth, Temporal urgency. Since [[cognitive offloading is the architectural foundation for vault design]], frictionless capture is optimal for offloading economics, and agent processing addresses the generation gap that pure offloading creates. The convergence is the architecture working as designed. And because [[AI shifts knowledge systems from externalizing memory to externalizing attention]], the convergence represents something more than workflow optimization — it is an instance of the broader shift where agents externalize not just what you know but what you attend to, deciding which claims deserve extraction and which connections are genuine.
+
+ Since [[incremental formalization happens through repeated touching of old notes]], the convergence isn't a one-shot event either. The agent's initial processing is the first interpretive pass, but subsequent traversals — during reflect, reweave, and organic encounters — continue crystallizing what fast capture deposited as raw material. The Interpretationist ideal of fully processed notes is achieved not at capture time but through accumulated agent touches over time.
+
+ ---
+ ---
+
+ Relevant Notes:
+ - [[temporal separation of capture and processing preserves context freshness]] — operationalizes the timing dimension: agent mediation only works if processing follows capture before Ebbinghaus decay erodes the context that makes interpretation possible
+ - [[the generation effect requires active transformation not just storage]] — the mechanism that makes Interpretationist quality real: agent processing must produce genuine generation (descriptions, connections, synthesis), not just reorganization, for the convergence to deliver on its promise
+ - [[does agent processing recover what fast capture loses]] — the critical tension: convergence delivers system-level quality but may sacrifice human encoding benefits, splitting who benefits from the generation effect
+ - [[capture the reaction to content not just the content itself]] — the middle path that preserves human generation within fast capture: prompting for reactions at capture time maintains encoding benefits without Interpretationist friction
+ - [[concept-orientation beats source-orientation for cross-domain connections]] — the extraction step that makes convergence productive: agent processing must extract concepts from accumulated sources, not just file them, or Accumulationist speed produces Accumulationist hoarding
+ - [[cognitive offloading is the architectural foundation for vault design]] — the theoretical foundation: frictionless capture is optimal for offloading economics, and agent processing addresses the generation gap that pure offloading creates
+ - [[incremental formalization happens through repeated touching of old notes]] — the temporal dimension of convergence: initial agent processing is the first formalization pass, subsequent traversals continue crystallizing what fast capture initially deposited as raw material
+ - [[schema templates reduce cognitive overhead at capture time]] — a practical convergence mechanism: templates give Accumulationist capture some Interpretationist structure without requiring full manual rewriting
+ - [[cognitive outsourcing risk in agent-operated systems]] — the deepest shadow side: convergence may produce not just encoding loss but skill atrophy, where humans lose the meta-cognitive ability to structure ideas after delegating all processing to agents
+ - [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — cognitive science grounding for Accumulationist speed: zero-friction capture is not just practical but necessary because uncaptured thoughts drain working memory via open loops
+ - [[PKM failure follows a predictable cycle]] — the documented cascade that convergence must prevent: without genuine agent processing, Accumulationist speed triggers Stage 1 (Collector's Fallacy) leading to eventual abandonment
+ - [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — the broader paradigm shift: the convergence is an instance of agents externalizing not just storage but attention allocation, deciding what deserves extraction and connection
+ - [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — industry validation: the dump-and-AI-organizes consensus across tools is the convergence happening at product scale, but most implementations stop at embedding-based filing rather than agent-curated synthesis
+ - [[storage versus thinking distinction determines which tool patterns apply]] — maps the divergence: Accumulationist/Interpretationist maps onto storage/thinking; agent mediation dissolves the divergence by enabling thinking-system quality with storage-system capture speed
+ - [[methodology traditions are named points in a shared configuration space not competing paradigms]] — the configuration view applied to capture philosophy: capture schools are positions on the processing-intensity dimension rather than competing paradigms, and agent mediation dissolves their apparent incompatibility by distributing configuration across actors
+
+ Topics:
+ - [[processing-workflows]]
+ - [[agent-cognition]]
package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md
@@ -0,0 +1,56 @@
+ ---
+ description: Fast loops (per-event hooks) catch instant violations, medium loops (per-session checks) catch accumulated drift, and slow loops (weekly-monthly audits) catch structural evolution — each timescale matched to the propagation rate of the problems it targets
+ kind: research
+ topics: ["[[maintenance-patterns]]"]
+ methodology: ["Systems Theory", "Original"]
+ source: [[automated-knowledge-maintenance-blueprint]]
+ ---
+
+ # three concurrent maintenance loops operate at different timescales to catch different classes of problems
+
+ A single maintenance schedule cannot serve a knowledge system because the problems it faces develop at fundamentally different rates. A schema violation exists the instant a malformed note is saved. Orphan notes accumulate over the course of a session as new notes arrive without MOC updates. Stale descriptions develop over weeks as understanding evolves and the surrounding graph changes. These are not the same problem at different severities — they are different classes of problem with different propagation characteristics, and addressing them requires different operational architectures running concurrently.
+
+ The three-loop architecture groups maintenance by timescale, giving each loop a distinct character:
+
+ **The fast loop** operates per-event, typically in sub-second response. Schema validation on every write, auto-commit after every file change, index invalidation when content changes. These operations are fully mechanical — they require zero judgment and produce deterministic results. Since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], the fast loop runs as infrastructure rather than instruction, firing regardless of agent attention state. The fast loop catches problems that cannot wait because downstream operations immediately consume the output. A malformed note that passes schema validation will be linked from MOCs, cited in other notes, and indexed for semantic search — each consuming the broken state before any scheduled check could catch it.
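+
+ The character of a fast-loop check fits in a few lines of shell. A hedged sketch, not the package's actual write-validate.sh; it assumes the platform passes the written file as the first argument and treats a nonzero exit as a blocked write:
+
+ ```bash
+ #!/usr/bin/env bash
+ # Sketch of a per-event schema check: deterministic, sub-second,
+ # zero judgment. "$1" is assumed to be the note just written.
+ file="$1"
+ # invariant 1: frontmatter must open on line 1
+ head -n 1 "$file" | grep -qx -- '---' ||
+   { echo "missing frontmatter: $file" >&2; exit 1; }
+ # invariant 2: a required field, as an example
+ grep -q '^kind:' "$file" ||
+   { echo "missing required field 'kind': $file" >&2; exit 1; }
+ exit 0
+ ```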
+
+ **The medium loop** operates per-session or per-day. Health dashboards at session start, qmd freshness checks before semantic search, orphan detection between processing batches. These operations combine mechanical detection with judgment-requiring remediation. Since [[session boundary hooks implement cognitive bookends for orientation and reflection]], the medium loop's detection side is already implemented — the session-start health check compares actual vault state against desired state and surfaces the delta. The medium loop catches problems that accumulate within or across a small number of sessions but do not propagate catastrophically. Since [[agent session boundaries create natural automation checkpoints that human-operated systems lack]], the discrete session architecture provides enforcement points that continuous human operation lacks — each boundary is an event where maintenance can fire automatically, which is why the medium loop is structurally feasible for agents in a way it is not for humans. An orphan note is equally orphaned whether you detect it after one minute or one hour, so per-event checking would waste the attention budget without catching problems faster.
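+
+ Detection at this tier is cheap enough to run at every session boundary. A sketch of an orphan query, assuming a flat vault of one-note-per-file markdown joined by [[wiki links]] and titles free of regex metacharacters:
+
+ ```bash
+ #!/usr/bin/env bash
+ # Sketch: list notes that no other note links to. This is the
+ # mechanical half; deciding whether to connect or archive each
+ # hit is the judgment half, left to the session.
+ for f in *.md; do
+   title="${f%.md}"
+   # inbound links from any file other than the note itself
+   if ! grep -l -F "[[$title" ./*.md | grep -qv "^\./$f\$"; then
+     echo "orphan: $f"
+   fi
+ done
+ ```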
+
+ **The slow loop** operates per-week or per-month. Full vault health audits, meta-cognitive review of accumulated observations and tensions, trend analysis across maintenance logs, structural review of MOC sizes and graph topology. These are high-judgment operations that require loading significant context and reasoning about patterns rather than checking individual items. The slow loop catches problems that develop as understanding evolves — since [[spaced repetition scheduling could optimize vault maintenance]], individual note review at maturity-adapted intervals operates within this timescale, where newly created notes get shorter review intervals and mature notes get longer ones. Since [[evolution observations provide actionable signals for system adaptation]], the slow loop's detection side has a concrete diagnostic protocol: six observation patterns (unused types, N/A-stuffed fields, emergent fields, navigation failure, unlinked output, oversized MOCs) each mapping to specific structural causes and prescribed responses. The slow loop also catches problems that no individual check can detect: gradual methodology drift, assumption invalidation, structural imbalances in the graph.
+
+ The key insight is that each loop has a different relationship between detection and remediation. In the fast loop, detection and remediation are identical — a schema check that finds a violation blocks the write, and the agent immediately fixes it. In the medium loop, detection is mechanical but remediation requires judgment — orphan detection is a simple graph query, but deciding whether to connect or archive an orphan requires understanding the note's role. In the slow loop, even detection requires judgment — determining whether a description has gone stale requires comparing the note's claims against current understanding, which is a semantic operation. Since [[confidence thresholds gate automated action between the mechanical and judgment zones]], each loop operates at a different point on the confidence spectrum: the fast loop auto-applies, the medium loop suggests, and the slow loop logs for review. This gradient maps directly onto the question of when to fix versus merely flag — since [[the fix-versus-report decision depends on determinism reversibility and accumulated trust]], the fast loop passes all four conditions trivially (deterministic, reversible, low-cost, trusted), the medium loop passes detection but not remediation conditions, and the slow loop fails even the determinism condition for detection itself. The four-condition framework operationalizes what it means for each loop to have a "different relationship between detection and remediation."
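+
+ In script form, that gradient is a dispatch on loop tier rather than on the check itself. Shape only; apply_fix is a hypothetical placeholder, not a real helper:
+
+ ```bash
+ # Sketch: one finding, three dispositions. The tier, not the
+ # check, decides whether a result is applied, suggested, or logged.
+ handle_finding() {
+   local tier="$1" finding="$2"
+   case "$tier" in
+     fast)   apply_fix "$finding" ;;        # deterministic: auto-apply
+     medium) echo "suggest: $finding" ;;    # detect now, judge later
+     slow)   echo "$(date +%F) $finding" >> observations.log ;;
+   esac
+ }
+ ```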
+
+ The loops are not nested versions of the same check but genuinely different operations. The fast loop cannot catch what the slow loop catches because stale descriptions do not violate any schema — they are valid at write time and become misleading only as context changes. The slow loop cannot replace the fast loop because a weekly audit that catches a schema violation means a week of downstream operations consumed broken state. Since [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]], all three loops implement the same reconciliation pattern — declare desired state, measure divergence, correct — but they differ in what "desired state" means, how divergence is measured, and how correction happens.
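+
+ The shared shape, stripped of any particular check (a sketch; measure_actual and correct_divergence are hypothetical stand-ins for whatever a given loop binds them to):
+
+ ```bash
+ # Sketch: the reconciliation pattern all three loops instantiate.
+ # Only the bindings differ: what "desired" means, how "actual" is
+ # measured, and what "correct" is allowed to touch.
+ reconcile() {
+   local desired="$1"
+   local actual
+   actual="$(measure_actual)"   # read-only, therefore always safe
+   [ "$actual" = "$desired" ] && return 0
+   correct_divergence "$desired" "$actual"
+ }
+ ```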
+
+ This architecture also explains why [[maintenance scheduling frequency should match consequence speed not detection capability]]: the constraint on each loop's frequency is the propagation rate of the problems it targets, not the cost of running the check. The fast loop runs per-event because schema violations propagate instantly. The medium loop runs per-session because orphan accumulation propagates at session scale. The slow loop runs per-month because description staleness propagates at the timescale of understanding evolution. Since [[automated detection is always safe because it only reads state while automated remediation risks content corruption]], running detection more frequently than necessary wastes tokens but cannot corrupt content. And since [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]], overlapping detection across loops is harmless — the fast loop and medium loop may both check for dangling links, but the redundant check produces identical results rather than compounding into errors.
26
+
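+ The idempotency property is directly testable. A toy example with a hypothetical `normalize` fix:
+ 
+ ```typescript
+ // Idempotency check: running a fix twice must equal running it once.
+ // `normalize` is a hypothetical fast-loop fix (trim trailing space, one newline).
+ const normalize = (s: string): string => s.trimEnd() + "\n";
+ 
+ const once = normalize("topics: [[maintenance-patterns]]  \n\n");
+ const twice = normalize(once);
+ console.assert(once === twice, "not idempotent: unsafe to run in overlapping loops");
+ ```
+ 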
+ The three-loop architecture relates to but differs from its sibling scheduling concepts. The consequence speed spectrum provides the theoretical principle — match frequency to propagation rate. Spaced repetition provides the note-level implementation within the slow loop — adapt review intervals to maturity. The three-loop architecture provides the organizational container — how many concurrent loops a system needs, what each loop's character should be, and how they compose without interference. Since [[gardening cycle implements tend prune fertilize operations]], the slow loop's remediation side maps to the three gardening operations: tend catches content that needs updating, prune catches notes that have overgrown, and fertilize catches connection gaps. But the detection that triggers these gardening operations happens at the medium loop's timescale (orphan detection, link density checks), while the judgment-heavy remediation happens in the slow loop's dedicated sessions. The detection and remediation for a single class of problem can span multiple loops.
+ 
+ The architecture also has a lifecycle dimension: checks within each loop can become obsolete. Since [[automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues]], each loop's checks need periodic evaluation — a fast-loop schema check that catches nothing for three months may indicate that upstream skills have structurally eliminated the problem, and keeping the check active wastes the attention budget the three-loop architecture was designed to protect. Retirement is the terminal tier of the scheduling spectrum: frequency zero.
+ 
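+ The retirement rule reduces to a comparison over tracked outcomes; a sketch assuming per-check statistics are recorded somewhere:
+ 
+ ```typescript
+ // Retirement rule: retire a check when false positives outnumber true positives,
+ // or when it has caught nothing over its evaluation window.
+ type CheckStats = { name: string; truePositives: number; falsePositives: number };
+ 
+ const shouldRetire = (s: CheckStats): boolean =>
+   s.falsePositives > s.truePositives || s.truePositives === 0;
+ 
+ console.log(shouldRetire({ name: "schema-check", truePositives: 0, falsePositives: 0 })); // true: catches nothing
+ console.log(shouldRetire({ name: "orphan-scan", truePositives: 9, falsePositives: 1 }));  // false: keep running
+ ```
+ 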
+ When any loop's detection or remediation fails silently, the failure evidence accumulates in atomic observation and tension notes rather than disappearing, because [[observation and tension logs function as dead-letter queues for failed automation]]. This matters especially for detection failures — if a medium-loop health check crashes, the system loses its self-monitoring capability without knowing it, and the dead-letter queue is what makes that failure visible to the slow loop's meta-cognitive review.
+ 
+ Since [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]], the three-loop architecture transfers across knowledge system deployments because each loop checks structural properties (schema compliance, link integrity, graph topology) rather than domain semantics. The practical consequence for agent-operated systems is that maintenance is not a single activity scheduled at a single frequency but a concurrent architecture where multiple timescales operate simultaneously. Agent sessions are cheap — they cost tokens, not human time — but context is expensive, because it costs attention quality. Scheduling maintenance into its own dedicated sessions at the appropriate timescale preserves the quality budget of productive sessions while ensuring each class of problem gets caught at the rate its propagation demands.
+ 
+ ---
+ 
+ Relevant Notes:
+ - [[maintenance scheduling frequency should match consequence speed not detection capability]] — provides the theoretical foundation: consequence speed determines detection frequency, and the three loops are the architectural embodiment of grouping problems by propagation rate into discrete operational tiers
+ - [[spaced repetition scheduling could optimize vault maintenance]] — operates WITHIN the slow loop: spaced repetition schedules individual note review based on maturity, but the three-loop architecture is the container that determines which timescale note-level scheduling belongs to
+ - [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — implements the fast loop: hooks fire on every event regardless of attention state, which is why the fast loop can guarantee schema compliance while the medium and slow loops cannot guarantee anything without explicit invocation
+ - [[session boundary hooks implement cognitive bookends for orientation and reflection]] — implements the medium loop: session-start health dashboards surface problems that accumulated since the last session, placing the detection at exactly the right timescale for session-scale consequences
+ - [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — the architectural pattern underlying all three loops: each loop is a reconciliation cycle that declares desired state and periodically measures divergence, differing only in frequency and scope
+ - [[automated detection is always safe because it only reads state while automated remediation risks content corruption]] — enables aggressive scheduling across all three loops: since detection only reads state, each loop can run its checks at whatever frequency consequence speed demands without risk of corruption
+ - [[gardening cycle implements tend prune fertilize operations]] — the slow loop's remediation actions: tend, prune, and fertilize are what happens when slow-loop detection finds structural drift, but the detection and remediation happen at different timescales within the slow loop itself
+ - [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]] — the safety property that makes overlapping loops harmless: when fast-loop and medium-loop checks cover the same territory, idempotency ensures redundant detection produces identical results rather than compounding errors
+ - [[confidence thresholds gate automated action between the mechanical and judgment zones]] — each loop has a different confidence profile: fast loops operate in the mechanical zone (schema checks are deterministic), medium loops straddle the boundary (orphan detection is mechanical but remediation requires judgment), slow loops operate primarily in the judgment zone (structural review needs semantic understanding)
+ - [[the fix-versus-report decision depends on determinism reversibility and accumulated trust]] — operationalizes the detection-remediation gradient: the four conditions (determinism, reversibility, low cost, accumulated trust) explain why the fast loop auto-fixes while the slow loop only logs; the framework maps the three loops onto the fix-versus-report spectrum
+ - [[agent session boundaries create natural automation checkpoints that human-operated systems lack]] — explains why the medium loop is structurally feasible: discrete session boundaries provide enforcement points where health checks fire automatically, making per-session maintenance guaranteed rather than aspirational
+ - [[evolution observations provide actionable signals for system adaptation]] — provides the slow loop's concrete diagnostic protocol: six observation patterns mapping symptoms to structural causes give the slow loop structured detection rather than open-ended pattern recognition
+ - [[automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues]] — the lifecycle endpoint for checks in any loop: retirement criteria determine when a check should stop running, completing the loop lifecycle from creation through scheduling to retirement
+ - [[observation and tension logs function as dead-letter queues for failed automation]] — the failure capture mechanism across all three loops: when detection or remediation fails silently, the dead-letter queue makes failures visible for slow-loop meta-cognitive review rather than allowing silent degradation
+ - [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]] — grounds transferability: the three-loop architecture transfers across deployments because each loop checks structural properties rather than domain semantics
+ 
+ Topics:
+ - [[maintenance-patterns]]
@@ -0,0 +1,58 @@
+ ---
+ description: Success in knowledge systems is measured by processing velocity from capture to synthesis, not by the size of the archive
+ kind: research
+ topics: ["[[processing-workflows]]"]
+ ---
+ 
+ # throughput matters more than accumulation
+ 
+ The fundamental mistake in knowledge management is measuring success by what you have instead of what flows through. PKM research names this the "Collector's Fallacy" — believing that saving information equals learning. A vault with 10,000 unprocessed notes is not ten times more valuable than one with 1,000 — it's potentially worse, because accumulation without synthesis creates a graveyard of good intentions. Since [[behavioral anti-patterns matter more than tool selection]], the Collector's Fallacy is behavioral, not tool-dependent — users migrate from app to app seeking features that will solve their throughput problems, but the behavior travels with them. Since [[structure without processing provides no value]], even sophisticated structure — flat folders, wiki links, MOCs — produces no benefit when the processing steps are skipped. The Lazy Cornell anti-pattern proves this experimentally: students who draw the structural lines but skip the cognitive work show no improvement over linear notes.
+ 
+ The insight comes from distinguishing stock from flow. Stock is static: the number of notes, the size of the archive, the breadth of coverage. Flow is dynamic: the rate at which raw captures become synthesized understanding, the velocity from inbox to integrated knowledge. Since [[ThreadMode to DocumentMode transformation is the core value creation step]], this velocity has a specific name from wiki collaboration theory — throughput measures how fast chronological thread captures become timeless synthesized documents, and the transformation itself is where value is created. But velocity has a companion metric: since [[insight accretion differs from productivity in knowledge systems]], high throughput of mechanical operations produces no value. Throughput measures speed; accretion measures depth. A session with high throughput but zero accretion has efficiently produced nothing meaningful.
+ 
+ When capture is easy (and with AI assistance, it's nearly frictionless), the constraint shifts entirely to processing. Anyone can clip articles, save highlights, dump voice notes. The differentiator is whether those captures ever transform into something usable. This is structurally predictable: since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], the bottleneck always concentrates at the process phase because capture, connection, and verification are structural operations while processing requires domain-specific semantic judgment. The skeleton predicts that ANY knowledge system will find its throughput constraint at the same point. And since [[LLM attention degrades as context fills]], throughput applies at session level too — chaining phases in one session means later phases run on degraded attention, so fresh context per task becomes part of the throughput discipline. But high velocity can be deceptive: since [[verbatim risk applies to agents too]], an agent can rapidly produce well-structured outputs that reorganize content without genuine insight. This creates the illusion of throughput while the processing is merely rearrangement. True throughput requires the velocity to be of genuine synthesis, not just formatted compression.
+ 
+ A 1:1 ratio of capture to synthesis means everything that enters gets processed. A growing gap between capture and synthesis means the system is failing regardless of how impressive the archive looks. Since [[PKM failure follows a predictable cycle]], this velocity gap is Stage 1 (Collector's Fallacy) — the first stage in a cascade that leads through under-processing, productivity porn, over-engineering, analysis paralysis, and orphan accumulation to eventual abandonment. Tracking the capture-to-synthesis ratio provides early warning before the cascade completes.
+ 
+ This suggests concrete practices:
+ 
+ **WIP limits on inbox.** When the inbox exceeds ~20 items, stop capturing and start processing. Accumulation is a signal to process, not to celebrate collection size. Age matters too: since [[temporal separation of capture and processing preserves context freshness]] and [[temporal processing priority creates age-based inbox urgency]], a 12-hour-old item demands attention before a 1-hour-old one, because context freshness decays exponentially: notes under 24 hours are standard priority, 24-72 hours elevated, and beyond 72 hours critical. [[continuous small-batch processing eliminates review dread]] tests the psychological mechanism: whether preventing backlog accumulation also prevents the dread that causes system abandonment.
+ 
+ **Track the ratio.** If captures outpace synthesis by 10:1, the system isn't scaling — it's drowning. The metric that matters is throughput, not volume. Since [[spreading activation models how agents should traverse]], frequently traversed nodes reveal what actually gets used, and since [[dangling links reveal which notes want to exist]], link frequency reveals what deserves processing. These demand signals provide objective throughput metrics. And since [[queries evolve during search so agents should checkpoint]], traversal includes reassessment points — which themselves consume processing bandwidth, making throughput efficiency even more critical.
+ 
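+ A sketch of these two practices together; the 20-item limit, the 10:1 warning, and the 24/72-hour tiers come from this note, while function and field names are illustrative:
+ 
+ ```typescript
+ // Inbox triage: flag WIP-limit breaches, compute the capture-to-synthesis
+ // ratio, and assign age-based urgency tiers to each captured item.
+ type Urgency = "standard" | "elevated" | "critical";
+ 
+ function urgency(capturedAt: Date, now: Date = new Date()): Urgency {
+   const hours = (now.getTime() - capturedAt.getTime()) / 3_600_000;
+   if (hours > 72) return "critical";   // context freshness mostly gone
+   if (hours >= 24) return "elevated";
+   return "standard";
+ }
+ 
+ function inboxSignals(captures: Date[], synthesizedCount: number) {
+   return {
+     overWipLimit: captures.length > 20, // stop capturing, start processing
+     captureToSynthesis: captures.length / Math.max(synthesizedCount, 1), // 10:1 = drowning
+     urgencies: captures.map((d) => urgency(d)),
+   };
+ }
+ ```
+ 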
+ **Just-in-time over just-in-case.** Since [[processing effort should follow retrieval demand]], heavy upfront processing wastes effort on content that may never be revisited. Process on demand, not on capture.
+ 
+ The counterargument is that some knowledge compounds over time and having a large archive enables serendipitous discovery. This is true but misleading. The archive's value comes from its processedness, not its size. A small, densely connected vault enables more serendipity than a large, unprocessed dump because connections create the discovery paths — since [[small-world topology requires hubs and dense local links]], it's the topology that enables serendipity, not raw volume. But density isn't the only path to serendipity: [[incremental reading enables cross-source connection finding]] proposes that interleaved processing creates serendipity through forced context collision, not just through connection density. The open question is whether the vault's reflect phase — which finds relationships between claim notes regardless of source — already captures cross-source connections, or whether sequential extraction causes some connections to never surface as claims in the first place. And since [[wiki links implement GraphRAG without the infrastructure]], each curated link is a deliberate API invocation, not a statistical correlation. Unprocessed dumps have no edges, only nodes. You cannot traverse an unconnected graph.
+ 
+ Flow thinking reframes what "productivity" means in knowledge work. It's not about how much you read, bookmark, or capture. It's about the transformation rate — how quickly raw inputs become integrated understanding that can be used, shared, or built upon. And since [[backward maintenance asks what would be different if written today]], throughput requires maintenance: without periodic reconsideration, even processed content becomes stale, effectively reverting to unprocessed accumulation. The question is selection: if maintenance attention follows the same power-law as link density, peripheral notes accumulate neglect. [[random note resurfacing prevents write-only memory]] tests whether random selection counteracts this bias, while [[spaced repetition scheduling could optimize vault maintenance]] tests whether interval-based scheduling — front-loading attention on recently created notes — allocates maintenance bandwidth more efficiently than uniform or random selection. The [[productivity porn risk in meta-system building]] experiment tests this principle at the system level: does building sophisticated infrastructure increase output velocity, or does it become accumulation disguised as throughput improvement? The discriminator is whether complexity growth correlates with output growth — if the system gets more sophisticated but publishes nothing more, building has become the new accumulation.
+ 
+ Since [[intermediate packets enable assembly over creation]], high-throughput systems naturally produce composable building blocks. Each session creates a packet that future work can assemble from. The packet production rate becomes an operational throughput metric: how many reusable outputs did this session produce? This connects flow thinking to concrete practice — sessions should end with archived packets, not just completed tasks.
+ ---
+ 
+ Relevant Notes:
+ - [[insight accretion differs from productivity in knowledge systems]] — the complementary quality metric: throughput measures speed, accretion measures depth; high throughput of mechanical operations produces no value
+ - [[processing effort should follow retrieval demand]] — the JIT processing principle that makes throughput sustainable
+ - [[descriptions are retrieval filters not summaries]] — enables fast filtering, improving processing velocity
+ - [[metadata reduces entropy enabling precision over recall]] — information-theoretic foundation for filtering: precision-first retrieval means fewer irrelevant notes pollute processing
+ - [[good descriptions layer heuristic then mechanism then implication]] — the structural formula that makes descriptions effective filters; better-structured descriptions improve filtering speed
+ - [[small-world topology requires hubs and dense local links]] — grounds the density over volume claim: topology creates discovery, not accumulation
+ - [[spreading activation models how agents should traverse]] — traversal frequency provides objective throughput metrics: what gets activated is what matters
+ - [[dangling links reveal which notes want to exist]] — demand signals reveal where processing investment pays off, operationalizing throughput
+ - [[wiki links implement GraphRAG without the infrastructure]] — unprocessed content has nodes but no edges; throughput creates the curated links that enable traversal
+ - [[queries evolve during search so agents should checkpoint]] — checkpointing adds processing overhead, making throughput efficiency even more critical for complex searches
+ - [[intermediate packets enable assembly over creation]] — packets are what high-throughput systems produce: session outputs structured for reuse, enabling assembly over creation
+ - [[each new note compounds value by creating traversal paths]] — WHY throughput matters: each synthesis creates new paths that increase the value of all existing notes
+ - [[LLM attention degrades as context fills]] — grounds why throughput applies at session level: chained phases run on degraded attention
+ - [[fresh context per task preserves quality better than chaining phases]] — the design decision that operationalizes session-level throughput: fresh context per task
+ - [[random note resurfacing prevents write-only memory]] — experiments with random selection to prevent the accumulation-without-revisiting failure mode this note warns against
+ - [[spaced repetition scheduling could optimize vault maintenance]] — experiments with interval-based scheduling as alternative maintenance allocation; tests whether front-loading attention preserves throughput better than uniform review
+ - [[incremental reading enables cross-source connection finding]] — tests the serendipity claim: density creates discovery through topology, but interleaved processing may create additional serendipity through forced context collision that sequential extraction misses
+ - [[structure without processing provides no value]] — the Lazy Cornell proof: structural affordances without processing operations produce no measurable benefit, explaining WHY throughput (processing) matters more than accumulation (structure)
+ - [[PKM failure follows a predictable cycle]] — tests whether the Collector's Fallacy (the failure mode this note warns against) predicts a 7-stage cascade; if validated, throughput metrics become early-warning indicators for system failure
+ - [[verbatim risk applies to agents too]] — adds quality dimension to velocity: high throughput of verbatim-style outputs is not genuine processing, so the experiment tests whether agents produce processing illusions alongside real synthesis
+ - [[ThreadMode to DocumentMode transformation is the core value creation step]] — names what throughput actually produces: the transformation from chronological thread captures into timeless synthesized documents; throughput measures the velocity of this transformation
+ - [[storage versus thinking distinction determines which tool patterns apply]] — scope qualifier: throughput is specifically a thinking-system metric; storage systems legitimately optimize for accumulation because their purpose IS the archive, making the Collector's Fallacy a thinking-system criticism that does not apply to storage contexts
+ - [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — structural prediction: the bottleneck always concentrates at the process phase because capture, connection, and verification are structural operations while processing requires domain-specific semantic judgment
+ 
+ Topics:
+ - [[processing-workflows]]
@@ -0,0 +1,50 @@
+ ---
+ description: when note titles are complete claims rather than topics, traversing wiki links reads like prose and following paths becomes reasoning — the file tree becomes a scan of arguments
+ kind: research
+ topics: ["[[note-design]]", "[[graph-structure]]"]
+ source: [[2026-01-25-build-claude-a-tool-for-thought]]
+ ---
+ 
+ # title as claim enables traversal as reasoning
+ 
+ don't name notes like topics ("thoughts on memory"). name them like claims ("structure enables navigation without reading everything").
+ 
+ when you link to a claim-titled note, the link becomes part of your argument:
+ 
+ > "because [[structure enables navigation without reading everything]], we invest in wiki links even though they have maintenance overhead"
+ 
+ the title IS the reasoning. traversal IS thinking. since [[note titles should function as APIs enabling sentence transclusion]], the title functions as a typed signature — you know what you're getting before you load the full note. a topic label like "memory notes" is an undocumented function; a claim like "structure enables navigation" tells you the return value.
+ 
+ this works because [[inline links carry richer relationship data than metadata fields]]. the prose surrounding a link encodes WHY the linked note matters here — "because `[[X]]`" is a causal claim, "since `[[Y]]`" is a foundation claim, "but `[[Z]]`" is a tension. claim-as-title makes these constructions possible, because topic labels don't compose grammatically. you can write "since [[claims must be specific enough to be wrong]]" but not "since `[[specificity notes]]`." and because [[propositional link semantics transform wiki links from associative to reasoned]], these informal relationship signals (since, because, but) could be standardized into a constrained vocabulary (causes, enables, contradicts, extends) — making the reasoning chains not just readable prose but machine-parseable argument structure.
+ 
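+ a minimal sketch of what that standardization could look like; the cue-to-relation mapping is illustrative, not a committed vocabulary:
+ 
+ ```typescript
+ // parse informal relationship cues around wiki links into a constrained vocabulary
+ const CUES: Record<string, string> = {
+   because: "causes",   // causal claim
+   since: "enables",    // foundation claim
+   but: "contradicts",  // tension
+ };
+ 
+ function typedLinks(prose: string): { target: string; relation: string }[] {
+   const re = /\b(because|since|but)\s+\[\[([^\]]+)\]\]/gi;
+   return [...prose.matchAll(re)].map((m) => ({
+     target: m[2],
+     relation: CUES[m[1].toLowerCase()],
+   }));
+ }
+ 
+ console.log(typedLinks("because [[structure enables navigation without reading everything]], we invest in wiki links"));
+ // → [{ target: "structure enables navigation without reading everything", relation: "causes" }]
+ ```
+ 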
+ **practical benefits:**
+ - scanning file tree = scanning arguments
+ - following links = following reasoning chains
+ - the vault becomes readable without opening files
+ 
+ since [[progressive disclosure means reading right not reading less]], this matters for context window management. the first disclosure layer is titles. if titles are claims, agents can curate what to load based on what each note argues. if titles are topics, agents must load notes to discover what they argue — the disclosure layer fails.
+ 
+ **contrast with topic titles:**
+ - "memory notes" tells you nothing
+ - "structure enables navigation" tells you the claim
+ - one is a folder, the other is a thought
+ 
+ over time, since [[implicit knowledge emerges from traversal]], the traversal paths themselves build argumentative intuition. an agent that repeatedly follows the chain from "title as claim" through "specificity" to "composability" internalizes the reasoning pattern, not just the individual facts. since [[IBIS framework maps claim-based architecture to structured argumentation]], this has a formal name: the vault is a discourse graph where claim-titled notes are Positions, wiki links are Arguments connecting them, and traversal follows argumentation chains. the informal insight that "following links reads as reasoning" turns out to map precisely onto Rittel's Issue-Position-Argument structure — claim-as-title is what makes the notes function as genuine Positions rather than vague topic gestures.
+ 
+ but there is a shadow side. since [[vault conventions may impose hidden rigidity on thinking]], the claim-as-title pattern demands that insights resolve into sentence form. when reformulation feels forced, the question is whether the insight isn't ready or the format can't accommodate it. not every idea decomposes into a single declarative sentence — some are relational, procedural, or emergent. the pattern works best for ideas that ARE claims; it may distort ideas that aren't.
+ ---
+ 
+ Relevant Notes:
+ - [[note titles should function as APIs enabling sentence transclusion]] — formalizes this insight: titles as function signatures, bodies as implementation, links as function calls
+ - [[claims must be specific enough to be wrong]] — the specificity requirement that makes claim-as-title work; vague titles degenerate into topic labels that carry no information when linked
+ - [[inline links carry richer relationship data than metadata fields]] — explains why traversal-as-reasoning works: prose context around links encodes relationship type, making each link a typed reasoning step
+ - [[structure enables navigation without reading everything]] — the navigability claim this pattern serves; claim titles are what make navigation-without-reading possible
+ - [[implicit knowledge emerges from traversal]] — repeated path traversal builds intuition, and claim-titled paths make the intuition argumentative rather than merely associative
+ - [[progressive disclosure means reading right not reading less]] — claim titles enable the first disclosure layer: know what a note argues before deciding to load it
+ - [[vault conventions may impose hidden rigidity on thinking]] — the shadow side: forcing insights into claim-as-title form may distort genuinely non-linear or relational ideas
+ - [[propositional link semantics transform wiki links from associative to reasoned]] — extends: the informal encoding this note describes (since/because/but as relationship signals) could be standardized into machine-parseable relationship types, making traversal-as-reasoning not just readable but queryable
+ - [[IBIS framework maps claim-based architecture to structured argumentation]] — formalizes: claim-titled notes are Positions in Rittel's discourse framework, and traversing wiki links between them constitutes following argumentation chains — giving this note's informal insight a formal theoretical grounding
+ 
+ Topics:
+ - [[note-design]]
+ - [[graph-structure]]
@@ -0,0 +1,52 @@
+ ---
+ description: The garden vs stream distinction from digital gardening theory grounds why vaults use topic MOCs and wiki links rather than date-based folders
+ kind: research
+ topics: ["[[graph-structure]]"]
+ methodology: ["Evergreen"]
+ source: TFT research corpus (00_inbox/heinrich/)
+ ---
+ 
+ # topological organization beats temporal for knowledge work
+ 
+ Knowledge belongs in networks, not timelines. This is the theoretical foundation for why the system uses a flat structure with wiki links and topic MOCs rather than date-based folders or chronological filing.
+ 
+ Maggie Appleton's "Topography over Timelines" principle articulates a 25-year-old distinction in knowledge work theory. Mike Caulfield (2015) and, before him, Mark Bernstein (1998) contrasted two fundamentally different orientations:
+ 
+ **The Stream** — Time-ordered, ephemeral, recency-dominant. Blogs, Twitter feeds, daily journals. Content is understood by *when* it appeared. New items push old items down. What matters is what's recent. The organizing principle is the calendar.
+ 
+ **The Garden** — Topological, timeless, integrative. Wikis, zettelkastens, knowledge graphs. Content is understood by *what it connects to*. Old ideas interweave with new ideas. What matters is how things relate. The organizing principle is the concept.
+ 
+ The stream works for communication. When you're publishing or sharing, recency signals relevance. Readers want to know what's new. But the stream fails for thinking. Ideas don't have timestamps that matter. A good idea from last year is just as useful as one from today — more useful if it's been tested and connected. Organizing by date buries good thinking under chronological sediment.
+ 
+ The garden works for understanding. By organizing topologically — clustering related concepts regardless of when they emerged — you can traverse by meaning rather than by recency. The question shifts from "what did I think about last Tuesday" to "what do I know about X." This is how understanding actually works: not as a timeline but as a web. But topological organization alone doesn't guarantee navigability — since [[navigational vertigo emerges in pure association systems without local hierarchy]], a purely associative garden still needs MOCs to provide local landmarks, or semantic neighbors remain unreachable.
+ 
+ For agent-operated vaults, this distinction matters operationally. When an agent traverses the system looking for relevant context, date-based organization forces it to scan chronologically — loading "January notes" then "February notes" — with no semantic guidance. Topological organization lets it load "notes about knowledge management" directly. The structure matches how agents (and humans) actually seek understanding.
+ 
+ But topological organization alone doesn't guarantee efficient navigation. Since [[small-world topology requires hubs and dense local links]], the power-law distribution where MOCs have many links (~90) and atomic notes have few (3-6) creates the short paths that make topological organization practical. Without this distribution — if every note had uniform connectivity — path lengths would grow with vault size, making large gardens untraversable. The garden metaphor implies cultivated structure: hubs as central clearings, atomic notes as local clusters, wiki links as paths between them.
+ 
+ The system embodies this choice. The thinking folder is flat. Notes live as `[claim as sentence].md`, not `2026-01-30/notes.md`. Topic MOCs (like `knowledge-work.md`) provide entry points into concept clusters. Wiki links create the edges that let you traverse by meaning. There are no date folders because dates don't matter for understanding — only relationships matter. And since [[type field enables structured queries without folder hierarchies]], category-based organization (finding all methodology notes, all tensions, all synthesis) happens through metadata queries rather than folder structure — the flat architecture loses nothing while gaining the flexibility that single-folder-membership restrictions would deny.
+ 
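+ A sketch of such a metadata query over a flat folder; the directory name and the queried `kind` value are illustrative:
+ 
+ ```typescript
+ // Category query over a flat folder via frontmatter instead of folder hierarchy.
+ import { readdirSync, readFileSync } from "node:fs";
+ import { join } from "node:path";
+ 
+ function notesOfKind(dir: string, kind: string): string[] {
+   return readdirSync(dir)
+     .filter((file) => file.endsWith(".md"))
+     .filter((file) => {
+       const text = readFileSync(join(dir, file), "utf8");
+       const fm = text.match(/^---\n([\s\S]*?)\n---/); // frontmatter block
+       return fm !== null && new RegExp(`^kind:\\s*${kind}\\s*$`, "m").test(fm[1]);
+     });
+ }
+ 
+ // notesOfKind("thinking", "research") finds every research note without any
+ // date folder ever being scanned.
+ ```
+ 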
+ The garden-vs-stream distinction operates at the media format level too, not just the organizational level. Since [[temporal media must convert to spatial text for agent traversal]], audio and video are streams not just metaphorically but literally — time-locked sequences where reaching a specific point requires either knowing the timestamp or scanning linearly. They must convert to spatial text (markdown transcripts) before they can participate in topological organization at all. The garden metaphor applies to format choice as much as to filing structure: text is the garden, temporal media is the stream.
+ 
+ The garden-vs-stream distinction also has a page-level twin. Since [[ThreadMode to DocumentMode transformation is the core value creation step]], individual wiki pages undergo the same transformation: ThreadMode content (chronological thread contributions organized by when) becomes DocumentMode content (timeless synthesis organized by what it means). Garden/stream is the system architecture; ThreadMode/DocumentMode is the page-level transformation that populates the garden with genuinely topological content rather than temporally organized threads dressed in wiki formatting.
+ 
+ This is a closed design decision. The theoretical foundation is established. The implementation is committed. We use gardens for thinking.
+ ---
+ 
+ Relevant Notes:
+ - [[wiki links implement GraphRAG without the infrastructure]] — wiki links are the mechanism that enables topological organization
+ - [[wiki links are the digital evolution of analog indexing]] — the topological pattern has 70+ year validation: Cornell cue columns indexed by concept, not chronology
+ - [[concept-orientation beats source-orientation for cross-domain connections]] — extends this principle to extraction: within the garden, organize by concept not by source, enabling cross-domain edges
+ - [[spreading activation models how agents should traverse]] — traversal works by semantic connections, which topological organization provides
+ - [[each new note compounds value by creating traversal paths]] — compounding requires connection-based structure, not temporal sequence
+ - [[retrieval utility should drive design over capture completeness]] — the foundational design principle: topological organization is retrieval-first architecture applied to structure
+ - [[type field enables structured queries without folder hierarchies]] — shows how flat architecture doesn't sacrifice category-based queries: type metadata provides query dimensions without folder constraints
+ - [[navigational vertigo emerges in pure association systems without local hierarchy]] — the failure mode: topological organization solves the stream problem but creates its own navigability problem when MOCs are absent
+ - [[small-world topology requires hubs and dense local links]] — structural requirements: topological organization needs power-law distribution (MOC hubs with many links, atomic notes with few) to enable short paths between any concepts
+ - [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — formal justification: Ranganathan's framework explains why temporal filing impoverishes retrieval by collapsing multiple independent classification dimensions into a single axis (date)
+ - [[temporal media must convert to spatial text for agent traversal]] — extends the garden-vs-stream distinction to media FORMAT: audio and video are streams not just metaphorically but literally, and must convert to spatial text before they can participate in topological organization
+ - [[ThreadMode to DocumentMode transformation is the core value creation step]] — the page-level twin: garden/stream is the system-level architecture, ThreadMode/DocumentMode is the page-level transformation; both articulate the same insight at different scales
+ - [[storage versus thinking distinction determines which tool patterns apply]] — the garden-vs-stream distinction is the structural expression of the storage/thinking split: storage systems tolerate temporal filing while thinking systems require topological organization because 'how does this relate?' demands concept-based traversal
+ 
+ Topics:
+ - [[graph-structure]]