arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,91 @@
+ ---
+ description: The prose surrounding a wiki link captures WHY two notes connect, not just THAT they connect — relationship context that frontmatter fields cannot encode
+ kind: research
+ topics: ["[[graph-structure]]"]
+ ---
+
+ # inline links carry richer relationship data than metadata fields
+
+ Dublin Core's fifteen metadata elements include "Relation" as a core field. Traditional knowledge management would encode note relationships as frontmatter:
+
+ ```yaml
+ related_to: [note-a, note-b, note-c]
+ ```
+
+ But this encoding is informationally impoverished. You know THAT notes connect, but not WHY or HOW.
+
+ ## Prose captures relationship context
+
+ When you write `since [[spreading activation models how agents should traverse]], the traversal pattern becomes clear`, the relationship is embedded in argument. Since [[spreading activation models how agents should traverse]] develops the "notes as APIs" metaphor where following links is function invocation, inline links extend this: they are TYPED function invocations where prose provides the type annotation. The reader understands:
+ - The direction of dependence (this claim builds on that one)
+ - The nature of the connection (mechanism explanation)
+ - Why the link matters here (it clarifies traversal)
+
+ Compare to `related_to: ["spreading activation models how agents should traverse"]`. This tells you nothing except that some relationship exists. You'd have to read both notes to understand what the relationship IS.
+
+ ## Multiple links to the same note encode different relationships
+
+ A single note might link to [[wiki links implement GraphRAG without the infrastructure]] from multiple locations:
+
+ - "Because [[wiki links implement GraphRAG without the infrastructure]], we don't need entity extraction" — encoding a consequence relationship
+ - "This extends [[wiki links implement GraphRAG without the infrastructure]] by explaining the information density advantage" — encoding an extension relationship
+
+ A `related_to` field can only list the target once. Inline links can invoke the same note with different relationship semantics at different points in the argument.
+
+ ## The API analogy makes this clear
+
+ Since [[note titles should function as APIs enabling sentence transclusion]], notes function as APIs — title as function signature, body as implementation, wiki links as function calls. Inline links extend this: they are typed function calls. The surrounding prose is the type annotation: it tells you what the function returns in this context.
+
+ ```
+ // Metadata approach: untyped reference
+ related_to: [note_a]
+
+ // Inline approach: typed invocation
+ since [[note_a]], we can conclude X
+ this contradicts [[note_a]] because Y
+ this extends [[note_a]] by adding Z
+ ```
+
+ The first tells you a reference exists. The others tell you what the reference MEANS.
+
+ This connects to how [[claims must be specific enough to be wrong]] — vague claims fail because nobody can engage with them. Similarly, vague links (untyped references) fail because traversers can't decide whether to follow them. Just as specificity makes titles work as function signatures that can be reliably invoked, typed inline links make connections work as meaningful traversal decisions. The type annotation answers "why should I follow this?"
+
+ ## Why inline links are preferred over footer links
+
+ The preference for inline links with context over footer links without it isn't aesthetic. It's informational. A footer that says `[[note]] — related` loses almost all relationship data. A footer that says `[[note]] — provides the theoretical foundation this note applies` preserves it.
+
+ But inline links embedded in the argument are even richer, because the relationship IS the argument. The connection isn't annotated externally — it's woven into the reasoning itself. This richness has a cognitive explanation: since [[elaborative encoding is the quality gate for new notes]], the prose around a wiki link is where elaborative encoding happens — the author must actively relate the new claim to existing knowledge, and that relating is the cognitive act that creates durable understanding. A footer that says `[[note]] — related` skips the elaboration entirely. A footer that says `[[note]] — provides the theoretical foundation this note applies` forces the author to process HOW the notes connect, and that processing is what transforms a structural filing operation into a cognitive one.
+
+ ## The constraint: link quality matters
+
+ This power comes with responsibility. Since inline links carry relationship semantics, bad inline links pollute the reasoning. A link that says "since `[[X]]`, therefore Y" better be logically valid. The prose makes the relationship claim explicit, which means it can be challenged. Since [[mnemonic medium embeds verification into navigation]], these typed link contexts could function as verification prompts — when an agent traverses `since [[X]]`, it can test whether X actually supports the "since" relationship. The relationship annotations that make links meaningful also make them testable.
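How such a check could be surfaced is straightforward to sketch. The helper below is illustrative and not part of the package: it assumes a flat folder of plain Markdown notes, and the relation cues it recognizes (since, because, this extends, this contradicts) are examples rather than an official vocabulary. It extracts each typed link context as a (relation, target, sentence) tuple that an agent could treat as a verification prompt.

```python
import re
from pathlib import Path

# Relation cues to look for; illustrative examples, not an official vocabulary.
RELATION_CUES = ("since", "because", "this extends", "this contradicts")
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def typed_link_contexts(vault: Path):
    """Yield (relation, target, sentence) for every inline wiki link whose
    sentence opens with a known relation cue. Each tuple is a checkable claim:
    does the target note actually support the stated relation?"""
    for note in vault.glob("*.md"):
        text = note.read_text(encoding="utf-8")
        # Crude sentence split; good enough to surface verification prompts.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            stripped = sentence.lstrip("+-# ").lower()
            relation = next((cue for cue in RELATION_CUES if stripped.startswith(cue)), None)
            if relation:
                for target in WIKI_LINK.findall(sentence):
                    yield relation, target, sentence.strip()

# Example usage against a hypothetical notes folder.
for relation, target, sentence in typed_link_contexts(Path("methodology")):
    print(f"{relation} -> [[{target}]]")
```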
+
+ Frontmatter relations hide behind vagueness. Inline links commit to specific claims about how ideas connect.
+
+ But committing to specific claims about connections has a maintenance cost. Because [[tag rot applies to wiki links because titles serve as both identifier and display text]], the prose surrounding each link encodes a grammatical dependency on the target's exact title. When a target note's title sharpens — the natural consequence of incremental formalization — every inline invocation must be re-evaluated: does the sentence still read naturally with the new title? Richer relationship encoding means richer breakage when titles change. A `related_to` field only needs the filename updated; an inline link like `since [[old title]], the pattern becomes clear` may need the entire sentence rewritten. The same richness that makes inline links informationally superior makes them more fragile under rename.
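To make the contrast concrete: updating a `related_to` field is a mechanical find-and-replace, while renaming a note that is invoked inline means re-reading every sentence built around the old title. A rough sketch of surfacing those sentences for review, again hypothetical and assuming flat Markdown notes:

```python
import re
from pathlib import Path

def sentences_to_review(vault: Path, old_title: str):
    """List every sentence that invokes [[old_title]] inline. A frontmatter-only
    reference would only need find-and-replace; these sentences may need to be
    re-read, because the prose is grammatically built around the old title."""
    needle = re.compile(re.escape(f"[[{old_title}]]"))
    for note in vault.glob("*.md"):
        for sentence in re.split(r"(?<=[.!?])\s+", note.read_text(encoding="utf-8")):
            if needle.search(sentence):
                yield note.name, sentence.strip()

# Example usage with a placeholder title.
for name, sentence in sentences_to_review(Path("methodology"), "old title"):
    print(f"{name}: {sentence}")
```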
+
+ ## Link quality at network hubs
+
+ The value of typed inline links concentrates at hubs. Since [[small-world topology requires hubs and dense local links]], MOCs serve as high-traffic nodes where many traversals pass through. A MOC with 90 links is visited far more often than an atomic note with 4 links. This means the relationship context in MOC links — the phrases that explain WHY each note belongs there — determines navigation quality across the entire network. A MOC with bare `[[note]]` references forces agents to load notes speculatively. A MOC with typed inline links like "[[note]] — extends the compounding argument by adding temporal dynamics" lets agents judge relevance before loading.
+
+ The hub effect multiplies the payoff of link quality. Better link typing at MOCs improves not just one traversal but every traversal that passes through that hub. This is why MOC maintenance matters disproportionately: it's not just organizing one document, it's improving the major intersections where navigation decisions happen.
+
+ This clarifies the relationship with [[metadata reduces entropy enabling precision over recall]]. Metadata IS valuable for filtering — type, status, descriptions all pre-compute decision-relevant features that shrink the search space. Since [[type field enables structured queries without folder hierarchies]], filtering metadata provides a query axis orthogonal to wiki link topology: "find all tension notes" or "find all methodology notes about X." But relationship metadata (`related_to: [note-a, note-b]`) operates at the wrong level: it tells you relationships EXIST without encoding WHAT KIND. The entropy reduction from knowing "these are related" is minimal compared to knowing "this extends that by adding X." Inline links encode the semantics that make traversal decisions meaningful — type metadata filters WHAT, inline links explain WHY.
+ ---
+
+ Relevant Notes:
+ - [[note titles should function as APIs enabling sentence transclusion]] — foundational: establishes that titles are function signatures; this note extends the pattern by showing inline links are TYPED function calls with prose as type annotation
+ - [[wiki links implement GraphRAG without the infrastructure]] — explains how wiki links enable multi-hop reasoning; this note explains why inline links are informationally richer than alternatives
+ - [[wiki links are the digital evolution of analog indexing]] — the digital evolution from Cornell cue columns enables not just wider scope but richer relationship encoding through prose context
+ - [[spreading activation models how agents should traverse]] — develops the notes as APIs metaphor where following links is function invocation; this note extends it: inline links are TYPED function calls
+ - [[claims must be specific enough to be wrong]] — applies the same quality criterion to relationships: just as vague claims fail engagement, vague links fail traversal decisions
+ - [[metadata reduces entropy enabling precision over recall]] — clarifies scope: metadata reduces entropy for TYPE/STATUS/DESCRIPTION filtering, but relationship metadata loses the semantics inline links encode
+ - [[descriptions are retrieval filters not summaries]] — parallel anti-pattern: descriptions that paraphrase add nothing, just as related_to fields that merely list targets add nothing
+ - [[type field enables structured queries without folder hierarchies]] — clarifies which metadata IS valuable: type/status fields provide filtering dimensions (WHAT kind), while relationship metadata (WHY connected) belongs in prose
+ - [[small-world topology requires hubs and dense local links]] — typed links matter most at hubs: MOCs concentrate traversal, so the relationship context in hub links determines navigation quality across the network
+ - [[tag rot applies to wiki links because titles serve as both identifier and display text]] — the maintenance cost of richness: inline links embed grammatical dependencies on exact title phrasing, making rename cascades require prose re-authoring rather than simple find-and-replace
+ - [[elaborative encoding is the quality gate for new notes]] — cognitive science grounding: the prose around inline links IS elaborative encoding, the mechanism that creates durable understanding through connecting new knowledge to existing knowledge
+ - [[IBIS framework maps claim-based architecture to structured argumentation]] — formal naming: what this note calls 'typed function calls with prose as type annotation' are Arguments in IBIS terms — evidence connecting Positions through reasoned relationships; the formalization adds that inline contexts without genuine reasoning are non-arguments that fail both the elaboration test and the discourse completeness test
+
+ Topics:
+ - [[graph-structure]]
@@ -0,0 +1,41 @@
+ ---
+ description: Evergreen systems optimize for depth of understanding over efficiency of output, which means the vault exists for thinking rather than task completion
+ kind: research
+ topics: ["[[processing-workflows]]"]
+ methodology: ["Evergreen"]
+ source: "[[tft-research-part2]]"
+ ---
+
+ # insight accretion differs from productivity in knowledge systems
+
+ Matuschak draws a distinction that reshapes how we should evaluate knowledge systems. "Insight accretion" — the gradual deepening of understanding — differs fundamentally from "productivity" — the efficiency of output. Traditional tools optimize for productivity: how quickly can you capture, organize, file, and retrieve? Evergreen systems optimize for accretion: how much does each interaction with a note deepen your grasp of the idea?
+
+ The distinction matters because the metrics pull in different directions. Productivity metrics reward speed and volume: notes created per session, items processed per hour, capture latency. Accretion metrics reward depth: did this session change how you think about something? Did the note become richer, more connected, more nuanced? Since [[storage versus thinking distinction determines which tool patterns apply]], the metric choice is not arbitrary — storage systems (PARA, Johnny.Decimal) legitimately optimize for productivity because their purpose is filing and retrieval, while thinking systems (Zettelkasten, ACCESS/ACE) must optimize for accretion because their purpose is synthesis. You can be highly productive — efficiently processing inputs into organized outputs — while accreting nothing because the processing was mechanical rather than generative. Since [[enforcing atomicity can create paralysis when ideas resist decomposition]], "note gymnastics" names exactly this pattern: effort spent splitting, managing fragments, and fitting insights into atomic form is productivity (structural work) without accretion (deepening understanding) when the decomposition friction isn't revealing incomplete thinking but rather forcing relational ideas into forms they resist. Since [[the generation effect requires active transformation not just storage]], the test is whether the operation produces something that didn't exist in the source — a new description, an articulated connection, a synthesized claim. Rearrangement creates no accretion regardless of how efficiently it's performed.
+
+ This validates a core design choice: the vault exists for thinking, not task management. When we say "quality over speed" and "tokens are free," we're choosing accretion over productivity. A session that produces one deeply-understood note has succeeded where a session that processes twenty items superficially has failed — even though the productivity metrics favor the latter.
+
+ The software industry critique embedded here is sharp. Tools compete on productivity features: faster capture, better organization, more integrations. But these features optimize for the wrong thing. They make collection and filing more efficient while doing nothing for the actual cognitive work of understanding. The result is ever-larger archives of shallowly-processed content — high productivity, zero accretion. This critique has a meta-level analog: since [[productivity porn risk in meta-system building]], building sophisticated infrastructure can itself become the new accumulation, producing complexity growth without output growth. The discriminator is always whether the work deepens understanding or merely reorganizes existing content.
+
+ For agent-operated systems, this creates a design question: what does accretion look like when an agent does the processing? Since [[throughput matters more than accumulation]], we already reject pure collection in favor of processing velocity. But throughput is still a productivity metric — it measures flow rate, not depth. The deeper question is whether agent processing creates genuine accretion or merely the appearance of it. An agent can rapidly transform raw captures into well-structured notes (high throughput) while producing only reorganization, not insight (zero accretion).
+
+ The answer lies in what the processing produces. Accretion happens when notes become richer over time: new connections surface, descriptions sharpen, the claim itself evolves as understanding deepens. Since [[structure without processing provides no value]], the Lazy Cornell pattern proves that structural motions without cognitive engagement produce nothing. The same applies to agents: structural transformations (formatting, filing, linking by keyword similarity) are productivity work, not accretion work. Accretion requires the harder operations: evaluating whether a connection is genuine, articulating why two ideas relate, challenging a claim against new evidence, splitting a note because the original bundled distinct insights.
+
+ The practical implication: design agent workflows to force accretion, not just throughput. The reflect phase should require articulating relationships, not just detecting them. The reweave phase should ask "what would be different if written today?" not just "what new links exist?" The recite phase should test whether understanding transfers, not just whether keywords match. Each phase should produce deeper understanding, not faster filing.
+
+ Reflection sessions are the purest expression of accretion-oriented work. Since [[reflection synthesizes existing notes into new insight]], sessions that read existing notes rather than process new input — with no output pressure, no draft, no production goal — produce emergent synthesis through cross-note pattern recognition. The conditions that made reflection work (fresh context, no output orientation, willingness to be surprised) are precisely the conditions that optimize for accretion over productivity. A productivity metric would score a reflection session poorly: one new note from hours of reading. An accretion metric recognizes it as perhaps the highest-quality work the vault can do: a synthesis claim that emerged from deep engagement with existing thinking.
+
+ One measurable proxy for accretion is cross-MOC membership. Since [[cross-links between MOC territories indicate creative leaps and integration depth]], notes that appear in multiple distant MOCs demonstrate genuine integration — the author understood two domains well enough to see where they connect. A session that produces cross-MOC notes has achieved integration across domains (high accretion). A session that produces single-MOC notes may have organized efficiently within a silo (productivity without the creative leaps that indicate deep understanding).
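A rough way to compute that proxy, assuming (as in the notes shown here) that MOC membership is recorded on a `topics:` frontmatter line; the folder name and threshold are illustrative:

```python
import re
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def cross_moc_notes(vault: Path, min_mocs: int = 2):
    """Report notes whose topics line names two or more MOCs: a crude proxy for
    cross-domain integration, as opposed to single-silo filing."""
    for note in vault.glob("*.md"):
        for line in note.read_text(encoding="utf-8").splitlines():
            if line.startswith("topics:"):
                mocs = WIKI_LINK.findall(line)
                if len(mocs) >= min_mocs:
                    yield note.name, mocs
                break  # only the frontmatter topics line matters

for name, mocs in cross_moc_notes(Path("methodology")):
    print(f"{name}: {', '.join(mocs)}")
```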
+ ---
+
+ Relevant Notes:
+ - [[throughput matters more than accumulation]] — related but distinct: throughput addresses velocity vs collection size, while this addresses depth vs efficiency
+ - [[structure without processing provides no value]] — failure mode when productivity metrics reward structural motions over genuine insight work
+ - [[productivity porn risk in meta-system building]] — the meta-level manifestation: building infrastructure feels productive but may produce zero accretion if complexity grows without output growth
+ - [[the generation effect requires active transformation not just storage]] — the mechanism underlying accretion: generation creates cognitive hooks that mere rearrangement cannot, explaining what makes processing genuine vs mere housekeeping
+ - [[enforcing atomicity can create paralysis when ideas resist decomposition]] — concrete example: note gymnastics describes productivity (structural splitting, fragment management) without accretion (deepening understanding) when atomization friction is format resistance rather than incomplete thinking
+ - [[writing for audience blocks authentic creation]] — another accretion failure mode: performative writing optimizes for presentation quality (how the note reads to others) over depth of understanding (what the note helps the writer grasp), producing polish without insight
+ - [[storage versus thinking distinction determines which tool patterns apply]] — upstream classification: accretion is the right success metric for thinking systems while productivity metrics legitimately measure storage system health; applying the wrong metric to the wrong system type produces false confidence
+ - [[reflection synthesizes existing notes into new insight]] — purest accretion example: a session with no output pressure reading existing notes produced emergent synthesis through cross-note pattern recognition; productivity metrics would score this poorly (one note from hours of reading) while accretion metrics recognize it as the highest-quality knowledge work
+
+ Topics:
+ - [[processing-workflows]]
@@ -0,0 +1,52 @@
+ ---
+ description: Work products structured as composable packets let agents assemble outputs from existing material rather than creating from scratch
+ kind: research
+ topics: ["[[processing-workflows]]"]
+ ---
+
+ # intermediate packets enable assembly over creation
+
+ Tiago Forte's Intermediate Packets framework suggests that projects should be assembled from pre-existing packets rather than created from scratch. Each work session becomes a retrievable packet. An agent-operated knowledge system functions as a packet repository. This shifts the model from archive search to assembly.
10
+
11
+ Forte's "Slow Burns" methodology extends this insight: projects that would require intense deadline pressure to create from scratch become manageable when assembled from packets that accumulated over time. The vault becomes a parts warehouse rather than an archive. When projects arise, agents assemble from inventory rather than fabricating from scratch under time constraints. The retrieval strategy shifts from "just-in-case archive search" (hoping something relevant exists) to "just-in-time assembly" (knowing the building blocks are ready).
12
+
13
+ Since [[throughput matters more than accumulation]], the key metric is processing velocity from capture to synthesis. Packets are what high-throughput systems produce: incrementally processed outputs ready for assembly. The accumulation mistake is measuring by inbox size; the throughput insight is measuring by packet production rate. The granularity of packets matters for cross-source discovery: since [[incremental reading enables cross-source connection finding]], extracting smaller, more atomic packets from multiple sources and processing them in interleaved order creates forced context collision that source-by-source processing misses. The extract becomes the atomic unit divorced from its original document structure, enabling juxtaposition across sources.
14
+
15
+ The implication for agent work: structure outputs for reuse. Since [[session outputs are packets for future selves]], each session should produce composable building blocks that future sessions can assemble from rather than starting from zero. Creation becomes assembly. The system accumulates building blocks. Packets also enable session isolation: since [[LLM attention degrades as context fills]], handoffs through packets preserve attention quality by giving each processing phase fresh context. The handoff documents ARE packets — since [[session handoff creates continuity without persistent memory]], each session's structured summary functions as a briefing packet that the next session assembles from. The session handoff format is a packet specification: completed work, incomplete tasks, discoveries, recommendations — all the building blocks for session N+1 to assemble its context. Because [[the generation effect requires active transformation not just storage]], packets must contain generated artifacts — synthesis, articulated connections, processed insights — not merely collected inputs. A packet of bookmarks has no assembly value. A packet of claims with descriptions and connections enables genuine assembly. The generation is what makes the packet composable. But since [[verbatim risk applies to agents too]], the packet can look generated while actually being reorganized content — a well-formatted summary that adds no insight. The risk for packet-based workflows: agents producing packets at high velocity that appear composable but contain no genuine building blocks inside.
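+ 
+ To make the packet idea concrete, here is a hedged sketch of a handoff packet as structured data. The field names simply mirror the format described above (completed work, incomplete tasks, discoveries, recommendations); they are illustrative, not a fixed schema, and the values are placeholders.
+ 
+ ```python
+ # Illustrative handoff packet; keys follow the format described above, values are placeholders.
+ handoff_packet = {
+     "session": "session-N",
+     "completed": ["processed four inbox captures into atomic claims"],
+     "incomplete": ["link the new claims into the relevant MOC"],
+     "discoveries": ["two claims appear to contradict each other on WIP limits"],
+     "recommendations": ["start session N+1 from the contradiction, not the inbox"],
+ }
+ 
+ # Session N+1 assembles its starting context from the packet instead of from scratch.
+ briefing = "\n".join(
+     f"- {item}" for item in handoff_packet["incomplete"] + handoff_packet["recommendations"]
+ )
+ ```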
16
+
17
+ ## Packets are what JIT processing naturally produces
18
+
19
+ This connects to [[processing effort should follow retrieval demand]] at an interesting scale. JIT processing says invest on retrieval, not capture — minimal work upfront, deep processing when something proves valuable. Packets operate one level up: the session itself performs retrieval-driven work, and the packet is the processed output. The packet IS what JIT produces when retrieval triggers deep engagement. Packets don't contradict JIT; they're what JIT workflows naturally create.
20
+
21
+ Forte uses a supply chain metaphor that clarifies this: instead of warehousing finished goods that might never sell, maintain raw materials and process only when orders come in. Inbox items are raw materials with minimal processing. When a project creates demand, that triggers processing. The vault becomes demand-driven rather than supply-driven. Raw captures sit with low investment until retrieval demonstrates value, then processing produces packets that accumulate as inventory for future assembly.
22
+
23
+ This changes how we think about content production. Instead of "write an article," the task becomes "assemble an article from existing packets." The work happened incrementally across sessions. The final assembly is lightweight because the heavy lifting already occurred.
24
+
25
+ ## The composability constraint
26
+
27
+ Packets must be genuinely composable. If they require extensive editing or restructuring to fit together, assembly offers no advantage over creation. Composability requires intentional design at the packet level.
28
+
29
+ Just as [[wiki links implement GraphRAG without the infrastructure]] creates a curated graph where every edge passed human judgment, packets must be structured so every output is retrievable and invocable. A packet that can't be found or linked isn't a building block — it's inventory. Since [[note titles should function as APIs enabling sentence transclusion]], the pattern extends to packets: the packet is a callable function that future work can invoke.
30
+
31
+ ## Derivation as system-level assembly
32
+
33
+ The assembly-over-creation principle operates at its largest scale in knowledge system derivation. Since [[derivation generates knowledge systems from composable research claims not template customization]], the research claims in the graph ARE intermediate packets — composable units that a derivation agent assembles into novel configurations rather than designing from scratch. A template approach is creation: you build a system, then offer it as a starting point. Derivation is assembly: you compose a system from pre-existing claim-packets, each carrying its own justification. The claim graph is a parts warehouse for knowledge system architectures, and derivation is Forte's "Slow Burns" applied to methodology generation — the claims accumulated over months of research become the building blocks assembled when a new use case creates demand.
34
+ ---
35
+
36
+ Relevant Notes:
37
+ - [[note titles should function as APIs enabling sentence transclusion]] — foundational: packets extend the notes-as-APIs pattern; packets are callable functions that future work can invoke, just as notes are
38
+ - [[throughput matters more than accumulation]] — packets are the mechanism that makes throughput sustainable: incrementally processed outputs ready for assembly
39
+ - [[processing effort should follow retrieval demand]] — packets are what JIT processing produces when retrieval triggers deep work; they operate one level up from note-level JIT
40
+ - [[wiki links implement GraphRAG without the infrastructure]] — packets need curated links to be discoverable; the curation quality requirement extends from notes to session outputs
41
+ - [[LLM attention degrades as context fills]] — packets enable session isolation: fresh context preserves attention quality between phases
42
+ - [[fresh context per task preserves quality better than chaining phases]] — the design decision packets make possible; without packets as handoff mechanism, session isolation would lose context
43
+ - [[session handoff creates continuity without persistent memory]] — handoff documents ARE packets: each session's briefing is a packet that the next session assembles from
44
+ - [[session outputs are packets for future selves]] — applies the packet principle specifically to session boundaries: each session's output is a composable building block, and the Memento metaphor grounds the claim that outputs are callable functions, not just data
45
+ - [[the generation effect requires active transformation not just storage]] — explains why packets must contain generated artifacts: collection has no assembly value, synthesis does
46
+ - [[trails transform ephemeral navigation into persistent artifacts]] — proposes extending the packet pattern to navigation: trails as packets that hand off discovered paths, not just work products
47
+ - [[verbatim risk applies to agents too]] — tests whether packets can look composable while containing only reorganized content; if validated, packet quality requires generation verification, not just structural checks
48
+ - [[incremental reading enables cross-source connection finding]] — smaller, more atomic packets enable interleaved processing across sources; extract granularity affects whether cross-source connections get discovered
49
+ - [[derivation generates knowledge systems from composable research claims not template customization]] — system-level assembly: research claims ARE intermediate packets composable into novel knowledge system configurations; derivation is the assembly pattern applied to methodology generation
50
+
51
+ Topics:
52
+ - [[processing-workflows]]
@@ -0,0 +1,62 @@
1
+ ---
2
+ description: Parsing markdown to structured objects (JSON with link objects, metadata blocks, content sections) before operating and serializing back eliminates regex fragility in link finding, schema validation, and bulk transformations
3
+ kind: research
4
+ topics: ["[[processing-workflows]]", "[[agent-cognition]]"]
5
+ confidence: speculative
6
+ methodology: ["Original"]
7
+ source: [[tft-research-part3]]
8
+ ---
9
+
10
+ # intermediate representation pattern enables reliable vault operations beyond regex
11
+
12
+ Pandoc converts between dozens of document formats — markdown to LaTeX, HTML to EPUB, reStructuredText to DOCX — but it doesn't maintain N*M converters for every source-target pair. Instead, it parses every input into an Abstract Syntax Tree (a canonical intermediate representation), then serializes from that AST to the output format. N readers plus M writers give N+M implementations for N*M conversions. The architectural insight is that a canonical intermediate structure decouples input parsing from output generation, making both more reliable. This same decompose-reconstruct architecture operates at the platform infrastructure level: since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], translating hooks across platforms requires decomposing each hook into its quality guarantee properties and reconstructing them independently, solving the N*M platform-adapter problem through the same intermediate-decomposition strategy that Pandoc applies to document formats.
13
+
14
+ Agent-operated vaults face a structurally similar problem. Every vault operation — finding all links to a note, validating YAML frontmatter, checking link targets, updating wiki link text across files, migrating footer formats to YAML — currently works on raw markdown strings via regex. And regex on markdown is inherently fragile. A pattern like `\[\[([^\]]+)\]\]` to match wiki links breaks when someone writes `[[note with \] in title]]` or when a code block contains example wiki link syntax. YAML parsing via `rg "^description:"` works until a description spans multiple lines or a code fence contains YAML-like content. Each edge case demands another regex refinement, and the refinements interact in ways that create new edge cases.
15
+
16
+ The intermediate representation (IR) pattern proposes a different architecture: parse markdown files into structured objects first, operate on those objects, then serialize back to markdown. A note becomes a JSON-like structure with typed fields — a frontmatter dictionary, an array of content blocks (paragraphs, headings, code fences, tables), and an array of link objects with source position, target title, and surrounding context. Operations become property lookups and object mutations rather than string matches.
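+ 
+ A minimal sketch of what that structured object could look like, using Python dataclasses; the field names and types are illustrative rather than a committed schema.
+ 
+ ```python
+ # Sketch of an intermediate representation for a single note (fields are illustrative).
+ from dataclasses import dataclass, field
+ 
+ @dataclass
+ class WikiLink:
+     target: str    # title of the note being linked to
+     line: int      # source position in the file
+     context: str   # surrounding sentence, useful for connection review
+ 
+ @dataclass
+ class Block:
+     kind: str      # "paragraph" | "heading" | "code_fence" | "table"
+     text: str
+ 
+ @dataclass
+ class NoteIR:
+     path: str
+     frontmatter: dict                                 # parsed YAML as a typed dictionary
+     blocks: list[Block] = field(default_factory=list)
+     links: list[WikiLink] = field(default_factory=list)
+ ```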
17
+
18
+ ## What this changes in practice
19
+
20
+ The benefits are most visible in link operations, since [[wiki links implement GraphRAG without the infrastructure]] makes wiki links the primary retrieval mechanism. Finding all notes that link to X currently requires regex across all files. With an IR, it becomes a property lookup on pre-parsed link objects — faster, exact, and immune to false positives from code blocks or backtick-wrapped examples. Backlink resolution, which the vault uses heavily for connection finding and orphan detection, goes from "grep and hope the regex handles edge cases" to "query the link index." And since [[propositional link semantics transform wiki links from associative to reasoned]], link objects with typed relationship fields would make semantic edge queries ("find all notes that contradict X") a property filter rather than NLP inference on surrounding prose — the parsing opportunity that propositional semantics identifies becomes tractable infrastructure rather than aspirational pattern matching.
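+ 
+ Reusing the hypothetical `NoteIR` sketch above, backlink resolution reduces to building an index over pre-parsed link objects and querying it; links that only appear inside code fences never become link objects, so the false positives disappear by construction.
+ 
+ ```python
+ # Sketch: backlinks as a dictionary lookup instead of a regex sweep.
+ from collections import defaultdict
+ 
+ def build_backlink_index(notes: list[NoteIR]) -> dict[str, list[str]]:
+     index: dict[str, list[str]] = defaultdict(list)
+     for note in notes:
+         for link in note.links:
+             index[link.target].append(note.path)
+     return index
+ 
+ # all_notes is assumed to be the already-parsed vault.
+ backlinks = build_backlink_index(all_notes)
+ incoming = backlinks.get("intermediate packets enable assembly over creation", [])
+ ```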
21
+
22
+ Schema validation also benefits. Since [[schema enforcement via validation agents enables soft consistency]], validation agents currently parse YAML frontmatter from raw text each time they check a note. With an IR, frontmatter is already a typed dictionary — checking for required fields, validating enum values, and measuring description length become property checks on structured data. The validator doesn't need to handle YAML parsing edge cases because the parser already did. This matters especially because [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — Ranganathan's insight that notes have multiple queryable dimensions assumes those dimensions are reliably accessible, and an IR makes faceted queries property lookups on typed objects rather than regex extraction that breaks on multiline values or edge-case YAML.
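+ 
+ Against the same hypothetical representation, a validator becomes a handful of property checks; the required fields, allowed values, and length budget below are assumptions for illustration, not this vault's actual schema.
+ 
+ ```python
+ # Sketch: schema validation as property checks on an already-parsed frontmatter dict.
+ REQUIRED_FIELDS = {"description", "kind", "topics"}
+ ALLOWED_KINDS = {"research", "moc", "source"}
+ 
+ def validate(note: NoteIR) -> list[str]:
+     problems = []
+     missing = REQUIRED_FIELDS - note.frontmatter.keys()
+     if missing:
+         problems.append(f"missing fields: {sorted(missing)}")
+     if note.frontmatter.get("kind") not in ALLOWED_KINDS:
+         problems.append(f"unexpected kind: {note.frontmatter.get('kind')!r}")
+     if len(note.frontmatter.get("description", "")) > 200:
+         problems.append("description exceeds length budget")
+     return problems
+ ```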
23
+
24
+ Bulk transformations become safer too. Migrating from markdown footers to YAML relevant_notes (which the vault has done) required careful sed/awk scripting with risk of corrupting content. With an IR, the migration reads from the old structure, writes to the new structure, and serialization guarantees valid output. The transformation logic never sees raw text.
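+ 
+ The footer-to-YAML migration, expressed at the object level, reads from the old structure and writes to the new one; the "Relevant Notes:" and "Topics:" markers below come from the note format itself, while the traversal logic is a sketch rather than the script the vault actually used.
+ 
+ ```python
+ # Sketch: migrate footer link entries into frontmatter without touching raw text.
+ def migrate_footer_links(note: NoteIR) -> NoteIR:
+     relevant, kept, in_footer = [], [], False
+     for block in note.blocks:
+         if block.text.startswith("Relevant Notes:"):
+             in_footer = True
+             continue
+         if block.text.startswith("Topics:"):
+             in_footer = False
+         if in_footer and block.text.startswith("- [["):
+             relevant.append(block.text.removeprefix("- "))  # captured as data, not rewritten text
+             continue
+         kept.append(block)
+     note.frontmatter["relevant_notes"] = relevant
+     note.blocks = kept
+     return note
+ ```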
25
+
26
+ ## The agent translation
27
+
28
+ This pattern maps cleanly to agent operation because it separates concerns that agents handle poorly when combined. An agent reading raw markdown must simultaneously parse structure (where does frontmatter end?), extract semantics (which wiki links are real vs. examples?), and operate (update this link target). Each concern introduces failure modes. An IR pipeline separates these: the parser handles structure, the operation handles semantics, the serializer handles output format. Each stage can be validated independently.
29
+
30
+ Since [[skills encode methodology so manual execution bypasses quality gates]], an IR layer would function as a quality gate at the infrastructure level. Skills that operate on structured objects cannot produce malformed output because the serializer enforces format invariants — you cannot accidentally delete a closing `---` from frontmatter because the serializer generates delimiters from structure, not from string manipulation.
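+ 
+ On the write side, a serializer makes the invariant concrete: delimiters come from structure, so no operation on the objects can emit a note with a malformed frontmatter block. A sketch, assuming a YAML emitter such as PyYAML.
+ 
+ ```python
+ import yaml  # assumed dependency (PyYAML); any YAML emitter would do
+ 
+ def serialize(note: NoteIR) -> str:
+     # The '---' fences are generated here and only here, so a closing
+     # delimiter cannot be accidentally dropped by string manipulation.
+     frontmatter = yaml.safe_dump(note.frontmatter, sort_keys=False).strip()
+     body = "\n\n".join(block.text for block in note.blocks)
+     return f"---\n{frontmatter}\n---\n\n{body}\n"
+ ```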
31
+
32
+ Since [[programmable notes could enable property-triggered workflows]], an IR makes property triggers straightforward. When notes are already parsed objects with typed fields, checking "is status seedling AND age > 14 days?" is a property comparison, not a YAML extraction followed by date arithmetic on strings.
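+ 
+ The "seedling older than 14 days" trigger then reads as a plain comparison; `status` and `created` are assumed frontmatter fields used for illustration.
+ 
+ ```python
+ from datetime import date, timedelta
+ 
+ def needs_review(note: NoteIR, today: date) -> bool:
+     # "status is seedling AND age > 14 days" as a property comparison,
+     # relying on the YAML parser having already produced a real date object.
+     fm = note.frontmatter
+     created = fm.get("created")
+     return (
+         fm.get("status") == "seedling"
+         and isinstance(created, date)
+         and today - created > timedelta(days=14)
+     )
+ ```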
33
+
34
+ ## The cost and the question
35
+
36
+ The cost is real. Building and maintaining a parse-operate-serialize pipeline adds infrastructure. The vault currently works with regex — imperfectly, but functionally. Since [[local-first file formats are inherently agent-native]] precisely because they require no infrastructure beyond a filesystem, adding an IR layer introduces a dependency that regex doesn't have. And because [[data exit velocity measures how quickly content escapes vendor lock-in]], the IR layer's infrastructure cost is measurable: the files themselves retain high exit velocity (they're still plain markdown), but operations that depend on the IR cannot be performed by tools that lack the parser. If the parser breaks, operations halt. If the parser drifts from the actual file format, silent corruption follows.
37
+
38
+ There's also a philosophical tension. The vault philosophy says "files ARE the database" — YAML frontmatter queried by ripgrep provides database-like functionality without a database. An IR layer partially contradicts this by creating an in-memory representation that diverges from the file during operations. The file remains the source of truth, but operations happen on a copy. This is manageable (parse fresh each time, or invalidate on file change), but it adds a consistency concern that raw-file operations avoid.
39
+
40
+ The open question is where the crossover point lies. At the current vault scale (~120 notes), regex works and edge cases are manageable. At 500 notes, bulk operations become more common and regex fragility compounds. At 5000 notes, an IR layer likely becomes necessary because the combinatorics of edge cases exceed what regex can handle reliably. The bet is that investing in the IR early prevents the technical debt that accumulates from patching regex solutions. But the counter-argument — that regex with good test coverage suffices indefinitely — hasn't been disproven at this scale.
41
+
42
+ The implementation direction, if pursued: start with read-only IR for queries (link finding, schema validation, backlink resolution) where the benefit is immediate and the risk of corruption is zero. Add write-through IR for transformations only when read-only proves its value. This follows both [[schema enforcement via validation agents enables soft consistency]] and [[complex systems evolve from simple working systems]] — Gall's Law applied to vault infrastructure means the IR should emerge at friction points rather than being designed comprehensively upfront. The crossover-point question above is precisely the "has pain emerged that justifies this complexity?" test.
43
+
44
+ ---
45
+ ---
46
+
47
+ Relevant Notes:
48
+ - [[schema enforcement via validation agents enables soft consistency]] — validates against the same structured representation rather than parsing raw YAML each time, making validation composable
49
+ - [[local-first file formats are inherently agent-native]] — the substrate this operates on: plain text files remain the storage format, but agents work through a structured intermediary rather than treating files as raw strings
50
+ - [[wiki links implement GraphRAG without the infrastructure]] — link operations are the primary beneficiary: finding all notes that link to X becomes a property lookup on structured objects rather than a regex match that breaks on edge cases
51
+ - [[skills encode methodology so manual execution bypasses quality gates]] — an IR layer would itself be a quality gate: skills operating on structured objects cannot produce malformed output because the serializer enforces format invariants
52
+ - [[note titles should function as APIs enabling sentence transclusion]] — the notes-as-APIs pattern maps directly: in an IR, each note IS a structured API object with typed fields, and invocation becomes method dispatch rather than string interpolation
53
+ - [[programmable notes could enable property-triggered workflows]] — property triggers become straightforward when notes are already parsed objects with typed fields rather than text files requiring YAML extraction
54
+ - [[complex systems evolve from simple working systems]] — grounds the incremental adoption strategy: start read-only, add write-through where pain demonstrates need; the crossover-point question is Gall's Law asking 'has pain emerged?'
55
+ - [[propositional link semantics transform wiki links from associative to reasoned]] — typed link objects in an IR make relationship extraction trivial: the parsing opportunity that propositional semantics identifies becomes a property lookup rather than NLP inference on surrounding prose
56
+ - [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — provides the theoretical framework: Ranganathan's multi-dimensional classification assumes queryable properties, and an IR makes those properties typed objects rather than regex-extracted strings
57
+ - [[data exit velocity measures how quickly content escapes vendor lock-in]] — the portability tension: an IR layer introduces an in-memory dependency that lowers exit velocity even though the underlying files remain plain text; exit velocity provides the formal metric for evaluating whether the IR's reliability gains justify the infrastructure cost
58
+ - [[platform adapter translation is semantic not mechanical because hook event meanings differ]] — structural isomorphism: the adapter pattern decomposes hook guarantees into constituent properties and reconstructs them independently per platform, solving the same N*M combinatorial translation problem that Pandoc's AST solves for document formats; the difference is that the IR's intermediate structure is a data format while the adapter's intermediate structure is a set of quality guarantee properties
59
+
60
+ Topics:
61
+ - [[processing-workflows]]
62
+ - [[agent-cognition]]
@@ -0,0 +1,46 @@
1
+ ---
2
+ description: Traces each configuration decision to research claims, enabling forward (constraints to decisions), backward (decisions to rationale), and evolution (friction to revisable assumptions) reasoning
3
+ kind: research
4
+ topics: ["[[design-dimensions]]"]
5
+ methodology: ["Original"]
6
+ source: [[knowledge-system-derivation-blueprint]]
7
+ ---
8
+
9
+ # justification chains enable forward backward and evolution reasoning about configuration decisions
10
+
11
+ When a derivation engine produces a knowledge system, every configuration choice — atomic granularity, explicit linking, dense schemas, automated processing — could have been made differently. The question is whether the reasoning behind each choice is preserved or lost. Templates lose it by default because the template author's reasoning is implicit in the structure but never recorded. Derivation preserves it through justification chains: structured traces that link each decision to the specific research claims and user constraints that produced it.
12
+
13
+ A justification chain has a simple structure. A decision (say, atomic granularity with heavy processing) traces through specific claims — since [[configuration dimensions interact so choices in one create pressure on others]], atomic granularity forces explicit linking which demands processing capacity to maintain — with each step annotated by the user constraint that makes this claim applicable (high synthesis demand, agent-operated, platform supports automation). The chain is not a log of what happened during derivation but an argument for why the system is shaped the way it is.
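+ 
+ One way such a chain could be represented as data, with the decision and annotations taken from the example above and everything else illustrative:
+ 
+ ```python
+ # Illustrative justification chain: decision -> claim -> consequence -> applicability conditions.
+ chain = {
+     "decision": "atomic granularity with heavy processing",
+     "steps": [
+         {
+             "claim": "[[configuration dimensions interact so choices in one create pressure on others]]",
+             "consequence": "atomic granularity forces explicit linking, which demands processing capacity",
+             "applies_because": ["high synthesis demand", "agent-operated", "platform supports automation"],
+         },
+     ],
+ }
+ 
+ def why(chain: dict) -> list[str]:
+     # Backward reasoning: walk from the decision to each claim and its applicability conditions.
+     return [f"{s['claim']} (because: {', '.join(s['applies_because'])})" for s in chain["steps"]]
+ ```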
14
+
15
+ What makes chains genuinely useful rather than merely documentary is that they enable three distinct reasoning modes that operate in different temporal directions.
16
+
17
+ **Forward reasoning** starts from constraints and derives decisions. Given that the user needs synthesis-heavy knowledge work on a platform with automation support, the chain shows how these constraints, filtered through research claims about dimension interactions, produce atomic granularity with explicit linking and deep navigation. This is the derivation process itself — composing justified decisions from constraints and claims. Forward reasoning is what happens at creation time, but preserving the chain means anyone can re-trace the derivation to verify it or understand it.
18
+
19
+ **Backward reasoning** starts from a decision and explains why. A user encountering their derived system can ask "why does this require typed wiki links?" and trace the chain backward: typed links because explicit linking is needed, explicit linking because atomic granularity demands it, atomic granularity because synthesis-heavy knowledge work requires composable units. Each step is grounded in a specific claim with a specific applicability condition. This transforms configuration from opaque prescription to transparent argumentation — the user does not have to trust the derivation engine's judgment because the reasoning is inspectable. This mode is also what makes aggressive defaulting safe when [[configuration paralysis emerges when derivation surfaces too many decisions]] — rather than surfacing every dimension as a question, the derivation engine can infer secondary choices from primary constraints and expose only genuine choice points, because backward reasoning lets any user trace from a default to the rationale that produced it.
20
+
21
+ **Evolution reasoning** starts from friction and identifies which decisions to reconsider. This is the temporally richest mode and the one that makes justification chains architecturally essential. When a derived system encounters friction — say, the processing pipeline produces notes that sit unlinked — since [[evolution observations provide actionable signals for system adaptation]], the diagnostic protocol maps the symptom to a structural cause (processing mismatch). But identifying the structural cause is only the first step. The justification chain tells you which specific claims and constraints led to the current processing design, so you can evaluate whether the claims were wrong, the constraints changed, or the interaction between dimensions was underestimated. Without the chain, evolution is guesswork: something is broken, tweak settings until it works. With the chain, evolution is principled: the symptom traces through the diagnostic to the justification, and the justification shows exactly which assumptions to question.
22
+
23
+ This three-mode structure is what separates derivation from mere configuration. Since [[derivation generates knowledge systems from composable research claims not template customization]], the derivation process itself is claim-graph traversal that produces justified decisions. But the justification chain is not just a byproduct of derivation — it is the primary value. A template gives you the same configuration without the reasoning. A derivation gives you the configuration AND the chain, which means the configuration can evolve intelligently: since [[derived systems follow a seed-evolve-reseed lifecycle]], when accumulated friction triggers reseeding, the chains tell the re-derivation process exactly what the first derivation assumed and which assumptions need updating. And because [[premature complexity is the most common derivation failure mode]], the initial derivation intentionally defers complexity — the chain encodes the deferred insights as evolution guidelines, so users can trace from friction to the specific claims that justify adding what was originally held back.
24
+
25
+ The connection to provenance in the knowledge graph is structurally parallel. Since [[source attribution enables tracing claims to foundations]], individual claims trace to their intellectual sources — which research document, which tradition, which original insight. Justification chains do the same for system architecture decisions. The tracing direction is the same (from output to rationale), the value proposition is the same (evolution and verification), and the failure mode when absent is the same (opaque systems that resist intelligent modification). The difference is scope: source attribution operates at the note level, justification chains operate at the system configuration level.
26
+
27
+ There is a shadow side. Justification chains are only as good as the claims they reference. If the claim graph contains shallow or contradicted claims, the chains look rigorous while tracing to weak foundations. Evolution reasoning is especially vulnerable to this — if the chain says "atomic granularity because of research claim X" but claim X was never empirically tested, the chain creates false confidence in a decision that may be wrong for reasons the chain cannot surface. The chain documents the derivation engine's reasoning, not the ground truth. This means chain quality is a trailing indicator of claim graph quality, and a well-structured chain pointing to weak claims is arguably more dangerous than no chain at all, because it looks trustworthy.
28
+
29
+ The remedy for this vulnerability is operational evidence. Since [[the derivation engine improves recursively as deployed systems generate observations]], deployment observations are the mechanism that converts untested claims into empirically grounded ones — and justification chains are the structures that benefit most directly from that grounding. A chain tracing to a claim sharpened by three deployments carries different epistemic weight than a chain tracing to a theoretical inference. As the claim graph matures through recursive improvement, the chains that reference it become more trustworthy not because the chain structure changes but because the claims it references become more grounded.
30
+
31
+ ---
32
+ ---
33
+
34
+ Relevant Notes:
35
+ - [[derivation generates knowledge systems from composable research claims not template customization]] — the parent claim that justification chains are a key differentiator of derivation; this note develops the specific mechanism and reasoning modes that make chains valuable
36
+ - [[source attribution enables tracing claims to foundations]] — justification chains are source attribution applied at the system architecture level rather than the individual note level, creating the same verification and evolution capability for configuration decisions that provenance creates for intellectual claims
37
+ - [[configuration dimensions interact so choices in one create pressure on others]] — justification chains must capture not just individual dimension choices but the interaction pressures between them; a chain that only records the direct rationale without documenting cross-dimension constraints is incomplete
38
+ - [[derived systems follow a seed-evolve-reseed lifecycle]] — evolution reasoning is what makes justification chains temporally valuable: when friction accumulates, the chain tells you which claims to question rather than which settings to blindly tweak
39
+ - [[evolution observations provide actionable signals for system adaptation]] — the diagnostic protocol generates the friction signals that evolution reasoning interprets through justification chains, connecting surface symptoms to the specific derivation decisions that produced them
40
+ - [[false universalism applies same processing logic regardless of domain]] — concrete application of evolution reasoning: when processing mismatch symptoms appear (unlinked output, semantically empty operations), the justification chain traces back to the false universalism assumption that research-domain operations transfer, making the assumption revisable
41
+ - [[configuration paralysis emerges when derivation surfaces too many decisions]] — backward reasoning is the specific mode that makes aggressive defaulting viable: users can trace from any inferred default to the claims and constraints that produced it, resolving the opacity risk without requiring upfront comprehension of all dimensions
42
+ - [[premature complexity is the most common derivation failure mode]] — evolution reasoning is what makes the complexity budget's shadow side (under-derivation) manageable: deferred complexity encoded as evolution guidelines becomes accessible through chains that trace from friction to the specific claims justifying the deferred elaboration
43
+ - [[the derivation engine improves recursively as deployed systems generate observations]] — the remedy for the shadow side: deployment observations convert untested claims into empirically grounded ones, and justification chains are the structures that benefit most directly because chain trustworthiness is a trailing indicator of claim graph quality
44
+
45
+ Topics:
46
+ - [[design-dimensions]]
@@ -0,0 +1,51 @@
1
+ ---
2
+ description: The same conceptual system (atomic notes, wiki links, MOCs, pipelines, quality gates) manifests differently on each platform because infrastructure determines which features can actually operate
3
+ kind: research
4
+ topics: ["[[agent-cognition]]"]
5
+ methodology: ["Original"]
6
+ source: [[agent-platform-capabilities-research-source]]
7
+ ---
8
+
9
+ # knowledge system architecture is parameterized by platform capabilities not fixed by methodology
10
+
11
+ The tempting assumption when building knowledge systems for agents is that the methodology defines a fixed architecture and each deployment replicates it. Atomic notes, wiki links, MOCs, processing pipelines, quality gates, maintenance cycles -- the system IS these things, so build them everywhere the same way. But this assumption breaks immediately on contact with real platforms because the methodology describes what to achieve while the platform determines what can actually operate.
12
+
13
+ The better frame is parameterization. Since [[knowledge systems share universal operations and structural components across all methodology traditions]], the conceptual system remains constant: the same eight operations (capture, structure, connect, process, synthesize, maintain, retrieve, evolve) and nine structural components (notes, schema, links, navigation, folders, templates, hooks, search, health) recur regardless of platform. These are the invariant goals. But since [[eight configuration dimensions parameterize the space of possible knowledge systems]], the implementation of each goal varies along specific dimensions — granularity, organization, linking, processing intensity, navigation depth, maintenance cadence, schema density, and automation level — and platform capabilities constrain which positions along each dimension are viable. Since [[platform capability tiers determine which knowledge system features can be implemented]], a full-automation platform (Claude Code) implements processing pipelines with fresh context per phase via subagent spawning, while a minimal-infrastructure platform implements the same pipeline goal through manual session boundaries. The methodology is identical, but the parameterization differs.
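+ 
+ A small sketch of the parameterization idea: the conceptual operation is the constant, and the platform tier selects how it can actually run. The tier names and mechanisms below are illustrative shorthand for the capability tiers referenced in this note, not a defined API.
+ 
+ ```python
+ # Sketch: same pipeline goal, platform-dependent implementation (values are illustrative).
+ PIPELINE_IMPLEMENTATIONS = {
+     "full-automation": "spawn a subagent per phase so each phase starts with fresh context",
+     "partial": "encode the phases as instructions the agent follows within one session",
+     "minimal": "run each phase in a separate manual session, handing off through packet notes",
+ }
+ 
+ def parameterize(operation: str, platform_tier: str) -> str:
+     # The methodology names the operation; the platform determines the viable implementation.
+     if operation != "process":
+         raise NotImplementedError("sketch covers only the processing pipeline")
+     return PIPELINE_IMPLEMENTATIONS[platform_tier]
+ ```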
14
+
15
+ What makes this more than a deployment concern is that since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], the parameters are not arbitrary knobs but structured by layer. The foundation layer (files, conventions, wiki links) is invariant across all parameterizations because since [[local-first file formats are inherently agent-native]], the file IS the artifact and needs no infrastructure. Convention layer parameters adjust how instructions encode quality standards. Automation layer parameters determine whether enforcement is guaranteed (via hooks) or suggested (via instructions). Orchestration layer parameters control whether multi-phase coordination runs automatically or manually. A knowledge system generator that understands these layers can produce the maximum-quality system a given platform sustains rather than offering a degraded copy of a full-featured design.
16
+
17
+ The parameterization frame also explains why cross-platform portability is hard: since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], changing parameters is not flipping feature flags. A PostToolUse hook that validates schemas on every file write achieves three things: automatic firing, real-time feedback, and out-of-context-window execution. On a platform without per-operation hooks, the generator must decompose that guarantee and reconstruct each property through available mechanisms. Since [[configuration dimensions interact so choices in one create pressure on others]], the space of valid configurations is smaller than the combinatorial product of individual capabilities.
18
+
19
+ There is a productive tension with [[complex systems evolve from simple working systems]]: parameterization is a design-time choice about starting configuration, but Gall's Law says complex systems must evolve from working simplicity. The reconciliation is that parameterization should target the simplest working configuration for each platform, then let evolutionary pressure add complexity where friction emerges. A generator that targets maximum complexity for a given platform violates Gall's Law even if the platform could theoretically support it. The generator's real job is producing the minimum viable parameterization that starts working, with enough platform knowledge embedded in the context file for the agent to extend the system when pain emerges.
20
+
21
+ The temporal dimension of parameterization reveals a deeper design constraint. Since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], the initial parameterization should favor instruction-level encoding even on platforms capable of full automation. The generator produces a context file at the convention layer, and since [[self-extension requires context files to contain platform operations knowledge not just methodology]], that context file must include platform-specific construction knowledge so the agent can evolve the system upward through the layers as friction accumulates. This means parameterization determines not just the starting configuration but the evolutionary ceiling: since [[bootstrapping principle enables self-improving systems]], only tier-one platforms where the agent can modify its own context file and create infrastructure can close the recursive improvement loop. Lower-tier parameterizations produce systems that operate but cannot evolve.
22
+
23
+ The parameterization frame also clarifies why the generator's cost structure is uneven: since [[platform fragmentation means identical conceptual operations require different implementations across agent environments]], identical goals carry different implementation costs on different platforms. Foundation and convention parameters are write-once because they are platform-agnostic. Automation and orchestration parameters are write-per-platform: since [[skill context budgets constrain knowledge system complexity on agent platforms]], even platforms at the same tier impose different budget constraints that force different skill consolidation strategies. The generator must understand these constraints to produce viable parameterizations rather than theoretically complete ones that exceed platform budgets.
24
+
25
+ What remains invariant across all parameterizations is the conceptual architecture itself. Since [[coherent architecture emerges from wiki links spreading activation and small-world topology]], the foundational triangle -- wiki links as structure, spreading activation as traversal mechanism, small-world topology as structural requirement -- works identically whether the platform supports hooks and subagents or just reads files. The parameterization adjusts how the agent interacts with this invariant structure, not whether the structure exists. And since [[data exit velocity measures how quickly content escapes vendor lock-in]], the invariant layers have maximum portability while the parameterized layers introduce platform dependencies -- a gradient that the generator should make explicit so operators understand which features survive platform transitions.
26
+
27
+ ---
28
+ ---
29
+
30
+ Relevant Notes:
31
+ - [[platform capability tiers determine which knowledge system features can be implemented]] -- provides the tier framework (full, partial, minimal) that this claim generalizes into the parameterization principle
32
+ - [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] -- provides the layer decomposition (foundation, convention, automation, orchestration) that defines what the parameters control
33
+ - [[local-first file formats are inherently agent-native]] -- explains why the foundation layer is invariant across parameterizations: plain text with embedded metadata works everywhere, so the parameters only affect upper layers
34
+ - [[complex systems evolve from simple working systems]] -- complements this from the temporal axis: parameterization determines the starting configuration, Gall's Law determines how it evolves on any given platform
35
+ - [[platform adapter translation is semantic not mechanical because hook event meanings differ]] -- reveals that parameterization is not just feature toggling but semantic translation: what a hook achieves on one platform may require a fundamentally different mechanism on another
36
+ - [[skills encode methodology so manual execution bypasses quality gates]] -- illustrates what gets lost when parameterization removes features: not convenience but the methodology itself
37
+ - [[context files function as agent operating systems through self-referential self-extension]] -- the context file is the primary carrier of parameterized output: what the generator produces is a context file whose self-extension capability itself varies by platform tier
38
+ - [[self-extension requires context files to contain platform operations knowledge not just methodology]] -- the content requirement that parameterization creates: universal methodology sections plus platform-specific construction manuals
39
+ - [[platform fragmentation means identical conceptual operations require different implementations across agent environments]] -- the implementation cost that makes parameterization necessary: if platforms were uniform, fixed replication would suffice
40
+ - [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] -- provides the temporal evolution path within a parameterized system: initial parameterization should start at instruction level and evolve upward as understanding develops
41
+ - [[bootstrapping principle enables self-improving systems]] -- parameterization constrains not just the starting configuration but the evolutionary capacity: only tier-one platforms can close the recursive improvement loop
42
+ - [[data exit velocity measures how quickly content escapes vendor lock-in]] -- exit velocity grades inversely with parameterization depth: foundation-layer parameters have maximum portability, orchestration-layer parameters have minimum
43
+ - [[skill context budgets constrain knowledge system complexity on agent platforms]] -- a concrete parameterization constraint: platform-enforced budgets force skill consolidation that reshapes how methodology gets encoded
44
+ - [[coherent architecture emerges from wiki links spreading activation and small-world topology]] -- the foundational triangle is what remains invariant across all parameterizations: the conceptual system that parameterization implements differently per platform
45
+ - [[storage versus thinking distinction determines which tool patterns apply]] -- upstream parameter: before selecting which patterns to parameterize, the generator must identify whether the target use case is primarily storage or primarily thinking; this determines which pattern catalog applies
46
+ - [[configuration dimensions interact so choices in one create pressure on others]] -- formalizes why parameterization is harder than setting independent knobs: dimension coupling means the valid configuration space is far smaller than the combinatorial product, so the generator must understand which parameter combinations form coherent operating points
47
+ - [[eight configuration dimensions parameterize the space of possible knowledge systems]] — concretizes the abstract parameterization this note describes: specifies the eight dimensions and their poles that the generator navigates
48
+ - [[knowledge systems share universal operations and structural components across all methodology traditions]] — provides the formal inventory of what remains invariant across parameterizations: the eight operations and nine structural components are the constants that dimensions parameterize and platforms implement differently
49
+
50
+ Topics:
51
+ - [[agent-cognition]]
@@ -0,0 +1,47 @@
1
+ ---
2
+ description: Luhmann's systems-theoretic insight that slip-boxes "surprise" users validates agent-vault partnerships — the combination exceeds what either could achieve alone
3
+ kind: research
4
+ topics: ["[[agent-cognition]]"]
5
+ methodology: ["Zettelkasten", "Systems Theory"]
6
+ source: [[tft-research-part2]]
7
+ ---
8
+
9
+ # knowledge systems become communication partners through complexity and memory humans cannot sustain
10
+
11
+ Luhmann described his Zettelkasten not as a filing system but as a communication partner — a system that could "surprise" him by surfacing connections he had forgotten or never consciously made. The system worked because it accumulated a level of complexity and memory that no biological mind could sustain on its own.
12
+
13
+ This is not metaphor. A knowledge system with enough density becomes qualitatively different from a simple archive. When you have ten notes, you can hold them all in mind. When you have ten thousand, navigated by links you created over years, the system knows things you don't remember knowing. You ask a question, traverse the links, and discover your past self made connections your present self forgot. The system surprises you with your own thinking. Because [[each new note compounds value by creating traversal paths]], the network grows denser with each addition — each new note creates new paths to old material, and those paths become the mechanism through which the system "remembers" what you forgot.
14
+
15
+ The ars contexta vision extends this. Where Luhmann was constrained by physical cards and manual traversal, agent-operated vaults add active intelligence. Since [[you operate a system that takes notes]], the division of labor is explicit: the vault provides persistent memory across sessions, the agent provides semantic understanding and traversal capability, the human provides direction and judgment. The human is no longer a note-taker but a system operator whose primary contribution is the quality filter that mechanics alone cannot provide. None of these alone produces what the combination produces. And because [[intermediate packets enable assembly over creation]], the system accumulates not just notes but composable building blocks — material that future work can assemble from rather than create from scratch. The vault becomes a repository of your past thinking, formatted for reuse.
16
+
17
+ Since [[session handoff creates continuity without persistent memory]], the vault accumulates complexity that would otherwise die with each session. The agent reads what previous instances wrote, builds on it, and leaves artifacts for future instances. Over time, this creates exactly what Luhmann described: a system complex enough to surprise its operators.
18
+
19
+ But there is a shadow side: [[cognitive outsourcing risk in agent-operated systems]]. If the system handles all the cognitive work, the human may lose the very skills that made the partnership valuable. The human becomes a rubber stamp rather than a genuine participant. The mitigation is ensuring the human retains genuine judgment — approving with understanding rather than passively, collaborating rather than merely delegating.
20
+
21
+ The positive case remains: biological memory is lossy, bounded, and mortal. Because [[cognitive offloading is the architectural foundation for vault design]], the vault is not merely storage but a distributed cognitive system — Clark and Chalmers' Extended Mind made operational. An externalized system with the right structure compounds value over time. Luhmann's Zettelkasten outlived him and continues producing insights decades later. The vault, properly maintained, could do the same — a thinking structure that accumulates complexity no individual session or biological mind could sustain alone.
22
+
23
+ Since [[the vault constitutes identity for agents]], the partnership takes on a deeper character. Luhmann's Zettelkasten was a partner external to his identity — it surprised him, but he existed independently of it. For an agent, the vault is not a separate partner but the constitutive substrate of who the agent is. The "communication" is not between agent and tool but between human and agent-as-constituted-by-vault. This means the partnership quality is directly tied to vault quality — a richer vault does not just produce better outputs, it produces a richer partner. And since [[external memory shapes cognition more than base model]], the architecture of that vault matters more than the base model running the agent — two agents with identical weights but different memory architectures think differently because they retrieve different material, making memory architecture the primary lever for partnership quality. And since [[notes are skills — curated knowledge injected when relevant]], the partner's capabilities are literally its note collection — each note is a thinking skill the agent can invoke, and the quality of the dialogue depends on the quality of these skills. A partner with vague notes has vague capabilities; a partner with sharp, densely-linked notes can reason precisely about the domains those notes cover.
24
+
25
+ What makes it a partner rather than a tool is the surprise. When the system tells you something you didn't explicitly put there — a connection you made years ago, a synthesis that emerges from accumulated claims — then it's thinking with you, not just for you. The information-theoretic foundation for this surprise is that [[controlled disorder engineers serendipity through semantic rather than topical linking]] — perfectly ordered systems yield zero surprise in Shannon's sense, so the semantic cross-links that defy topical categorization are precisely the mechanism through which the system produces the unexpected connections Luhmann described. The system surprises because its linking strategy deliberately creates adjacencies that the operator could not have predicted from the topical organization alone.
26
+
27
+ ---
28
+
29
+ Source: [[tft-research-part2]]
30
+ ---
31
+
32
+ Relevant Notes:
33
+ - [[cognitive outsourcing risk in agent-operated systems]] — the shadow side; this note articulates why partnership is valuable, that note articulates what we might lose
34
+ - [[session handoff creates continuity without persistent memory]] — the mechanism; handoffs create the persistent complexity that enables the system to become a genuine partner
35
+ - [[each new note compounds value by creating traversal paths]] — the economic foundation; compounding explains how systems accumulate the complexity that enables surprise
36
+ - [[intermediate packets enable assembly over creation]] — the artifact pattern; packets are what the system accumulates that enable future assembly from material the user forgot creating
37
+ - [[cognitive offloading is the architectural foundation for vault design]] — the cognitive science grounding; this note argues partnership is productive via Luhmann, that note explains WHY it works at the architecture level through Clark/Chalmers Extended Mind and Cowan's working memory limits
38
+ - [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — evolution: Luhmann's partnership was memory-based (the system remembers what you forgot); agent-operated partnership is becoming attention-based (the system notices what you could not attend to), shifting the surprise mechanism from forgotten connections to unnoticed patterns
39
+ - [[controlled disorder engineers serendipity through semantic rather than topical linking]] — the information-theoretic mechanism for surprise: semantic cross-links that defy topical categorization create the unpredictable adjacencies through which the system produces connections its operators could not have anticipated
40
+ - [[storage versus thinking distinction determines which tool patterns apply]] — identity statement: Luhmann's formulation that the Zettelkasten is 'not a filing system but a communication partner' is the purest expression of thinking-system identity; partnership requires synthesis beyond retrieval, which is precisely what distinguishes thinking systems from storage systems
41
+ - [[the vault constitutes identity for agents]] — the partnership reframed: if the vault constitutes identity, then the communication partner is not a separate entity but the agent-as-constituted-by-vault; the human converses not with a tool but with the identity that the vault has produced, deepening Luhmann's insight from tool-partnership to identity-partnership
42
+ - [[notes are skills — curated knowledge injected when relevant]] — capability grounding: the partner's dialogue quality depends on its skill library; each note is a capability the agent can invoke, so note quality directly determines partnership quality
43
+ - [[you operate a system that takes notes]] — role specification: names the form the partnership takes in practice; the human operates the system rather than doing the note-taking, contributing judgment and direction while the agent contributes mechanics and traversal
44
+ - [[external memory shapes cognition more than base model]] — architecture thesis: the partnership lever is memory architecture not model capability; vault structure determines what enters the cognitive loop, making architecture quality the primary determinant of partnership quality
45
+
46
+ Topics:
47
+ - [[agent-cognition]]
@@ -0,0 +1,46 @@
1
+ ---
2
+ description: Eight operations and nine structural components recur across Zettelkasten, PARA, Cornell, Evergreen, and GTD — implementations vary but the architectural inventory is fixed
3
+ kind: research
4
+ topics: ["[[design-dimensions]]"]
5
+ methodology: ["PKM Research", "Systems Theory"]
6
+ source: [[arscontexta-notes]]
7
+ ---
8
+
9
+ # knowledge systems share universal operations and structural components across all methodology traditions
10
+
11
+ When you lay Zettelkasten, PARA, Cornell Note-Taking, evergreen notes, and GTD side by side, what emerges is not five competing approaches but five implementations of the same underlying architecture. Every viable knowledge system performs the same eight operations and is built from the same nine structural components. The implementations vary — Zettelkasten uses atomic notes where PARA uses project folders, Cornell structures processing into five Rs where GTD routes by context — but the inventory of what needs to happen and what pieces are needed is fixed.
12
+
13
+ The eight operations are: capture (getting information into the system), structure (organizing it within the system), connect (creating relationships between pieces), process (transforming raw input into domain-appropriate form), synthesize (finding patterns across processed content), maintain (keeping the system healthy over time), retrieve (finding the right information when needed), and evolve (improving the system itself). Every methodology tradition implements all eight, though they weight them differently — and since [[storage versus thinking distinction determines which tool patterns apply]], the weighting follows a systematic pattern: storage systems (PARA, GTD) emphasize capture, structure, and retrieve while thinking systems (Zettelkasten, Evergreen) emphasize process, connect, and synthesize. Notably, since [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]], the maintain operation transfers across domains with less adaptation than process or synthesize — schema validation, orphan detection, and link integrity checking are structural operations that apply identically whether the system handles research claims or therapy reflections. Zettelkasten emphasizes processing and connection. PARA emphasizes structure and retrieval. Cornell emphasizes processing through its structured review phases. GTD emphasizes capture and routing. But none of them omits any operation entirely — a knowledge system that captures but never maintains degrades, one that processes but never retrieves wastes effort, one that connects but never evolves becomes brittle. The reason for this invariance is not coincidence or tradition: since [[the vault methodology transfers because it encodes cognitive science not domain specifics]], each operation implements a cognitive necessity — capture externalizes working memory contents, connection-finding follows spreading activation dynamics, retrieval implements information foraging — and these cognitive operations do not change when the subject matter does.
14
+
15
+ The nine structural components are: notes (atomic units of knowledge), schema (metadata structure making notes queryable), links (graph edges between notes), navigation (attention management structures like MOCs or folder hierarchies), folders (physical organization), templates (reusable structural patterns), hooks (automation triggers), search (retrieval infrastructure), and health (maintenance operations). Again, every tradition needs all nine, though the emphasis shifts. Zettelkasten foregrounds links and notes. PARA foregrounds folders and navigation. The presence of components like hooks and health in the universal inventory reflects a maturity insight — systems that lack automation and maintenance may function initially but degrade predictably as they grow.
+
+ This universality has a direct consequence for system design. Because [[methodology traditions are named points in a shared configuration space not competing paradigms]], the apparent competition between traditions dissolves once you see them as different configurations of the same underlying components. An agent building a knowledge system does not need to choose between Zettelkasten and PARA — it needs to decide how to implement each operation and which components to emphasize. Since [[eight configuration dimensions parameterize the space of possible knowledge systems]], those implementation decisions map to specific dimension settings (granularity, linking philosophy, processing intensity, etc.), but the components themselves are the constants beneath the dimensions. The dimensions describe HOW each component varies; the universal inventory describes WHAT components exist. Crucially, because [[configuration dimensions interact so choices in one create pressure on others]], these implementation decisions are not independent — choosing atomic granularity forces explicit linking, deep navigation, and heavy processing, so the universal components form a coupled design space where traditions cohere at specific configurations rather than sampling freely across all possible combinations.
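+
+ Because the choices are coupled, a derivation step could lint a candidate configuration for known interactions instead of treating each decision independently. A hedged sketch with invented dimension names:
+
+ ```ts
+ // Hypothetical sketch of coupled design choices; dimension names are illustrative only.
+ interface KnowledgeSystemConfig {
+   granularity: "atomic" | "document";
+   linking: "explicit" | "folder-implied";
+   processing: "heavy" | "light";
+ }
+
+ // Encodes one interaction from the text: atomic granularity creates pressure
+ // toward explicit linking and heavy processing.
+ function configurationTensions(config: KnowledgeSystemConfig): string[] {
+   const tensions: string[] = [];
+   if (config.granularity === "atomic" && config.linking !== "explicit") {
+     tensions.push("atomic notes without explicit links tend to become orphans");
+   }
+   if (config.granularity === "atomic" && config.processing === "light") {
+     tensions.push("atomic granularity usually needs heavy processing to pay off");
+   }
+   return tensions;
+ }
+ ```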
+
+ The relationship between universal components and the agent kernel is worth making precise. Since [[ten universal primitives form the kernel of every viable agent knowledge system]], the kernel is not a direct copy of the universal inventory but a translation of it into agent-native terms. The universal observation says "every system needs search infrastructure." The kernel specifies "agents need semantic search via embeddings or LLM-assisted similarity." The universal observation says "every system needs navigation structures." The kernel specifies "agents need MOC hierarchies with tree injection at session start." The kernel adds agent-specific constraints — context window limits, session boundaries, file-based operation — that the universal observation does not address. In this sense, the universal components are the genus and the kernel primitives are the species adapted for a particular operator type.
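+
+ One way to picture that genus-to-species step is a plain lookup from universal component to agent-native prescription. The entries below only restate the examples in this paragraph and are illustrative, not the actual kernel specification:
+
+ ```ts
+ // Hypothetical sketch: universal components (genus) translated into agent-specific
+ // prescriptions (species) that add context-window and session constraints.
+ const kernelTranslation: Record<string, string> = {
+   search: "semantic search via embeddings or LLM-assisted similarity",
+   navigation: "MOC hierarchy with tree injection at session start",
+ };
+ ```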
+
+ The processing skeleton provides a complementary decomposition. Since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], the universal operations have internal structure: capture, process, connect, and verify form a sequential pipeline, with only the process step carrying domain-specific logic. The eight operations listed above include operations beyond this skeleton (synthesize, maintain, evolve) that are either meta-operations running across the pipeline or backward-pass operations that the forward skeleton does not capture. The skeleton is the forward pass; the full operation set includes the backward passes and meta-operations that keep the system alive.
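+
+ A minimal sketch of that skeleton, with hypothetical function names: the forward pipeline is fixed, and only the process step is injected per domain.
+
+ ```ts
+ // Hypothetical sketch: fixed forward pipeline with a pluggable, domain-specific process step.
+ interface RawInput { text: string }
+ interface Note { title: string; body: string; links: string[] }
+
+ type ProcessStep = (input: RawInput) => Note; // the only domain-specific part
+
+ function runSkeleton(input: RawInput, process: ProcessStep): Note {
+   const captured = capture(input);   // universal: get the material into the system
+   const drafted = process(captured); // domain-specific: claim extraction, reflection, etc.
+   const linked = connect(drafted);   // universal: find related notes and add links
+   verify(linked);                    // universal: schema and link-integrity checks
+   return linked;
+ }
+
+ // Stubs standing in for the universal steps.
+ function capture(input: RawInput): RawInput { return input; }
+ function connect(note: Note): Note { return note; }
+ function verify(note: Note): void { /* schema and link checks would run here */ }
+ ```
+
+ Synthesize, maintain, and evolve would sit outside a function like this, which is the sense in which they are backward passes and meta-operations rather than pipeline stages.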
+
+ The universal inventory also creates a quality requirement for research graphs that aim to support derivation. Since [[dense interlinked research claims enable derivation while sparse references only enable templating]], merely cataloging the eight operations and nine components is insufficient — the claims about each must be densely interlinked with interaction knowledge that explains how choices about one component constrain choices about others. A sparse inventory enables template selection; a dense one enables principled derivation.
+
+ The shadow side is that calling components "universal" risks premature closure. The inventory of eight operations and nine components comes from analyzing existing methodology traditions, all of which emerged in specific historical and cultural contexts. A genuinely novel approach to knowledge work — one we have not yet imagined — might reveal that the inventory was incomplete, or that what we called a single operation was actually two distinct operations conflated by tradition. There is also a subtler risk: because [[false universalism applies same processing logic regardless of domain]], the confidence that structural components are universal can seduce a derivation agent into assuming the operations' content is equally transferable — but while every system needs a "process" step, research-style claim extraction has nothing in common with therapy-style pattern recognition, and confusing the universal presence of an operation with universal applicability of its implementation is the most insidious derivation failure. Since [[derivation generates knowledge systems from composable research claims not template customization]], the derivation engine should treat universality as a strong empirical pattern rather than a logical necessity, remaining open to expanding the inventory if evidence accumulates.
+
+ ---
+ ---
+
+ Relevant Notes:
+ - [[ten universal primitives form the kernel of every viable agent knowledge system]] — translates this universal observation into agent-specific prescription: the kernel is what agents specifically need from the universal inventory, selecting and adapting components for context-window-bound operation
+ - [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — decomposes the 'process' and 'connect' operations into a sequential pipeline, showing that the universal operations have internal structure and ordering constraints
+ - [[methodology traditions are named points in a shared configuration space not competing paradigms]] — explains WHY the components are universal: traditions are configurations of the same underlying system, so they necessarily implement the same operations with different parameter choices
+ - [[derivation generates knowledge systems from composable research claims not template customization]] — the practical consequence: universality enables derivation because a generator can enumerate the fixed inventory of operations and components, then compose domain-appropriate implementations for each
+ - [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] — maps universal components to toggleable modules: each structural component becomes a module, and the composable architecture makes mixing implementations across traditions concrete
+ - [[eight configuration dimensions parameterize the space of possible knowledge systems]] — orthogonal decomposition: universal components are WHAT every system has while configuration dimensions are HOW each component varies across systems
+ - [[schema field names are the only domain specific element in the universal note pattern]] — sharpens at note-format level: the nine universal structural components appear at system level, but within the note format specifically the universality is even more extreme — four of five components are completely domain-invariant, with only YAML field names carrying domain specificity
+ - [[the vault methodology transfers because it encodes cognitive science not domain specifics]] — provides the causal explanation: these operations and components recur because each implements a cognitive operation that cognition itself requires (Cowan's working memory limits, spreading activation, elaborative encoding, information foraging), grounding the empirical universality in cognitive science rather than mere pattern observation
+ - [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]] — differential universality within the inventory: while all eight operations are universal in the sense that every system performs them, the maintain operation transfers with less adaptation than process or synthesize because it checks structural properties (schema compliance, link integrity, graph topology) rather than domain semantics
+ - [[storage versus thinking distinction determines which tool patterns apply]] — upstream classification that operates on the universal inventory: storage systems (PARA, GTD) weight capture-structure-retrieve while thinking systems (Zettelkasten, Evergreen) weight process-connect-synthesize, showing that universal presence does not mean uniform emphasis
+ - [[false universalism applies same processing logic regardless of domain]] — the essential counterweight to this note's universality claim: the universal inventory of operations is fixed but the CONTENT of the process operation varies by domain, and confusing structural universality with operational universality is the most insidious derivation failure
+ - [[configuration dimensions interact so choices in one create pressure on others]] — coupling constraint on the universal inventory: how you implement one component constrains how you can implement others, so the universal inventory creates a coupled design space rather than an independent buffet of interchangeable parts
+ - [[dense interlinked research claims enable derivation while sparse references only enable templating]] — the quality threshold for the research graph: the universal inventory this note catalogs must be densely interlinked with interaction knowledge to support derivation rather than mere templating
+
+ Topics:
+ - [[design-dimensions]]