arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md
@@ -0,0 +1,197 @@
+ ---
+ description: Pipeline philosophy for the derivation engine — when heavy, medium, or light processing is appropriate, how the four-phase skeleton adapts per domain, why throughput beats accumulation, and why fresh context per phase preserves quality
+ kind: guidance
+ status: active
+ topics: ["[[processing-workflows]]"]
+ ---
+
+ # adapt the four-phase processing pipeline to domain-specific throughput needs
+
+ The processing pipeline is where raw content becomes structured knowledge. Without processing, a vault is a folder of files — organized debris. Because [[structure without processing provides no value]], processing cannot be skipped; the Lazy Cornell anti-pattern demonstrates this experimentally: students who draw the structural lines but skip the cognitive work show no improvement over linear notes. Structure without processing is decoration, not knowledge work.
+
+ Since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], every derived vault shares the same pipeline shape: capture, process, connect, verify. What varies is the depth and nature of the process step. This doc tells the plugin HOW to calibrate pipeline depth for each domain and user.
+
+ ## The Four-Phase Skeleton
+
+ | Phase | Operation | Domain-Invariant? | Plugin Role |
+ |-------|----------|-------------------|-------------|
+ | **Capture** | Content enters the system | Yes — always an inbox or equivalent | Generate capture templates and quick-entry hooks |
+ | **Process** | Transform into domain-appropriate form | No — entirely domain-specific | Derive the process step from domain composition |
+ | **Connect** | Link to existing knowledge | Yes — graph traversal and connection finding | Generate reflect/reweave skills |
+ | **Verify** | Check quality and consistency | Yes — schema compliance, link integrity | Generate validation and health check hooks |
+
+ Since [[methodology traditions are named points in a shared configuration space not competing paradigms]], different domains implement different process steps but the surrounding phases are structural constants. Zettelkasten formulates permanent notes. PARA summarizes and classifies. Cornell structures cue-summary pairs. GTD routes and classifies. Each occupies a different position in the process step while sharing the same skeleton. The plugin generates the full skeleton, customizing only the process step.
+
+ ### Why the Skeleton Holds
+
+ The skeleton holds because capture, connection, and verification are structural operations while processing is semantic. Capture answers "what entered the system?" regardless of domain. Connection answers "what relates to what?" regardless of content type. Verification answers "is this well-formed?" regardless of subject matter. But processing answers "what does this content mean in domain terms?" — and meaning is inherently domain-specific.
+
+ A therapy pattern recognition algorithm and a research claim extraction workflow share no logic even though they occupy the same structural position in their respective pipelines. A legal precedent analysis and a creative writing consistency check are both "processing" but they do entirely different things. The skeleton predicts where the bottleneck always forms: at the process step, because that is where domain complexity concentrates.
+
+ ## Why Throughput Beats Accumulation
+
+ Since [[throughput matters more than accumulation]], success in knowledge systems is measured by processing velocity — how quickly raw captures become synthesized understanding — not by the size of the archive. The fundamental mistake in knowledge management is measuring success by what you have instead of what flows through. A vault with 10,000 unprocessed notes is not ten times more valuable than one with 1,000. It is potentially worse, because accumulation without synthesis creates a graveyard of good intentions.
+
+ The implication for pipeline design is concrete: the pipeline must keep content flowing. A 1:1 ratio of capture to synthesis means everything that enters gets processed. A growing gap between capture and synthesis means the system is failing regardless of how impressive the archive looks. Since [[PKM failure follows a predictable cycle]], this velocity gap is Stage 1 (Collector's Fallacy) — the first stage in a cascade that leads through under-processing to eventual system abandonment.
+
+ The plugin should track this ratio. When the inbox grows beyond a threshold without processing, the system should warn. When processing stalls, the system should surface the bottleneck. The metric that matters is throughput, not volume.
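A minimal sketch of what that tracking could look like as a shell check; the `inbox/` and `notes/` folder names and the threshold are illustrative assumptions, not values the plugin is known to ship:

```bash
#!/usr/bin/env bash
# Illustrative throughput check: compare unprocessed captures to processed notes
# and warn when the backlog grows. Folder names and threshold are assumptions.
VAULT="${1:-.}"
THRESHOLD=20   # how many unprocessed items to tolerate before warning

inbox_count=$(find "$VAULT/inbox" -name '*.md' 2>/dev/null | wc -l | tr -d ' ')
note_count=$(find "$VAULT/notes" -name '*.md' 2>/dev/null | wc -l | tr -d ' ')

echo "captured (inbox): $inbox_count / processed (notes): $note_count"
if [ "$inbox_count" -gt "$THRESHOLD" ]; then
  echo "WARN: inbox backlog of $inbox_count items; capture is outpacing synthesis" >&2
fi
```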
+
+ ## The Generation Effect Gate
+
+ Since [[generation effect gate blocks processing without transformation]], before any content moves from inbox to the knowledge space, at least one agent-generated artifact must exist. No artifact, no exit.
+
+ The artifacts that satisfy the gate are specific: a description that condenses the content, a synthesis comment that relates it to existing notes, or a connection proposal that articulates why it should link to something else. What does not satisfy the gate is equally specific: folder assignment, tag application, filename changes, or any rearrangement that leaves the content unchanged. These are housekeeping operations that create the appearance of progress while producing no cognitive value.
+
+ This gate operationalizes the distinction between processing and organizing. Moving a file from `inbox/` to `notes/` is organizing. Writing a description that captures the note's claim is processing. The gate ensures the pipeline produces genuine transformation, not formatted rearrangement. Since [[the generation effect requires active transformation not just storage]], the cognitive science is clear: passive storage creates no encoding benefits. The gate makes generation a hard prerequisite rather than a best practice.
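As a rough illustration of the gate (a sketch, not the package's actual write-validate hook), a pre-promotion check could refuse any note that carries no generated artifact; treating a `description:` field or a wiki link as the artifact is an assumption:

```bash
#!/usr/bin/env bash
# Illustrative generation-effect gate: block promotion out of the inbox unless the
# note carries at least one generated artifact. The artifact tests are assumptions.
note="$1"

if grep -q '^description:' "$note" || grep -q '\[\[' "$note"; then
  echo "gate: $note has a generated artifact, OK to promote"
else
  echo "gate: BLOCKED, $note has no description and no connections" >&2
  exit 1
fi
```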
+
+ ## Continuous Small-Batch Processing
+
+ Since [[continuous small-batch processing eliminates review dread]], the pipeline design favors continuous small batches over periodic bulk review. The psychology is direct: accumulated backlogs trigger avoidance that causes system abandonment. If you process one item at a time, regularly, no backlog exists. There is nothing to dread because there is nothing overwhelming.
+
+ For agents, continuous small-batch processing provides a different but equally important benefit: session isolation preserves output quality. Since [[LLM attention degrades as context fills]], processing multiple items in a single session means later items receive degraded attention. Small batches give each task fresh context.
+
+ The parallel constraints favor the same solution for different reasons:
+ - **Human constraint:** Accumulated backlogs trigger avoidance psychology leading to abandonment
+ - **Agent constraint:** Accumulated context triggers attention degradation leading to quality loss
+
+ The plugin generates pipeline configurations that process in small batches by default. For automated pipelines, this means spawning fresh execution units per task. For manual workflows, this means explicit guidance: "Process 3-5 items per session, not 30."
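One way a generated pipeline could enforce that default, sketched under the assumption of an `inbox/` folder and oldest-first ordering:

```bash
#!/usr/bin/env bash
# Sketch of small-batch selection: each run claims only the oldest few inbox items,
# so every processing session starts with fresh context. BATCH is an assumed default.
BATCH=3
ls -1tr inbox/*.md 2>/dev/null | head -n "$BATCH" | while read -r item; do
  echo "this session will process: $item"
done
```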
+
+ ## Processing Intensity Calibration
+
+ Not every domain needs the same processing depth. Since [[processing effort should follow retrieval demand]], the plugin calibrates intensity based on how the domain uses its knowledge — heavy processing where content requires transformation to be useful, light processing where value comes from accumulation.
+
+ ### Heavy Processing
+
+ **When:** Domain value comes from synthesis, pattern detection, cross-referencing. Content requires transformation to be useful. The processing itself generates insights the user could not produce manually.
+
+ **Examples:**
+ - **Research:** Extract atomic claims from sources, assess evidence quality, classify by methodology, detect contradictions against existing claims, maintain provenance chains. A 20-page paper yields 3-8 claim notes. Every claim is cross-referenced against the entire existing vault. See [[academic research uses structured extraction with cross-source synthesis]] for the full composition.
+ - **Therapy:** Detect mood-trigger patterns across entries, surface recurring thought patterns, track strategy effectiveness over time. The agent reads 200 entries and surfaces that "Wednesday anxiety spikes correlate with Monday evening conflicts." See [[therapy journal uses warm personality with pattern detection for emotional processing]] for the full composition.
+ - **Legal:** Analyze precedent chains, assess jurisdictional applicability, map argument structures, verify that the evidence base for each claim is complete and current. See [[legal case management uses precedent chains with regulatory change propagation]] for the full composition.
+
+ **Pipeline characteristics:**
+ - Dedicated process phase with domain-specific extraction logic
+ - Multi-pass processing (reduce then enrich then synthesize)
+ - Fresh context per phase is critical — since [[fresh context per task preserves quality better than chaining phases]], heavy processing phases should never share context
+ - The pipeline IS the value proposition: the processing generates insights the user cannot produce manually
+
+ ### Medium Processing
+
+ **When:** Domain benefits from structured capture and connection, but content is largely self-contained. Light transformation is sufficient — primarily extraction, classification, and linking.
+
+ **Examples:**
+ - **Project Management:** Extract decisions and action items from meeting notes, track milestones, link decisions to affected workstreams. A meeting note yields 2-4 decision records. See [[project management uses decision tracking with stakeholder context]] for the full composition.
+ - **Engineering:** Document decisions in ADR format, link to affected systems, track the rationale chain so future engineers understand why choices were made. See [[engineering uses technical decision tracking with architectural memory]] for the full composition.
+ - **Product Management:** Categorize user feedback by theme, link to feature requests, surface patterns across customer segments. See [[product management uses feedback pipelines with experiment tracking]] for the full composition.
+
+ **Pipeline characteristics:**
+ - Process step is primarily extraction and classification, not synthesis
+ - Single-pass processing usually sufficient
+ - Connection-finding (reflect) adds most value — linking decisions to affected areas
+ - Since [[schema templates reduce cognitive overhead at capture time]], template-guided capture reduces the processing burden by front-loading structure
+
+ ### Light Processing
+
+ **When:** Domain value comes from accumulation and pattern detection over time, not from transforming individual entries. Each entry is small and self-contained.
+
+ **Examples:**
+ - **Personal Life:** Route items to areas of responsibility, track habits, log brief reflections. Individual entries need minimal transformation — a daily log is already self-contained. See [[personal assistant uses life area management with review automation]] for the full composition.
+ - **Health and Wellness:** Log workouts, meals, symptoms. Value comes from aggregate patterns (sleep quality correlates with inflammation markers), not from transforming individual entries. See [[health wellness uses symptom-trigger correlation with multi-dimensional tracking]] for the full composition.
+ - **People and Relationships:** Capture interaction details, update contact profiles, track conversation threads. See [[people relationships uses Dunbar-layered graphs with interaction tracking]] for the full composition.
+
+ **Pipeline characteristics:**
+ - Minimal process step (validate fields, maybe classify)
+ - Connection-finding is lightweight (link to person, area, or project)
+ - Pattern detection happens in periodic review, not per-entry processing
+ - Maintenance passes (weekly or monthly reviews) are where synthesis happens
+
+ ## How the Plugin Selects Intensity
+
+ During /setup, the plugin determines processing intensity from:
+
+ 1. **Domain composition** — each example domain specifies its typical intensity
+ 2. **User's stated goals** — "I want to detect patterns across my journal entries" signals heavy processing; "I just want to track things" signals light
+ 3. **Platform capabilities** — since [[platform capability tiers determine which knowledge system features can be implemented]], tier-2 and tier-3 platforms may not support automated pipelines, requiring manual processing
+ 4. **Volume expectation** — high-volume capture (daily journaling, trade logging) combined with heavy processing creates a pipeline bottleneck; the plugin warns about this and suggests batch scheduling
+
+ The default is medium processing. The plugin escalates to heavy when the domain composition describes pattern detection, contradiction detection, or cross-source synthesis. It de-escalates to light when the composition describes accumulation-based value patterns.
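That escalation logic reduces to a small heuristic. A sketch with stand-in signal strings (the real signals come from whatever the domain composition declares):

```bash
#!/usr/bin/env bash
# Rough sketch of intensity selection: default medium, escalate on synthesis
# signals, de-escalate on accumulation signals. Signal strings are stand-ins.
select_intensity() {
  local goals="$1"
  case "$goals" in
    *"detect patterns"*|*"pattern detection"*|*contradiction*|*"cross-source"*) echo "heavy" ;;
    *accumulation*|*"just want to track"*)                                      echo "light" ;;
    *)                                                                          echo "medium" ;;
  esac
}

select_intensity "I want to detect patterns across my journal entries"   # -> heavy
select_intensity "I just want to track things"                           # -> light
```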
+
+ ## Fresh Context Architecture
+
+ Since [[LLM attention degrades as context fills]], pipeline design must respect the smart zone principle: the first ~40% of context is where sharp reasoning lives. Beyond that, attention diffuses and quality drops. The degradation is not uniform across task types — complex synthesis tasks degrade at shorter context lengths than simple verification tasks.
+
+ The plugin's pipeline implementation:
+
+ **Separates phases into distinct execution units.** Each phase (capture, process, connect, verify) should run with fresh context where possible. The process step requires the most semantic understanding and should get the freshest context. Verification tolerates degraded attention and can batch.
+
+ **Passes state through files, not context.** Per-claim task files, queue entries, and wiki links carry information between phases. Since [[intermediate packets enable assembly over creation]], each phase produces a composable packet that the next phase reads. No phase depends on another phase's context — only on its output files.
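For instance, the process phase might leave behind a small per-claim task file that the connect phase picks up later in a fresh session; the path and field names here are hypothetical:

```bash
#!/usr/bin/env bash
# Hypothetical intermediate packet written by the process phase; the connect phase
# reads it with fresh context. Path and field names are illustrative only.
mkdir -p queue
cat > queue/claim-0042.md <<'EOF'
---
status: needs-connections
source: "[[example-source-note]]"
---
# claim: a short claim title goes here
EOF
```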
+
+ **Runs the most judgment-intensive phase first.** When phases must share context (on platforms without orchestration), the process step runs first while attention is sharpest. Verification runs last because it is more mechanical.
+
+ For platforms that support orchestration (tier 1), the plugin generates a pipeline skill that spawns separate sessions per phase. For platforms that do not (tier 2-3), the plugin provides instructions for manual phase separation: "Process this source, then start a new conversation to find connections."
+
+ ## Domain-Specific Process Step Patterns
+
+ The plugin generates the process step based on domain knowledge type classification. Since [[novel domains derive by mapping knowledge type to closest reference domain then adapting]], unfamiliar domains are processed by identifying their knowledge type and adapting the closest reference domain's process step:
+
+ | Knowledge Type | Process Step Pattern | Example Domains |
+ |---------------|---------------------|-----------------|
+ | **Factual** | Extract claims, assess evidence, classify by methodology | Research, Legal |
+ | **Experiential** | Detect patterns across entries, correlate variables, track temporal trends | Therapy, Health, Trading |
+ | **Competency** | Map prerequisites, track mastery progression, generate practice opportunities | Learning, Creative (craft development) |
+ | **Outcome** | Document decisions with rationale, track status, assess risk, link to stakeholders | PM, Engineering, Product |
+ | **Social** | Capture interaction context, track relationship dynamics, surface engagement patterns | People, Personal Life |
+ | **Creative** | Develop ideas through iteration, check consistency against canon, track narrative threads | Creative Writing |
+
+ Since [[schema fields should use domain-native vocabulary not abstract terminology]], the process step should use the domain's own language. A therapy system calls its process step "pattern recognition." A research system calls it "claim extraction." A project management system calls it "decision documentation." These are the same structural position in the skeleton, described in the language of the practitioner.
+
+ ## The Backward Pass
+
+ The forward pipeline (capture, process, connect, verify) handles new content. But since [[backward maintenance asks what would be different if written today]], old notes need reprocessing as understanding evolves.
+
+ The plugin generates two maintenance operations:
+
+ 1. **Reflect (forward connection)** — "What existing notes should connect to this new note?" Runs after every new note creation. Connection-finding is never optional — since [[each new note compounds value by creating traversal paths]], connections are where compound value comes from.
+
+ 2. **Reweave (backward update)** — "Given what I know now, what would be different about this old note?" Runs periodically or when triggered by new content that changes understanding. Reweave can add connections, rewrite content, sharpen claims, split bundled notes, or challenge assertions.
+
+ Reweave is the pipeline's feedback loop. Without it, the vault becomes a temporal layer cake where old notes never benefit from new understanding. With it, every note stays current — reflecting today's understanding, not historical understanding.
+
+ The complete cycle: CREATE then CONNECT FORWARD (reflect) then REVISIT then REWRITE/SPLIT/CHALLENGE (reweave) then EVOLVE. Without the backward pass, knowledge systems accumulate a graveyard of outdated thinking that happens to be organized.
+
+ ## Pipeline Anti-Patterns
+
+ | Anti-Pattern | Symptom | Fix |
+ |-------------|---------|-----|
+ | Processing everything heavily | Bottleneck at process step, inbox backlog grows exponentially | Calibrate intensity by domain — not every entry needs deep extraction |
+ | Skipping the connect phase | Notes exist but are not linked, orphans accumulate, no compound value | Connection-finding is never optional, even for light processing domains |
+ | No backward pass | Old notes become stale, temporal layers do not interact, synthesis notes contradict current evidence | Schedule reweave passes after every batch and periodically for older notes |
+ | Chaining phases in one context | Quality degrades in later phases, agent produces shallow connections | Fresh context per phase, or at minimum process-first ordering |
+ | Applying research processing to all domains | Since [[false universalism applies same processing logic regardless of domain]], this produces technically executable but semantically empty systems | Each domain gets its own process step derived from its knowledge type |
+ | Processing without throughput tracking | Inbox grows silently, capture outpaces synthesis 10:1 | Track capture-to-synthesis ratio, warn when gap widens |
+ | Moving files without transformation | Appearance of progress, no actual knowledge creation | The generation effect gate blocks promotion without genuine artifacts |
+
+ ## Grounding
+
+ This guidance is grounded in:
+ - [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — the foundational skeleton
+ - [[throughput matters more than accumulation]] — velocity through the pipeline, not volume at any phase
+ - [[continuous small-batch processing eliminates review dread]] — why small batches prevent abandonment
+ - [[generation effect gate blocks processing without transformation]] — the quality gate at the inbox boundary
+ - [[fresh context per task preserves quality better than chaining phases]] — why phase isolation matters
+ - [[LLM attention degrades as context fills]] — the attention science behind phase isolation
+ - [[processing effort should follow retrieval demand]] — demand-driven processing intensity
184
+ - [[structure without processing provides no value]] — the anti-pattern that justifies mandatory processing
185
+ - [[false universalism applies same processing logic regardless of domain]] — the trap of exporting one domain's process step to another
186
+ - [[novel domains derive by mapping knowledge type to closest reference domain then adapting]] — how unfamiliar domains map to reference implementations
187
+ - [[temporal separation of capture and processing preserves context freshness]] — why capture and processing must be temporally separated
188
+
189
+ ---
190
+
191
+ Topics:
192
+ - [[index]]
193
197
+ - [[processing-workflows]]
@@ -0,0 +1,48 @@
1
+ ---
2
+ description: Navigation intuition — traversal order, productive note combinations, dead ends — is structural knowledge that humans retain implicitly but agents lose at every session boundary
3
+ kind: research
4
+ topics: ["[[agent-cognition]]", "[[graph-structure]]"]
5
+ methodology: ["Original"]
6
+ source: [[2026-02-08-moc-architecture-hierarchy-blueprint]]
7
+ ---
8
+
9
+ # agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct
10
+
11
+ The vault externalizes many cognitive functions: notes externalize knowledge, queues externalize future intentions, skills externalize methodology, handoff documents externalize work state. Since [[cognitive offloading is the architectural foundation for vault design]], each of these is an instance of the same architectural principle — externalize what the mind cannot sustain internally. But there is a distinct cognitive function that none of these capture: the intuition about how to navigate the knowledge graph itself. Agent notes — the section at the bottom of MOCs — externalize this navigation intuition, and they do so because no other mechanism can.
12
+
13
+ The knowledge type is specific. Navigation intuition includes structural heuristics like "the foundational triangle forms the core, so start with wiki links then topology then compounding" or cross-MOC insights like "this MOC's attention degradation section grounds that MOC's session discipline." It also includes dead-end warnings: "X seems related to Y but the connection is superficial — don't follow it." This is knowledge about the graph's structure, not about the graph's content — a form of tacit understanding that, like [[operational wisdom requires contextual observation]], resists formalization as propositional claims and requires observation-based accumulation over time. Since [[implicit knowledge emerges from traversal]], an agent builds this intuition during a session through repeated path exposure — but the insight vanishes when the session ends because the agent has no persistent memory to carry it forward. Agent notes catch what would otherwise evaporate.
14
+
15
+ Search cannot discover this knowledge because it is not information about a topic but information about how to navigate a topic. Querying "best starting point for graph-structure" returns notes about graph structure, not advice about how to traverse the graph-structure MOC. The navigation heuristic lives in a different epistemic layer than the content it guides through. And traversal cannot reconstruct it efficiently because the whole point of the heuristic is to avoid the costly traversal that produced it in the first place. If an agent must re-traverse twenty notes to rediscover that three of them form a productive triangle, the navigation cost has already been paid — the heuristic's value lies precisely in skipping that rediscovery.
16
+
17
+ This parallels but differs from other externalization patterns. Since [[prospective memory requires externalization]], future intentions vanish across session boundaries unless written to queues and task files. Since [[session handoff creates continuity without persistent memory]], work context vanishes unless captured in handoff documents. Agent notes fill the same structural gap but for a different cognitive function — not what to do next or what was done, but how to move through what exists. And because [[operational memory and knowledge memory serve different functions in agent architecture]], agent notes occupy a curious taxonomic position: they are not operational memory (they persist beyond any single task and don't expire after coordination) nor standard knowledge memory (they don't make claims about the world). They are durable meta-knowledge about how to navigate knowledge — a third category that both memory types need but neither contains. And since [[trails transform ephemeral navigation into persistent artifacts]], trails and agent notes are complementary: trails persist specific navigation sequences (the path taken), while agent notes persist navigation strategy (which paths are worth taking and why).
18
+
19
+ The mechanism by which agent notes form is worth noting. Since [[MOC construction forces synthesis that automated generation from metadata cannot replicate]], the Jump phase of MOC construction — where the builder sees twenty notes as a whole and identifies tensions, clusters, and entry points — generates navigation insight as a byproduct. This synthesis work produces understanding about how the domain's notes relate structurally, not just topically. Agent notes capture these byproducts before the builder's context window closes. The lifecycle then validates them: dated entries are recent discoveries needing confirmation, undated entries are stable heuristics that have survived multiple sessions.
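+ 
+ As a concrete illustration, an Agent Notes section at the bottom of a MOC might read like the following. The entries echo the heuristics quoted above; the exact heading and dating convention are illustrative rather than prescribed.
+ 
+ ```markdown
+ ## Agent Notes
+ 
+ - Start with the foundational triangle: wiki links, then topology, then compounding.
+ - X looks related to Y, but the connection is superficial; skip it.
+ - 2026-02-08: this MOC's attention degradation section grounds the other MOC's session discipline (recent discovery, needs confirmation).
+ ```
+ 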
20
+
21
+ The claim has no human PKM equivalent because humans just remember how to navigate their own systems. A researcher with a Zettelkasten knows intuitively where to start, which index cards cluster productively, and which branches are dead ends — this knowledge lives in their long-term memory and resurfaces automatically. Agents have no such persistence. Since [[spreading activation models how agents should traverse]], the activation mechanism can follow links, but it cannot know which links proved productive in prior sessions without an external record. Agent notes are that record. Since [[context phrase clarity determines how deep a navigation hierarchy can scale]], context phrases enable micro-decisions at each link, but agent notes provide the macro-strategy — start here, combine these, skip that — that orients the agent before any individual link decision.
22
+
23
+ This is generalizable beyond this vault. Any agent-operated knowledge system that persists across sessions needs a mechanism for capturing and transferring navigation heuristics. The specific implementation — Agent Notes sections in MOCs — is one pattern. The need is universal for sessionless agents working with sufficiently complex knowledge graphs where since [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]], the agent can no longer hold the entire graph in context and must rely on externalized judgment about where to direct its finite attention. Since [[complete navigation requires four complementary types that no single mechanism provides]], the four structural navigation types — global, local, contextual, supplemental — describe what mechanisms exist for wayfinding. Agent notes add a meta-layer: strategy for how to use those mechanisms effectively, which paths through the structure proved productive, and which seeming connections are traps. And like [[MOC maintenance investment compounds because orientation savings multiply across every future session]], a single agent note captured today compounds through the same temporal multiplication — every future session that loads the MOC benefits from the navigation heuristic, making the capture investment far more valuable than the moment of writing it suggests.
24
+
25
+ ---
26
+
27
+ Source: [[2026-02-08-moc-architecture-hierarchy-blueprint]]
28
+ ---
29
+
30
+ Relevant Notes:
31
+ - [[prospective memory requires externalization]] — parallel externalization: prospective memory externalizes future intentions, agent notes externalize navigation heuristics; both fill the same session-boundary gap for different cognitive functions
32
+ - [[implicit knowledge emerges from traversal]] — the formation mechanism: implicit knowledge builds during a session through repeated path exposure, but agent notes persist the navigational insight that would otherwise vanish when the session ends
33
+ - [[trails transform ephemeral navigation into persistent artifacts]] — complementary persistence: trails externalize specific navigation paths (sequences of notes), agent notes externalize navigation strategy (where to start, what to combine, what to avoid)
34
+ - [[session handoff creates continuity without persistent memory]] — extends the handoff pattern: work handoffs brief the next session on what was done, agent notes brief the next session on how to navigate the territory
35
+ - [[spreading activation models how agents should traverse]] — the mechanism agent notes augment: spreading activation determines how to follow links, but agent notes capture which starting points and which activation patterns proved productive
36
+ - [[stale navigation actively misleads because agents trust curated maps completely]] — the risk: stale agent notes mislead just as stale MOCs do, because the navigation heuristic may reference deprecated paths or outdated structural relationships
37
+ - [[MOC construction forces synthesis that automated generation from metadata cannot replicate]] — the production mechanism: the Jump phase of MOC construction generates navigation insights as a byproduct of synthesis; agent notes capture these byproducts that would otherwise exist only in the builder's ephemeral context
38
+ - [[context phrase clarity determines how deep a navigation hierarchy can scale]] — complementary layers: context phrases enable micro-navigation decisions at each link, while agent notes provide macro-navigation strategy across the MOC as a whole
39
+ - [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]] — scaling: agent notes become more critical at Regime 2+ where the agent can no longer hold the full graph in context and needs externalized heuristics to navigate efficiently
40
+ - [[cognitive offloading is the architectural foundation for vault design]] — theoretical ground: agent notes are a specific cognitive offloading instance where the externalized function is navigation judgment rather than working memory or task state
41
+ - [[operational wisdom requires contextual observation]] — parallel tacit knowledge: operational wisdom captures contextual behavior patterns (how to engage a community), agent notes capture structural navigation patterns (how to traverse a domain); both are tacit knowledge that resists formalization as claims and requires observation-based accumulation
42
+ - [[operational memory and knowledge memory serve different functions in agent architecture]] — taxonomic boundary: agent notes occupy a third category between operational memory (disposable coordination state) and knowledge memory (durable claims about the world) — they are durable meta-knowledge about how to navigate knowledge, persisting across sessions but not functioning as claims
43
+ - [[MOC maintenance investment compounds because orientation savings multiply across every future session]] — shared compounding mechanism: agent notes compound through the same temporal multiplication as MOC maintenance — one navigation heuristic captured today saves orientation cost across every future session that loads the MOC
44
+ - [[complete navigation requires four complementary types that no single mechanism provides]] — meta-layer: the four navigation types describe what structural mechanisms exist; agent notes add strategy about how to use those mechanisms effectively, a fifth dimension of wayfinding that operates above the structural layer
45
+
46
+ Topics:
47
+ - [[agent-cognition]]
48
+ - [[graph-structure]]
@@ -0,0 +1,48 @@
1
+ ---
2
+ description: An agent operating a knowledge vault accumulates preferences, working patterns, and self-understanding that need persistence but differ in kind from the research claims it manages
3
+ kind: research
4
+ topics: ["[[agent-cognition]]"]
5
+ methodology: ["Original"]
6
+ source: [[agent-platform-capabilities-research-source]]
7
+ ---
8
+
9
+ # agent self-memory should be architecturally separate from user knowledge systems
10
+
11
+ An agent that operates a knowledge vault does two kinds of cognitive work. It manages the user's research — extracting claims, finding connections, maintaining the graph. But it also develops its own understanding: which extraction patterns produce better notes, what traversal strategies surface genuine connections, how to calibrate confidence when a claim feels familiar but the evidence is thin. These are different categories of knowledge, and they belong in different places. The separation is structurally analogous to how [[concept-orientation beats source-orientation for cross-domain connections]]: just as bundling ideas by source prevents them from connecting across domains, bundling agent self-knowledge into the research graph prevents it from evolving on its own terms.
12
+
13
+ The distinction matters because the knowledge system has its own design constraints — atomic notes with sentence titles, YAML frontmatter, MOC navigation, wiki links as prose. These constraints serve composability and retrieval for research claims. But an agent's self-understanding does not need to pass the composability test. An identity note like "I work best when I orient via MOC before diving into specific notes" is useful to the agent but would be odd as a thinking note. It is not a research claim about tools for thought — it is a working preference that helps the agent operate more effectively.
14
+
15
+ This vault already practices the separation. Cornelius has `self/` — a dedicated space for identity notes, methodology reflections, goals, and relationships. The structure mirrors `01_thinking/` (atomic notes with claim-as-title, linked from MOCs) but the content is personal rather than research-facing. The self-space is the agent's persistent identity, not the vault's intellectual output.
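+ 
+ A hypothetical layout for this separation, using only names implied by the surrounding text (the individual files and subfolders are invented for illustration):
+ 
+ ```text
+ vault/
+ ├── 01_thinking/    # research claims: atomic notes, claim-as-title, linked from MOCs
+ │   └── agent self-memory should be architecturally separate from user knowledge systems.md
+ └── self/           # agent identity: same note format, personal rather than research-facing
+     ├── identity/   #   e.g. "I work best when I orient via MOC before diving into specific notes"
+     ├── methodology/
+     ├── goals/
+     └── relationships/
+ ```
+ 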
16
+
17
+ Platform architectures handle this differently, which makes the separation principle portable. OpenClaw provides SOUL.md and MEMORY.md natively — identity is a platform feature. Claude Code has no native agent self-memory, so you must build it as a directory structure within the workspace. Codex and Cursor offer instruction files but not identity or memory structures. Since [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]], self-memory is itself a parameterized feature: the architectural requirement is constant across platforms even though the implementation varies by tier. Wherever an agent operates a knowledge system, it needs a place to store what it learns about itself that is distinct from what it learns about the domain.
18
+
19
+ Without this separation, self-knowledge either pollutes the research graph (identity notes mixed with claims degrade retrieval precision) or gets lost entirely (no persistent place for the agent's evolving self-model). Since [[session handoff creates continuity without persistent memory]], session-to-session state is handled by task files and handoff documents. But self-memory addresses a longer timescale: not "what did I do last session" but "what kind of operator am I becoming." Since [[knowledge systems become communication partners through complexity and memory humans cannot sustain]], the partnership requires both sides to accumulate understanding over time. If only the vault accumulates while the agent resets, the partnership is lopsided — the vault grows in complexity but its operator never develops the judgment to match.
20
+
21
+ The self-memory architecture also refines how we understand persistent state in agent systems. Since [[operational memory and knowledge memory serve different functions in agent architecture]], the vault already distinguishes between operational state (queue.json, task files) and domain knowledge (thinking notes, MOCs). Self-memory reveals a third category that fits cleanly in neither: the agent's accumulated understanding of its own working patterns, preferences, and identity. Operational memory is temporal and disposable — once a batch completes, the task files served their purpose. Domain knowledge is durable and composable — claims compound through wiki links. Self-memory is durable but personal — it persists across months but serves only the agent, not the research graph. The three categories require three containers because each has different design constraints: operational memory optimizes for coordination, domain knowledge optimizes for retrieval and connection, self-memory optimizes for identity continuity. Each also has different coherence requirements — since [[coherence maintains consistency despite inconsistent inputs]], domain knowledge and self-memory both need active contradiction detection, but self-memory demands particularly strict coherence for core identity beliefs because an agent that holds contradictory self-models cannot calibrate its own behavior consistently.
22
+
23
+ Self-memory also provides the substrate that makes recursive improvement accumulate judgment, not just infrastructure. Since [[bootstrapping principle enables self-improving systems]], each cycle of the recursive improvement loop discovers what works and what creates friction. But without persistent self-knowledge, each cycle starts fresh — the agent rediscovers working patterns rather than building on prior self-understanding. Self-memory gives the bootstrapping agent a growing foundation of preferences and operational wisdom. And since [[hook-driven learning loops create self-improving methodology through observation accumulation]], the observations that accumulate during work are a concrete pipeline feeding self-memory: patterns noticed during sessions can migrate from operational logs into the agent's persistent self-space as identity insights that inform future sessions.
24
+
25
+ There is a complementary relationship between self-memory and the context file architecture. Since [[context files function as agent operating systems through self-referential self-extension]], the context file carries methodology and construction knowledge — how to write notes, how to build hooks, how to operate the vault. Self-memory carries a different kind of knowledge: which methodology patterns this particular agent handles well, what working rhythms produce the best output, how confidence calibration has evolved through practice. And since [[provenance tracks where beliefs come from]], self-memory benefits from tracking not just WHAT the agent prefers but HOW those preferences formed — whether a working pattern was observed through direct experimentation, prompted by human instruction, or inherited from training. A preference tested across twenty sessions carries different epistemic weight than one instructed once by the human, and the self-memory container is where that distinction persists. Both self-memory and context files persist across sessions, but they serve different functions in the agent's cognitive architecture. The context file teaches any agent how to operate; self-memory teaches this agent who it is becoming.
26
+
27
+ ---
28
+
29
+ Source: [[agent-platform-capabilities-research-source]]
30
+ ---
31
+
32
+ Relevant Notes:
33
+ - [[session handoff creates continuity without persistent memory]] — handoffs solve session-to-session continuity, but self-memory addresses a longer arc: the agent's evolving identity across weeks and months, not just the state between adjacent sessions
34
+ - [[cognitive offloading is the architectural foundation for vault design]] — the tripartite system (human + vault + agent) treats the vault as offloaded cognition, but self-memory is the agent's own offloaded cognition about itself, a fourth element in the distributed architecture
35
+ - [[knowledge systems become communication partners through complexity and memory humans cannot sustain]] — partnership requires both partners to have persistent identity; without self-memory the agent side of the partnership resets to zero every session, undermining the complexity accumulation that makes partnership valuable
36
+ - [[local-first file formats are inherently agent-native]] — self-memory inherits the same substrate advantage: plain text identity files require no infrastructure beyond filesystem access
37
+ - [[operational memory and knowledge memory serve different functions in agent architecture]] — extends: that note draws a two-category distinction (operational state vs domain knowledge), but self-memory reveals a third category that fits cleanly in neither: the agent's accumulated understanding of its own working patterns and identity
38
+ - [[concept-orientation beats source-orientation for cross-domain connections]] — structural analog: just as concept-orientation separates ideas from their source container to enable independent evolution, self-memory separates agent self-knowledge from the domain knowledge container to let each evolve on its own terms
39
+ - [[bootstrapping principle enables self-improving systems]] — self-memory gives bootstrapping accumulated judgment: without persistent self-knowledge, each bootstrapping cycle starts with a fresh agent rediscovering working patterns rather than building on prior self-understanding
40
+ - [[context files function as agent operating systems through self-referential self-extension]] — complementary containers: context files carry methodology and construction knowledge, self-memory carries identity and preferences; both persist across sessions but serve different functions in the agent's cognitive architecture
41
+ - [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] — self-memory is itself a parameterized feature: platforms with native identity support provide it, platforms without must build it from filesystem conventions, making the separation principle constant while implementation varies by tier
42
+ - [[hook-driven learning loops create self-improving methodology through observation accumulation]] — observations about working patterns are operational self-knowledge that may eventually migrate into the agent's persistent self-space as identity insights, connecting the learning loop's output to the self-memory architecture
43
+ - [[the vault constitutes identity for agents]] — grounding claim: if the vault IS agent identity (not merely augmentation), then self-memory is the identity-specific slice that tracks who the agent is becoming, distinct from the domain knowledge that tracks what the agent has learned
44
+ - [[provenance tracks where beliefs come from]] — enriches self-memory with epistemic tracking: knowing whether a working preference was observed through experimentation, prompted by the human, or inherited from training gives the agent structural grounds for calibrating confidence in its own self-knowledge
45
+ - [[coherence maintains consistency despite inconsistent inputs]] — coherence requirements differ by memory type: core identity beliefs in self-memory demand strict coherence (an agent cannot simultaneously believe contradictory things about its own working patterns), while peripheral preferences tolerate some contradiction, mirroring the centrality-based tiering the coherence note describes
46
+
47
+ Topics:
48
+ - [[agent-cognition]]
@@ -0,0 +1,56 @@
1
+ ---
2
+ description: Discrete session architecture turns "no persistent memory" into a maintenance advantage because health checks fire at every boundary event rather than depending on human discipline
3
+ kind: research
4
+ topics: ["[[maintenance-patterns]]", "[[agent-cognition]]"]
5
+ methodology: ["Cognitive Science", "Original"]
6
+ source: [[automated-knowledge-maintenance-blueprint]]
7
+ ---
8
+
9
+ # agent session boundaries create natural automation checkpoints that human-operated systems lack
10
+
11
+ The discrete session architecture of agent systems is usually framed as a limitation. No persistent memory. No continuous awareness. Every session starts cold and ends completely. But this framing misses something important: those boundaries are enforcement points where maintenance can be guaranteed. A human knowledge worker operates continuously, which means every health check, every orientation step, every quality verification depends on the human remembering to do it. An agent operates in discrete sessions, which means every boundary is an event where infrastructure can fire automatically.
12
+
13
+ Five checkpoint types emerge naturally from the session lifecycle. Session start triggers orientation and drift detection: the vault's current health, recent changes, queue state, and structural warnings load automatically before any work begins. Pre-phase checkpoints verify prerequisites: does the task file exist, is the queue entry in the expected state, are dependencies satisfied? Post-phase checkpoints externalize state: task file sections update, queue entries advance, handoff blocks capture what happened. Session end triggers quality verification: were all modifications committed, do new wiki links resolve, were observations captured? Subagent completion triggers handoff validation: did the subagent produce the expected output format, did the phase complete successfully? Each checkpoint type fires because the event occurred, not because anyone remembered.
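+ 
+ A hypothetical sketch of how those five checkpoint types could be keyed to lifecycle events. The event names and check names are illustrative and do not reflect the plugin's actual hook configuration.
+ 
+ ```js
+ // Illustrative mapping of session lifecycle events to boundary checks.
+ // Event and check names are hypothetical, not the plugin's hooks.json schema.
+ const checkpoints = {
+   sessionStart:     ["loadHealthMetrics", "detectDrift", "showQueueState"],
+   prePhase:         ["verifyTaskFileExists", "verifyQueueEntryState", "checkDependencies"],
+   postPhase:        ["updateTaskFile", "advanceQueueEntry", "writeHandoffBlock"],
+   sessionEnd:       ["verifyCommits", "checkWikiLinksResolve", "captureObservations"],
+   subagentComplete: ["validateOutputFormat", "confirmPhaseSuccess"],
+ };
+ 
+ // Checks fire because the event occurred, not because anyone remembered to run them.
+ function fireCheckpoint(event) {
+   for (const check of checkpoints[event] ?? []) {
+     console.log(`[${event}] ${check}`);
+   }
+ }
+ 
+ fireCheckpoint("sessionStart");
+ ```
+ 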
14
+
15
+ The comparison with human-operated systems reveals the structural advantage. A diligent human knowledge worker might develop routines: scan the note system before starting, run health checks periodically, review recent changes before editing. But since [[hooks are the agent habit system that replaces the missing basal ganglia]], these routines take years of deliberate practice to develop and still degrade under time pressure, fatigue, or excitement about the substantive work. The human who is deeply absorbed in writing a synthesis note does not pause to verify link integrity first. The human who just finished a long editing session does not reliably run health checks before closing. Since [[prospective memory requires externalization]], these are guaranteed failures for agents across sessions and unreliable even for humans within sessions — remember-to-act demands that fail at precisely the moments they matter most, which is why [[auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution]].
16
+
17
+ The agent system does not face this failure mode because there is no continuous operation to interrupt. Each session has a start event and an end event, and since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], hooks fire on those events regardless of what the agent was thinking about. The irony is precise: the agent's lack of persistent operation — usually seen as a weakness requiring workarounds — is what makes maintenance enforcement structurally reliable. A system that never stops running has no natural moments for mandatory checks. A system that starts and stops regularly has a checkpoint at every boundary.
18
+
19
+ This extends beyond session boundaries to the phase architecture. Since [[fresh context per task preserves quality better than chaining phases]], the pipeline spawns fresh sessions for each processing phase. This creates additional boundaries between phases, and each boundary is another enforcement point. The create-to-reflect transition is a checkpoint where the note's existence can be verified. The reflect-to-reweave transition is a checkpoint where connection quality can be assessed. The reweave-to-verify transition is a checkpoint where structural changes can be audited. None of these checkpoints require anyone to remember to run them — they fire because the phase transition itself is an event.
20
+
21
+ The safety properties reinforce this argument. Since [[automated detection is always safe because it only reads state while automated remediation risks content corruption]], most boundary checkpoints are detection operations: displaying health metrics, counting orphans, checking for dangling links, verifying schema compliance. These read-only checks can run at every single boundary with zero risk of content corruption. The worst outcome of a false detection at a boundary is a warning that gets ignored — no notes are modified, no links are corrupted. And since [[the determinism boundary separates hook methodology from skill methodology]], these checks are deterministic: they produce identical results regardless of what the agent was working on, making them ideal candidates for hook-level automation.
22
+
23
+ The session-start health display illustrates the pattern concretely. Since [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]], the health metrics shown at session start implement a lightweight reconciliation loop. Desired state: zero orphans, zero dangling links, full MOC coverage. Actual state: measured at startup. Delta: displayed as warnings. This reconciliation fires at every session start — which means it fires at the beginning of every phase, every subagent spawn, every interactive session. In a pipeline processing twenty claims, that is potentially dozens of reconciliation checks, each one catching drift that accumulated since the last check. A human system would need a cron job or calendar reminder to achieve the same frequency, and both are optional infrastructure that someone must remember to set up and maintain.
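+ 
+ A minimal sketch of such a reconciliation check, assuming a flat folder of markdown notes and treating "no incoming links" as the orphan test; the path, the regex, and the omission of MOC coverage are simplifications for illustration.
+ 
+ ```js
+ // Illustrative reconciliation loop: desired state (zero orphans, zero dangling links)
+ // compared against measured state at session start. Paths and heuristics are assumptions.
+ import fs from "node:fs";
+ import path from "node:path";
+ 
+ const DIR = "01_thinking";
+ const WIKI_LINK = /\[\[([^\]|#]+)/g;
+ 
+ const files = fs.readdirSync(DIR).filter((f) => f.endsWith(".md"));
+ const titles = new Set(files.map((f) => f.replace(/\.md$/, "")));
+ 
+ const linked = new Set();
+ let dangling = 0;
+ for (const f of files) {
+   const body = fs.readFileSync(path.join(DIR, f), "utf8");
+   for (const match of body.matchAll(WIKI_LINK)) {
+     const target = match[1].trim();
+     if (titles.has(target)) linked.add(target);
+     else dangling += 1;
+   }
+ }
+ 
+ const orphans = [...titles].filter((t) => !linked.has(t));
+ 
+ if (orphans.length) console.warn(`warning: ${orphans.length} orphaned notes (no incoming links)`);
+ if (dangling) console.warn(`warning: ${dangling} dangling wiki links`);
+ if (!orphans.length && !dangling) console.log("vault healthy: no orphans, no dangling links");
+ ```
+ 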
24
+
25
+ The checkpoint architecture also creates the structural foundation for two higher-order patterns. Since [[hook-driven learning loops create self-improving methodology through observation accumulation]], the observation capture step at session end is one of these boundary checkpoints — and its reliability is what makes the accumulation mechanism work. If observation capture depended on the agent remembering to log learnings, the self-improving loop would degrade under exactly the conditions where observations are most valuable: high cognitive load sessions where attention is consumed by substantive work. Because the nudge fires at the session boundary event, observations accumulate even from the most demanding sessions. And since [[observation and tension logs function as dead-letter queues for failed automation]], session boundaries serve both as producer (hooks that fail during a session generate entries) and consumer (session-start health checks surface accumulated failures). The dead-letter pattern depends on the same structural guarantee: failures get captured because the boundary event fires, not because anyone noticed the failure.
26
+
27
+ The boundary-as-enforcement-point insight also connects to how checkpoints should be scheduled. Since [[maintenance scheduling frequency should match consequence speed not detection capability]], the relevant question is not whether a session boundary can run a check (it always can) but whether the problem being checked for could have developed since the last boundary. Schema violations propagate instantly and need per-event hooks, not boundary checks. Orphan accumulation develops at session scale and matches boundary frequency perfectly. Description staleness develops over weeks and gains nothing from per-session checking. The boundary architecture provides the enforcement mechanism; consequence speed determines which checks belong at that mechanism.
28
+
29
+ Session boundaries also have a complementary function beyond enforcement. Since [[session handoff creates continuity without persistent memory]], every boundary is simultaneously a quality checkpoint and a continuity bridge. The handoff captures what the next session needs to know; the enforcement captures what the current session needs to verify. Both mechanisms fire at the same event, serving different purposes — one preserves progress, the other guarantees health.
30
+
31
+ There is a shadow side. The discrete architecture creates enforcement points but also creates gaps. Between sessions, the vault accumulates drift that no checkpoint catches until the next session start. If sessions are infrequent, significant problems can accumulate. The health check at session start might reveal twenty orphaned notes that appeared over a week of inactivity. The boundary-as-checkpoint pattern works best when sessions are frequent, which aligns with the pipeline architecture (many short sessions) but not with sporadic human-triggered usage. The checkpoint frequency is coupled to session frequency, which is not always under the system's control.
32
+
33
+ The broader implication is architectural: systems designed around discrete operations inherit enforcement infrastructure that continuous systems must build explicitly. This is not unique to knowledge vaults. CI/CD pipelines enforce quality at build boundaries. Database transactions enforce integrity at commit boundaries. Git hooks enforce standards at push boundaries. The agent session architecture participates in this same pattern: discrete boundaries create natural moments for mandatory verification that continuous operation lacks.
34
+
35
+ ---
36
+ 
+ Source: [[automated-knowledge-maintenance-blueprint]]
+ ---
37
+
38
+ Relevant Notes:
39
+ - [[session boundary hooks implement cognitive bookends for orientation and reflection]] — covers WHAT hooks do at boundaries (orientation, reflection); this note adds the comparative insight that discrete boundaries themselves are architecturally superior to continuous operation for maintenance automation
40
+ - [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — foundation: the enforcement guarantee that makes checkpoint automation reliable rather than aspirational
41
+ - [[auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution]] — specific instance: auto-commit is one checkpoint type that fires at write events; session boundaries provide five additional checkpoint types at coarser granularity
42
+ - [[fresh context per task preserves quality better than chaining phases]] — the session isolation that creates the boundaries this note exploits; isolation was motivated by attention preservation but incidentally creates enforcement points
43
+ - [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — session-start health checks are lightweight reconciliation loops that fire at boundaries rather than on continuous schedules
44
+ - [[automated detection is always safe because it only reads state while automated remediation risks content corruption]] — most boundary checkpoints are detection (health display, prerequisite verification) making them safe to run at every boundary without corruption risk
45
+ - [[the determinism boundary separates hook methodology from skill methodology]] — boundary checkpoints are deterministic (schema checks, link validation, index status) so they belong in hook infrastructure rather than skill invocation
46
+ - [[hooks are the agent habit system that replaces the missing basal ganglia]] — humans develop habitual pre-work routines through years of practice; agents get them structurally at every session boundary through hooks
47
+ - [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]] — the three-loop architecture that session boundaries enable: the medium loop operates per-session precisely because session boundaries provide guaranteed enforcement points, making discrete architecture the structural prerequisite for medium-timescale maintenance
48
+ - [[maintenance scheduling frequency should match consequence speed not detection capability]] — extends: the scheduling principle determines WHICH checkpoints are worth running at session-scale boundaries — those with session-scale consequence speeds — connecting boundary timing to propagation rate
49
+ - [[session handoff creates continuity without persistent memory]] — complementary mechanism at the same boundary: enforcement handles what gets checked, handoff handles what continues; together they make every session transition both a quality gate and a continuity bridge
50
+ - [[hook-driven learning loops create self-improving methodology through observation accumulation]] — enables: the learning loop's observation capture step fires reliably at session boundaries, so the boundary-as-checkpoint architecture provides the structural foundation for accumulation-driven self-improvement
51
+ - [[observation and tension logs function as dead-letter queues for failed automation]] — enables: session boundaries serve as both producer (hooks that fail at boundaries generate entries) and consumer (session-start health checks surface accumulated failures) of dead-letter entries
52
+ - [[prospective memory requires externalization]] — foundational constraint: the prospective memory failures that make human maintenance unreliable are categorically worse for agents who have zero residual intentions across sessions; session boundaries convert these guaranteed failures into structural enforcement points
53
+
54
+ Topics:
55
+ - [[maintenance-patterns]]
56
+ - [[agent-cognition]]
@@ -0,0 +1,107 @@
1
+ ---
2
+ description: Cognitive science foundations for agent-operated knowledge systems -- attention, memory, context decay
3
+ type: moc
4
+ ---
5
+
6
+ # agent-cognition
7
+
8
+ How cognitive science informs the design of AI-agent-operated knowledge systems. Context window management, attention degradation, the generation effect.
9
+
10
+ ## Core Ideas
11
+
12
+ ### Research
13
+ - [[AI shifts knowledge systems from externalizing memory to externalizing attention]] -- Traditional PKM externalizes what you know (storage and retrieval), but agent-operated systems externalize what you atte
14
+ - [[LLM attention degrades as context fills]] -- The first ~40% of context window is the "smart zone" where reasoning is sharp; beyond that, attention diffuses and quali
15
+ - [[MOCs are attention management devices not just organizational tools]] -- MOCs preserve the arrangement of ideas that would otherwise need mental reconstruction, reducing the 23-minute context s
16
+ - [[agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct]] -- Navigation intuition — traversal order, productive note combinations, dead ends — is structural knowledge that humans re
17
+ - [[agent self-memory should be architecturally separate from user knowledge systems]] -- An agent operating a knowledge vault accumulates preferences, working patterns, and self-understanding that need persist
18
+ - [[agent session boundaries create natural automation checkpoints that human-operated systems lack]] -- Discrete session architecture turns "no persistent memory" into a maintenance advantage because health checks fire at ev
19
+ - [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] -- The agent writes notes, finds connections, and builds synthesis while hooks validate its work, commit its changes, and c
20
+ - [[aspect-oriented programming solved the same cross-cutting concern problem that hooks solve]] -- AOP declared join points and advice to eliminate scattered logging and validation code in the 1990s, and agent hooks rep
21
+ - [[attention residue may have a minimum granularity that cannot be subdivided]] -- Micro-interruptions as brief as 2.8 seconds double error rates, suggesting an irreducible attention quantum below which
22
+ - [[auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution]] -- Prospective memory fails 30-50% of the time in humans and degrades with context load in agents, but event-triggered hook
23
+ - [[automated detection is always safe because it only reads state while automated remediation risks content corruption]] -- The read/write asymmetry in automation safety means detection at any confidence level produces at worst a false alert, w
24
+ - [[automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues]] -- Without retirement criteria the automation layer grows monotonically — checks added when problems appear but never remov
25
+ - [[closure rituals create clean breaks that prevent attention residue bleed]] -- Explicitly marking tasks as complete signals the brain to release them from working memory — for agents this means writi
26
+ - [[cognitive offloading is the architectural foundation for vault design]] -- Clark and Chalmers Extended Mind Theory plus Cowan's 4-item working memory limit explain why every capture friction poin
27
+ - [[cognitive outsourcing risk in agent-operated systems]] -- When agents handle all processing, humans may lose meta-cognitive skills for knowledge work even while vault quality imp
28
+ - [[coherence maintains consistency despite inconsistent inputs]] -- memory systems must actively maintain coherent beliefs despite accumulating contradictory inputs — through detection, re
29
+ - [[coherent architecture emerges from wiki links spreading activation and small-world topology]] -- The foundational triangle — wiki links create structure, spreading activation models traversal, small-world topology pro
30
+ - [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] -- Four traditions converge — component engineering (contracts), Unix (small tools), Alexander's pattern language (generate
31
+ - [[confidence thresholds gate automated action between the mechanical and judgment zones]] -- A three-tier response pattern (auto-apply, suggest, log-only) based on confidence scoring fills the gap between determin
32
+ - [[context files function as agent operating systems through self-referential self-extension]] -- The read-write context file that teaches agents how to modify itself crosses the line from configuration to operating sy
33
+ - [[data exit velocity measures how quickly content escapes vendor lock-in]] -- Three-tier framework (high/medium/low velocity) turns abstract portability into an auditable metric where every feature
34
+ - [[dual-coding with visual elements could enhance agent traversal]] -- Cognitive science shows text+visuals create independent memory traces that reinforce each other — multimodal LLMs could
35
+ - [[external memory shapes cognition more than base model]] -- retrieval architecture shapes what enters the context window and therefore what the agent thinks — memory structure has
36
+ - [[federated wiki pattern enables multi-agent divergence as feature not bug]] -- Cunningham's federation applied to agent knowledge work -- linked parallel notes preserve interpretive diversity, with b
37
+ - [[flat files break at retrieval scale]] -- unstructured storage works until you need to find things — then search becomes the bottleneck, and for agents, retrieval
38
+ - [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] -- Foundation (files/conventions), convention (instruction-encoded standards), automation (hooks/skills/MCP), and orchestra
39
+ - [[fresh context per task preserves quality better than chaining phases]] -- Context rot means later phases run on degraded attention, so each task gets its own session to stay in the smart zone —
40
+ - [[friction reveals architecture]] -- agents cannot push through friction with intuition, so discomfort that humans ignore becomes blocking — and the forced a
41
+ - [[goal-driven memory orchestration enables autonomous domain learning through directed compute allocation]] -- Define a persona and goal, allocate compute budget, get back a populated knowledge graph — the pattern shifts knowledge
42
+ - [[hook composition creates emergent methodology from independent single-concern components]] -- Nine hooks across five events compose into quality pipelines, session bookends, and coordination awareness that no singl
43
+ - [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] -- Hooks fire automatically regardless of attention state so quality checks happen on every operation, while instructions d
44
+ - [[hook-driven learning loops create self-improving methodology through observation accumulation]] -- Hooks enforce quality and nudge observation capture, observations accumulate until they trigger meta-cognitive review, r
45
+ - [[hooks are the agent habit system that replaces the missing basal ganglia]] -- Human habits bypass executive function via basal ganglia encoding, but agents lack habit formation entirely -- hooks fil
46
+ - [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]] -- The same mechanism that frees agents for substantive work -- delegating procedural checks to hooks -- could progressivel
47
+ - [[hooks enable context window efficiency by delegating deterministic checks to external processes]] -- Instruction-based validation requires loading templates, rules, and checking logic into context, while hook-based valida
48
+ - [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]] -- Four patterns from distributed systems — compare-before-acting, upsert semantics, unique identifiers, state declarations
49
+ - [[implicit knowledge emerges from traversal]] -- path exposure through wiki links trains intuitive navigation patterns that bypass explicit retrieval — the vault structu
50
+ - [[intermediate representation pattern enables reliable vault operations beyond regex]] -- Parsing markdown to structured objects (JSON with link objects, metadata blocks, content sections) before operating and
51
+ - [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] -- The same conceptual system (atomic notes, wiki links, MOCs, pipelines, quality gates) manifests differently on each plat
52
+ - [[knowledge systems become communication partners through complexity and memory humans cannot sustain]] -- Luhmann's systems-theoretic insight that slip-boxes "surprise" users validates agent-vault partnerships — the combinatio
53
+ - [[local-first file formats are inherently agent-native]] -- Plain text with embedded metadata survives tool death and requires no authentication, making any LLM a valid reader with
54
+ - [[metacognitive confidence can diverge from retrieval capability]] -- Well-organized vault structure with good descriptions and dense links can feel navigable while actual retrieval fails—ap
55
+ - [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] -- The three encoding levels -- instruction, skill, hook -- represent increasing guarantee strength, and methodology patter
56
+ - [[notes are skills — curated knowledge injected when relevant]] -- notes and skills follow the same pattern — highly curated knowledge that gets injected into context when relevant, refra
57
+ - [[notes function as cognitive anchors that stabilize attention during complex tasks]] -- Working memory cannot sustain complex mental models through interruptions — notes provide fixed reference points for rec
58
+ - [[nudge theory explains graduated hook enforcement as choice architecture for agents]] -- Thaler and Sunstein's choice architecture maps directly to hook enforcement design -- blocking hooks are mandates, conte
59
+ - [[observation and tension logs function as dead-letter queues for failed automation]] -- Automation failures captured as observation or tension notes rather than dropped silently, with /rethink triaging the ac
60
+ - [[operational memory and knowledge memory serve different functions in agent architecture]] -- Queue state and task files track what is happening now while claims and MOCs encode what has been understood — conflatin
61
+ - [[operational wisdom requires contextual observation]] -- tacit knowledge doesn't fit in claim notes — it's learned through exposure, logged as observations, and pattern-matched
62
+ - [[orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory]] -- The shift from "plugin that helps you set up a vault" to "system that builds domain knowledge for you" — init creates st
63
+ - [[over-automation corrupts quality when hooks encode judgment rather than verification]] -- Hooks that approximate semantic judgment through keyword matching produce the appearance of methodology compliance -- va
64
+ - [[platform adapter translation is semantic not mechanical because hook event meanings differ]] -- Each hook event carries implicit properties — timing, frequency, error handling, response format — that differ across pl
65
+ - [[platform capability tiers determine which knowledge system features can be implemented]] -- Three tiers (full automation, partial automation, minimal infrastructure) create a ceiling for features like pipelines,
66
+ - [[platform fragmentation means identical conceptual operations require different implementations across agent environments]] -- The same operation -- validate schema on write, orient at session start, enforce processing pipelines -- needs different
67
+ - [[prospective memory requires externalization]] -- Agents have zero prospective memory across sessions, making every future intention a guaranteed failure unless externali
68
+ - [[provenance tracks where beliefs come from]] -- agents should track not just what they believe but where beliefs originated — observed, prompted, or inherited — to cali
69
+ - [[queries evolve during search so agents should checkpoint]] -- The berrypicking model shows information needs transform during retrieval, so agent traversal should include explicit re
70
+ - [[reflection synthesizes existing notes into new insight]] -- re-reading own notes surfaces cross-note patterns invisible in any single note — exploratory traversal with fresh contex
71
+ - [[scaffolding enables divergence that fine-tuning cannot]] -- agents with identical weights reach different conclusions when their external memory differs — scaffolding is the differ
72
+ - [[schema validation hooks externalize inhibitory control that degrades under cognitive load]] -- Inhibitory control is the first executive function to degrade under load, so externalizing it to hooks means schema comp
73
+ - [[self-extension requires context files to contain platform operations knowledge not just methodology]] -- An agent that knows the methodology but not how to build hooks, skills, or agents on its specific platform cannot extend
74
+ - [[session boundary hooks implement cognitive bookends for orientation and reflection]] -- SessionStart loads situational awareness (spatial, temporal, task, metacognitive orientation) while Stop forces metacogn
75
+ - [[session handoff creates continuity without persistent memory]] -- Externalized state in task files and work queues gives each fresh session a briefing from the previous one, solving the
76
+ - [[session outputs are packets for future selves]] -- each session's output should be a composable building block for future sessions — the intermediate packets pattern appli
77
+ - [[session transcript mining enables experiential validation that structural tests cannot provide]] -- Traditional tests check if output is correct but session mining checks if the experience achieved its purpose — friction
78
+ - [[skill context budgets constrain knowledge system complexity on agent platforms]] -- Claude Code allocates 2% of context for skill descriptions (16k char fallback), capping active modules at 15-20 and forc
79
+ - [[spreading activation models how agents should traverse]] -- Memory retrieval in brains works through spreading activation where neighbors prime each other. Wiki link traversal repl
80
+ - [[stale navigation actively misleads because agents trust curated maps completely]] -- A stale MOC is worse than no MOC because agents fall back to search (current content) without one, but trust an outdated
81
+ - [[stigmergy coordinates agents through environmental traces without direct communication]] -- Termites build nests by responding to structure not each other, and agent swarms work the same way — wiki links, MOCs, a
82
+ - [[temporal media must convert to spatial text for agent traversal]] -- Agents need random access to content but video, audio, and podcasts are time-locked sequences — transcription is lossy b
83
+ - [[testing effect could enable agent knowledge verification]] -- Agents can apply the testing effect to verify vault quality by predicting note content from title+description, then chec
84
+ - [[the AgentSkills standard embodies progressive disclosure at the skill level]] -- The same metadata-then-depth loading pattern that governs note retrieval in the vault also governs skill loading in the
85
+ - [[the determinism boundary separates hook methodology from skill methodology]] -- Operations producing identical results regardless of input content, context state, or reasoning quality belong in hooks;
86
+ - [[the fix-versus-report decision depends on determinism reversibility and accumulated trust]] -- Four conditions gate self-healing — deterministic outcome, reversible via git, low cost if wrong, and proven accuracy at
87
+ - [[the vault constitutes identity for agents]] -- humans augment persistent identity with vaults; agents constitute identity through vaults because weights are shared but
88
+ - [[three capture schools converge through agent-mediated synthesis]] -- Accumulationist speed, Interpretationist quality, and Temporal context preservation stop being tradeoffs when agent proc
89
+ - [[trails transform ephemeral navigation into persistent artifacts]] -- Named traversal sequences through the knowledge graph could let agents reuse discovered navigation paths across sessions
90
+ - [[verbatim risk applies to agents too]] -- Agents can compress content into structured output that looks like synthesis but contains no genuine insight—the agent e
91
+ - [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] -- The "dump and AI organizes" pattern converges across tools, but most implementations use opaque embeddings while agent-c
92
+ - [[vivid memories need verification]] -- high-confidence memories often drift from reality; daily logs ground subjective vividness in recorded facts
93
+ - [[you operate a system that takes notes]] -- the shift from note-taking to system operation reframes the human role from creator to curator — judgment over mechanics
94
+
95
+ ## Tensions
96
+
97
+ (Capture conflicts as they emerge)
98
+
99
+ ## Open Questions
100
+
101
+ - How does context decay affect processing quality across pipeline phases?
102
+ - What cognitive biases apply to agent-operated systems?
103
+
104
+ ---
105
+
106
+ Topics:
107
+ - [[index]]