arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,372 @@
1
+ ---
2
+ description: How domains compose when users have multiple use cases — shared infrastructure, separate schemas, and the graph as the integration layer
3
+ kind: guidance
4
+ status: active
5
+ topics: ["[[multi-domain-composition]]"]
6
+ ---
7
+
8
+ # compose multi-domain systems through separate templates and shared graph
9
+
10
+ Most users don't fit a single domain. A researcher also manages projects. A therapist tracks their own health. An engineer has personal goals. Since [[multi-domain systems compose through separate templates and shared graph]], the plugin must generate vaults that serve multiple domains without conflict.
11
+
12
+ This doc tells the plugin HOW to compose domains and where composition breaks down.
13
+
14
+ ## Why Compose Instead of Separate Vaults
15
+
16
+ The temptation with multiple use cases is to create separate vaults — one for research, one for personal life, one for projects. This feels clean but destroys the primary value of a knowledge graph: cross-domain connections.
17
+
18
+ Separate vaults miss connections that only exist across domain boundaries:
19
+
20
+ - A research finding about cognitive load directly explains why your therapy patient struggles with a specific intervention
21
+ - Your project management estimation patterns correlate with your personal health data during crunch periods
22
+ - Your creative writing research into medieval trade routes connects to your personal finance mental model
23
+
24
+ These connections are invisible in separate vaults because the graph edges that would connect them don't exist. The value of a composed system comes precisely from connections that separate systems would miss.
25
+
26
+ The cost of composition is maintenance complexity. The benefit is connection density. Since [[cross-links between MOC territories indicate creative leaps and integration depth]], the connections justify the complexity.
27
+
28
+ ## The Five Composition Rules
29
+
30
+ Every multi-domain system must follow these five rules. They are the invariants that prevent domain interference while enabling cross-domain value.
31
+
32
+ ### Rule 1: Separate Templates, Shared Graph
33
+
34
+ Each domain gets its own note templates with domain-specific schemas. But all notes live in the same graph (same wiki link namespace, same connection space). Templates isolate structure; the graph unifies meaning.
35
+
36
+ ### Rule 2: No Field Name Conflicts
37
+
38
+ When two domains use the same field name with different semantics, prefix to disambiguate. Research `status: preliminary | open | dissolved` and PM `status: not-started | in-progress | done` become `research_status` and `project_status`. Universal fields (`description`, `topics`) maintain consistent semantics.
39
+
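A frontmatter sketch of the resolution — field names and values here are illustrative, following the doc's own `research_status` / `project_status` convention:

```yaml
# research-claim.md frontmatter — status here means claim maturity
research_status: preliminary   # preliminary | open | dissolved
description: Attention degrades nonlinearly after the third concurrent task
topics: ["[[research-methodology]]"]
---
# project-decision.md frontmatter — status here means task completion
project_status: in-progress    # not-started | in-progress | done
description: Adopt trunk-based development for the platform team
topics: ["[[project-a]]"]
```

Universal fields (`description`, `topics`) keep one meaning everywhere, so they are never namespaced.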
40
+ ### Rule 3: Cross-Domain Reflect
41
+
42
+ The reflect phase searches across ALL domains, not just the domain of the new note. A therapy insight might connect to a research claim. A project decision might link to an engineering ADR. Connection-finding that stays within domain boundaries defeats the purpose of composition.
43
+
44
+ ### Rule 4: Domain-Specific Processing
45
+
46
+ The process step (reduce phase) adapts per domain. Research extraction differs from therapy pattern detection which differs from PM decision documentation. Since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], the pipeline skeleton is shared but each domain's process step is distinct.
47
+
48
+ ### Rule 5: Progressive Context Loading
49
+
50
+ The context file loads universal methodology first, then loads domain-specific configuration only when the user is working in that domain. A multi-domain context file that dumps all domain configurations at once wastes context window space. Progressive loading means the agent gets full depth in the active domain without noise from inactive ones.
51
+
52
+ ## The Composition Principle
53
+
54
+ Domains compose through three layers:
55
+
56
+ | Layer | Shared or Separate | Why |
57
+ |-------|--------------------|-----|
58
+ | **Infrastructure** | Shared | One inbox, one context file, one processing pipeline, one maintenance system |
59
+ | **Schemas** | Separate per domain | Each domain has its own note types with domain-specific fields |
60
+ | **Graph** | Shared | Cross-domain connections are where the real value lives |
61
+
62
+ The infrastructure layer runs once. The schema layer runs per domain. The graph layer connects everything.
63
+
64
+ ## What's Shared
65
+
66
+ ### Inbox
67
+ One capture zone for all domains. Content gets routed to the correct domain during processing (the reduce phase classifies and routes). A single inbox means: one place to dump, no decision about where to put things at capture time.
68
+
69
+ ### Context File
70
+ One CLAUDE.md (or equivalent) with universal methodology + per-domain configuration sections. The agent loads one file at session start, not one per domain.
71
+
72
+ ### Processing Pipeline
73
+ The four-phase skeleton (capture → process → connect → verify) runs for all domains. The process step adapts per domain, but the skeleton is shared infrastructure. Since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], there's no need for separate pipelines.
74
+
75
+ ### Maintenance System
76
+ Structural health checks (orphan detection, schema validation, MOC sizing) run across all domains simultaneously. Since [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]], one maintenance system covers all domains.
77
+
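As a sketch of why one maintenance pass covers every domain: orphan detection needs only wiki-link syntax, not any domain's schema. The vault contents below are hypothetical.

```python
import re

# Hypothetical vault: note name -> body, all domains mixed in one graph.
vault = {
    "cognitive-load": "Explains [[therapy-overwhelm-pattern]].",
    "therapy-overwhelm-pattern": "See [[cognitive-load]].",
    "stray-idea": "Captured but never connected.",
}

WIKI_LINK = re.compile(r"\[\[([^\]|]+)")

def orphans(vault):
    """Notes with neither outbound nor inbound wiki links, in any domain."""
    linked = set()
    for name, body in vault.items():
        targets = set(WIKI_LINK.findall(body))
        if targets:
            linked.add(name)              # note has outbound links
        linked |= targets & vault.keys()  # its targets gain inbound links
    return sorted(set(vault) - linked)
```

The same check flags an unconnected therapy note and an unconnected research claim identically — structural health is domain-invariant.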
78
+ ### Universal Schema Fields
79
+ These fields exist on every note regardless of domain:
80
+ - `description` — retrieval filter
81
+ - `topics` — MOC membership (domain-specific MOCs)
82
+
83
+ ## What's Separate
84
+
85
+ ### Note Templates
86
+ Each domain has its own note types with domain-specific schemas:
87
+ ```
88
+ templates/
89
+ ├── research-claim.md # Research domain
90
+ ├── research-source.md
91
+ ├── therapy-reflection.md # Therapy domain
92
+ ├── therapy-pattern.md
93
+ ├── project-decision.md # PM domain
94
+ └── project-risk.md
95
+ ```
96
+
97
+ Templates don't conflict because note types are distinct. A `therapy-reflection` and a `research-claim` have different schemas and serve different purposes.
98
+
99
+ ### MOC Hierarchies
100
+ Each domain generates its own MOC tree:
101
+ ```
102
+ 01_thinking/
103
+ ├── research/
104
+ │ ├── index.md
105
+ │ ├── methodology.md
106
+ │ └── [topic MOCs]
107
+ ├── therapy/
108
+ │ ├── patterns.md
109
+ │ ├── growth.md
110
+ │ └── [pattern MOCs]
111
+ └── projects/
112
+     ├── project-a.md
113
+     └── project-b.md
114
+ ```
115
+
116
+ ### Processing Logic
117
+ The process step (reduce phase) applies domain-specific extraction:
118
+ - Research reduce: extract claims, classify methodology
119
+ - Therapy reduce: detect patterns, identify triggers
120
+ - PM reduce: document decisions, extract action items
121
+
122
+ The pipeline routes to the right extraction logic based on content type.
123
+
124
+ ## The Cross-Domain Graph
125
+
126
+ This is where composition creates unique value. Because all domains share one graph, connections can span domains:
127
+
128
+ - A research finding about cognitive load connects to a therapy pattern about overwhelm
129
+ - A PM decision links to a research claim that influenced it
130
+ - A personal health pattern correlates with a trading performance pattern
131
+ - An engineering ADR links to a product management PRD that motivated it
132
+
133
+ Since [[concept-orientation beats source-orientation for cross-domain connections]], cross-domain connections happen naturally when notes are organized by concept rather than source domain.
134
+
135
+ ### Four Cross-Domain Connection Patterns
136
+
137
+ Not all cross-domain connections are the same. The plugin recognizes four patterns, each with different discovery mechanisms:
138
+
139
+ **1. Temporal Correlation**
140
+ Events in different domains that co-occur in time. Sleep quality (health domain) drops on the same days that trading losses increase (trading domain). Therapy mood entries correlate with project stress periods.
141
+
142
+ **Discovery mechanism:** Temporal overlap detection across domain notes. Query notes from different domains that share date ranges and compare field values.
143
+
144
+ **Example:** "Your mood entries below 3 cluster on weeks where your PM domain shows sprint overruns. The correlation is 0.7 over the last 3 months."
145
+
146
+ **2. Entity Sharing**
147
+ The same person, concept, or object appears across domains. A colleague tracked in your PM domain (stakeholder) is also in your People domain (relationship). A research concept appears in both your academic and creative writing domains.
148
+
149
+ **Discovery mechanism:** Wiki link and entity name matching across domain boundaries. When the same `[[person]]` or `[[concept]]` appears in notes from different domains, the connection is structural.
150
+
151
+ **Example:** "[[marcus-chen]] appears in your PM domain as a stakeholder and in your People domain as a mentoring relationship. His communication preferences in one inform engagement strategy in the other."
152
+
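A sketch of the structural check over hypothetical note bodies: collect every `[[wiki link]]` per domain, then keep entities seen in more than one domain.

```python
import re

# Hypothetical notes keyed by (domain, filename).
notes = {
    ("pm", "sprint-retro.md"): "Escalated the risk to [[marcus-chen]] before [[q3-planning]].",
    ("people", "mentoring-log.md"): "Session with [[marcus-chen]] on delegation habits.",
    ("research", "claim.md"): "Builds on [[attention-allocation]].",
}

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

entity_domains = {}  # entity -> set of domains it appears in
for (domain, _), body in notes.items():
    for entity in WIKI_LINK.findall(body):
        entity_domains.setdefault(entity, set()).add(domain)

# Entities that cross a domain boundary are structural connections.
shared = {e: sorted(ds) for e, ds in entity_domains.items() if len(ds) > 1}
```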
153
+ **3. Causal Chains**
154
+ A finding or event in one domain causes or explains something in another. A research claim about attention limits explains why a PM process fails. A health protocol change produces measurable effects on creative output.
155
+
156
+ **Discovery mechanism:** Semantic search across domains. The reflect phase uses `mcp__qmd__query` without domain filtering to find conceptually related notes across all domains.
157
+
158
+ **Example:** "Your research claim that [[attention degrades nonlinearly after the third concurrent task]] may explain the PM pattern in [[cross-team dependencies cause more delays than technical complexity]] — the dependencies force context switching that triggers the attention degradation."
159
+
160
+ **4. Goal Alignment**
161
+ Goals or outcomes in different domains serve the same higher purpose. Research productivity goals align with personal growth goals. Engineering quality goals align with product user satisfaction goals.
162
+
163
+ **Discovery mechanism:** Goal-tracing across domain MOCs. When domain MOCs reference similar outcomes or objectives, the alignment surfaces through MOC comparison.
164
+
165
+ **Example:** "Your personal goal 'reduce work stress' connects to your PM goal 'improve sprint estimation accuracy' — both address the same root cause (overcommitment)."
166
+
167
+ ### Cross-Domain Connection Finding
168
+
169
+ The reflect phase finds connections regardless of domain boundaries. Since [[elaborative encoding is the quality gate for new notes]], connection quality requires articulating WHY two cross-domain notes relate, not just that they do.
170
+
171
+ The plugin generates cross-domain reflect prompts:
172
+ - "Does this therapy pattern connect to any personal health notes?" (temporal correlation)
173
+ - "Does this research finding have implications for any active projects?" (causal chain)
174
+ - "Does this engineering decision relate to any product management goals?" (goal alignment)
175
+ - "Does this person appear in any other domain context?" (entity sharing)
176
+
177
+ ### Cross-Domain MOC References
178
+
179
+ Notes can appear in MOCs from multiple domains:
180
+ ```yaml
181
+ topics: ["[[research-methodology]]", "[[therapy-patterns]]"]
182
+ ```
183
+
184
+ A note about "cognitive load affects emotional regulation" genuinely belongs in both research and therapy MOCs. Multi-MOC membership IS the cross-domain connection.
185
+
186
+ ## Progressive Context Loading for Multi-Domain
187
+
188
+ Rule 5 (progressive context loading) requires careful implementation. In a multi-domain vault, the context file must serve multiple domains without wasting context window space on inactive ones.
189
+
190
+ ### Context File Structure
191
+
192
+ ```
193
+ context-file.md
194
+ ├── Universal Methodology (always loaded)
195
+ │ ├── Note design patterns
196
+ │ ├── Processing skeleton
197
+ │ └── Quality standards
198
+ ├── Domain: Research (loaded when working on research)
199
+ │ ├── Claim extraction protocol
200
+ │ ├── Research-specific schemas
201
+ │ └── Citation management
202
+ ├── Domain: Therapy (loaded when working on therapy)
203
+ │ ├── Ethical constraints
204
+ │ ├── Pattern detection protocol
205
+ │ └── Session preparation workflow
206
+ └── Cross-Domain (loaded when connections span domains)
207
+ ├── Cross-domain reflect prompts
208
+ ├── Shared entity resolution
209
+ └── Temporal correlation detection
210
+ ```
211
+
212
+ ### Loading Strategy
213
+
214
+ The agent determines which domain section to load based on:
215
+ 1. **User's stated task** — "Let's work on my research" → load Research section
216
+ 2. **Note being processed** — a therapy entry → load Therapy section
217
+ 3. **Cross-domain detection** — a therapy note mentions a research concept → load Cross-Domain section additionally
218
+
219
+ This prevents context file bloat in multi-domain systems. A 3-domain context file might be 3000 lines total, but any single session loads only the universal section (500 lines) plus one domain section (800 lines) — manageable context window usage.
220
+
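The selection logic can be sketched as follows, with hypothetical section names and line counts:

```python
# Hypothetical sections of a 3-domain context file, with line counts.
SECTIONS = {"universal": 500, "research": 800,
            "therapy": 700, "cross-domain": 300}

def sections_to_load(active_domain, crosses_domains=False):
    """Always load universal methodology; add only the active domain,
    plus the cross-domain section when a connection spans domains."""
    load = ["universal", active_domain]
    if crosses_domains:
        load.append("cross-domain")
    return load

# A research session loads 1300 lines, not the full 2300.
budget = sum(SECTIONS[s] for s in sections_to_load("research"))
```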
221
+ ## Composition Patterns
222
+
223
+ ### Pattern 1: Primary + Secondary
224
+
225
+ One domain is primary (most content, deepest structure), others are secondary (lighter use).
226
+
227
+ **Example:** Researcher + Personal Assistant (see [[academic research uses structured extraction with cross-source synthesis]] + [[personal assistant uses life area management with review automation]])
228
+
229
+ **Architecture:**
230
+ - Research gets full three-tier MOC hierarchy with heavy processing
231
+ - Personal Life gets flat-peer MOCs for life areas with light processing
232
+ - Shared inbox with content-based routing during reduce
233
+ - Cross-domain: research insights link to personal goals; research workload patterns connect to personal energy management
234
+
235
+ **Practical composition:** Dr. Engel's vault (from the research example) adds a personal domain tracking life areas, goals, and relationships. Her research goal "publish attention allocation paper by Q3" appears in both research MOCs (as a project) and personal goals (as a life area milestone). When her research processing detects a productivity pattern, the agent can surface its connection to her personal goal of better work-life balance.
236
+
237
+ ### Pattern 2: Equal Peers
238
+
239
+ Multiple domains receive roughly equal investment.
240
+
241
+ **Example:** Creative Writing + Research (see [[creative writing uses worldbuilding consistency with character tracking]] + [[academic research uses structured extraction with cross-source synthesis]])
242
+
243
+ **Architecture:**
244
+ - Each domain gets full MOC hierarchy
245
+ - Shared processing pipeline with domain-routing
246
+ - Cross-domain: research findings feed worldbuilding decisions; creative exploration generates research questions
247
+
248
+ **Practical composition:** A speculative fiction writer researching quantum physics for hard sci-fi. Research claims about quantum entanglement become worldbuilding constraints. The creative writing domain's consistency graph links to research claims: when a research claim is updated (new experiment contradicts the physics), the agent flags dependent world rules and scenes. Research and fiction share a vocabulary — but the fiction domain uses it metaphorically while the research domain uses it literally. The vocabulary transformation layer handles this.
249
+
250
+ ### Pattern 3: Professional + Personal
251
+
252
+ Work and life domains compose.
253
+
254
+ **Example:** Engineering + Health + People (see [[engineering uses technical decision tracking with architectural memory]] + [[health wellness uses symptom-trigger correlation with multi-dimensional tracking]] + [[people relationships uses Dunbar-layered graphs with interaction tracking]])
255
+
256
+ **Architecture:**
257
+ - Engineering gets three-tier MOCs with medium processing
258
+ - Health gets flat-peer dimension MOCs with light processing (data accumulation)
259
+ - People spans both contexts (colleagues + friends + family)
260
+ - Cross-domain: health patterns correlate with work performance; people notes span professional and personal contexts
261
+
262
+ **Practical composition:** An engineering manager tracks technical decisions (ADRs, architecture), personal health metrics (sleep, exercise, stress), and relationships (team members, mentors, friends). The People domain's entity sharing means `[[marcus-chen]]` exists as both a direct report (engineering context) and someone the manager mentors (people context). Health data correlates with engineering productivity: the agent can surface "Your sleep quality drops below 6 hours on weeks with 3+ architecture reviews — consider spacing them differently."
263
+
264
+ ## Composition Conflicts
265
+
266
+ Most composition is additive (domains add note types, no conflicts). But some configurations can interfere:
267
+
268
+ ### Schema Field Collisions
269
+ If two domains use the same field name with different semantics:
270
+ - Research `status: preliminary | open | dissolved` (claim maturity)
271
+ - PM `status: not-started | in-progress | done` (task completion)
272
+
273
+ **Resolution:** Namespace domain-specific fields when collision detected: `research_status`, `project_status`. Universal fields (`description`, `topics`) use consistent semantics.
274
+
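Collision detection can be sketched over a hypothetical schema registry — a field collides when two domains give the same non-universal name different allowed values:

```python
# Hypothetical registry: domain -> field -> allowed values (None = free-form).
schemas = {
    "research": {"status": {"preliminary", "open", "dissolved"}, "claim_type": None},
    "pm": {"status": {"not-started", "in-progress", "done"}, "stakeholder": None},
}
UNIVERSAL = {"description", "topics"}  # consistent semantics, never namespaced

def find_collisions(schemas):
    """Non-universal field names whose semantics differ across domains."""
    seen, collisions = {}, set()
    for fields in schemas.values():
        for field, values in fields.items():
            if field in UNIVERSAL:
                continue
            if field in seen and seen[field] != values:
                collisions.add(field)
            seen[field] = values
    return collisions  # rename these per domain: research_status, project_status
```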
275
+ ### Processing Priority Conflicts
276
+ When inbox items could belong to multiple domains:
277
+
278
+ **Resolution:** The reduce phase classifies first, then routes. Classification is based on content signals, not user tagging. The plugin can ask when ambiguous.
279
+
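A sketch of signal-based routing — the signal words are illustrative, and a real reduce phase would use richer classification than substring hits:

```python
# Hypothetical content signals per domain.
SIGNALS = {
    "research": ("claim", "study", "methodology", "citation"),
    "therapy": ("mood", "trigger", "session", "felt"),
    "pm": ("sprint", "stakeholder", "deadline", "backlog"),
}

def route(text):
    """Return the single best-scoring domain, or None to ask the user."""
    lowered = text.lower()
    scores = {d: sum(word in lowered for word in words)
              for d, words in SIGNALS.items()}
    best = max(scores.values())
    winners = [d for d, s in scores.items() if s == best and s > 0]
    return winners[0] if len(winners) == 1 else None
```

Returning `None` on a tie or a zero score is what triggers the "ask when ambiguous" behavior.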
280
+ ### MOC Overlap
281
+ When a topic belongs in multiple domain hierarchies:
282
+
283
+ **Resolution:** Multi-MOC membership is a feature, not a conflict. The note appears in all relevant MOCs. Since [[cross-links between MOC territories indicate creative leaps and integration depth]], this is actively valuable.
284
+
285
+ ### Maintenance Burden Scaling
286
+ More domains = more note types = more maintenance checks.
287
+
288
+ **Resolution:** Structural maintenance is shared (orphan detection works on all note types). Domain-specific maintenance (therapy pattern staleness, PM risk register review) scales per domain but is condition-triggered, not scheduled. The plugin warns when total domain count creates maintenance pressure.
289
+
290
+ ## How the Plugin Handles Multi-Domain
291
+
292
+ ### During /setup
293
+ 1. Ask "What domains do you work in?" (may be multiple)
294
+ 2. Identify primary domain
295
+ 3. Generate shared infrastructure once
296
+ 4. Add domain-specific layers per domain
297
+ 5. Explain cross-domain connection opportunities
298
+
299
+ ### During /extend
300
+ When the user adds a new domain to an existing vault:
301
+ 1. Add new note templates (no conflict with existing)
302
+ 2. Add new MOC hierarchy (separate from existing)
303
+ 3. Update processing pipeline to recognize new content types
304
+ 4. Run cross-domain reflect to find connections between new and existing domains
305
+ 5. Update context file with new domain configuration
306
+
307
+ ### During /recommend
308
+ Check for cross-domain opportunities:
309
+ - Notes that could connect across domains but don't
310
+ - Domains with no cross-domain links (isolated silos)
311
+ - Common patterns across domains (same concept, different vocabulary)
312
+
313
+ ## Domain Composition Matrix
314
+
315
+ Which domains compose well and what unique value the composition creates:
316
+
317
+ | Primary | Secondary | Cross-Domain Value |
318
+ |---------|-----------|-------------------|
319
+ | Research | PM | Research findings inform project decisions |
320
+ | Research | Personal | Research goals align with life goals |
321
+ | Therapy | Health | Mood-health correlations across domains |
322
+ | Therapy | People | Relationship patterns inform therapeutic insights |
323
+ | PM | Engineering | Decisions trace to technical implementation |
324
+ | PM | Product | Project execution ties to product strategy |
325
+ | Creative | Research | Worldbuilding grounded in real research |
326
+ | Learning | Research | Study notes become research foundations |
327
+ | Trading | Health | Health patterns correlate with trading performance |
328
+ | Personal | Health | Life area health includes physical health |
329
+ | Personal | People | Relationship maintenance as life area |
330
+ | Engineering | Product | Technical decisions linked to product requirements |
331
+
332
+ ## Anti-Patterns
333
+
334
+ | Anti-Pattern | Why It Fails | Better Approach |
335
+ |-------------|-------------|-----------------|
336
+ | Separate vaults per domain | Loses cross-domain connections — the primary value of composition | Shared graph, separate schemas |
337
+ | One schema for all domains | Since [[false universalism applies same processing logic regardless of domain]], one schema flattens domain-specific semantics | Domain-specific templates |
338
+ | No cross-domain reflect | Domains become silos within the same vault | Explicit cross-domain connection finding |
339
+ | Too many domains at once | Maintenance burden overwhelms | Start with 1-2 domains, add as needed |
340
+ | Identical processing for all domains | Each domain's process step differs | Route to domain-specific extraction logic |
341
+
342
+ ## Domain Examples
343
+
344
+ These domain compositions can be combined to demonstrate composition patterns:
345
+
346
+ - [[therapy journal uses warm personality with pattern detection for emotional processing]] + [[health wellness uses symptom-trigger correlation with multi-dimensional tracking]] — Mood-health correlation: therapy `mood` and `trigger` fields connect to health `sleep_quality`, `exercise`, and `nutrition` fields, enabling cross-domain pattern detection
347
+ - [[academic research uses structured extraction with cross-source synthesis]] + [[project management uses decision tracking with stakeholder context]] — Research findings inform project decisions: a claim note about methodology links to a decision note that cited the finding as rationale
348
+ - [[engineering uses technical decision tracking with architectural memory]] + [[product management uses feedback pipelines with experiment tracking]] — Technical decisions linked to product requirements: ADRs reference PRD goals, feature MOCs link to architecture decisions
349
+ - [[personal assistant uses life area management with review automation]] + [[people relationships uses Dunbar-layered graphs with interaction tracking]] — Relationship maintenance as life area: person MOCs cross-reference life area goals, interaction tracking feeds into personal growth patterns
350
+ - [[trading uses conviction tracking with thesis-outcome correlation]] + [[health wellness uses symptom-trigger correlation with multi-dimensional tracking]] — Performance-health correlation: trading journal `emotions_during` fields connect to health metrics, enabling detection of physiological factors in trading performance
351
+
352
+ ## Grounding
353
+
354
+ This guidance is grounded in:
355
+ - [[multi-domain systems compose through separate templates and shared graph]] — the foundational composition principle (Rule 1)
356
+ - [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — shared pipeline, domain-specific process steps (Rule 4)
357
+ - [[concept-orientation beats source-orientation for cross-domain connections]] — cross-domain links need concept organization, not domain silos
358
+ - [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]] — shared maintenance across all domains
359
+ - [[cross-links between MOC territories indicate creative leaps and integration depth]] — cross-domain links as the primary value signal of composition
360
+ - [[false universalism applies same processing logic regardless of domain]] — why Rule 4 (domain-specific processing) is necessary
361
+ - [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — why add-domain is incremental, not upfront
362
+ - [[schema evolution follows observe-then-formalize not design-then-enforce]] — domain-specific fields emerge from use, not from upfront design
363
+
364
+ ---
365
+
371
+ Topics:
372
+ - [[multi-domain-composition]]
@@ -0,0 +1,51 @@
1
+ ---
2
+ description: Notes organized by source bundle ideas by origin, not meaning, preventing the same concept from different authors from meeting — extract concepts into independent nodes instead
3
+ kind: research
4
+ topics: ["[[graph-structure]]"]
5
+ methodology: ["Evergreen"]
6
+ ---
7
+
8
+ # concept-orientation beats source-orientation for cross-domain connections
9
+
10
+ When you create "notes on Book X," you bundle ideas by where they came from rather than what they mean. The author becomes the organizing principle. A concept that appears in three different books exists in three separate places, isolated from itself. Since [[topological organization beats temporal for knowledge work]] argues for organizing by concept rather than chronology, concept-orientation applies that same principle to source material: organize by meaning, not by origin.
11
+
12
+ Concept-oriented notes flip this. Instead of a container labeled "Matuschak's ideas," you extract each idea into its own note: "associative ontologies beat hierarchical taxonomies," "concept handles compress thought," "spaced repetition builds durable memory." Now these claims can link to claims from other sources that address the same territory. Because [[note titles should function as APIs enabling sentence transclusion]], each extracted concept becomes a callable unit that can be invoked in arguments across the entire graph — something a source-bundled document can never do. When Luhmann's zettelkasten principles overlap with Matuschak's evergreen notes, the overlap becomes visible because both feed into the same concept nodes.
13
+
14
+ The mechanism is simple: source-orientation creates documents that don't interact. Concept-orientation creates nodes that can form edges across any domain boundary. The retrieval consequence compounds with scale: since [[flat files break at retrieval scale]], source-bundled notes are functionally flat files organized by origin, hitting the same retrieval wall where finding a concept requires remembering which source mentioned it. Since [[each new note compounds value by creating traversal paths]], the architecture choice determines whether cross-domain connections are even possible. Source-bundled documents can only compound within their bundle. Concept nodes compound across the entire graph.
15
+
16
+ This has practical implications for capture and processing. Source material should go to archive — the full article, book notes, interview transcript preserved for reference. But the artifacts that enter the knowledge graph should be concept extractions: independent claims that can participate in cross-domain synthesis. The extraction pass is not optional decoration; it's the step that transforms documents into graph nodes. Since [[structure without processing provides no value]], merely filing source documents — even with good folder structure — produces no knowledge graph benefits. The extraction IS the processing that creates value.
17
+
18
+ For agents, this matters because agents traverse graphs. An agent exploring "what do different thinkers say about note-taking friction?" can only find convergent answers if those answers exist as concept nodes rather than source-bundled documents. The query requires cross-domain traversal, which requires cross-domain edges, which requires concept-orientation at the extraction stage. Since [[wiki links implement GraphRAG without the infrastructure]], the explicit curated edges between concept nodes enable the multi-hop reasoning that makes cross-domain synthesis possible. Source-bundled documents can't participate in this graph traversal — they're isolated containers that don't form edges.
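The claim that traversal requires edges, which requires concept nodes, is testable with nothing more than a regex over note bodies. A minimal Python sketch of wiki-link extraction and bounded multi-hop traversal (the helper names are illustrative):

```python
import re

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")


def edges(notes: dict[str, str]) -> dict[str, set[str]]:
    """Build the link graph from note bodies (title -> linked titles)."""
    return {title: set(WIKI_LINK.findall(body))
            for title, body in notes.items()}


def within_hops(graph: dict[str, set[str]], start: str,
                hops: int = 2) -> set[str]:
    """Titles reachable from `start` in at most `hops` link traversals."""
    seen, frontier = {start}, {start}
    for _ in range(hops):
        frontier = {n for f in frontier for n in graph.get(f, ())} - seen
        seen |= frontier
    return seen - {start}
```

A source-bundled document with no outgoing `[[links]]` contributes nothing to `edges`, so no query can reach through it.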
19
+
20
+ Concept extraction also enables the hub formation that makes graphs navigable. Since [[small-world topology requires hubs and dense local links]], efficient traversal requires power-law distribution: a few high-connectivity hubs creating shortcuts, many focused nodes forming local clusters. Source-bundled notes can't become hubs — they're isolated by their origin. Extracted concept nodes can accumulate links from multiple sources, multiple contexts, multiple MOCs. A concept like "retrieval utility" that appears across Zettelkasten, Cornell, and evergreen notes traditions becomes a hub through convergent citation — impossible if those traditions stayed bundled in their source containers.
21
+
22
+ Since [[storage versus thinking distinction determines which tool patterns apply]], concept extraction is the diagnostic moment — the operation that commits a system to being a thinking system rather than a storage system. Source-bundled notes that stay organized by origin are performing a storage operation regardless of what the user intended. The extraction step is where the system declares its type.
23
+
24
+ The cost is extraction effort — and something subtler. Source-bundled notes are fast to create — just annotate while reading. Concept extraction requires the additional step of asking "what is this claim, independent of who said it?" But since [[the generation effect requires active transformation not just storage]], this additional effort is precisely the processing that generates value. Source-bundled notes feel like work but produce inventory. Concept extraction feels harder but produces network nodes. Since [[ThreadMode to DocumentMode transformation is the core value creation step]], this extraction step is where ThreadMode dies and DocumentMode begins — source-bundled notes are ThreadMode organized by origin, while extracted concept notes are DocumentMode organized by meaning. But because [[decontextualization risk means atomicity may strip meaning that cannot be recovered]], the extraction that enables cross-domain edges also strips the argumentative scaffolding that gave claims their original force. The architectural bet is that cross-domain connections are worth more than preserved source context — but the bet may not hold equally for all claim types, particularly contextual heuristics and trade-off judgments whose meaning depends on knowing when they apply.
25
+
26
+ At the system architecture level, concept-orientation is what makes [[multi-domain systems compose through separate templates and shared graph]] tractable. When a personal vault tracks research, health, and projects simultaneously, cross-domain connections — a stress pattern correlating with decision quality, a sleep insight relating to cognitive performance — can only form if both domains produced independent concept nodes. Source-bundled notes trapped in their domain of origin cannot participate in the cross-domain reflect phase that multi-domain composition depends on.
27
+
28
+ Concept-orientation also turns out to be a prerequisite for multi-agent collaboration patterns. Since [[federated wiki pattern enables multi-agent divergence as feature not bug]], when multiple agents process the same territory and develop different interpretations, they need independent concept nodes to offer alternative perspectives ON. "Notes on Book X" cannot federate because there is no single concept to reinterpret. But "spaced repetition builds durable memory" can support alternative perspectives — one agent emphasizing the cognitive mechanism, another emphasizing the implementation pattern. Concept-orientation creates the units that federation operates on, making cross-domain connections and multi-perspective coexistence both architecturally possible from the same extraction step.
29
+ ---
30
+
31
+ Relevant Notes:
32
+ - [[each new note compounds value by creating traversal paths]] — explains why architecture choice matters: concept nodes can compound across domains, source bundles cannot
33
+ - [[source attribution enables tracing claims to foundations]] — concept-orientation doesn't abandon provenance; Source footers preserve the link back while freeing concepts to participate independently
34
+ - [[the generation effect requires active transformation not just storage]] — concept extraction is the transformation; source bundling is storage that mimics processing
35
+ - [[topological organization beats temporal for knowledge work]] — the foundational design principle this note extends: organize by meaning (concept/topic) rather than origin (source/date)
36
+ - [[note titles should function as APIs enabling sentence transclusion]] — extracted concepts with sentence titles become callable APIs; source-bundled documents cannot be invoked this way
37
+ - [[wiki links implement GraphRAG without the infrastructure]] — concept nodes enable multi-hop graph traversal; source-bundled documents can't participate in the link graph
38
+ - [[structure without processing provides no value]] — extraction is the processing that creates value; filing sources without extraction is the Lazy Cornell anti-pattern
39
+ - [[incremental reading enables cross-source connection finding]] — complementary approach: concept-orientation makes cross-source connections architecturally possible; incremental reading surfaces them during processing
40
+ - [[small-world topology requires hubs and dense local links]] — concept extraction enables hub formation: high-connectivity concept nodes that aggregate links from multiple sources can become the hubs that create shortcuts across the network
41
+ - [[federated wiki pattern enables multi-agent divergence as feature not bug]] — downstream consequence: concept-orientation is a prerequisite for federation because only concept nodes can support multiple legitimate interpretations; source-bundled notes have nothing to federate
42
+ - [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — explains WHY concept extraction matters for classification: extracted concepts have multiple independent properties (topic, methodology, type) that faceted access can slice from multiple angles, while source-bundled documents have only one meaningful facet (origin)
43
+ - [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]] — explains why concept extraction works in single-operator vaults: the vocabulary used to title extracted concepts doesn't need to generalize beyond the operator who will search it, so personal retrieval keys can be maximally specific to the operator's conceptual landscape
44
+ - [[ThreadMode to DocumentMode transformation is the core value creation step]] — names this extraction step precisely: source-bundled notes are ThreadMode organized by origin, concept notes are DocumentMode organized by meaning; concept extraction IS the ThreadMode-to-DocumentMode transformation applied to source material
45
+ - [[storage versus thinking distinction determines which tool patterns apply]] — diagnostic: concept extraction is the operation that commits a system to being a thinking system rather than a storage system; source-bundled notes that stay organized by origin are performing a storage operation regardless of intent
46
+ - [[multi-domain systems compose through separate templates and shared graph]] — architectural consequence: concept extraction is the prerequisite for cross-domain composition because only independent concept nodes can form edges across domain boundaries during cross-domain reflect
47
+ - [[decontextualization risk means atomicity may strip meaning that cannot be recovered]] — names the cost that concept extraction creates: cross-domain edges require stripping source context, but contextual heuristics and trade-off judgments may lose essential meaning in the process
48
+ - [[flat files break at retrieval scale]] — the scale consequence: source-bundled notes are functionally flat files organized by origin, and they hit the same retrieval wall — past ~200 sources, finding a concept buried inside a source bundle requires remembering which source mentioned it
49
+
50
+ Topics:
51
+ - [[graph-structure]]
@@ -0,0 +1,50 @@
1
+ ---
2
+ description: A three-tier response pattern (auto-apply, suggest, log-only) based on confidence scoring fills the gap between deterministic hooks and semantic skills, as ClueBot NG's 0.9 revert threshold demonstrates
3
+ kind: research
4
+ topics: ["[[agent-cognition]]", "[[maintenance-patterns]]"]
5
+ methodology: ["Original", "Systems Theory"]
6
+ source: [[automated-knowledge-maintenance-research-source]]
7
+ ---
8
+
9
+ # confidence thresholds gate automated action between the mechanical and judgment zones
10
+
11
+ Since [[the determinism boundary separates hook methodology from skill methodology]], the vault treats automation as binary: deterministic operations go to hooks, judgment operations go to skills. But this binary obscures a third zone that sits between them — operations where automation can act if its confidence is high enough, and should defer when it is not. The determinism boundary asks "is this operation deterministic?" The confidence threshold asks the complementary question: "when the operation is NOT fully deterministic, can we still automate it above a confidence threshold?"
12
+
13
+ Wikipedia's ClueBot NG demonstrates the pattern concretely. It scores every edit on a 0-1 scale using a neural network trained on vandalism examples. Above 0.9, it auto-reverts — the confidence is high enough that the cost of occasional false positives is lower than the cost of delayed vandalism response. Between 0.5 and 0.9, it flags the edit for human review — the confidence is meaningful but not sufficient for autonomous action. Below 0.5, it logs the edit without flagging — the signal is too weak to warrant attention. This three-tier response pattern (auto-apply, suggest, log-only) is not specific to vandalism detection. It generalizes wherever an automated system can score its own confidence.
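The three-tier gate is small enough to state directly. A Python sketch using ClueBot NG's published 0.9 cutoff and the 0.5 review floor described above as defaults; the tier labels are illustrative:

```python
def respond(score: float, act: float = 0.9, review: float = 0.5) -> str:
    """Map a 0-1 confidence score to a response tier."""
    if score >= act:
        return "auto-apply"   # occasional false positives cost less than inaction
    if score >= review:
        return "suggest"      # meaningful signal, defer to review
    return "log-only"         # too weak to warrant attention
```

Raising `act` trades missed automation for fewer wrong actions, which is exactly the threshold-setting decision the paragraph above describes.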
14
+
15
+ The vault already implements implicit confidence tiers through its search architecture. Since qmd provides three search modes — keyword (`search`), vector (`vsearch`), and hybrid with LLM reranking (`query`) — each mode represents a different confidence level in its results. LLM-reranked results from `query` mode are higher confidence than raw vector matches, which are higher confidence than keyword matches. The three-tier search architecture is itself a confidence-gated system, even though the thresholds are implicit in the tool choice rather than explicit in a numerical score. An agent choosing `query` over `search` for connection finding is implicitly choosing higher-confidence results at the cost of slower execution — the same trade-off ClueBot NG makes when setting its threshold at 0.9 rather than 0.5.
16
+
17
+ The design principle underlying confidence thresholds is conservative asymmetry: the cost of incorrect automated action exceeds the cost of missed automation opportunity. Since [[automated detection is always safe because it only reads state while automated remediation risks content corruption]], the read/write asymmetry reveals why confidence thresholds apply selectively — detection needs no confidence gating because false alerts are cheap and correctable, while remediation needs gating proportional to the irreversibility of the change. This is also the asymmetry behind [[over-automation corrupts quality when hooks encode judgment rather than verification]]: a wrong automated action corrupts silently while a missed opportunity is merely a missed opportunity. Confidence thresholds operationalize this asymmetry by requiring high confidence before granting automated authority. The threshold itself encodes the system's risk tolerance: a 0.9 threshold says "we accept one false positive in ten automated actions," while a 0.95 threshold says "we accept one in twenty." The correct threshold depends on how expensive each false positive is relative to how expensive each missed true positive is.
18
+
19
+ This extends the existing determinism boundary from a line to a spectrum with three zones. At one end, fully mechanical operations — schema field presence, file existence, format compliance — need no confidence scoring because their correctness is deterministic. Since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], these belong in hooks that fire on every operation. At the other end, fully semantic operations — connection quality evaluation, claim specificity judgment, synthesis across domains — need full agent reasoning regardless of any confidence score. Between these poles sits the confidence-gated zone: duplicate detection where high-similarity pairs can be flagged automatically but ambiguous pairs need evaluation, tag suggestions where strong keyword matches can be auto-applied but weaker matches need review, stale note detection where notes unchanged for months in active topic areas are confidently stale but notes in stable domains require judgment.
20
+
21
+ Since [[nudge theory explains graduated hook enforcement as choice architecture for agents]], the confidence threshold pattern parallels nudge theory's enforcement graduation but along a different axis. Nudge theory graduates the severity of response to a detected violation — warn versus block. Confidence thresholds graduate whether an automated system should act at all based on how certain it is about its assessment. A system could combine both: high confidence plus structural violation triggers auto-fix (block-level severity with high confidence), medium confidence plus qualitative issue triggers a suggestion (nudge-level severity with medium confidence), low confidence triggers only logging. The two graduation axes — enforcement severity and confidence level — create a two-dimensional design space for automation decisions.
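The two graduation axes compose into a small decision function. A hedged Python sketch of the combinations listed above; the boolean severity flag and the numeric thresholds are illustrative:

```python
def decide(confidence: float, structural: bool) -> str:
    """Combine enforcement severity with confidence level.

    structural=True marks a block-level violation (format, schema);
    False marks a qualitative, nudge-level issue.
    """
    if structural and confidence >= 0.9:
        return "auto-fix"   # high confidence plus structural violation
    if confidence >= 0.5:
        return "suggest"    # medium confidence, any severity
    return "log"            # low confidence: record, do not interrupt
```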
22
+
23
+ The practical implication for knowledge vault automation is that many operations currently treated as either fully-automated or fully-manual could benefit from confidence-gated intermediate states. Since [[schema enforcement via validation agents enables soft consistency]], soft enforcement already implements one form of this: warning rather than blocking when confidence that the violation matters is medium. But the pattern extends beyond schema validation. Duplicate detection could auto-flag pairs with cosine similarity above 0.95, suggest pairs between 0.8 and 0.95, and ignore pairs below 0.8. Orphan detection could auto-escalate notes orphaned for over 30 days in active topic areas, suggest attention for notes orphaned 7-30 days, and suppress alerts for notes orphaned less than 7 days (the expected window between create and reflect). Connection suggestion could surface high-confidence semantic matches as ready-to-add links, medium-confidence matches as "consider connecting," and suppress low-confidence noise entirely. The risk tolerance at each tier also interacts with the operation's reversibility: since [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]], idempotent operations can tolerate lower confidence thresholds because a wrong action that can be safely re-run or undone is less costly than a wrong action that compounds on retry.
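The duplicate-detection tiers reduce to cosine similarity plus a gate. A Python sketch assuming note embeddings are already available; the 0.95 and 0.80 cutoffs are the ones named in the paragraph above:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))


def classify_pair(a: list[float], b: list[float]) -> str:
    """Tier a candidate duplicate pair by embedding similarity."""
    sim = cosine(a, b)
    if sim >= 0.95:
        return "auto-flag"   # near-certain duplicate
    if sim >= 0.80:
        return "suggest"     # plausible, needs agent evaluation
    return "ignore"          # below the noise floor
```

The orphan-detection tiers follow the same shape with day counts in place of similarity scores.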
24
+
25
+ The shadow side of confidence thresholds is that they create a false sense of precision. A threshold of 0.9 sounds objective, but the underlying model's calibration determines whether a 0.9 score actually means 90% probability of correctness. Since [[metacognitive confidence can diverge from retrieval capability]], the same divergence between self-assessment and actual capability applies here at the system level: poorly calibrated models produce confidence scores that do not correspond to actual accuracy — a 0.9 score might reflect 70% actual accuracy if the model is systematically overconfident. This is the metacognitive version of the Goodhart problem: the confidence score becomes a target, and if the scoring model is wrong, the threshold gates on the wrong thing. The practical defense is empirical validation: track the false positive rate at each threshold level, and adjust thresholds based on observed accuracy rather than theoretical calibration.
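The proposed defense, tracking observed accuracy at each threshold level, is mechanical to implement. A Python sketch that buckets logged automated actions by their claimed confidence; the bucket width and record shape are illustrative:

```python
from collections import defaultdict


def calibration(events: list[tuple[float, bool]]) -> dict[float, float]:
    """events: (confidence_score, action_was_correct) per logged action.

    Returns observed accuracy per 0.1-wide confidence bucket. A well
    calibrated scorer shows accuracy close to each bucket's score; a
    0.9 bucket sitting at 0.7 accuracy means the threshold gates on
    the wrong thing.
    """
    buckets: dict[float, list[int]] = defaultdict(lambda: [0, 0])
    for score, correct in events:
        b = round(score, 1)          # nearest 0.1 bucket
        buckets[b][0] += 1           # actions in this bucket
        buckets[b][1] += int(correct)  # of which correct
    return {b: hits / n for b, (n, hits) in sorted(buckets.items())}
```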
26
+
27
+ There is also a temporal dimension to threshold design. Early in a system's life, when the automation has limited training data and the knowledge graph is sparse, thresholds should be conservative — higher confidence required before automated action. As the system accumulates more data and the automation's accuracy can be validated empirically, thresholds can relax. This parallels the broader pattern that since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], each encoding level requires operational experience at the previous level before promotion. Confidence thresholds follow an analogous trajectory: start with high thresholds (conservative, suggest-only), lower thresholds as empirical evidence accumulates (graduated action), and eventually some operations may prove reliable enough to move entirely into the mechanical zone. The confidence zone is not a permanent category — it is a staging area where operations mature toward either full automation or permanent human oversight. But since [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]], this maturation trajectory itself carries the expansion temptation: as thresholds lower and more operations graduate to full automation, the agent's judgment scope narrows progressively — the same erosion dynamic that confidence gating was partly designed to moderate.
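The maturation trajectory can be encoded as a feedback rule: relax only while observed error stays under target, tighten as soon as it does not. A Python sketch with illustrative rates, step size, and bounds:

```python
def adjust_threshold(current: float, observed_fp_rate: float,
                     target_fp_rate: float = 0.10,
                     step: float = 0.01) -> float:
    """Relax or tighten an action threshold from observed evidence."""
    if observed_fp_rate <= target_fp_rate:
        return max(0.50, round(current - step, 2))  # earn autonomy gradually
    return min(0.99, round(current + step, 2))      # pull back after errors
```

Run after each review cycle, this keeps conservative thresholds early (sparse evidence means few cycles under target) and only lowers them as accuracy is demonstrated.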
28
+
29
+ ---
31
+
32
+ Relevant Notes:
33
+ - [[the determinism boundary separates hook methodology from skill methodology]] -- foundation: establishes the binary hook/skill divide that this note extends into a spectrum by identifying the confidence-gated zone between the two poles
34
+ - [[over-automation corrupts quality when hooks encode judgment rather than verification]] -- develops the failure mode that confidence thresholds are designed to prevent: acting on low-confidence judgments produces the same invisible corruption as encoding judgment in hooks
35
+ - [[nudge theory explains graduated hook enforcement as choice architecture for agents]] -- parallel graduation: nudge theory graduates enforcement severity (warn vs block), this note graduates automation scope (auto-apply vs suggest vs log) based on confidence rather than severity
36
+ - [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] -- the detection guarantee that makes confidence scoring possible: hooks ensure every operation is evaluated, and confidence thresholds determine what happens after evaluation
37
+ - [[schema enforcement via validation agents enables soft consistency]] -- soft enforcement is a specific instance of confidence-gated action: warn-without-blocking is the response when confidence that the violation matters is medium rather than high
38
+ - [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] -- reconciliation is the scheduling pattern whose remediation side benefits from confidence gating: detection is always safe (read-only), but remediation actions range from mechanical to judgment-requiring, and confidence thresholds determine which remediation actions can execute autonomously
39
+ - [[maintenance scheduling frequency should match consequence speed not detection capability]] — consequence speed determines WHEN to check, confidence thresholds determine HOW AGGRESSIVELY to respond: together they parameterize the full automation scheduling decision across temporal and response dimensions
40
+ - [[automated detection is always safe because it only reads state while automated remediation risks content corruption]] -- deeper principle: the read/write asymmetry explains why confidence thresholds apply selectively to remediation but not detection; detection failure is a cheap false alert while remediation failure is content corruption proportional to irreversibility
41
+ - [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] -- temporal parallel: the methodology trajectory (instruction to skill to hook) mirrors the confidence trajectory (high threshold to lower threshold to full automation) as evidence accumulates through operational experience
42
+ - [[metacognitive confidence can diverge from retrieval capability]] -- calibration risk: poorly calibrated confidence scores produce the same divergence between self-assessment and actual capability that metacognitive confidence describes for retrieval; the system believes its 0.9 threshold means 90% accuracy when it may not
43
+ - [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]] -- expansion pressure: the temporal maturation of thresholds (lowering as evidence accumulates) IS the automation expansion dynamic this tension warns about; confidence gating moderates expansion but the trajectory still narrows agent judgment scope over time
44
+ - [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]] -- risk modulation: idempotent operations can tolerate lower confidence thresholds because errors are reversible and repetition is harmless, adding a third safety filter that interacts with both determinism and confidence
45
+ - [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]] — maps the confidence spectrum onto operational architecture: the fast loop operates in the mechanical zone (deterministic, auto-apply), the medium loop straddles the boundary (mechanical detection, judgment-requiring remediation), and the slow loop operates primarily in the judgment zone, making the three loops a concrete instantiation of the confidence gradient
46
+ - [[the fix-versus-report decision depends on determinism reversibility and accumulated trust]] — temporal complement: confidence thresholds gate individual actions at a point in time, while the fix-versus-report framework adds the temporal dimension of accumulated trust that determines when confidence gating can relax; together they parameterize both the instantaneous and historical dimensions of automation authority
47
+
48
+ Topics:
49
+ - [[agent-cognition]]
50
+ - [[maintenance-patterns]]
@@ -0,0 +1,58 @@
1
+ ---
2
+ description: Atomic granularity forces explicit linking, deep navigation, and heavy processing — the valid space is far smaller than the combinatorial product because each choice constrains its neighbors
3
+ kind: research
4
+ topics: ["[[design-dimensions]]"]
5
+ methodology: ["Original"]
6
+ source: [[knowledge-system-derivation-blueprint]]
7
+ ---
8
+
9
+ # configuration dimensions interact so choices in one create pressure on others
10
+
11
+ Knowledge system design involves at least eight dimensions — since [[eight configuration dimensions parameterize the space of possible knowledge systems]], the list includes granularity, organization, linking philosophy, processing intensity, navigation depth, maintenance cadence, schema density, and automation level. The tempting assumption is that these are independent knobs: pick your preferred setting on each, combine them, and deploy. But the dimensions are coupled. A choice at one end of one spectrum creates pressure toward specific regions of other spectra, and ignoring that pressure produces systems that are internally incoherent.
12
+
13
+ The clearest example is granularity cascading through everything else. Choosing atomic notes (one claim per file) means each note has minimal internal context, so connections between notes must be explicit — you cannot rely on "it's in the same document" proximity. This forces explicit linking. But thousands of atomic notes with dense explicit links require deep navigation structures to remain traversable, because since [[navigational vertigo emerges in pure association systems without local hierarchy]], a flat sea of equally-small nodes becomes disorienting without MOC hierarchy to provide landmarks. And maintaining all those links and navigation structures demands heavy processing — extraction pipelines, reflection passes, reweaving cycles. The granularity choice didn't just set one parameter. It created pressure across linking, navigation, and processing simultaneously. And since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], the processing intensity cascade is particularly consequential — the process step is the only phase that varies by domain, so dimension interactions concentrate at that single variable point while capture, connect, and verify absorb pressure as structural constants.
14
+
15
+ The reverse cascade is equally constraining. Coarse granularity (one note per source, or topic-level documents) means each note carries enough internal context that lightweight linking suffices — a few cross-references rather than dense typed connections. Navigation can be shallow because fewer nodes need organizing. Processing can be lighter because each note is more self-contained. The entire configuration coheres at the coarse pole just as it coheres at the atomic pole, but mixing atomic granularity with shallow navigation produces an incoherent system where notes are too small to be self-contained and too numerous to find.
16
+
17
+ Automation level creates a parallel cascade. Full automation (hooks, skills, pipelines) enables dense schemas because validation catches errors that humans would miss. It enables heavy processing because pipelines handle volume that manual invocation cannot sustain. But manual operation — where every action requires explicit invocation — pressures toward minimal schemas (less to remember, less to validate by hand) and lighter processing (each step is expensive in attention). Deploying dense schemas in a manual system creates a maintenance burden that collapses under its own weight. Deploying minimal schemas in a fully automated system wastes the infrastructure's enforcement capacity. The cascade operates even within automation — since [[skill context budgets constrain knowledge system complexity on agent platforms]], limited skill slots force methodology back into instruction encoding, which degrades enforcement strength, which pressures toward simpler schemas. Since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], the automation dimension itself has internal structure where different encoding levels create different pressure profiles.
18
+
19
+ Volume introduces a third pressure axis. Since [[small-world topology requires hubs and dense local links]], a 500-note vault needs deep navigation, semantic search, and automated maintenance to remain traversable. A 50-note vault works with shallow navigation and grep because the agent can hold the entire structure in context. The volume dimension doesn't just affect search modality — it pressures navigation depth and maintenance cadence simultaneously, because what works at small scale (manual, shallow, grep-based) becomes unnavigable at large scale.
20
+
21
+ The implication for derivation is that the valid configuration space is much smaller than the combinatorial product. Eight dimensions with even three positions each produces 6,561 theoretical combinations. But most are incoherent — atomic granularity with shallow navigation, manual operation with dense schemas, high volume with no automated maintenance. Since [[derivation generates knowledge systems from composable research claims not template customization]], a derivation engine that treats dimensions as independent will produce specifications that look reasonable in isolation but fail in practice because the pressures conflict. But the coupling structure cuts both ways: since [[configuration paralysis emerges when derivation surfaces too many decisions]], the same interaction constraints that make independent dimension selection incoherent also make inference tractable — resolving a few primary choices propagates through the coupling to determine secondary ones, so the derivation engine can reduce the decision surface from the full combinatorial product to the genuine choice points where user constraints leave multiple viable paths. Since [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]], the generator must understand not just which parameters to set but which parameter combinations form coherent operating points.
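The pruning effect of coupling is easy to demonstrate on a reduced space. A Python sketch over four of the eight dimensions with three toy rules taken from the cascades above; the rules are illustrative simplifications, not the full constraint set:

```python
from itertools import product

DIMS = {
    "granularity": ("atomic", "medium", "coarse"),
    "navigation":  ("shallow", "medium", "deep"),
    "schema":      ("minimal", "medium", "dense"),
    "automation":  ("manual", "assisted", "full"),
}


def coherent(cfg: dict) -> bool:
    """Toy coupling rules lifted from the cascades described above."""
    if cfg["granularity"] == "atomic" and cfg["navigation"] == "shallow":
        return False  # atomic notes are too numerous for shallow navigation
    if cfg["automation"] == "manual" and cfg["schema"] == "dense":
        return False  # dense schemas collapse without automated validation
    if cfg["automation"] == "full" and cfg["schema"] == "minimal":
        return False  # full automation wastes its enforcement capacity
    return True


configs = [dict(zip(DIMS, combo)) for combo in product(*DIMS.values())]
valid = [c for c in configs if coherent(c)]
# 81 combinations in the product; 56 survive even these three rules
```

With all eight dimensions and the full rule set, the surviving fraction shrinks much further, which is the derivation engine's constraint-satisfaction problem in miniature.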
22
+
23
+ This reframes methodology traditions as discovered coherence points rather than arbitrary preferences — a framing that [[methodology traditions are named points in a shared configuration space not competing paradigms]] develops extensively. Zettelkasten coheres at the atomic-explicit-deep-heavy end. PARA coheres at the coarse-lightweight-shallow-manual end. Each tradition represents a region of the configuration space where the dimension interactions have been resolved through practice. The value of understanding interactions is that it enables generating NEW coherent configurations for novel use cases — combinations that no existing methodology has discovered but that respect the coupling constraints.
24
+
25
+ An upstream constraint narrows the space further: since [[storage versus thinking distinction determines which tool patterns apply]], the storage/thinking classification determines which coherence regions are even viable before dimension tuning begins. A storage system cannot coherently adopt atomic granularity with heavy processing because its purpose does not generate the synthesis demand that justifies the processing cost. A thinking system cannot coherently adopt coarse granularity with minimal linking because it sacrifices the composability that synthesis requires. The classification constrains which cascade patterns produce functional systems. And since [[complex systems evolve from simple working systems]], even within a coherent region, Gall's Law adds a temporal constraint: start at the simple end and let friction drive elaboration rather than deploying the full coherent configuration from day one.
26
+
27
+ Multi-domain composition introduces a further pressure axis that operates across domain boundaries. Since [[multi-domain systems compose through separate templates and shared graph]], when multiple domains coexist in one graph, dimension settings in one domain can conflict with settings in another. A domain with high processing intensity and dense schemas shares graph space with a domain using light processing and minimal schemas, and the shared navigation structure must serve both. The cross-domain surface area grows quadratically with each new domain, making dimension interaction analysis not just intra-system but inter-domain. Whether the shared graph mechanics — wiki links, MOCs, progressive disclosure — can bridge domains with different structural densities is an open question that tests how hard the coupling constraints really are.
+
+ Dimension coupling also explains why evolution eventually requires reseeding. Since [[derived systems follow a seed-evolve-reseed lifecycle]], small adaptations in one dimension accumulate pressure on others — adding a schema field creates query expectations, adding a MOC shifts navigation patterns. Each adaptation is locally justified but the accumulated pressure drifts the configuration into an incoherent region. This is dimension interaction operating over time: the coupling constraints that make initial derivation a constraint-satisfaction problem also make evolution a drift-detection problem, because each incremental change shifts the system's position in configuration space without checking whether the new position still satisfies the coupling constraints.
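The drift described above can be made concrete as a check after every incremental change. A minimal sketch, with invented dimension names and coupling rules (the real constraints live as prose in the methodology notes, not as code):

```python
# Invented dimension names and coupling rules, for illustration only.
config = {"granularity": "atomic", "processing": "heavy",
          "navigation": "deep", "schema_fields": 2}

# Each rule encodes one coupling: atomic granularity demands heavy
# processing and deep navigation; schema growth stays bounded.
rules = [
    lambda c: c["granularity"] != "atomic" or c["processing"] == "heavy",
    lambda c: c["granularity"] != "atomic" or c["navigation"] == "deep",
    lambda c: c["schema_fields"] <= 5,
]

def coherent(c):
    return all(rule(c) for rule in rules)

# Locally justified adaptations, applied one at a time.
adaptations = [
    {"schema_fields": 4},        # adds query expectations, still coherent
    {"navigation": "shallow"},   # drifts out of the coherent region
]

drift_points = []
for change in adaptations:
    config.update(change)
    if not coherent(config):
        drift_points.append(change)  # a reseed candidate, not a local fix

print(drift_points)
```

The point of the sketch is the check after every update: evolution without that check is exactly the silent drift that makes periodic reseeding necessary.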
+
+ There is a shadow side. If dimension interactions are strong enough, they may reduce the space of viable configurations to essentially the known methodology traditions, making "novel derivation" more theoretical than practical. The test is whether the interaction constraints are hard (violating them produces failure) or soft (violating them produces friction that can be overcome with compensating mechanisms). Since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], automation can compensate for some mismatches — automated linking can sustain atomic granularity with less navigation depth than the pure interaction would demand. The real question is how much automation can compensate versus how much the coupling is fundamental. The novel domain case tests this directly — since [[novel domains derive by mapping knowledge type to closest reference domain then adapting]], adapting a reference domain means adjusting specific dimensions, and each adjustment must respect the coupling constraints or produce an incoherent system. Whether adaptation can safely deviate from the reference coherence point measures exactly how hard the interaction constraints are.
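One way to see how hard constraints shrink the space: enumerate a toy configuration space and filter it by coupling rules. Dimension names and rules here are illustrative assumptions, not the system's actual dimensions:

```python
from itertools import product

# Toy space: 2 x 2 x 2 = 8 raw configurations.
granularity = ["atomic", "coarse"]
processing = ["heavy", "light"]
navigation = ["deep", "shallow"]

def violates_hard_coupling(g, p, n):
    # Hard constraints from the text: atomic granularity demands
    # heavy processing and deep navigation; coarse granularity with
    # heavy processing pays a cost no synthesis demand justifies.
    if g == "atomic" and (p != "heavy" or n != "deep"):
        return True
    if g == "coarse" and p == "heavy":
        return True
    return False

viable = [(g, p, n)
          for g, p, n in product(granularity, processing, navigation)
          if not violates_hard_coupling(g, p, n)]

print(viable)  # only a few coherence points survive out of eight
```

Soft constraints would appear here as penalty scores rather than boolean exclusion, with automation lowering the penalty; the open question is how much of the rule set is genuinely boolean.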
+
+ ---
+
+ Relevant Notes:
+ - [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] — establishes that dimensions are parameterized, this note adds that parameters are coupled rather than independent
+ - [[eight configuration dimensions parameterize the space of possible knowledge systems]] — defines the dimensions this note shows are coupled; that note treats them as largely independent in definition, this note develops their practical entanglement
+ - [[methodology traditions are named points in a shared configuration space not competing paradigms]] — interaction pressure explains WHY traditions cohere where they do: each tradition resolved the coupling through practice, which this note formalizes as constraint satisfaction
+ - [[derivation generates knowledge systems from composable research claims not template customization]] — derivation must satisfy these interaction constraints or produce incoherent systems; justification chains are the mechanism for verifying coherence
+ - [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — layer dependencies are one mechanism of dimension interaction: automation-level choices cascade through processing intensity and schema density
+ - [[enforcing atomicity can create paralysis when ideas resist decomposition]] — documents the creation-time cost at the atomic pole, which this note predicts: fine granularity creates pressure toward heavy processing, and when that processing burden exceeds capacity the system collapses
+ - [[decontextualization risk means atomicity may strip meaning that cannot be recovered]] — documents the retrieval-time cost at the atomic pole: fine granularity strips argumentative context during extraction, and the coupling means the heavy processing that atomicity demands may not compensate for what extraction loses
+ - [[small-world topology requires hubs and dense local links]] — the topological consequence of granularity choice: atomic notes require dense local linking to maintain navigability, which is exactly the pressure this note describes
+ - [[navigational vertigo emerges in pure association systems without local hierarchy]] — the failure mode when granularity-navigation coupling is violated: atomic notes without the deep navigation they demand become unnavigable
+ - [[complex systems evolve from simple working systems]] — complementary constraint: even at a coherent configuration point, Gall's Law says start simple and evolve; interaction pressures are evolutionary pressures that drive correction or collapse
+ - [[skill context budgets constrain knowledge system complexity on agent platforms]] — concrete instance of the automation cascade: skill budgets constrain methodology encoding which cascades through schema density and processing intensity
+ - [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — the trajectory IS an automation-level progression; this note explains why different automation levels create different pressures on schema density and processing intensity
+ - [[storage versus thinking distinction determines which tool patterns apply]] — upstream classification that determines which coherence region is viable; storage and thinking systems occupy different interaction-resolved regions of the configuration space
+ - [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — explains why the processing intensity cascade is particularly consequential: since only one of four pipeline phases varies by domain, dimension interactions concentrate at that single variable point
+ - [[novel domains derive by mapping knowledge type to closest reference domain then adapting]] — concrete application of dimension coupling during derivation: adapting a reference domain for a novel use case must respect interaction constraints because changing one dimension (e.g., temporal dynamics) cascades through maintenance cadence, navigation depth, and schema density
+ - [[multi-domain systems compose through separate templates and shared graph]] — cross-domain pressure: adding a second domain does not simply double the configuration space because dimension interactions propagate across domain boundaries, and the shared graph must accommodate domains with different structural densities
+ - [[derived systems follow a seed-evolve-reseed lifecycle]] — temporal consequence of coupling: accumulated incremental adaptations in coupled dimensions drift configuration into incoherent regions, explaining why evolution alone is insufficient and principled restructuring (reseeding) is periodically required
+ - [[configuration paralysis emerges when derivation surfaces too many decisions]] — the UX consequence of coupling: dimension interactions are precisely what makes inference tractable, because resolving a few primary choices propagates through the coupling structure to determine secondary ones, reducing the decision surface from the full combinatorial product to genuine choice points
+ - [[premature complexity is the most common derivation failure mode]] — dimension coupling is the amplification mechanism: each choice beyond minimum viable cascades through neighbors, so small increases in initial complexity produce disproportionate system-level complexity, making the complexity budget more critical than a linear count would suggest
+
+ Topics:
+ - [[design-dimensions]]