arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,30 @@
+ ---
+ description: Building sophisticated agent workflows becomes procrastination when output stays flat while complexity grows—building substitutes for producing
+ kind: research
+ topics: ["[[maintenance-patterns]]", "[[processing-workflows]]"]
+ source: TFT research corpus (00_inbox/heinrich/)
+ ---
+
+ Building meta-systems — workflows, skills, infrastructure — feels productive. It's work about work. The question: when does this investment pay off in output, and when does it become its own trap?
+
+ The Collector's Fallacy (saving = learning) has a meta-level analog: building = producing. A vault with sophisticated automation but no published content has optimized for the wrong metric. Since [[throughput matters more than accumulation]], success is measured by what flows through (published content, synthesized understanding) not what accumulates (infrastructure, skills, workflows). The system exists to produce output; if it mostly produces more system, something is wrong.
+
+ This creates a tension with [[complex systems evolve from simple working systems]]. Gall's Law says complexity should emerge from working simplicity — add sophistication where pain appears. But productivity porn warns that adding sophistication might itself become the procrastination. The discriminator: does complexity growth track with output growth? Healthy evolution shows both curves rising together. Productivity porn shows complexity rising while output stays flat. Since [[insight accretion differs from productivity in knowledge systems]], the test is accretion, not just productivity: building infrastructure can look productive (commits, PRs, new features) while producing zero accretion (no deeper understanding, no better synthesis, no insight that wasn't there before). The question is whether the building deepens understanding or merely reorganizes complexity.
+
+ Since [[skills encode methodology so manual execution bypasses quality gates]], skills represent accumulated learning that couldn't have been designed upfront — genuine value. But the same argument could rationalize infinite infrastructure building. The risk is making this rationalization untestable. Since [[writing for audience blocks authentic creation]], a parallel diversion pattern exists: audience awareness diverts cognitive work from synthesis to presentation, just as infrastructure building diverts effort from production to meta-production. Both feel productive while producing zero output — the energy goes into framing rather than generating.
+
+ This risk forms a trio with sibling failure modes: [[cognitive outsourcing risk in agent-operated systems]] tests whether delegating to agents atrophies human capability, while [[verbatim risk applies to agents too]] tests whether agent outputs contain genuine synthesis or just well-structured reorganization. Each is orthogonal — a system can fail on any one while succeeding on the others, which makes the combined risk surface wider than any individual failure mode suggests.
+ ---
+
+ Relevant Notes:
+ - [[insight accretion differs from productivity in knowledge systems]] — the test is accretion not productivity: building can look productive (commits, features) while producing zero accretion (no deeper understanding); this is the discriminator
+ - [[throughput matters more than accumulation]] — the output metric that matters; if building doesn't increase throughput, it's not investment but procrastination
+ - [[complex systems evolve from simple working systems]] — creates productive tension: Gall's Law says add complexity where pain emerges, but the discriminator is whether complexity tracks output growth
+ - [[cognitive outsourcing risk in agent-operated systems]] — sibling risk from same source; tests skill atrophy from delegation while this tests procrastination disguised as productivity
+ - [[verbatim risk applies to agents too]] — completes the meta-system risks trio: this tests building vs producing, cognitive outsourcing tests human capability atrophy, verbatim risk tests agent output quality
+ - [[skills encode methodology so manual execution bypasses quality gates]] — exemplifies the rationalization risk: skills genuinely encode accumulated learning, but this argument could justify infinite building
+ - [[writing for audience blocks authentic creation]] — parallel diversion pattern: infrastructure building diverts effort from production to meta-production; audience awareness diverts effort from synthesis to presentation; both feel productive while producing zero output
+
+ Topics:
+ - [[maintenance-patterns]]
+ - [[processing-workflows]]
@@ -0,0 +1,64 @@
+ ---
+ description: When notes have queryable metadata, the vault can shift from passive storage to active participant — notes surfacing themselves based on due dates, staleness thresholds, or status transitions
+ kind: research
+ topics: ["[[maintenance-patterns]]"]
+ methodology: ["Original"]
+ source: [[2-4-metadata-properties]]
+ ---
+
+ # programmable notes could enable property-triggered workflows
+
+ The standard model treats notes as passive objects. You write them, file them, maybe search for them later. The note sits until you retrieve it. But when notes have structured metadata, a different architecture becomes possible: notes that act based on their properties. A note with `status: seedling` and `created: 30 days ago` surfaces itself for review. A note with `due: today` appears in the morning's task list. A note whose `last-reviewed` date exceeds some threshold enters a maintenance queue.
+
+ This is the shift from repository to agent that the source material describes: "systems where notes can trigger actions based on their properties" transform "the PKM system from a passive repository into an active agent in the user's cognitive workflow."
+
+ The vault already implements primitive versions of this pattern. Since [[live index via periodic regeneration keeps discovery current]], hooks that fire on note changes demonstrate property-triggered behavior — the note's modification triggers index regeneration. Since [[schema enforcement via validation agents enables soft consistency]], validation hooks that fire on Write demonstrate notes triggering quality checks. But these are system-level triggers (file events), not semantic triggers (property changes).
+
+ The fuller vision involves semantic triggers:
+
+ | Property Condition | Triggered Action |
+ |-------------------|------------------|
+ | `status: seedling` AND age > 14 days | Add to review queue |
+ | `last-reviewed` older than interval | Surface for spaced repetition |
+ | `type: tension` AND `status: open` | Include in synthesis prompts |
+ | Incoming link count < 2 AND age > 30 days | Flag as potential orphan |
+ | `source` note modified after this note created | Trigger reweave check |
+
+ Since [[type field enables structured queries without folder hierarchies]], the `type:` field already provides one property dimension for triggers — tension notes wanting synthesis, methodology notes wanting implementation review. The table above extends this to compound conditions (type AND status, property AND age).
+
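To make these conditions concrete, here is a minimal sketch of a vault-wide property scan in Python, with PyYAML assumed for frontmatter parsing; the `thinking/` directory and the thresholds are illustrative assumptions, not the package's actual implementation:

```python
# Illustrative vault-wide property scan: parse YAML frontmatter and report
# which notes currently meet the trigger conditions from the table above.
from datetime import date, datetime
from pathlib import Path

import yaml  # PyYAML

VAULT = Path("thinking")      # hypothetical vault directory
SEEDLING_REVIEW_DAYS = 14     # thresholds from the table above
REVIEW_INTERVAL_DAYS = 30

def frontmatter(path: Path) -> dict:
    """YAML frontmatter of a note, or {} when the note has none."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return {}
    _, block, _ = text.split("---", 2)
    return yaml.safe_load(block) or {}

def age_days(value) -> float:
    """Days elapsed since a YAML date, datetime, or ISO date string."""
    if isinstance(value, datetime):
        value = value.date()
    elif isinstance(value, str):
        value = date.fromisoformat(value)
    return (date.today() - value).days

def triggered_actions(meta: dict) -> list[str]:
    """Actions this note's properties currently call for."""
    actions = []
    if meta.get("status") == "seedling" and age_days(meta.get("created", date.today())) > SEEDLING_REVIEW_DAYS:
        actions.append("add to review queue")
    if "last-reviewed" in meta and age_days(meta["last-reviewed"]) > REVIEW_INTERVAL_DAYS:
        actions.append("surface for spaced repetition")
    if meta.get("type") == "tension" and meta.get("status") == "open":
        actions.append("include in synthesis prompts")
    return actions

for note in sorted(VAULT.glob("*.md")):
    for action in triggered_actions(frontmatter(note)):
        print(f"{note.name}: {action}")
```
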
+ The implementation challenge is the event model. File-based hooks fire on Write/Read events — they don't know about property semantics. Since [[maintenance scheduling frequency should match consequence speed not detection capability]], the choice between event-driven and periodic detection is not about cost but about how fast the problem develops — schema violations need per-event triggers because they propagate instantly, while staleness conditions need periodic scans because they develop over weeks. And since [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]], each property trigger in the table above maps naturally onto a specific loop: schema-violation triggers belong in the fast loop (per-event hooks), orphan detection and staleness surfacing belong in the medium loop (per-session checks), and source-change reweave triggers belong in the slow loop (periodic review). The three-loop architecture provides the scheduling container that determines which trigger gets which infrastructure investment. A property-aware trigger system would need one of:
+
+ 1. **Property diff on every write** — compare YAML before/after, fire triggers when specific fields change. Since [[metadata reduces entropy enabling precision over recall]], YAML frontmatter is already structured for parsing; diffing it is tractable. And since [[intermediate representation pattern enables reliable vault operations beyond regex]], an IR layer where notes are already parsed into typed objects would make property diffs a comparison of structured dictionaries rather than regex extraction that breaks on multiline values or edge-case YAML formatting. A sketch of this approach follows the list.
+
+ 2. **Periodic property scans** — a maintenance agent queries properties vault-wide and surfaces notes meeting conditions. This is less reactive but simpler. It is essentially what `/review` does for link validity, extended to arbitrary property queries. Since [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]], periodic property scans are reconciliation loops where each property condition in the table above defines a desired state and the scan detects divergence. The reconciliation pattern explains why periodic scans may be the pragmatic first step: reconciliation's core insight is that the comparison is always safe and idempotent, meaning scheduled property scans carry zero risk and catch the accumulated drift that event-driven triggers miss.
+
+ 3. **Hybrid approach** — file events trigger property parsing; property changes enqueue notes for condition evaluation; a scheduler processes the queue.
+
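A minimal sketch of the property-diff approach from option 1 above, assuming PyYAML again; the dispatch table and function names are hypothetical:

```python
# Illustrative property diff for a Write hook: fire triggers only for the
# frontmatter fields whose values actually changed in this edit.
import yaml  # PyYAML

def parse_frontmatter(text: str) -> dict:
    """Frontmatter of a note given its full text, or {} if absent."""
    if not text.startswith("---"):
        return {}
    _, block, _ = text.split("---", 2)
    return yaml.safe_load(block) or {}

def property_diff(before: dict, after: dict) -> dict:
    """Map field -> (old, new) for every property whose value changed."""
    fields = set(before) | set(after)
    return {f: (before.get(f), after.get(f))
            for f in fields if before.get(f) != after.get(f)}

# Hypothetical dispatch table: which semantic field changes warrant which checks.
TRIGGERS = {
    "status": lambda old, new: f"status {old} -> {new}: re-evaluate review queue membership",
    "source": lambda old, new: "source changed: enqueue a reweave check",
}

def on_write(before_text: str, after_text: str) -> list[str]:
    """Would be called from a Write hook with the note text before and after the edit."""
    changed = property_diff(parse_frontmatter(before_text), parse_frontmatter(after_text))
    return [TRIGGERS[field](old, new) for field, (old, new) in changed.items() if field in TRIGGERS]
```
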
+ The connection to spaced repetition is direct. Since [[spaced repetition scheduling could optimize vault maintenance]] proposes interval-based attention allocation, programmable notes are the implementation mechanism. The note's `last-reviewed` property determines its next review date; a scheduler queries notes due for review and surfaces them. The note participates in determining when it gets attention. Similarly, since [[maturity field enables agent context prioritization]] proposes seedling/developing/evergreen status, that status becomes a trigger dimension: seedlings surface more frequently for development, evergreens surface rarely for confirmation.
+
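A minimal sketch of that scheduling logic, where the note's own `last-reviewed` and maturity properties determine when it next surfaces; the interval lengths are invented for illustration:

```python
# Illustrative interval-based surfacing: a note's properties decide when it
# next asks for attention. Interval lengths per maturity level are invented.
from datetime import date, datetime, timedelta

INTERVALS = {
    "seedling": timedelta(days=7),
    "developing": timedelta(days=30),
    "evergreen": timedelta(days=120),
}

def as_date(value) -> date:
    """Normalize a YAML date, datetime, or ISO string to a date."""
    if isinstance(value, datetime):
        return value.date()
    if isinstance(value, str):
        return date.fromisoformat(value)
    return value

def next_review(meta: dict) -> date:
    last = as_date(meta.get("last-reviewed") or meta.get("created") or date.today())
    interval = INTERVALS.get(meta.get("maturity", "developing"), INTERVALS["developing"])
    return last + interval

def due_for_review(meta: dict) -> bool:
    return next_review(meta) <= date.today()

print(due_for_review({"maturity": "seedling", "last-reviewed": "2024-01-01"}))
```
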
+ There's a deeper principle here about the nature of notes in agent-operated systems. Traditional notes are inert — the agent acts on them. Programmable notes are reactive — they influence when and how the agent acts. This shifts the architecture from pull (agent searches for relevant content) to push (content declares its relevance). The shift is especially important because since [[prospective memory requires externalization]], every "remember to check this note later" intention is a guaranteed failure across agent sessions. Property-triggered workflows eliminate the prospective memory demand entirely — the note surfaces itself when conditions are met, rather than depending on any agent to remember that attention is due. Since [[dangling links reveal which notes want to exist]], the vault already has notes that "want" things — this pattern extends that: notes that want attention, want review, want connection. This connects to [[bootstrapping principle enables self-improving systems]]: property-triggered workflows let the vault participate in its own maintenance by declaring what needs attention. The system improves itself through the notes it contains.
+
+ The risk is complexity. Every property trigger is a rule; rules interact; interactions create surprises. A note might trigger multiple workflows simultaneously. Cascades might emerge where one note's action triggers another's. Since [[complex systems evolve from simple working systems]], the safe approach starts minimal: one or two trigger types (staleness, due date), implemented as periodic queries rather than event-driven hooks, extended only when the simple version proves valuable.
+
+ The question worth investigating: what property triggers would actually improve vault operation vs adding complexity without benefit? Staleness-based surfacing seems high-value (prevents write-only memory). Due-date surfacing seems high-value (explicit scheduling). Status-transition triggers seem medium-value (could automate parts of the processing pipeline). Arbitrary programmability seems low-value (complexity exceeds benefit for most workflows). The consequence speed framework helps here: triggers earn their complexity when the problem they detect propagates faster than the next scheduled check would catch it. A schema violation that propagates instantly justifies per-event complexity. A staleness condition that develops over weeks does not need event-driven infrastructure — a periodic scan suffices. The research direction is identifying which triggers earn their complexity, and consequence speed provides the evaluation criterion. And since [[confidence thresholds gate automated action between the mechanical and judgment zones]], the response to each trigger need not be binary (fire or do not fire) -- a three-tier response pattern where high-confidence triggers auto-execute, medium-confidence triggers suggest, and low-confidence triggers merely log provides a graduated approach that reduces the risk of cascade interactions while still capturing the value of property-triggered surfacing.
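A minimal sketch of that graduated response, with illustrative thresholds standing in for whatever confidence scoring the system actually uses:

```python
# Illustrative three-tier dispatch: a trigger's confidence score selects its
# response tier instead of firing as an all-or-nothing action.
def respond(action: str, confidence: float) -> str:
    if confidence >= 0.9:
        return f"auto-execute: {action}"   # mechanical zone, e.g. queue a stale note
    if confidence >= 0.6:
        return f"suggest: {action}"        # judgment zone, e.g. a reweave check
    return f"log only: {action}"           # low confidence: record an observation

print(respond("add note to review queue (stale seedling)", 0.95))
print(respond("reweave check against updated source", 0.70))
print(respond("merge two near-duplicate notes", 0.40))
```
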
+ ---
+
+ Relevant Notes:
+ - [[live index via periodic regeneration keeps discovery current]] — primitive form: file events already trigger index updates; property triggers extend this to semantic events
+ - [[schema enforcement via validation agents enables soft consistency]] — sibling pattern: validation hooks fire on writes; property triggers fire on semantic conditions
+ - [[spaced repetition scheduling could optimize vault maintenance]] — direct application: property-triggered surfacing is how spaced repetition would be implemented
+ - [[metadata reduces entropy enabling precision over recall]] — prerequisite: property triggers require queryable metadata; this note explains why that metadata exists
+ - [[dangling links reveal which notes want to exist]] — the notes that want framing: dangling links want resolution; this extends to notes wanting attention, review, connection
+ - [[complex systems evolve from simple working systems]] — constraint: start with minimal triggers, extend when value is proven
+ - [[bootstrapping principle enables self-improving systems]] — the deeper architecture: property triggers let notes participate in vault maintenance, making the system self-improving through its own content
+ - [[maturity field enables agent context prioritization]] — trigger dimension: seedling/developing/evergreen status determines review frequency and surfacing priority
+ - [[type field enables structured queries without folder hierarchies]] — existing trigger dimension: `type:` field enables category-based triggers (tensions for synthesis, methodology for implementation review)
+ - [[intermediate representation pattern enables reliable vault operations beyond regex]] — infrastructure prerequisite: property diff on writes and compound condition checking both depend on reliable YAML extraction; an IR layer makes property access typed lookups rather than regex parsing that breaks on multiline values
+ - [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — complementary architecture: periodic property scans are reconciliation loops where each condition defines a desired state; reconciliation catches the accumulated drift that event-driven property triggers miss
+ - [[confidence thresholds gate automated action between the mechanical and judgment zones]] -- response calibration for triggered actions: property triggers range from safe read-only operations (adding to review queue) to risky write operations (auto-modifying content), and confidence scoring determines which response tier the trigger activates; a staleness detection with high certainty can auto-queue while a reweave check with medium certainty should only suggest, applying the three-tier pattern to property-triggered workflow design
+ - [[maintenance scheduling frequency should match consequence speed not detection capability]] — scheduling framework for trigger frequency: consequence speed provides the principled answer to whether a property condition needs event-driven detection or periodic scanning — schema violations propagate instantly (per-event trigger), staleness develops over weeks (periodic scan), and the five-tier spectrum maps each property condition to its appropriate detection cadence
+ - [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]] — scheduling container for property triggers: each trigger maps onto a specific loop (fast for schema, medium for orphan/staleness, slow for reweave), and the three-loop architecture determines which triggers earn event-driven infrastructure versus periodic scanning
+ - [[prospective memory requires externalization]] — the cognitive problem property triggers solve: agents cannot remember to check conditions across sessions, so notes that surface themselves when conditions are met eliminate the prospective memory demand entirely
+
+ Topics:
+ - [[maintenance-patterns]]
@@ -0,0 +1,69 @@
+ ---
+ description: The efficiency framing misses the point — tokens are free, quality requires depth, so the goal is a dense relevant context window not a sparse one
+ kind: research
+ topics: ["[[discovery-retrieval]]"]
+ ---
+
+ # progressive disclosure means reading right not reading less
+
+ The common interpretation of progressive disclosure is efficiency: read less, save tokens, minimize context. This is the wrong framing. The system philosophy inverts it: fill the context window, but fill it with what matters.
+
+ ## Why the Efficiency Framing Fails
+
+ Tokens are free. Context window capacity is not the scarce resource it once was. What remains scarce is relevance. A context window stuffed with whatever loaded first performs worse than a smaller context loaded with exactly what the task needs.
+
+ The failure mode isn't "too much reading." It's "undiscerning reading." Loading all descriptions with `rg "^description:" thinking/*.md` dumps everything regardless of relevance. At scale this fills context with noise. The problem isn't the token count — it's that most of those tokens don't help.
+
+ ## Curation Over Reduction
+
+ Progressive disclosure provides the mechanism for curation, not reduction. Since [[spreading activation models how agents should traverse]], the discovery layers implement decay-based context loading — high decay stops at descriptions, low decay reads full files. The layers — file tree, descriptions, MOCs, outlines, semantic search — let you find what's relevant before you load it fully. Since [[dual-coding with visual elements could enhance agent traversal]], visual representations (Mermaid diagrams, relationship graphs) might constitute an additional layer type — one that encodes structural relationships in a format where patterns are visible rather than described. This remains a research direction: the current layers are all text-based, and visual layers would require verifying that multimodal processing provides genuine filtering benefit. The MOC path provides curated navigation that demonstrably works; semantic search remains useful for candidate generation but not as the primary discovery mechanism. The pattern is:
+
+ ```
+ find relevant content (MOC or vsearch) → load it fully → follow links that matter
+ ```
+
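A minimal sketch of that pattern: scan description lines as the cheap filter, then read the full note only for candidates that pass. The keyword test is a stand-in for MOC traversal or semantic search, and the `thinking/` directory follows the rg example above:

```python
# Illustrative find -> load-fully loop: descriptions act as the cheap filter,
# and only the notes that pass it are read in full.
from pathlib import Path

VAULT = Path("thinking")  # directory from the rg example above

def description(path: Path) -> str:
    """The note's description: line, or '' if it has none."""
    for line in path.read_text(encoding="utf-8").splitlines():
        if line.startswith("description:"):
            return line.removeprefix("description:").strip()
    return ""

def load_relevant(query_terms: set[str]) -> dict[str, str]:
    """Full text of every note whose description mentions any query term."""
    loaded = {}
    for note in sorted(VAULT.glob("*.md")):
        desc = description(note).lower()
        if any(term in desc for term in query_terms):
            loaded[note.name] = note.read_text(encoding="utf-8")  # read completely, don't skim
    return loaded

context = load_relevant({"maintenance", "staleness"})
```
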
25
+ This isn't about reading less. It's about reading right. Once you've identified relevant notes, read them completely. Don't skim when depth matters. Quality comes from understanding, and understanding requires enough context to grasp the full picture.
+
+ The MOC-first pattern works because of network structure: since [[small-world topology requires hubs and dense local links]], MOCs serve as hub nodes that create shortcuts across the knowledge graph. Starting from a hub node means most relevant content is 1-2 hops away, not buried in a linear search. This is why curated navigation (following MOC links) outperforms semantic search for initial orientation — you're traversing a graph optimized for short paths. And since [[complete navigation requires four complementary types that no single mechanism provides]], the disclosure layers succeed precisely because they exercise all four navigation types in sequence: file tree scanning provides global orientation (where am I?), MOC reading provides local orientation (what's nearby?), following wiki links provides contextual depth (what's related to this?), and searching provides supplemental discovery (how else can I find things?). The progressive disclosure stack is not an arbitrary ordering but a traversal of complementary navigation types, which is why skipping a layer creates a predictable blind spot.
+
+ ## Progressive Summarization as Implementation
+
+ Tiago Forte's Progressive Summarization provides a concrete methodology for this philosophy. The technique creates compression layers: Notes → Bold → Highlights → Summary. Each layer is progressively more compressed, letting readers choose their depth based on need.
+
+ For agent-operated vaults, this translates to a layered content strategy. Source materials live in archive at full fidelity. Reference notes compress key insights. Wiki-linkable claims distill the core argument. Agents navigate at the appropriate abstraction level — summaries for broad scanning when building context across many notes, full content for deep analysis when a single note is central to the task.
+
+ The key insight: these aren't mutually exclusive depths but simultaneous layers. The archive doesn't disappear when you create the summary. Both exist, and the agent chooses which to load based on what the task requires. This is why the system keeps raw sources in archive alongside the extracted claims — the compression layers coexist rather than replacing each other.
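+
+ A small sketch of what "simultaneous layers" can look like operationally, assuming a hypothetical layout where `archive/`, `reference/`, and `thinking/` hold the same source at three compression levels; the directory names and the centrality heuristic are illustrative, not the package's actual structure.
+
+ ```python
+ # Sketch: three coexisting compression layers for one source, chosen per task.
+ from pathlib import Path
+
+ LAYERS = {
+     "claim":     Path("thinking"),   # distilled, wiki-linkable argument
+     "reference": Path("reference"),  # compressed key insights
+     "archive":   Path("archive"),    # full-fidelity source material
+ }
+
+ def load_at_depth(slug: str, centrality: str) -> str:
+     """Pick the layer by how central the note is to the current task:
+     peripheral → claim, supporting → reference, central → archive."""
+     depth = {"peripheral": "claim", "supporting": "reference", "central": "archive"}[centrality]
+     return (LAYERS[depth] / f"{slug}.md").read_text()
+ ```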
+
+ ## The Dense Context Window
+
+ The goal is a context window dense with relevant material. Not sparse. Not efficient. Dense with what matters for this task. Since [[descriptions are retrieval filters not summaries]], each discovery layer serves the curation function: helping you decide what deserves the full-depth treatment.
+
+ The question at each step isn't "can I stop here?" but "does this warrant going deeper?" Sometimes the answer is yes — follow the link, read the full note, load the related content. Sometimes no — the description told you enough to know this isn't relevant. Progressive disclosure gives you the information to make that call, not permission to skip depth.
+
+ But the filtering layer only works when compression preserves enough distinctiveness for correct decisions. Since [[sense-making vs storage does compression lose essential nuance]], some ideas may be systematically invisible at the filter layer — not because the description is poorly written, but because the idea's distinctive features ARE the nuance that compression discards. For these ideas, the disclosure layer fails not through quality defects but through format incompatibility. The agent never reaches the full content because the filter never identified relevance in the first place.
+
+ Since [[testing effect could enable agent knowledge verification]], this assumption becomes testable. If descriptions truly enable accurate filtering decisions, an agent should be able to predict note content from title and description alone. The recite skill applies the testing effect: read only metadata, predict content, score against actual content. Notes where prediction fails are exactly notes where the disclosure layer has broken — the filtering information doesn't match what's being filtered. Since [[retrieval verification loop tests description quality at scale]], this verification extends to the entire vault: systematic scoring across all notes reveals patterns (which types of notes fail, common failure modes, whether quality correlates with age) and turns disclosure layer quality from assumption to measured property. This is the verification mechanism for progressive disclosure's core assumption.
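+
+ A sketch of what that recite-style audit could look like. The overlap score is a crude stand-in for whatever scoring the recite skill actually applies, and `predict_content` would be an LLM call in practice; none of these function names come from the package.
+
+ ```python
+ # Sketch: predict note content from title + front matter alone, score the
+ # prediction against the actual body, and flag notes where filtering breaks.
+ from pathlib import Path
+
+ def split_note(text: str) -> tuple[str, str]:
+     """Return (front matter, body) for a note opening with a --- block."""
+     _, meta, body = text.split("---", 2)
+     return meta, body
+
+ def predict_content(title: str, meta: str) -> str:
+     """Stub: in practice, ask the model to recite what the note should say."""
+     return f"{title} {meta}"
+
+ def overlap_score(predicted: str, actual: str) -> float:
+     p, a = set(predicted.lower().split()), set(actual.lower().split())
+     return len(p & a) / max(len(a), 1)
+
+ def audit(vault: Path, threshold: float = 0.1) -> list[str]:
+     """Flag notes whose metadata fails to predict their content."""
+     failures = []
+     for note in vault.glob("*.md"):
+         meta, body = split_note(note.read_text())
+         if overlap_score(predict_content(note.stem, meta), body) < threshold:
+             failures.append(note.stem)  # disclosure layer broken here
+     return failures
+ ```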
+
+ But the deeper risk is that structural completeness creates false confidence: since [[metacognitive confidence can diverge from retrieval capability]], the system may feel navigable through surface metrics (descriptions exist, MOCs are organized) while actual retrieval systematically fails. The disclosure layers exist to enable curation — but only if they actually predict what they're filtering. The verification loop closes this gap by testing empirically, not assuming structurally.
+
+ ## The Design Decision
+
+ This is a CLOSED claim — a foundational choice about how the system works. We could have designed for efficiency: minimal context, just-enough reading, token conservation. We chose the opposite: fill the context window with quality content, use disclosure layers to curate what "quality" means for each task.
+
+ The phrase "reading right not reading less" captures the philosophy. There is no virtue in reading less. The virtue is in reading what matters.
+ ---
+
+ Relevant Notes:
+ - [[descriptions are retrieval filters not summaries]] — describes the progressive disclosure layers and how descriptions enable filtering without full loading
+ - [[spreading activation models how agents should traverse]] — provides the cognitive science foundation: progressive disclosure IS decay-based context loading
+ - [[small-world topology requires hubs and dense local links]] — explains WHY MOC-first navigation works: hub nodes create shortcuts that keep relevant content 1-2 hops away
+ - [[intermediate packets enable assembly over creation]] — Progressive Summarization creates intermediate packets at multiple compression levels
+ - [[testing effect could enable agent knowledge verification]] — tests whether the disclosure layer actually works: if descriptions don't predict content, filtering fails
+ - [[retrieval verification loop tests description quality at scale]] — operationalizes the test at vault-wide scale: systematic scoring reveals patterns and turns disclosure layer quality from assumption to measured property
+ - [[metacognitive confidence can diverge from retrieval capability]] — tests the failure mode where disclosure assumptions break: structural completeness produces false navigability confidence while actual retrieval fails
+ - [[dual-coding with visual elements could enhance agent traversal]] — proposes visual representations as an additional layer type alongside the current text-based disclosure layers
+ - [[sense-making vs storage does compression lose essential nuance]] — the tension: some ideas may be invisible at the filter layer because their distinctive features are the nuance that compression discards
+ - [[complete navigation requires four complementary types that no single mechanism provides]] — explains WHY the disclosure layers are ordered as they are: each layer exercises a different navigation type (global, local, contextual, supplemental), so the progressive sequence is a traversal of complementary types, not an arbitrary ordering
+
+ Topics:
+ - [[discovery-retrieval]]
@@ -0,0 +1,49 @@
+ ---
+ description: Each module declares its required YAML fields and validation checks only active modules — otherwise disabling modules does not reduce schema demands, creating an all-or-nothing trap
+ kind: research
+ topics: ["[[design-dimensions]]", "[[note-design]]"]
+ methodology: ["Original", "Systems Theory"]
+ source: [[composable-knowledge-architecture-blueprint]]
+ ---
+
+ # progressive schema validates only what active modules require not the full system schema
+
+ A composable knowledge system fails its own premise if validation enforces the full schema regardless of which modules are active. Since [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]], each module is a capability that can be enabled or disabled independently. But if the validation module checks every field that any module might need — topics for MOCs, methodology for processing pipeline, semantic neighbors for search — then disabling those modules does not actually reduce the system's demands on the user. The schema becomes a monolith wearing a composable costume: the modules are theoretically independent, but in practice you must satisfy all their requirements because the validator does not know which modules are running.
+
+ The fix is straightforward in principle: each module declares the YAML fields it requires, and the validator checks only the fields belonging to active modules. A user with only yaml-schema and wiki-links enabled should never see errors about missing `topics` or `methodology` fields, because those fields belong to the mocs module and processing-pipeline module respectively, neither of which is active. This is the knowledge system equivalent of optional types in programming — a field like `topics` is required IF the mocs module is active, optional otherwise. The schema definition needs conditional requirements: `topics: required_when(mocs)`. The generated context file includes the topics field documentation only when mocs is enabled, and the validation module checks for topics only when mocs is active.
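+
+ A minimal sketch of that conditional requirement is shown below. The field-to-module ownership mirrors the prose (topics belongs to mocs, methodology to processing-pipeline), but the exact names and the dictionary format are assumptions for illustration, not the package's schema definition.
+
+ ```python
+ # Sketch: each field declares which module requires it; validation checks
+ # only fields whose owning module is active. Enforcement stays soft:
+ # the validator returns warnings rather than blocking the write.
+ FIELD_OWNERS = {
+     "description": None,                    # universal base, always required
+     "topics": "mocs",                       # required_when(mocs)
+     "methodology": "processing-pipeline",   # required_when(processing-pipeline)
+     "relevant_notes": "processing-pipeline",
+ }
+
+ def required_fields(active_modules: set[str]) -> set[str]:
+     return {
+         field
+         for field, owner in FIELD_OWNERS.items()
+         if owner is None or owner in active_modules
+     }
+
+ def validate(front_matter: dict, active_modules: set[str]) -> list[str]:
+     missing = required_fields(active_modules) - front_matter.keys()
+     return [f"missing field: {name}" for name in sorted(missing)]
+
+ # A foundation-only configuration never sees warnings about topics or methodology:
+ validate({"description": "..."}, {"yaml-schema", "wiki-links"})   # -> []
+ ```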
14
+
15
+ This matters because the alternative — validating everything regardless of activation state — creates what the modular synthesis tradition would recognize as a broken patch routing. Since [[the no wrong patches guarantee ensures any valid module combination produces a valid system]], a foundation-only configuration that triggers validation errors from convention-layer fields is arguably a "wrong patch" — the module combination is valid but the validation layer rejects notes that satisfy all active requirements. Progressive schema is what extends the no wrong patches guarantee from structural integrity (no data corruption) to operational integrity (no spurious warnings). Since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], the module dependency graph follows the layer hierarchy: foundation modules (yaml-schema, wiki-links) have no dependencies, convention modules (atomic-notes, mocs) build on foundation, automation modules (validation, processing-pipeline) build on convention. Progressive schema respects this hierarchy by checking only the fields from the user's current active layer and below. A user at the convention layer sees convention-level requirements. A user at the foundation layer sees only the universal base: a description field and whatever their note contains. The validator does not look upward into layers the user has not reached.
16
+
17
+ The connection to enforcement style deepens the design. Since [[schema enforcement via validation agents enables soft consistency]], the vault already uses soft enforcement — warnings rather than blocks. Progressive schema adds a second axis of flexibility: not only does validation warn instead of block, it also scopes which warnings fire. Without progressive scoping, soft enforcement merely makes the pain gentler (a warning instead of an error about missing methodology), but the pain still occurs for fields the user has no reason to populate. With progressive scoping, the warning never fires in the first place because the validator knows that methodology belongs to an inactive module. The combination of soft enforcement and progressive scope creates the loosely coupled validation that composability requires.
18
+
19
+ The practical consequence is that progressive schema prevents a specific failure mode at the intersection of two sibling anti-patterns. Since [[premature complexity is the most common derivation failure mode]], deploying too much system at once overwhelms users before they develop working habits. Since [[configuration paralysis emerges when derivation surfaces too many decisions]], exposing too many choices prevents setup from completing. Non-progressive schema creates a third failure mode that operates during daily use rather than at setup or deployment: the user has successfully configured a minimal system, has started creating notes with just descriptions and wiki links, and then encounters validation warnings about topics, methodology, and relevant_notes — fields belonging to modules they deliberately chose not to enable. The response is predictable: either enable everything to silence the warnings (defeating the purpose of composable adoption), add placeholder values to satisfy the validator (the schema-stuffing anti-pattern that [[schema evolution follows observe-then-formalize not design-then-enforce]] identifies as false compliance), or abandon the system. All three responses are failures of the architecture, not the user.
20
+
21
+ The implementation pattern maps cleanly to the existing module dependency graph. Each module's declaration includes not only its code dependencies (what other modules must be active) but also its schema contributions (what YAML fields it adds to the note format). Since [[module communication through shared YAML fields creates loose coupling without direct dependencies]], the schema assembly inherits the same event-bus architecture: each module publishes its field requirements to a shared declaration surface, and the validator subscribes by collecting those declarations into a runtime schema. The validator reads the active module list, collects their schema contributions, and validates only those fields. Foundation modules contribute `description`. The wiki-links module contributes nothing additional to the schema — it operates through inline link syntax, not YAML fields. The mocs module contributes `topics`. The processing-pipeline contributes `methodology` and `relevant_notes`. The validation module itself contributes nothing — it reads what others declare. This means the schema is assembled dynamically from the active module set, and since [[schema validation hooks externalize inhibitory control that degrades under cognitive load]], the assembled schema is what the hooks enforce. The hooks fire reliably regardless of the agent's cognitive state, but what they check is scoped to what matters.
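+
+ One shape the declaration surface could take is sketched below: module manifests that list code dependencies alongside schema contributions, from which the validator assembles the runtime schema. The manifest format is an assumption for illustration; the field assignments follow the prose above.
+
+ ```python
+ # Sketch: modules publish their schema contributions; the validator collects
+ # contributions only from the active set, and dependency expansion ensures
+ # activating a module also pulls in its foundation.
+ MODULES = {
+     "yaml-schema":         {"depends_on": [],              "contributes": ["description"]},
+     "wiki-links":          {"depends_on": [],              "contributes": []},  # inline syntax, no YAML
+     "mocs":                {"depends_on": ["yaml-schema"], "contributes": ["topics"]},
+     "processing-pipeline": {"depends_on": ["yaml-schema"], "contributes": ["methodology", "relevant_notes"]},
+     "validation":          {"depends_on": ["yaml-schema"], "contributes": []},  # reads what others declare
+ }
+
+ def runtime_schema(active: set[str]) -> set[str]:
+     """Assemble field requirements from active modules only."""
+     fields: set[str] = set()
+     for name in active:
+         fields.update(MODULES[name]["contributes"])
+     return fields
+
+ def expand_dependencies(active: set[str]) -> set[str]:
+     """Activating a module implies activating everything it depends on."""
+     expanded = set(active)
+     changed = True
+     while changed:
+         changed = False
+         for name in list(expanded):
+             for dep in MODULES[name]["depends_on"]:
+                 if dep not in expanded:
+                     expanded.add(dep)
+                     changed = True
+     return expanded
+ ```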
+
+ Progressive schema also interacts with implicit dependencies in a way that makes both harder to detect. Since [[implicit dependencies create distributed monoliths that fail silently across configurations]], a module that reads a field written by another module works when both are active but fails silently when the writer is disabled. The progressive validator is doing its job — it checks only what active modules require — but the implicit dependency means the reader's actual requirements exceed its declared requirements. Progressive validation works perfectly for modules with honest declarations and masks problems in modules with incomplete ones. The solution is to make field reads as explicit as field writes: a module that reads `topics` must declare a dependency on whatever module writes `topics`, even if that dependency is optional.
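+
+ A small check for that gap, assuming modules also declare the fields they read; the `reads` key is a hypothetical extension of the manifest sketched above, not an existing declaration.
+
+ ```python
+ # Sketch: flag undeclared field reads. A module that reads a field no active
+ # module contributes is a distributed-monolith risk the validator would
+ # otherwise never surface.
+ def undeclared_reads(modules: dict, active: set[str]) -> list[str]:
+     contributed = {
+         field
+         for name in active
+         for field in modules[name].get("contributes", [])
+     }
+     warnings = []
+     for name in active:
+         for field in modules[name].get("reads", []):
+             if field not in contributed:
+                 warnings.append(f"{name} reads '{field}' but no active module writes it")
+     return warnings
+ ```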
+
+ There is a shadow side. Progressive schema creates the risk of silent under-validation. A user who should have enabled mocs but did not will never see warnings about missing topics, which means their notes will lack the navigation metadata that mocs would have organized. The system works perfectly within its configured scope — and the user may not realize that scope is too narrow until they have hundreds of notes with no topic assignments and decide to enable mocs retroactively. The mitigation is twofold: the module recommendation engine should surface "you have 50 notes and no topics — consider enabling mocs" as proactive guidance, and the module activation process should include a backfill step that scans existing notes and flags which ones need updates for the newly activated module's schema requirements. This is incremental adoption working as designed — since [[schema evolution follows observe-then-formalize not design-then-enforce]], the evidence of 50 topic-less notes IS the observation that justifies adding the mocs module, and the backfill is the formalization step.
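+
+ The backfill step could be as simple as a scan over existing notes for the fields the newly activated module contributes. The front-matter check below is deliberately naive (a substring test rather than YAML parsing), and the function name is invented for this sketch.
+
+ ```python
+ # Sketch: when a module is newly activated, list the notes missing the fields
+ # it contributes, so they can be backfilled rather than silently left behind.
+ from pathlib import Path
+
+ def needs_backfill(vault: Path, new_fields: list[str]) -> dict[str, list[str]]:
+     """Map each note to the newly required fields it is missing."""
+     gaps: dict[str, list[str]] = {}
+     for note in vault.glob("*.md"):
+         text = note.read_text()
+         missing = [field for field in new_fields if f"{field}:" not in text]
+         if missing:
+             gaps[note.stem] = missing
+     return gaps
+
+ # Enabling mocs on an established vault:
+ # needs_backfill(Path("thinking"), ["topics"]) -> {"note-without-topics": ["topics"], ...}
+ ```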
+
+ The deeper principle is that validation scope should match activation scope. In a monolithic system, the full schema applies everywhere because everything is always active. In a composable system, what is active varies by configuration, and validation that does not respect that variation undermines the composability it serves. Progressive schema is not a feature of validation — it is a requirement of composability. Without it, the modules are toggleable but the quality gates are not, and the quality gates are what the user actually encounters during daily work. The composability promise — start simple, add what you need, never encounter demands from features you have not adopted — requires that validation, the most frequent point of system-user interaction, honor the same principle of independent activation that the module architecture was designed around.
+
+ ---
+
+ Relevant Notes:
+ - [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] — provides the architectural context: modules are independently toggleable, so their schema requirements must be independently activatable; progressive schema is what makes module independence extend to the validation layer
+ - [[schema enforcement via validation agents enables soft consistency]] — the enforcement mechanism that progressive schema operates through: soft validation already warns rather than blocks, and progressive schema further scopes WHICH warnings fire based on module activation state
+ - [[schema evolution follows observe-then-formalize not design-then-enforce]] — complementary temporal axis: evolution governs which fields exist and when they get formalized, progressive schema governs which of those fields are checked at any given moment based on active modules; evolution is change over time, progressive schema is scope at a point in time
+ - [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — the layer hierarchy that module dependencies follow: foundation modules have no schema requirements beyond description, convention modules add topics and linking fields, automation modules add processing metadata; progressive schema checks only the fields from active layers
+ - [[premature complexity is the most common derivation failure mode]] — progressive schema prevents the validation equivalent of premature complexity: without it, enabling basic features forces compliance with advanced schemas, creating the same overwhelming first-encounter that premature complexity creates at the system level
+ - [[configuration paralysis emerges when derivation surfaces too many decisions]] — related failure mode at a different layer: configuration paralysis overwhelms during setup choices, while non-progressive schema overwhelms during daily use by demanding fields the user has not yet decided to care about
+ - [[schema validation hooks externalize inhibitory control that degrades under cognitive load]] — the enforcement infrastructure: hooks fire regardless of cognitive state, but progressive schema determines what those hooks check; without progressive scoping, hooks would enforce requirements from modules the user never activated
+ - [[the no wrong patches guarantee ensures any valid module combination produces a valid system]] — extends the guarantee from structural to operational: without progressive schema, valid module combinations can produce spurious validation warnings, which is arguably a wrong-patch failure at the quality-gate layer even if data integrity holds
+ - [[module communication through shared YAML fields creates loose coupling without direct dependencies]] — the communication substrate: progressive schema assembles its runtime validation set from the same shared-field declarations that modules use for inter-module coordination; field ownership discipline maps directly to schema contribution declarations
+ - [[derived systems follow a seed-evolve-reseed lifecycle]] — progressive schema enables the seed phase to start clean: minimal module activation means minimal validation requirements, so the user builds working habits before encountering convention-layer or automation-layer schema demands; each evolution step that enables a new module progressively expands validation scope
+ - [[multi-domain systems compose through separate templates and shared graph]] — progressive schema becomes especially important in multi-domain systems where different domains contribute different schema fields: a therapy domain's trigger and pattern_type fields should not create validation noise for research notes, and progressive scoping by domain-module activation prevents cross-domain schema interference
+ - [[implicit dependencies create distributed monoliths that fail silently across configurations]] — the masking interaction: progressive validation correctly stops checking fields from disabled modules, but this makes undeclared field reads invisible; a module that reads topics without declaring a dependency on the mocs module works when mocs is active and fails silently when it is disabled, and the validator never flags the gap because it is checking the right things for the wrong reasons
+ - [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — ensures friction-driven adoption extends to daily experience: a user who has only added yaml-schema and wiki-links never encounters validation demands from modules they have not yet adopted, so the enforcement surface matches the adoption state
+
+ Topics:
+ - [[design-dimensions]]
+ - [[note-design]]