arscontexta 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (418)
  1. package/.claude-plugin/marketplace.json +11 -0
  2. package/.claude-plugin/plugin.json +22 -0
  3. package/README.md +683 -0
  4. package/agents/knowledge-guide.md +49 -0
  5. package/bin/cli.mjs +66 -0
  6. package/generators/agents-md.md +240 -0
  7. package/generators/claude-md.md +379 -0
  8. package/generators/features/atomic-notes.md +124 -0
  9. package/generators/features/ethical-guardrails.md +58 -0
  10. package/generators/features/graph-analysis.md +188 -0
  11. package/generators/features/helper-functions.md +92 -0
  12. package/generators/features/maintenance.md +164 -0
  13. package/generators/features/methodology-knowledge.md +70 -0
  14. package/generators/features/mocs.md +144 -0
  15. package/generators/features/multi-domain.md +61 -0
  16. package/generators/features/personality.md +71 -0
  17. package/generators/features/processing-pipeline.md +428 -0
  18. package/generators/features/schema.md +149 -0
  19. package/generators/features/self-evolution.md +229 -0
  20. package/generators/features/self-space.md +78 -0
  21. package/generators/features/semantic-search.md +99 -0
  22. package/generators/features/session-rhythm.md +85 -0
  23. package/generators/features/templates.md +85 -0
  24. package/generators/features/wiki-links.md +88 -0
  25. package/generators/soul-md.md +121 -0
  26. package/hooks/hooks.json +45 -0
  27. package/hooks/scripts/auto-commit.sh +44 -0
  28. package/hooks/scripts/session-capture.sh +35 -0
  29. package/hooks/scripts/session-orient.sh +86 -0
  30. package/hooks/scripts/write-validate.sh +42 -0
  31. package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
  32. package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
  33. package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
  34. package/methodology/LLM attention degrades as context fills.md +49 -0
  35. package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
  36. package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
  37. package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
  38. package/methodology/PKM failure follows a predictable cycle.md +50 -0
  39. package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
  40. package/methodology/WIP limits force processing over accumulation.md +53 -0
  41. package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
  42. package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
  43. package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
  44. package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
  45. package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
  46. package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
  47. package/methodology/agent-cognition.md +107 -0
  48. package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
  49. package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
  50. package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
  51. package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
  52. package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
  53. package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
  54. package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
  55. package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
  56. package/methodology/backward maintenance asks what would be different if written today.md +62 -0
  57. package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
  58. package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
  59. package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
  60. package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
  61. package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
  62. package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
  63. package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
  64. package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
  65. package/methodology/capture the reaction to content not just the content itself.md +41 -0
  66. package/methodology/claims must be specific enough to be wrong.md +36 -0
  67. package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
  68. package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
  69. package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
  70. package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
  71. package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
  72. package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
  73. package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
  74. package/methodology/complex systems evolve from simple working systems.md +59 -0
  75. package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
  76. package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
  77. package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
  78. package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
  79. package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
  80. package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
  81. package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
  82. package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
  83. package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
  84. package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
  85. package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
  86. package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
  87. package/methodology/dangling links reveal which notes want to exist.md +62 -0
  88. package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
  89. package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
  90. package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
  91. package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
  92. package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
  93. package/methodology/derivation-engine.md +27 -0
  94. package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
  95. package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
  96. package/methodology/descriptions are retrieval filters not summaries.md +112 -0
  97. package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
  98. package/methodology/design-dimensions.md +66 -0
  99. package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
  100. package/methodology/discovery-retrieval.md +48 -0
  101. package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
  102. package/methodology/does agent processing recover what fast capture loses.md +43 -0
  103. package/methodology/domain-compositions.md +37 -0
  104. package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
  105. package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
  106. package/methodology/each new note compounds value by creating traversal paths.md +55 -0
  107. package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
  108. package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
  109. package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
  110. package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
  111. package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
  112. package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
  113. package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
  114. package/methodology/external memory shapes cognition more than base model.md +60 -0
  115. package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
  116. package/methodology/failure-modes.md +27 -0
  117. package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
  118. package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
  119. package/methodology/flat files break at retrieval scale.md +75 -0
  120. package/methodology/forced engagement produces weak connections.md +48 -0
  121. package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
  122. package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
  123. package/methodology/friction reveals architecture.md +63 -0
  124. package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
  125. package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
  126. package/methodology/generation effect gate blocks processing without transformation.md +40 -0
  127. package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
  128. package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
  129. package/methodology/graph-structure.md +65 -0
  130. package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
  131. package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
  132. package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
  133. package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
  134. package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
  135. package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
  136. package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
  137. package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
  138. package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
  139. package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
  140. package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
  141. package/methodology/implicit knowledge emerges from traversal.md +55 -0
  142. package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
  143. package/methodology/incremental reading enables cross-source connection finding.md +39 -0
  144. package/methodology/index.md +32 -0
  145. package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
  146. package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
  147. package/methodology/intermediate packets enable assembly over creation.md +52 -0
  148. package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
  149. package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
  150. package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
  151. package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
  152. package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
  153. package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
  154. package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
  155. package/methodology/local-first file formats are inherently agent-native.md +69 -0
  156. package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
  157. package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
  158. package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
  159. package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
  160. package/methodology/maintenance-patterns.md +72 -0
  161. package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
  162. package/methodology/maturity field enables agent context prioritization.md +33 -0
  163. package/methodology/memory-architecture.md +27 -0
  164. package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
  165. package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
  166. package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
  167. package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
  168. package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
  169. package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
  170. package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
  171. package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
  172. package/methodology/multi-domain-composition.md +27 -0
  173. package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
  174. package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
  175. package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
  176. package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
  177. package/methodology/note-design.md +57 -0
  178. package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
  179. package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
  180. package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
  181. package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
  182. package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
  183. package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
  184. package/methodology/operational wisdom requires contextual observation.md +52 -0
  185. package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
  186. package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
  187. package/methodology/orphan notes are seeds not failures.md +38 -0
  188. package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
  189. package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
  190. package/methodology/personal assistant uses life area management with review automation.md +610 -0
  191. package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
  192. package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
  193. package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
  194. package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
  195. package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
  196. package/methodology/processing effort should follow retrieval demand.md +57 -0
  197. package/methodology/processing-workflows.md +75 -0
  198. package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
  199. package/methodology/productivity porn risk in meta-system building.md +30 -0
  200. package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
  201. package/methodology/progressive disclosure means reading right not reading less.md +69 -0
  202. package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
  203. package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
  204. package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
  205. package/methodology/prospective memory requires externalization.md +53 -0
  206. package/methodology/provenance tracks where beliefs come from.md +62 -0
  207. package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
  208. package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
  209. package/methodology/random note resurfacing prevents write-only memory.md +33 -0
  210. package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
  211. package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
  212. package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
  213. package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
  214. package/methodology/role field makes graph structure explicit.md +94 -0
  215. package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
  216. package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
  217. package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
  218. package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
  219. package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
  220. package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
  221. package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
  222. package/methodology/schema-enforcement.md +27 -0
  223. package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
  224. package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
  225. package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
  226. package/methodology/session handoff creates continuity without persistent memory.md +43 -0
  227. package/methodology/session outputs are packets for future selves.md +43 -0
  228. package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
  229. package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
  230. package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
  231. package/methodology/small-world topology requires hubs and dense local links.md +99 -0
  232. package/methodology/source attribution enables tracing claims to foundations.md +38 -0
  233. package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
  234. package/methodology/spreading activation models how agents should traverse.md +79 -0
  235. package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
  236. package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
  237. package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
  238. package/methodology/structure enables navigation without reading everything.md +52 -0
  239. package/methodology/structure without processing provides no value.md +56 -0
  240. package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
  241. package/methodology/summary coherence tests composability before filing.md +37 -0
  242. package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
  243. package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
  244. package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
  245. package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
  246. package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
  247. package/methodology/testing effect could enable agent knowledge verification.md +38 -0
  248. package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
  249. package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
  250. package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
  251. package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
  252. package/methodology/the generation effect requires active transformation not just storage.md +57 -0
  253. package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
  254. package/methodology/the system is the argument.md +46 -0
  255. package/methodology/the vault constitutes identity for agents.md +86 -0
  256. package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
  257. package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
  258. package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
  259. package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
  260. package/methodology/throughput matters more than accumulation.md +58 -0
  261. package/methodology/title as claim enables traversal as reasoning.md +50 -0
  262. package/methodology/topological organization beats temporal for knowledge work.md +52 -0
  263. package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
  264. package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
  265. package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
  266. package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
  267. package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
  268. package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
  269. package/methodology/verbatim risk applies to agents too.md +31 -0
  270. package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
  271. package/methodology/vivid memories need verification.md +45 -0
  272. package/methodology/vocabulary-transformation.md +27 -0
  273. package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
  274. package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
  275. package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
  276. package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
  277. package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
  278. package/methodology/writing for audience blocks authentic creation.md +22 -0
  279. package/methodology/you operate a system that takes notes.md +79 -0
  280. package/openclaw/SKILL.md +110 -0
  281. package/package.json +45 -0
  282. package/platforms/README.md +51 -0
  283. package/platforms/claude-code/generator.md +61 -0
  284. package/platforms/claude-code/hooks/README.md +186 -0
  285. package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
  286. package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
  287. package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
  288. package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
  289. package/platforms/openclaw/generator.md +82 -0
  290. package/platforms/openclaw/hooks/README.md +89 -0
  291. package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
  292. package/platforms/openclaw/hooks/command-new.ts.template +165 -0
  293. package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
  294. package/platforms/shared/features/README.md +70 -0
  295. package/platforms/shared/skill-blocks/graph.md +145 -0
  296. package/platforms/shared/skill-blocks/learn.md +119 -0
  297. package/platforms/shared/skill-blocks/next.md +131 -0
  298. package/platforms/shared/skill-blocks/pipeline.md +326 -0
  299. package/platforms/shared/skill-blocks/ralph.md +616 -0
  300. package/platforms/shared/skill-blocks/reduce.md +1142 -0
  301. package/platforms/shared/skill-blocks/refactor.md +129 -0
  302. package/platforms/shared/skill-blocks/reflect.md +780 -0
  303. package/platforms/shared/skill-blocks/remember.md +524 -0
  304. package/platforms/shared/skill-blocks/rethink.md +574 -0
  305. package/platforms/shared/skill-blocks/reweave.md +680 -0
  306. package/platforms/shared/skill-blocks/seed.md +320 -0
  307. package/platforms/shared/skill-blocks/stats.md +145 -0
  308. package/platforms/shared/skill-blocks/tasks.md +171 -0
  309. package/platforms/shared/skill-blocks/validate.md +323 -0
  310. package/platforms/shared/skill-blocks/verify.md +562 -0
  311. package/platforms/shared/templates/README.md +35 -0
  312. package/presets/experimental/categories.yaml +1 -0
  313. package/presets/experimental/preset.yaml +38 -0
  314. package/presets/experimental/starter/README.md +7 -0
  315. package/presets/experimental/vocabulary.yaml +7 -0
  316. package/presets/personal/categories.yaml +7 -0
  317. package/presets/personal/preset.yaml +41 -0
  318. package/presets/personal/starter/goals.md +21 -0
  319. package/presets/personal/starter/index.md +17 -0
  320. package/presets/personal/starter/life-areas.md +21 -0
  321. package/presets/personal/starter/people.md +21 -0
  322. package/presets/personal/vocabulary.yaml +32 -0
  323. package/presets/research/categories.yaml +8 -0
  324. package/presets/research/preset.yaml +41 -0
  325. package/presets/research/starter/index.md +17 -0
  326. package/presets/research/starter/methods.md +21 -0
  327. package/presets/research/starter/open-questions.md +21 -0
  328. package/presets/research/vocabulary.yaml +33 -0
  329. package/reference/AUDIT-REPORT.md +238 -0
  330. package/reference/claim-map.md +172 -0
  331. package/reference/components.md +327 -0
  332. package/reference/conversation-patterns.md +542 -0
  333. package/reference/derivation-validation.md +649 -0
  334. package/reference/dimension-claim-map.md +134 -0
  335. package/reference/evolution-lifecycle.md +297 -0
  336. package/reference/failure-modes.md +235 -0
  337. package/reference/interaction-constraints.md +204 -0
  338. package/reference/kernel.yaml +242 -0
  339. package/reference/methodology.md +283 -0
  340. package/reference/open-questions.md +279 -0
  341. package/reference/personality-layer.md +302 -0
  342. package/reference/self-space.md +299 -0
  343. package/reference/semantic-vs-keyword.md +288 -0
  344. package/reference/session-lifecycle.md +298 -0
  345. package/reference/templates/base-note.md +16 -0
  346. package/reference/templates/companion-note.md +70 -0
  347. package/reference/templates/creative-note.md +16 -0
  348. package/reference/templates/learning-note.md +16 -0
  349. package/reference/templates/life-note.md +16 -0
  350. package/reference/templates/moc.md +26 -0
  351. package/reference/templates/relationship-note.md +17 -0
  352. package/reference/templates/research-note.md +19 -0
  353. package/reference/templates/session-log.md +24 -0
  354. package/reference/templates/therapy-note.md +16 -0
  355. package/reference/test-fixtures/edge-case-constraints.md +148 -0
  356. package/reference/test-fixtures/multi-domain.md +164 -0
  357. package/reference/test-fixtures/novel-domain-gaming.md +138 -0
  358. package/reference/test-fixtures/research-minimal.md +102 -0
  359. package/reference/test-fixtures/therapy-full.md +155 -0
  360. package/reference/testing-milestones.md +1087 -0
  361. package/reference/three-spaces.md +363 -0
  362. package/reference/tradition-presets.md +203 -0
  363. package/reference/use-case-presets.md +341 -0
  364. package/reference/validate-kernel.sh +432 -0
  365. package/reference/vocabulary-transforms.md +85 -0
  366. package/scripts/sync-thinking.sh +147 -0
  367. package/skill-sources/graph/SKILL.md +567 -0
  368. package/skill-sources/graph/skill.json +17 -0
  369. package/skill-sources/learn/SKILL.md +254 -0
  370. package/skill-sources/learn/skill.json +17 -0
  371. package/skill-sources/next/SKILL.md +407 -0
  372. package/skill-sources/next/skill.json +17 -0
  373. package/skill-sources/pipeline/SKILL.md +314 -0
  374. package/skill-sources/pipeline/skill.json +17 -0
  375. package/skill-sources/ralph/SKILL.md +604 -0
  376. package/skill-sources/ralph/skill.json +17 -0
  377. package/skill-sources/reduce/SKILL.md +1113 -0
  378. package/skill-sources/reduce/skill.json +17 -0
  379. package/skill-sources/refactor/SKILL.md +448 -0
  380. package/skill-sources/refactor/skill.json +17 -0
  381. package/skill-sources/reflect/SKILL.md +747 -0
  382. package/skill-sources/reflect/skill.json +17 -0
  383. package/skill-sources/remember/SKILL.md +534 -0
  384. package/skill-sources/remember/skill.json +17 -0
  385. package/skill-sources/rethink/SKILL.md +658 -0
  386. package/skill-sources/rethink/skill.json +17 -0
  387. package/skill-sources/reweave/SKILL.md +657 -0
  388. package/skill-sources/reweave/skill.json +17 -0
  389. package/skill-sources/seed/SKILL.md +303 -0
  390. package/skill-sources/seed/skill.json +17 -0
  391. package/skill-sources/stats/SKILL.md +371 -0
  392. package/skill-sources/stats/skill.json +17 -0
  393. package/skill-sources/tasks/SKILL.md +402 -0
  394. package/skill-sources/tasks/skill.json +17 -0
  395. package/skill-sources/validate/SKILL.md +310 -0
  396. package/skill-sources/validate/skill.json +17 -0
  397. package/skill-sources/verify/SKILL.md +532 -0
  398. package/skill-sources/verify/skill.json +17 -0
  399. package/skills/add-domain/SKILL.md +441 -0
  400. package/skills/add-domain/skill.json +17 -0
  401. package/skills/architect/SKILL.md +568 -0
  402. package/skills/architect/skill.json +17 -0
  403. package/skills/ask/SKILL.md +388 -0
  404. package/skills/ask/skill.json +17 -0
  405. package/skills/health/SKILL.md +760 -0
  406. package/skills/health/skill.json +17 -0
  407. package/skills/help/SKILL.md +348 -0
  408. package/skills/help/skill.json +17 -0
  409. package/skills/recommend/SKILL.md +553 -0
  410. package/skills/recommend/skill.json +17 -0
  411. package/skills/reseed/SKILL.md +385 -0
  412. package/skills/reseed/skill.json +17 -0
  413. package/skills/setup/SKILL.md +1688 -0
  414. package/skills/setup/skill.json +17 -0
  415. package/skills/tutorial/SKILL.md +496 -0
  416. package/skills/tutorial/skill.json +17 -0
  417. package/skills/upgrade/SKILL.md +395 -0
  418. package/skills/upgrade/skill.json +17 -0
package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md
@@ -0,0 +1,42 @@
+ ---
+ description: Uncompleted tasks occupy working memory until externalized, making zero-friction capture cognitively necessary not just convenient — grounding GTD's open loops in experimental psychology
+ kind: research
+ topics: ["[[processing-workflows]]"]
+ methodology: ["Cognitive Science", "GTD"]
+ source: [[tft-research-part3]]
+ ---
+
+ # Zeigarnik effect validates capture-first philosophy because open loops drain attention
+
+ Bluma Zeigarnik demonstrated in 1927 that incomplete tasks are remembered better than completed ones. The standard interpretation is that this is a memory advantage — the brain keeps unfinished business active for later completion. But there is a cost side to this effect that matters more for knowledge systems: the unfinished task occupies working memory continuously until it is either completed or externalized. The "better memory" for incomplete tasks is not free. It comes at the price of ongoing cognitive bandwidth.
+
+ This is the experimental psychology behind GTD's concept of "open loops." David Allen argued that every uncaptured commitment — every half-formed thought, every "I should remember to..." — drains mental energy. The Zeigarnik effect provides the mechanism: the brain maintains an active thread for each open loop, consuming working memory capacity that could be used for processing, connecting, or synthesizing. The more open loops, the less cognitive bandwidth available for actual thinking.
+
+ The flip side of this drain is the benefit of externalization during active work. Since [[notes function as cognitive anchors that stabilize attention during complex tasks]], the act of capturing a thought not only releases the Zeigarnik loop but creates a stable reference point that the thinker can return to. The drain stops and a cognitive resource appears in its place. This is why capture produces a double benefit: it removes the cost of holding the thought AND creates an anchor for future reasoning.
+
+ The vault implication is direct. Zero-friction capture is not a convenience feature — it is a cognitive necessity. Every thought that enters awareness and is not externalized becomes an open Zeigarnik loop. The loop persists, consuming working memory, creating the "nagging of the subconscious" that the research describes, generating "anxiety associated with the fear of forgetting." Capture closes the loop. The brain registers that the thought has been externalized to a trusted system and releases the working memory allocation.
+
+ This is why capture friction is so damaging. Since [[cognitive offloading is the architectural foundation for vault design]], every barrier to offloading fights the cognitive architecture. The Zeigarnik effect makes this concrete: a friction point that delays capture by even a few seconds gives the brain time to start maintaining the open loop. If the friction is high enough — finding the right file, formatting the note, categorizing before capture — the human may decide the overhead is not worth it and try to "just remember." Now the loop runs indefinitely, draining attention from whatever comes next.
+
+ The relationship to [[temporal separation of capture and processing preserves context freshness]] adds a timing dimension. Temporal separation argues that processing should happen soon because Ebbinghaus decay erodes capture context. The Zeigarnik effect argues that capture itself must happen immediately because the open loop drains attention from the moment the thought occurs until externalization. These are complementary urgencies operating at different time scales: capture within seconds (Zeigarnik), process within hours (Ebbinghaus).
+
+ Since [[closure rituals create clean breaks that prevent attention residue bleed]], the Zeigarnik effect also explains why closure is necessary. Completed tasks release their working memory allocation — but only when the brain recognizes completion. Without an explicit closure signal, the loop may persist even after the work is done. Capture closes the "I need to remember this" loop. Closure rituals close the "I need to finish this" loop. Both are Zeigarnik releases, targeting different types of open loops.
+
+ For agents, the Zeigarnik effect does not transfer directly — agents do not maintain persistent open loops across sessions. But the principle validates the queue-based architecture. The work queue externalizes all pending tasks, making them visible and manageable. Without the queue, the orchestrator session would need to hold all pending work in context — an analog of the Zeigarnik drain at the system level. Since [[session handoff creates continuity without persistent memory]], the handoff ritual itself functions as a Zeigarnik closure mechanism at the session boundary — externalizing unfinished work to task files releases the system-level equivalent of open loops, ensuring the next session starts without inherited cognitive debt. Since [[capture the reaction to content not just the content itself]], the urgency of capture applies not just to facts but to reactions, connections, and questions that arise during work. Each uncaptured reaction is an open loop that drains attention from the current task.
+
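As a concrete illustration of that handoff ritual, a session-end step could externalize unfinished work into a task queue roughly like this. A minimal sketch, assuming a queue.json task file of the kind shown in the domain compositions later in this package and the availability of jq; the script itself is hypothetical, not shipped code:

```bash
#!/usr/bin/env bash
# Illustrative handoff sketch: append unfinished work to a task queue at the
# end of a session so the next session inherits a file entry, not an open loop.
# Assumes jq is available and that 04_meta/tasks/queue.json holds a JSON array.
set -euo pipefail

QUEUE="04_meta/tasks/queue.json"
SUMMARY="${1:?usage: handoff.sh \"summary of unfinished work\"}"

[ -f "$QUEUE" ] || echo '[]' > "$QUEUE"

tmp="$(mktemp)"
jq --arg task "$SUMMARY" --arg ts "$(date -u +%FT%TZ)" \
   '. + [{"task": $task, "captured": $ts, "status": "pending"}]' \
   "$QUEUE" > "$tmp" && mv "$tmp" "$QUEUE"
```
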
+ The practical implication: optimize for capture speed above all else in the first moment of a thought. Structure can come later. Processing can come later. But the capture itself must be instantaneous, because every millisecond of delay is a millisecond of Zeigarnik drain. There is a tension here, though: since [[does agent processing recover what fast capture loses]], the speed that closes Zeigarnik loops comes at the cost of encoding depth. Slow capture forces the human to summarize and transform, which creates stronger personal memory. Fast capture closes loops efficiently but delegates the generative work to the agent. The Zeigarnik argument wins at the moment of capture — the cognitive drain is too costly to tolerate — but the encoding cost is real and must be addressed downstream through processing design.
+
+ ---
+ ---
+
+ Relevant Notes:
+ - [[temporal separation of capture and processing preserves context freshness]] — covers the timing dimension of capture; this note grounds the cognitive URGENCY of why capture must happen immediately
+ - [[cognitive offloading is the architectural foundation for vault design]] — the broader theoretical framework; Zeigarnik provides specific experimental evidence for why offloading to external systems is cognitively necessary
+ - [[closure rituals create clean breaks that prevent attention residue bleed]] — closure releases completed tasks; the Zeigarnik effect explains why uncaptured tasks cannot be released even before they are complete
+ - [[capture the reaction to content not just the content itself]] — extends capture scope; both reaction capture and Zeigarnik-driven capture share the urgency of externalizing before the thought decays or drains attention
+ - [[notes function as cognitive anchors that stabilize attention during complex tasks]] — flip side: Zeigarnik explains the cost of NOT externalizing; anchoring explains the benefit of externalizing during active reasoning
+ - [[session handoff creates continuity without persistent memory]] — handoff implements Zeigarnik closure at session boundaries; externalizing pending work to task files releases the agent-level analog of open loops
+ - [[does agent processing recover what fast capture loses]] — tension: Zeigarnik demands fast capture to close loops, but fast capture loses the encoding benefits that slow capture provides; the cognitive urgency this note establishes comes at a cost
+
+ Topics:
+ - [[processing-workflows]]
package/methodology/academic research uses structured extraction with cross-source synthesis.md
@@ -0,0 +1,566 @@
+ ---
+ description: Academic research knowledge system — inspirational composition showing derived architecture for literature reviews, claim extraction, and cross-source synthesis
+ kind: example
+ domain: research
+ topics: ["[[domain-compositions]]"]
+ ---
+
+ # academic research uses structured extraction with cross-source synthesis
+
+ An academic researcher needs a system that does more than store papers. The real work is synthesis: extracting claims from sources, connecting them across disciplines, detecting when findings conflict, and maintaining the provenance chain from raw data to published argument. Human researchers lose this thread constantly — they read a paper, highlight passages, file it by topic, and six months later can't remember which study demonstrated what. The agent doesn't forget. It maintains the full citation graph, detects contradictions exhaustively, and flags when new evidence invalidates old synthesis.
+
+ ## Persona
+
+ Dr. Maren Engel is a cognitive science postdoc studying attention allocation in human-AI collaboration. Her research sits at the intersection of cognitive psychology, HCI, and AI systems — three fields that use different vocabulary for overlapping phenomena. She reads 15-20 papers per week, attends two lab meetings, and is writing three papers simultaneously. Her current pain: she knows she read something about divided attention in multitasking interfaces, but she organized her notes by paper, not by concept. Finding the specific claim means re-reading five papers. Her literature reviews go stale because she writes them once and never updates as new evidence arrives.
+
+ What Maren needs is a system where claims are the atomic unit, not papers. Where "attention degrades nonlinearly after the third concurrent task" lives as a node she can link to from any paper, any draft, any argument — with full provenance back to the original study. Where the agent can tell her: "Three papers in your vault measured attention degradation in multi-task environments. Henderson 2024 and Park 2025 agree on nonlinear degradation, but Li 2025 found linear degradation with expert participants. This is an unresolved tension."
+
+ ## Configuration
+
+ | Dimension | Position | Rationale |
+ |-----------|----------|-----------|
+ | Granularity | Atomic (one claim per note) | Academic synthesis requires recombinable claims. A compound note about "Henderson 2024" locks claims together — but Maren needs to cite Henderson's method in one paper and Henderson's finding in another. Atomic claims enable this. |
+ | Organization | Flat with MOC overlay | Concept-based organization beats source-based. Notes organized by "who wrote it" prevent cross-source synthesis. Flat files with wiki links let the same claim appear in multiple conceptual contexts without folder conflicts. |
+ | Linking | Explicit with typed relationships + semantic discovery | Academic relationships are specific: "replicates," "contradicts," "extends," "provides evidence for." Untyped "see also" links lose the relationship that matters. Semantic search supplements manual linking to catch cross-vocabulary connections (the HCI paper using "cognitive load" connects to the psych paper using "working memory capacity"). |
+ | Metadata | Dense — methodology, source, confidence, replication status | Academic claims need provenance. A claim without methodology attribution is gossip. Dense schema enables queries like "find all claims from randomized controlled trials with sample size > 100" — the kind of systematic review query that makes the vault a research instrument. |
+ | Processing | Heavy — full extract/reflect/reweave/verify pipeline | Every source gets deep extraction. Claims are cross-referenced against all existing claims. Synthesis notes get updated when underlying claims change. This is the full processing investment because academic work compounds through connections. |
+ | Formalization | High — explicit schemas, validation, templates | Academic rigor demands it. A claim note without methodology and confidence fields is incomplete. Schema validation catches drift before it corrupts the evidence base. Templates enforce the minimum viable metadata for every note type. |
+ | Review | Quarterly deep review + event-triggered updates | Core concepts evolve slowly, but new papers arrive weekly. Event-triggered review (new paper contradicts existing claim) supplements quarterly systematic review. Literature reviews get freshness checks — if underlying claims changed, the synthesis is stale. |
+ | Scope | Domain-focused with cross-discipline bridges | Maren's three fields (cognitive psych, HCI, AI systems) each have their own vocabulary. The vault bridges them: a claim from cognitive psych about attention limits connects to an HCI finding about interface design because the agent sees the semantic relationship across vocabularies. |
+
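The systematic-review query mentioned in the Metadata row can be made concrete. A minimal sketch, assuming the flat 01_thinking/ layout and the evidence_type and sample_size fields from the claim-note schema shown below; this is an illustration, not a script shipped in the package:

```bash
#!/usr/bin/env bash
# Sketch: the "research instrument" query from the Metadata row — list claim
# notes backed by experiments with N > 100. Field names follow the claim-note
# schema below; the flat 01_thinking/ layout follows the vault structure.
set -euo pipefail

for note in 01_thinking/*.md; do
  evidence=$(awk -F': ' '/^evidence_type:/ {print $2; exit}' "$note")
  n=$(awk -F': ' '/^sample_size:/ {print $2; exit}' "$note")
  [ "$evidence" = "experimental" ] || continue
  if [ -n "$n" ] && [ "$n" -gt 100 ] 2>/dev/null; then
    echo "$note (N=$n)"
  fi
done
```

The same pattern extends to any frontmatter field, which is what the "research instrument" framing is pointing at.
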
+ ## Vault Structure
+
+ ```
+ vault/
+ ├── 00_inbox/
+ │   ├── papers/                            # PDFs and paper notes awaiting processing
+ │   │   ├── 2026-02-henderson-attention-allocation.md
+ │   │   ├── 2026-01-park-multitask-interfaces.md
+ │   │   └── 2026-02-li-expert-attention.md
+ │   ├── seminars/                          # seminar and lab meeting notes
+ │   │   └── 2026-02-12-lab-meeting-embodied-cognition.md
+ │   └── ideas/                             # research sparks before processing
+ │       └── cross-modal-attention-hypothesis.md
+ ├── 01_thinking/                           # flat — all claims, MOCs, syntheses
+ │   ├── index.md                           # hub MOC
+ │   ├── attention-allocation.md            # topic MOC
+ │   ├── cognitive-load.md                  # topic MOC
+ │   ├── human-ai-collaboration.md          # domain MOC
+ │   ├── methodology-comparison.md          # topic MOC
+ │   ├── attention degrades nonlinearly after the third concurrent task.md
+ │   ├── divided attention costs increase when modalities overlap.md
+ │   ├── expert performers show linear not nonlinear attention degradation.md
+ │   ├── interface complexity mediates attention allocation more than task count.md
+ │   ├── ecological validity problems plague most lab-based attention studies.md
+ │   ├── self-reported cognitive load correlates poorly with physiological measures.md
+ │   └── ... (claim notes, tension notes, methodology notes)
+ ├── 02_archive/
+ │   ├── references/
+ │   │   ├── articles/                      # archived paper metadata
+ │   │   │   ├── henderson-2024-attention-allocation-multitask.md
+ │   │   │   ├── park-2025-interface-design-cognitive-load.md
+ │   │   │   └── li-2025-expert-attention-linear.md
+ │   │   └── books/
+ │   │       └── kahneman-2011-thinking-fast-slow.md
+ │   └── literature-reviews/                # completed lit review snapshots
+ │       └── 2025-q4-attention-allocation-review.md
+ ├── 03_writing/                            # active manuscripts
+ │   ├── drafts/
+ │   │   ├── attention-degradation-paper/
+ │   │   │   ├── draft-v3.md
+ │   │   │   └── reviewer-comments-r1.md
+ │   │   └── cross-modal-attention-paper/
+ │   │       └── outline.md
+ │   └── published/
+ │       └── engel-2025-attention-interfaces.md
+ ├── 04_meta/
+ │   ├── logs/
+ │   │   ├── observations.md
+ │   │   ├── observations/
+ │   │   ├── tensions.md
+ │   │   └── tensions/
+ │   ├── templates/
+ │   │   ├── claim-note.md
+ │   │   ├── source-capture.md
+ │   │   ├── literature-review.md
+ │   │   ├── methodology-note.md
+ │   │   └── tension-note.md
+ │   ├── tasks/
+ │   │   ├── queue.json
+ │   │   └── archive/
+ │   └── scripts/
+ │       ├── citation-graph.sh
+ │       ├── replication-status.sh
+ │       └── stale-synthesis.sh
+ └── self/
+     ├── research-identity.md
+     ├── methodology-preferences.md
+     └── active-threads.md
+ ```
+
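The entries under 04_meta/scripts/ suggest how little machinery this layout needs. As one hedged illustration (a sketch, not the packaged citation-graph.sh), the source-to-claims half of the citation graph can be recovered from frontmatter alone:

```bash
#!/usr/bin/env bash
# Sketch: rebuild a per-source claim list from frontmatter alone — every claim
# note records its provenance in `source: "[[...]]"`, so grouping on that field
# recovers source -> claims edges without a database.
# (Illustrative only; not the packaged citation-graph.sh.)
set -euo pipefail

rg -H --no-heading -o -r '$1' '^source: "\[\[(.+)\]\]"' 01_thinking/ \
| sort -t: -k2 \
| awk -F: '{ if ($2 != prev) { print $2; prev = $2 } print "  - " $1 }'
```
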
+ ## Note Schemas
+
+ ### Claim Note (the primary unit)
+
+ ```yaml
+ ---
+ description: Divided attention costs in multi-task environments increase superlinearly when two tasks share the same sensory modality but remain additive when modalities differ
+ methodology: ["Cognitive Psychology", "Experimental"]
+ source: "[[henderson-2024-attention-allocation-multitask]]"
+ confidence: high
+ evidence_type: experimental
+ sample_size: 142
+ replication_status: replicated
+ classification: closed
+ topics: ["[[attention-allocation]]", "[[cognitive-load]]"]
+ relevant_notes:
+   - "[[attention degrades nonlinearly after the third concurrent task]] — extends: specifies the modality condition under which nonlinearity appears"
+   - "[[expert performers show linear not nonlinear attention degradation]] — contradicts: this finding holds for novices but experts show different pattern"
+   - "[[interface complexity mediates attention allocation more than task count]] — enables: provides the mechanism explanation for why complexity matters more"
+ ---
+ ```
+
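With Formalization set to high, the claim-note schema above is the natural enforcement point. A minimal sketch of a write-time check using the field names from that schema (illustrative only, not the packaged write-validate.sh hook):

```bash
#!/usr/bin/env bash
# Sketch: block a claim note from landing in 01_thinking/ unless the fields
# the schema above treats as mandatory are present in its frontmatter.
# (Illustrative only; not the packaged write-validate.sh.)
set -euo pipefail

note="${1:?usage: validate-claim.sh <note.md>}"
missing=()
for field in description methodology source confidence evidence_type; do
  grep -q "^${field}:" "$note" || missing+=("$field")
done

if [ "${#missing[@]}" -gt 0 ]; then
  echo "BLOCK: $note is missing required fields: ${missing[*]}" >&2
  exit 1
fi
echo "OK: $note passes the claim-note schema check"
```
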
+ ### Source Capture (paper metadata)
+
+ ```yaml
+ ---
+ description: Henderson et al 2024 — experimental study of attention allocation across concurrent tasks using eye tracking and dual-task paradigms (N=142)
+ source_type: journal-article
+ title: "Attention Allocation in Multi-Task Environments: Modality-Specific Costs"
+ authors: ["Henderson, K.", "Nakamura, T.", "Fischer, R."]
+ year: 2024
+ journal: "Journal of Experimental Psychology: Human Perception and Performance"
+ doi: "10.1037/xhp0001234"
+ status: deep-read
+ read_date: 2026-01-15
+ claims_extracted: 4
+ key_methods: ["dual-task paradigm", "eye tracking", "NASA-TLX"]
+ topics: ["[[attention-allocation]]"]
+ ---
+ ```
+
+ ### Literature Review
+
+ ```yaml
+ ---
+ description: Systematic review of attention allocation research 2020-2025 covering 34 sources — identifies the modality-specificity consensus and the expert-novice gap
+ type: literature-review
+ scope: "Attention allocation in multi-task and multi-interface environments"
+ sources_covered: 34
+ date_range: "2020-2025"
+ last_updated: 2026-01-20
+ status: active
+ gaps_identified:
+   - "No studies combining real-world tasks with physiological attention measures"
+   - "Expert-novice differences understudied in AI-assisted contexts"
+ synthesis_statement: "Modality-specific costs are well-established for novices but the expert pattern remains contested"
+ freshness_check: "3 underlying claims updated since last review — needs revision"
+ topics: ["[[attention-allocation]]", "[[methodology-comparison]]"]
+ ---
+ ```
+
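The freshness_check field implies a mechanical test. One way it could be computed, assuming claim notes live in 01_thinking/ and that file modification time is an acceptable proxy for a changed claim (a sketch, not the packaged stale-synthesis.sh):

```bash
#!/usr/bin/env bash
# Sketch: flag a literature review as stale when any claim note it links to
# was modified after the review's last_updated date.
# (Illustrative only; not the packaged stale-synthesis.sh.)
set -euo pipefail

review="${1:?usage: stale-check.sh <literature-review.md>}"
last_updated=$(awk -F': ' '/^last_updated:/ {print $2; exit}' "$review")

# Every [[wiki link]] in the review is a claim the synthesis depends on.
rg -o -r '$1' '\[\[([^\]]+)\]\]' "$review" | sort -u | while read -r target; do
  note="01_thinking/${target}.md"
  [ -f "$note" ] || continue
  if [ -n "$(find "$note" -newermt "$last_updated")" ]; then
    echo "stale: [[${target}]] changed after ${last_updated}, review needs revision"
  fi
done
```
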
+ ### Tension Note
+
+ ```yaml
+ ---
+ description: Henderson 2024 finds nonlinear attention degradation while Li 2025 finds linear degradation in experts — the expertise variable may dissolve the contradiction or reveal a genuine moderation effect
+ observed: 2026-02-10
+ involves:
+   - "[[attention degrades nonlinearly after the third concurrent task]]"
+   - "[[expert performers show linear not nonlinear attention degradation]]"
+ status: open
+ resolution_candidates:
+   - "Moderation by expertise level (both correct for their populations)"
+   - "Methodological difference (Li used simulated tasks, Henderson used real-world)"
+   - "Sample difference (Li's experts had 10+ years, Henderson's had 2-5)"
+ topics: ["[[attention-allocation]]"]
+ ---
+ ```
+
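Tension notes need not be spotted by hand: the contradicts annotations already recorded in claim frontmatter can be surfaced mechanically, roughly as below (a hedged sketch; the packaged skills may implement this differently):

```bash
#!/usr/bin/env bash
# Sketch: list every "contradicts" edge recorded in claim frontmatter so the
# agent can propose a tension note for any pair that lacks one.
# (Illustrative only; not a script shipped in the package.)
set -euo pipefail

rg -H --no-heading -o -r '$1' \
   '^\s*- "\[\[(.+?)\]\].*contradicts:' 01_thinking/ \
| awk -F: '{print "tension candidate: " $1 "  <->  " $2}'
```
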
+ ### Methodology Note
+
+ ```yaml
+ ---
+ description: Dual-task paradigm as attention measure — strengths in experimental control, weaknesses in artificiality, reliability concerns when tasks are not matched for difficulty
+ type: methodology
+ tradition: Cognitive Psychology
+ first_used: "Pashler, 1994"
+ strengths:
+   - "Controlled measurement of attention allocation"
+   - "Well-established norms for comparison"
+ weaknesses:
+   - "Lab tasks may not generalize to real-world multitasking"
+   - "Difficulty matching confounds comparisons across studies"
+ used_in: ["[[henderson-2024-attention-allocation-multitask]]", "[[li-2025-expert-attention-linear]]"]
+ topics: ["[[methodology-comparison]]"]
+ ---
+ ```
+
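The used_in field turns a methodology note into a pivot for a small graph query: from the method, to the papers that used it, to the claims extracted from those papers. A sketch of that two-hop traversal under the schemas and layout shown above (illustrative, not shipped code):

```bash
#!/usr/bin/env bash
# Sketch: two-hop graph query over plain files — methodology note -> used_in
# sources -> claim notes whose `source:` field cites those papers.
# (Illustrative only; not a script shipped in the package.)
set -euo pipefail

method_note="${1:?usage: claims-via-method.sh <methodology-note.md>}"

# Hop 1: papers the methodology note records under used_in.
grep '^used_in:' "$method_note" \
| grep -o '\[\[[^]]*\]\]' \
| sed 's/^\[\[//; s/\]\]$//' \
| while read -r source; do
    echo "## claims extracted from ${source}"
    # Hop 2: claim notes whose provenance points at that paper.
    rg -l "^source: \"\[\[${source}\]\]\"" 01_thinking/ || true
  done
```
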
198
+
199
+ ## Example Notes
+
+ ### Example 1: Atomic Claim Note
+
+ ```markdown
+ ---
+ description: Divided attention costs in multi-task environments increase superlinearly when two tasks share the same sensory modality but remain additive when modalities differ
+ methodology: ["Cognitive Psychology", "Experimental"]
+ source: "[[henderson-2024-attention-allocation-multitask]]"
+ confidence: high
+ evidence_type: experimental
+ sample_size: 142
+ replication_status: replicated
+ classification: closed
+ topics: ["[[attention-allocation]]", "[[cognitive-load]]"]
+ relevant_notes:
+ - "[[attention degrades nonlinearly after the third concurrent task]] — extends: specifies the modality condition under which nonlinearity appears"
+ - "[[expert performers show linear not nonlinear attention degradation]] — contradicts: this finding holds for novices but experts show different pattern"
+ - "[[interface complexity mediates attention allocation more than task count]] — enables: provides the mechanism explanation for why complexity matters more"
+ ---
+
+ # divided attention costs increase when modalities overlap
+
+ The central finding from Henderson et al. 2024 is that attention doesn't just degrade with more tasks — it degrades *differently* depending on whether tasks compete for the same sensory channel. Two visual tasks competing for foveal attention produce superlinear costs: performance on each drops by more than half, because eye movements become the bottleneck. But a visual task paired with an auditory task shows roughly additive costs: each task takes its expected toll independently.
+
+ This matters because [[attention degrades nonlinearly after the third concurrent task]] was already established, so the natural question was always "nonlinear how?" The answer appears to be modality-specific bottlenecks. When two tasks need the same perceptual resource — the same part of the visual field, the same auditory channel — they compete destructively. When they need different resources, they coexist with independent costs.
+
+ The implication for interface design is direct. Since [[interface complexity mediates attention allocation more than task count]], a well-designed multi-panel interface should distribute information across modalities. An air traffic control display that combines visual radar with auditory alerts should produce less degradation than one that presents all information visually — even though the total information load is identical.
+
+ However, this finding has a significant boundary condition. Since [[expert performers show linear not nonlinear attention degradation]], the superlinear modality effect may be specific to novice performers. Li 2025 found that expert operators (10+ years experience) showed linear degradation even with same-modality tasks, suggesting that expertise either creates modality-independent processing strategies or automates perceptual parsing enough to eliminate the bottleneck. This creates an unresolved tension: is the modality effect a fundamental constraint or a trainable limitation?
+
+ The methodological grounding is solid — Henderson used both eye tracking and NASA-TLX self-report, and the eye tracking data shows the modality effect even when self-report doesn't detect it. Since [[self-reported cognitive load correlates poorly with physiological measures]], the physiological confirmation strengthens the claim.
+
+ ---
+
+ Source: [[henderson-2024-attention-allocation-multitask]]
+ ```
+
+ ### Example 2: Synthesis Note
+
+ ```markdown
+ ---
+ description: Three independent measurement approaches to cognitive load (physiological, behavioral, self-report) produce systematically different pictures of the same phenomenon, suggesting load is not a unitary construct
+ methodology: ["Cognitive Psychology", "Measurement Theory"]
+ confidence: moderate
+ classification: open
+ topics: ["[[cognitive-load]]", "[[methodology-comparison]]"]
+ relevant_notes:
+ - "[[self-reported cognitive load correlates poorly with physiological measures]] — foundation: the specific finding this synthesis builds on"
+ - "[[divided attention costs increase when modalities overlap]] — evidence: Henderson's eye-tracking data showed modality effects invisible to self-report"
+ - "[[ecological validity problems plague most lab-based attention studies]] — constrains: if lab tasks produce different load profiles than real tasks, measurement divergence may be partly artifactual"
+ ---
+
+ # cognitive load may be three constructs not one
+
+ The standard assumption in cognitive science is that "cognitive load" is a single dimension that different instruments measure with varying accuracy. The NASA-TLX captures subjective experience. Eye tracking captures visual attention allocation. Heart rate variability captures autonomic stress response. The expectation is that these should correlate — they're all measuring the same thing, just through different windows.
+
+ They don't correlate well. And the pattern of divergence isn't random — it's systematic.
+
+ Since [[self-reported cognitive load correlates poorly with physiological measures]], the divergence has been known for decades. But treating it as a "measurement problem" — NASA-TLX is less accurate than physiological measures — may be the wrong frame. What if the instruments don't agree because they're measuring genuinely different things?
+
+ Consider the evidence from Henderson's modality study. Since [[divided attention costs increase when modalities overlap]], eye tracking reveals modality-specific bottlenecks that participants don't report experiencing. The superlinear cost of same-modality dual tasks is invisible to introspection but obvious in gaze data. This isn't measurement noise — it's a systematic dissociation between experienced load and perceptual-motor load.
+
+ The implication is that "cognitive load" might decompose into at least three constructs:
+
+ 1. **Perceptual-motor load** — competition for sensory channels and motor effectors, measurable through behavioral and physiological methods, largely opaque to introspection
+ 2. **Executive load** — demands on central executive resources (working memory, task switching), partially accessible to self-report, measurable through dual-task costs
+ 3. **Experienced load** — the subjective sense of difficulty, fully accessible to self-report, influenced by factors beyond actual performance (anxiety, motivation, metacognitive beliefs)
+
+ If this decomposition is correct, the question "is this interface high cognitive load?" has three different answers depending on which construct you mean. A system could impose high perceptual-motor load (many visual elements competing for fixation) while producing low experienced load (the user feels comfortable) — which is exactly what expertise does. Since [[expert performers show linear not nonlinear attention degradation]], expertise may specifically reduce perceptual-motor load while leaving executive load unchanged.
+
+ This is speculative. The three-construct model needs its own empirical test. But it reframes how the vault should organize load-related claims: not as a single dimension where measurements disagree, but as three dimensions where measurements correctly capture different phenomena.
+
+ ---
+
+ Source: [[henderson-2024-attention-allocation-multitask]], [[li-2025-expert-attention-linear]]
+ ```
+
+ ### Example 3: Topic MOC
+
+ ```markdown
+ ---
+ description: Claims about how attention is distributed across concurrent tasks and interfaces — the core phenomenon Maren's research investigates
+ type: moc
+ topics: ["[[human-ai-collaboration]]"]
+ ---
+
+ # attention-allocation
+
+ How attention divides across concurrent tasks is the central question of Maren's research program. The vault tracks three converging threads: the modality-specificity of attention costs, the expert-novice divide, and the measurement problem. These threads interact — expertise may change modality-specific costs, and measurement choice determines which costs you see.
+
+ ## Core Ideas
+
+ - [[divided attention costs increase when modalities overlap]] — the modality-specificity finding that reframes attention degradation from "how many tasks" to "which sensory channels compete"
+ - [[attention degrades nonlinearly after the third concurrent task]] — the foundational finding that costs are not additive, now qualified by modality conditions
+ - [[expert performers show linear not nonlinear attention degradation]] — the expertise boundary condition that may dissolve the nonlinearity finding for trained operators
+ - [[interface complexity mediates attention allocation more than task count]] — the design implication: reduce complexity per panel rather than reducing panel count
+ - [[cognitive load may be three constructs not one]] — synthesis note arguing load decomposition based on measurement divergence
+
+ ## Tensions
+
+ The expert-novice divide is the central unresolved tension. Henderson 2024 and Li 2025 may both be correct for their populations, or Li's methodology (simulated tasks) may explain the divergence. Resolving this affects whether interface design should optimize for novice patterns (modality separation) or assume expertise development (modality-agnostic design).
+
+ The measurement problem cuts across everything: which findings depend on the measurement instrument? If self-reported cognitive load misses modality effects, how many other phenomena are invisible to self-report?
+
+ ## Explorations Needed
+
+ - No studies combining AI-assisted attention (where the AI handles some monitoring) with modality-specific measurements — this is the gap Maren's third paper targets
+ - Cross-cultural replication: all major studies used Western university populations
+ - Longitudinal expertise development: at what point does the nonlinear pattern shift to linear?
+
+ ---
+
+ Agent Notes:
+ When traversing this topic, always check methodology-comparison for measurement-related caveats. Many claims in this MOC have boundary conditions that depend on measurement choice. The three-construct synthesis note is speculative — weight it accordingly when building arguments.
+ ```
+
+ ### Example 4: Tension Note
+
+ ```markdown
+ ---
+ description: Henderson 2024 finds nonlinear attention degradation while Li 2025 finds linear degradation in experts — the expertise variable may dissolve the contradiction or reveal a genuine moderation effect
+ observed: 2026-02-10
+ involves:
+ - "[[attention degrades nonlinearly after the third concurrent task]]"
+ - "[[expert performers show linear not nonlinear attention degradation]]"
+ status: open
+ topics: ["[[attention-allocation]]"]
+ relevant_notes:
+ - "[[divided attention costs increase when modalities overlap]] — context: Henderson's modality finding suggests the mechanism behind the nonlinearity"
+ - "[[ecological validity problems plague most lab-based attention studies]] — complicates: if lab tasks misrepresent real-world attention demands, the disagreement may be artifactual"
+ ---
+
+ # attention degradation may be nonlinear for novices but linear for experts
+
+ Two well-designed studies reach opposite conclusions about attention degradation patterns. Henderson et al. 2024 (N=142, university students, real-world-inspired tasks) found superlinear degradation after three concurrent tasks. Li et al. 2025 (N=68, professional operators with 10+ years experience, simulated control tasks) found linear degradation across all task counts tested (2-7 concurrent tasks).
+
+ ### Quick Test
+
+ Are both findings reliable independently? Henderson's large sample and converging measures (eye tracking + behavioral + self-report) are strong. Li's sample is smaller, but its highly specific expert population is appropriate for the question asked. Neither study has obvious methodological flaws.
+
+ ### When Each Pole Wins
+
+ If expertise moderates the effect, both are correct: nonlinear for novices, linear for experts. This would mean interface design must target the user's expertise level — a training implication, not just a design implication.
+
+ If methodology explains the difference (Li's simulated tasks vs Henderson's realistic tasks), the expertise finding is confounded. Simulated tasks may be inherently more predictable, reducing the nonlinear surprise component of attention costs.
+
+ If sample explains it (Li's experts had 10+ years, Henderson's participants had 2-5 years of general task experience), there may be a threshold effect: degradation is nonlinear until sufficient expertise is acquired, then transitions to linear.
+
+ ### Dissolution Attempts
+
+ The modality-specificity finding from Henderson helps: since [[divided attention costs increase when modalities overlap]], the nonlinearity may be specifically about modality competition. Li's expert operators may have learned modality-independent processing strategies that eliminate the bottleneck — they've automated the perceptual parsing step. This would explain both findings without contradiction: nonlinearity exists at the perceptual-motor level, expertise eliminates it through automation of perceptual routines.
+
+ ### Practical Implications
+
+ Until resolved, interface design should assume nonlinear costs for general populations. For expert-targeted systems (air traffic control, surgical interfaces, trading platforms), linear cost assumptions may be appropriate. Since [[interface complexity mediates attention allocation more than task count]], the practical recommendation is the same either way: reduce per-task complexity rather than limiting task count.
+
+ ---
+
+ Source: [[henderson-2024-attention-allocation-multitask]], [[li-2025-expert-attention-linear]]
+ ```
+
+ ### Example 5: Source Capture Note
+
+ ```markdown
+ ---
+ description: Henderson et al 2024 — experimental study of attention allocation across concurrent tasks using eye tracking and dual-task paradigms (N=142)
+ source_type: journal-article
+ title: "Attention Allocation in Multi-Task Environments: Modality-Specific Costs"
+ authors: ["Henderson, K.", "Nakamura, T.", "Fischer, R."]
+ year: 2024
+ journal: "Journal of Experimental Psychology: Human Perception and Performance"
+ doi: "10.1037/xhp0001234"
+ status: deep-read
+ read_date: 2026-01-15
+ claims_extracted: 4
+ key_methods: ["dual-task paradigm", "eye tracking", "NASA-TLX"]
+ quality_assessment: "Strong methodology. Large sample for attention research. Converging measures strengthen findings. Main limitation: university student sample limits generalizability to expert populations."
+ topics: ["[[attention-allocation]]"]
+ relevant_notes:
+ - "[[divided attention costs increase when modalities overlap]] — primary finding extracted"
+ - "[[attention degrades nonlinearly after the third concurrent task]] — supporting finding extracted"
+ - "[[self-reported cognitive load correlates poorly with physiological measures]] — methodological finding extracted"
+ ---
+
+ # Henderson 2024 — Attention Allocation in Multi-Task Environments
+
+ ## Key Arguments
+
+ Henderson et al. tested whether attention degradation in multi-task environments follows the same pattern across sensory modalities. Using a dual-task paradigm with eye tracking, they found that same-modality task pairs produce superlinear costs while cross-modality pairs produce additive costs. The modality-specificity finding reframes the attention degradation question from "how many tasks" to "which tasks compete for the same channel."
+
+ ## Relevance to Research
+
+ This is the foundational paper for Maren's modality-specific attention framework. The finding that eye tracking detects modality effects invisible to self-report motivates the three-construct decomposition of cognitive load. The cross-modality additive finding has direct implications for interface design — the attention-degradation paper (draft v3) builds its design recommendations on this.
+
+ ## Methodological Notes
+
+ Three measurement streams (eye tracking, behavioral performance, NASA-TLX) with planned triangulation. The dissociation between physiological and self-report measures is itself a finding, not just a methodological curiosity. Sample was 142 university undergraduates — sufficient for between-subjects modality comparisons but limits generalizability to expert populations.
+
+ ---
+
+ Source: https://doi.org/10.1037/xhp0001234
+ ```
+
+ ## Processing Workflow
+
+ ### Capture
+
+ Papers enter `00_inbox/papers/` as markdown source captures with full bibliographic metadata. Seminar notes enter `00_inbox/seminars/`. Research sparks enter `00_inbox/ideas/`. Speed of capture beats precision of filing — get the metadata right (authors, year, DOI) and move on.
+
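+ A capture can be a single shell command. The sketch below assumes the inbox layout named above and uses an invented filename with placeholder field values purely for illustration.
+
+ ```bash
+ # Hypothetical capture of the Li 2025 paper: minimal but correct metadata now,
+ # full processing later in the Reduce step. All field values are placeholders.
+ cat > "00_inbox/papers/li-2025-expert-attention-linear.md" <<'EOF'
+ ---
+ description: Li et al 2025, expert operators show linear attention degradation (placeholder)
+ source_type: journal-article
+ year: 2025
+ doi: "10.0000/placeholder"
+ status: captured
+ ---
+ EOF
+ ```
+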
+ ### Reduce (Extraction)
+
+ The agent reads each source through the research lens: "What atomic claims does this source make? What evidence supports each claim? What methodology was used?" Every substantive finding becomes a claim note. Every methodological observation becomes a methodology note or enriches an existing one.
+
+ The extraction is exhaustive for the research domain. A 20-page paper typically yields 3-8 claim notes, 1-2 methodology observations, and 0-2 tension identifications. The agent checks each candidate claim against existing notes: "Does this replicate, extend, or contradict something already in the vault?" Near-duplicates become enrichments rather than new notes.
+
+ **Domain-specific extraction categories:**
+ - Empirical findings with evidence (claim notes)
+ - Methodological innovations or limitations (methodology notes)
+ - Contradictions with existing claims (tension notes)
+ - Replication results (enrich existing claims with replication status)
+ - Review/meta-analysis results (literature review notes)
+
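+ The near-duplicate check can start as a plain lexical search before any semantic comparison. A minimal sketch, assuming the `vault/01_thinking/` layout used in the Graph Query Examples below and a hypothetical key term from the candidate claim:
+
+ ```bash
+ # List existing notes that already mention the candidate's key term, so the
+ # agent can decide between enriching one of them and creating a new claim note.
+ candidate="modality"   # hypothetical key term extracted from the new claim
+ rg -il "$candidate" vault/01_thinking/
+ rg -i --no-filename "^description:.*$candidate" vault/01_thinking/
+ ```
+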
+ ### Reflect (Connect Forward)
+
+ For each new claim, the agent searches the existing vault for connections. In academic research, connections are typed: "replicates," "contradicts," "extends," "provides evidence for," "uses same methodology as." The agent uses semantic search to find cross-vocabulary connections — a cognitive psychology finding about "resource competition" connects to an HCI finding about "interface contention" because the agent recognizes the underlying concept.
+
+ MOC updates happen here. Every claim gets placed in its topic MOC(s) with a context phrase explaining why it belongs. The agent updates the Tensions section if the new claim conflicts with existing thinking. The Explorations Needed section gets updated if the new claim reveals a gap.
+
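+ The mechanical slice of a MOC update can be scripted. A sketch, assuming topic MOCs live in `vault/01_thinking/` (as in the count loop under Graph Query Examples) and using the `topics` field from the schema above:
+
+ ```bash
+ # Claims that declare [[attention-allocation]] as a topic but are not yet
+ # linked from the attention-allocation MOC: candidates for the Reflect pass.
+ rg -l '^topics:.*\[\[attention-allocation\]\]' vault/01_thinking/ |
+ while IFS= read -r f; do
+   name=$(basename "$f" .md)
+   rg -q --fixed-strings "[[$name]]" vault/01_thinking/attention-allocation.md \
+     || echo "not yet in MOC: $name"
+ done
+ ```
+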
+ ### Reweave (Connect Backward)
+
+ Older notes get updated with connections to new claims. A claim written three months ago about attention degradation now needs a link to the new modality-specificity finding. The agent also checks: has understanding evolved enough that the older claim needs rewriting? Is the older synthesis note still valid given new evidence?
+
+ **Academic-specific reweaving:** Literature reviews get freshness checks. If a synthesis note's underlying claims have changed — new evidence, contradictions discovered, replication failures — the synthesis is flagged as stale. The agent can tell Maren: "Your attention allocation literature review cites 34 sources. Since you wrote it, 3 claims have been updated and 1 new contradiction was discovered. The synthesis statement may need revision."
+
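+ A crude freshness check needs nothing beyond the links and file metadata. A sketch that uses modification time as a stand-in for the `last_updated` field, with a hypothetical review path:
+
+ ```bash
+ review="vault/01_thinking/attention-allocation-literature-review.md"  # hypothetical path
+
+ # Wiki-linked notes that changed after the review itself was last written.
+ rg -o '\[\[([^\]]+)\]\]' -r '$1' "$review" | sort -u |
+ while IFS= read -r note; do
+   f="vault/01_thinking/$note.md"
+   [ -f "$f" ] && [ "$f" -nt "$review" ] && echo "updated since review: $note"
+ done
+ ```
+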
+ ### Verify
+
+ Combined verification: description quality (would searching for this claim find it?), schema compliance (does every claim have methodology and source?), structural health (orphan detection, link integrity). Academic-specific checks include: provenance chain verification (every claim traces to a source), replication status currency (has a cited finding been replicated or challenged?), and stale synthesis detection.
+
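+ Parts of this pass reduce to mechanical queries. A sketch, assuming the field names from the schema examples above and the archive path used under Graph Query Examples; topic MOCs and synthesis notes will also surface in the first query, so treat results as review candidates rather than hard failures:
+
+ ```bash
+ # Schema compliance: notes in the thinking space with no source field at all.
+ rg --files-without-match '^source:' vault/01_thinking/*.md
+
+ # Provenance chain: claims whose source capture is missing or carries no DOI.
+ for claim in vault/01_thinking/*.md; do
+   src=$(rg -o '^source: "\[\[(.+)\]\]"' -r '$1' "$claim" | head -n1)
+   [ -z "$src" ] && continue
+   capture="vault/02_archive/references/articles/$src.md"
+   rg -q '^doi:' "$capture" 2>/dev/null || echo "broken chain: $claim -> $src"
+ done
+ ```
+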
+ ## MOC Structure
+
+ ```
+ index.md (Hub)
+ ├── human-ai-collaboration.md (Domain MOC)
+ │   ├── attention-allocation.md (Topic MOC)
+ │   ├── cognitive-load.md (Topic MOC)
+ │   ├── interface-design-patterns.md (Topic MOC)
+ │   └── trust-calibration.md (Topic MOC)
+ ├── methodology-comparison.md (Topic MOC — cross-cutting)
+ │   ├── measurement-instruments.md (Sub-topic if it grows)
+ │   └── study-design-patterns.md (Sub-topic if it grows)
+ └── meta-research.md (Topic MOC)
+     ├── replication-crisis.md (Topic MOC)
+     └── publication-bias.md (Topic MOC)
+ ```
+
+ ### Example Hub MOC
+
+ ```markdown
+ ---
+ description: Entry point for Maren's research vault — three research threads and supporting infrastructure
+ type: moc
+ topics: []
+ ---
+
+ # index
+
+ Maren's research sits at the intersection of cognitive science and human-AI interaction. Three threads converge: how attention allocates across concurrent tasks, how cognitive load should be measured and modeled, and how AI assistance changes both.
+
+ ## Research Domains
+
+ - [[human-ai-collaboration]] — the primary research program: how humans and AI systems share cognitive work, with attention allocation as the core phenomenon
+ - [[methodology-comparison]] — cross-cutting: which measurement instruments capture which phenomena, and where they disagree
+ - [[meta-research]] — the research environment itself: replication, publication bias, methodological evolution
+
+ ## Infrastructure
+
+ - [[active-threads]] — what Maren is working on right now
+ - [[observations]] — operational learnings from running this vault
+ - [[tensions]] — unresolved conflicts between findings
+ ```
+
+ ## Graph Query Examples
+
+ ```bash
+ # Find all claims from a specific source
+ rg '^source:.*henderson-2024' vault/01_thinking/
+
+ # Find unreplicated high-confidence claims (potential priorities for review)
+ # (-0 and xargs -0 because note filenames contain spaces)
+ rg -0 -l '^confidence: high' vault/01_thinking/ | xargs -0 rg -l '^replication_status: unreplicated'
+
+ # Find all open tensions (unresolved contradictions between findings)
+ rg -l '^status: open' vault/04_meta/logs/tensions/
+
+ # Find stale literature reviews (underlying claims changed since review)
+ rg '^freshness_check:.*needs revision' vault/01_thinking/ vault/02_archive/literature-reviews/
+
+ # Find all claims using a specific methodology (for methodology comparison)
+ rg '^key_methods:.*eye tracking' vault/02_archive/references/articles/
+
+ # Find sources read but with zero claims extracted (possibly under-processed)
+ rg -l '^claims_extracted: 0' vault/02_archive/references/articles/
+
+ # Count claims per topic MOC to detect imbalanced coverage
+ for moc in vault/01_thinking/*.md; do
+   if rg -q '^type: moc' "$moc"; then
+     name=$(basename "$moc" .md)
+     count=$(rg -c "\[\[$name\]\]" vault/01_thinking/ 2>/dev/null | awk -F: '{s+=$2}END{print s+0}')
+     echo "$count $name"
+   fi
+ done | sort -rn
+ ```
+
+
+ ## What Makes This Domain Unique
+
+ **Provenance is non-negotiable.** In most knowledge domains, "where did this idea come from?" is nice to know. In academic research, it's the difference between a valid argument and unsubstantiated opinion. Every claim traces back through a verifiable chain: claim note → source capture → DOI → published paper. The agent maintains this chain automatically and detects when it breaks.
+
+ **Contradiction is productive, not destructive.** In personal journaling, two conflicting entries are just growth. In project management, conflicting decisions are a bug. In academic research, contradictions between findings are the most valuable signals in the vault — they point to unresolved questions, boundary conditions, and publication-worthy synthesis opportunities. The tension tracking system is not a debugging tool here; it's a research generator.
+
+ **Cross-vocabulary synthesis is the killer feature.** Cognitive psychology calls it "resource competition." HCI calls it "interface contention." Neuroscience calls it "neural channel capacity." These are the same phenomenon described in different professional vocabularies. The agent's semantic search connects them; a human researcher working in one tradition may never encounter the others.
+
+ ## Agent-Native Advantages
+
+ ### Exhaustive Cross-Referencing at Ingest
+
+ When Maren reads a new paper, she connects it to the 5-10 papers she remembers being relevant. The agent connects it to every paper in the vault. At 200 sources, a human researcher can't hold the full citation graph in working memory. The agent can. This means:
+
+ - Every new claim is checked against every existing claim, not just the ones Maren happens to remember
+ - Contradictions surface even when the conflicting studies use different vocabulary and were read months apart
+ - Cross-discipline connections emerge that a human working in one tradition would miss
+
+ This isn't just "better search." It's the difference between finding connections you were looking for and discovering connections you didn't know existed.
+
+ ### Stale Synthesis Detection
+
+ Human researchers write literature reviews and then treat them as finished. The review goes stale silently — new papers arrive, findings get challenged, replications fail — but the review still reads as authoritative because no one is checking its foundations.
+
+ The agent monitors the dependency chain continuously. When a claim underlying a synthesis note gets updated, contradicted, or enriched, the synthesis note gets flagged. Maren doesn't have to remember to re-check her literature review — the vault tells her: "Your 2025-Q4 review's synthesis statement depends on 34 claims. Three have been modified since the review was written. Specifically: claim X now has a boundary condition from Li 2025, claim Y was enriched with replication data, and claim Z is involved in an unresolved tension."
+
+ This is programmatic provenance chain monitoring. A human would need to manually re-check every cited finding against the current state of knowledge. The agent does it as a background maintenance operation.
+
+ ### Replication Status Tracking
+
+ The replication crisis means individual studies are unreliable. What matters is the pattern across studies: has this finding been replicated? By independent labs? With different populations? Using different methods?
+
+ The agent maintains replication status as a schema field on every claim. When a new paper reports a replication attempt, the agent updates the original claim's status and enriches it with the replication details. Over time, the vault accumulates a private replication database specific to Maren's research domain — not the abstract replication rates published in meta-analyses, but the specific status of every claim she builds arguments on.
+
+ This enables queries a human researcher can't feasibly run: "Show me every claim in my attention allocation argument that has NOT been independently replicated." That query might reveal that a seemingly well-supported argument rests on three unreplicated findings — a structural vulnerability invisible without exhaustive tracking.
+
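+ That query maps directly onto the schema fields. A sketch using the topic and field names from the examples above (`-0` and `xargs -0` because note filenames contain spaces):
+
+ ```bash
+ # Claims on the attention-allocation topic still marked unreplicated.
+ rg -0 -l '\[\[attention-allocation\]\]' vault/01_thinking/ |
+   xargs -0 rg -l '^replication_status: unreplicated'
+ ```
+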
+ ### Methodology-Aware Connection Finding
+
+ The agent doesn't just find topically related claims — it finds claims that used the same methodology, enabling comparison. "What other studies used dual-task paradigms? What did they find compared to studies using different paradigms?" This isn't the same as a keyword search for "dual-task" — it's a structured query across the methodology field that returns claims organized by method, revealing whether findings are robust across methodological approaches or method-dependent.
+
+ When the agent detects that all supporting evidence for a claim comes from a single methodology, it flags this as a methodological dependency — the claim is only as strong as the method. This kind of structural vulnerability analysis requires the systematic metadata tracking that schema-dense academic notes enable.
+
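+ Both of these are structured queries over the methodology metadata rather than keyword search. A sketch using the field name from the schema examples above:
+
+ ```bash
+ # Claims grouped by declared methodology; a single dominant method behind a
+ # topic is a flag that its findings may be method-dependent rather than robust.
+ rg --no-filename '^methodology:' vault/01_thinking/ | sort | uniq -c | sort -rn
+ ```
+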
+ ### Citation Graph Analysis
+
+ The agent maintains not just individual citations but the citation graph — which papers cite each other, which findings depend on which. This enables impact analysis that no reference manager provides:
+
+ - When a paper is retracted, the agent identifies every claim in the vault that depends on it
+ - When Maren discovers a new paper, the agent shows where it fits in the existing citation structure: "This paper extends Henderson 2024 and contradicts Li 2025, placing it in the existing attention-degradation debate"
+ - When writing a paper, the agent can verify that the argument's citation chain is consistent: no circular citations, no citing-through-retracted-papers, no missing links between claims and evidence
+
+ This transforms the vault from a note collection into a live citation graph that a researcher can query, traverse, and use to verify the structural integrity of their arguments.
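+
+ Retraction impact analysis, for example, is a two-hop walk over the same link data. A rough sketch, using the Henderson capture from the examples above as a stand-in for a retracted source:
+
+ ```bash
+ # First hop: claims extracted directly from the (hypothetically retracted) source.
+ direct=$(rg -l '^source:.*henderson-2024' vault/01_thinking/)
+ echo "$direct"
+
+ # Second hop: notes that link to any of those claims and so inherit the risk.
+ echo "$direct" | while IFS= read -r f; do
+   [ -n "$f" ] && rg -l --fixed-strings "[[$(basename "$f" .md)]]" vault/01_thinking/
+ done | sort -u
+ ```
+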
+ ---
+
+ Topics:
+ - [[domain-compositions]]