arscontexta 0.6.0
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,62 @@
---
description: Engelbart's insight that using current best tools to build better tools creates recursive improvement where each enhancement becomes available for building the next
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Augmentation Research"]
source: TFT research corpus (00_inbox/heinrich/)
---

# bootstrapping principle enables self-improving systems

Use the current best tools to build better tools. This simple principle from Doug Engelbart's Augmentation Research Center creates a recursive improvement loop: each capability you add becomes immediately available for building the next capability. The system lifts itself by its own bootstraps.

For agent-operated knowledge systems, this means the system should be developed using the system. Agent workflows for research, synthesis, and creation get documented in the system they operate on. When agents discover friction or missing capabilities, they capture that as system evolution opportunities. The system improves because its operators constantly use it and feed back improvements. The tightest version of this loop happens when [[context files function as agent operating systems through self-referential self-extension]] -- the context file that teaches methodology also teaches the agent how to modify that context file, so the recursive improvement operates not just on outputs (notes, skills) but on the operating instructions themselves. This self-modification works without coordination because [[local-first file formats are inherently agent-native]] — the agent needs no authentication or external services to read and write its own skill files. And since [[data exit velocity measures how quickly content escapes vendor lock-in]], this dependency is not just a design intuition but an auditable metric: every feature that lowers exit velocity is a potential bootstrapping bottleneck. If a feature requires proprietary tooling to function, the recursive improvement loop stalls at the boundary of that tool — the system can no longer freely read, modify, and write its own substrate. This works because [[each new note compounds value by creating traversal paths]] — in graph-structured knowledge, improvements don't just add linearly but multiply through new connections. Each skill or workflow improvement becomes a node that increases the value of existing nodes. Since [[session handoff creates continuity without persistent memory]], sessions themselves participate in bootstrapping: each session reads what previous sessions wrote, improves the system, and writes for future sessions. The handoff chain IS the bootstrapping loop at the operational level — no persistent memory needed, just externalized briefings that accumulate improvements.

This is different from [[complex systems evolve from simple working systems]], which describes organic evolution from working simplicity. Gall's Law says where to add complexity — where pain appears. Bootstrapping says how to add it — using the system itself. The two principles work together: evolve organically (Gall), but evolve using the current system's capabilities (Engelbart). The system writes the skills that process its own content. The agent that finds connections also documents how to find connections.

The mechanism creates compounding returns because improvements don't just add linearly — they multiply. A skill that speeds up claim extraction gets used during the session that creates the next skill. A hook that validates note structure validates itself when you write the hook's documentation. Since [[skills encode methodology so manual execution bypasses quality gates]], skills are the concrete embodiment of bootstrapped improvement: accumulated learning captured in executable form that couldn't have been designed upfront but emerged through use.

Research on reasoning curricula (SOAR, MIT) reveals that apparent reasoning plateaus are often pedagogical failures, not cognitive limits — the system hasn't hit its ceiling, it's been given the wrong stepping stones. This reframes /ralph's phase decomposition as implicit scaffolding: each phase (reduce → reflect → reweave → verify) is a stepping stone that builds the reasoning capacity needed for the next phase. The fresh context per task isn't just preserving attention — it's providing the right scaffold at the right moment. When a phase consistently fails, the fix may be decomposing it further rather than adding more context.

There's a tension with [[productivity porn risk in meta-system building]]. Bootstrapping can become a rationalization for infinite meta-work: "I'm building tools to build better tools" sounds productive even when output stays flat. The discriminator is whether the improvements actually get used. Bootstrapping works when the improved system produces more output. It fails when building becomes the output. The question isn't whether you're improving the system — it's whether the improved system improves anything else.

Bootstrapping also requires that improvements are genuinely generative. Since [[the generation effect requires active transformation not just storage]], structural changes that move files around or add formatting aren't bootstrapping — they're rearrangement. Real bootstrapping produces something that didn't exist: a skill that captures new methodology, a connection that enables new traversal, a description that unlocks retrieval. Without generation, the recursive loop produces no lift.

A specific instance of bootstrapping appears in [[dangling links reveal which notes want to exist]]. When multiple notes independently reference the same non-existent concept, the system's use patterns generate demand signals for what to build next. The system's gaps emerge from the system's use. This is organic bootstrapping: structure emerging from the intersection of capability and need.
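The demand-signal scan is mechanical enough to sketch. A minimal illustration, assuming notes are markdown files whose titles are their filenames and whose links use `[[wiki link]]` syntax; the function name is illustrative, not part of the package:

```python
import re
from collections import Counter
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def dangling_links(vault: Path) -> Counter:
    """Count wiki links that point at notes which do not exist yet."""
    notes = {p.stem for p in vault.rglob("*.md")}
    wanted: Counter = Counter()
    for note in vault.rglob("*.md"):
        for target in WIKI_LINK.findall(note.read_text(encoding="utf-8")):
            if target.strip() not in notes:
                wanted[target.strip()] += 1
    # Targets referenced from several independent notes are the
    # strongest demand signals for what to build next.
    return wanted
```

A target counted twice here is exactly the "note that wants to exist": two notes independently assumed it was already there.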

But the recursive loop has a platform prerequisite. Since [[platform capability tiers determine which knowledge system features can be implemented]], bootstrapping only closes at tier one, where the agent has write access to its own context file and can create infrastructure (skills, hooks). Tier-two platforms can partially bootstrap -- skill creation is possible but hook creation is limited. Tier-three platforms cannot bootstrap at all: the system stays as initially configured because the agent lacks the infrastructure to build its own tooling. The self-improvement loop is not a universal property of agent-operated knowledge systems -- it is a tier-one capability.

Agent-operated knowledge systems implement this principle by treating system documentation, skills, and infrastructure as first-class content to be processed by the same workflows that process external content. When extraction operations mine a research article for claims, those same patterns apply when mining system documentation for improvement opportunities. When connection-finding surfaces relationships between claim notes, those same patterns help integrate new skills into the existing workflow graph. The system is its own case study.

There is a deeper question about what happens when bootstrapping reaches a boundary. Since [[derived systems follow a seed-evolve-reseed lifecycle]], the evolution phase IS bootstrapping in action — each cycle uses current capabilities to identify and implement improvements. But accumulated improvements can drift the system's configuration into incoherence, where individually justified changes create globally contradictory pressures. Reseeding is the phase transition that bootstrapping alone cannot accomplish: the system must step outside its own recursive loop to re-derive from first principles enriched by operational experience. Bootstrapping improves within a framework; reseeding restructures the framework itself. And since [[the derivation engine improves recursively as deployed systems generate observations]], the observations generated during bootstrapping and reseeding within each individual system feed back into the shared claim graph that powers future derivations — making within-system bootstrapping a contributor to cross-deployment recursive improvement at the meta-level.

The endpoint of recursive bootstrapping is qualitative transformation. Since [[knowledge systems become communication partners through complexity and memory humans cannot sustain]], each bootstrapping cycle adds complexity that the system can sustain but individual sessions cannot. Eventually the accumulated complexity crosses a threshold: the system becomes a genuine communication partner that surprises its operators. Engelbart's recursive improvement isn't just efficiency — it builds toward a system that thinks with you rather than just storing for you.

The maintenance pattern [[backward maintenance asks what would be different if written today]] applies bootstrapping to notes themselves. Just as tools improve by being used to build better tools, notes improve by being reconsidered with current understanding. Backward maintenance is bootstrapping at the content level: yesterday's notes get improved using today's thinking, which will improve tomorrow's thinking. The system becomes a continuously upgraded substrate rather than a static archive. At the system-architecture level, since [[evolution observations provide actionable signals for system adaptation]], the same recursive pattern applies to structural decisions: the system observes its own configuration (which types get used, which fields collect placeholders, where navigation fails), and those observations drive modifications to the configuration itself. This is bootstrapping applied not just to content or methodology but to the system's structural hypotheses.

At the infrastructure level, since [[live index via periodic regeneration keeps discovery current]], the vault demonstrates bootstrapping in its discovery mechanisms: hooks that regenerate indices use the same automation philosophy they serve. The file tree injection hook is bootstrapping made concrete — the system uses its own patterns to improve its own navigation.
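The regeneration pattern itself can be sketched in a few lines; the index filename and format below are hypothetical, not the plugin's actual hook output. The point is that a derived index is never edited, only rebuilt from current state, so it cannot drift from the vault it describes:

```python
from pathlib import Path

def regenerate_index(vault: Path, index_name: str = "INDEX.md") -> Path:
    """Rebuild a flat discovery index from whatever notes currently exist.

    The index is derived state: regenerated on a trigger, never hand-edited,
    so it always matches the vault at the moment of regeneration.
    """
    lines = ["# Index", ""]
    for note in sorted(vault.rglob("*.md")):
        if note.name == index_name:
            continue  # the index never lists itself
        lines.append(f"- [[{note.stem}]]")
    index = vault / index_name
    index.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return index
```

Wiring this into a session-start hook gives the "live index" behavior: each orientation begins from a listing that is at most one session stale.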

---

Relevant Notes:
- [[complex systems evolve from simple working systems]] — Gall's Law describes where to add complexity (at friction points); bootstrapping describes how to add it (using current capabilities)
- [[skills encode methodology so manual execution bypasses quality gates]] — skills are concrete instances of bootstrapped improvement: accumulated learning captured in executable form
- [[productivity porn risk in meta-system building]] — tests the shadow side: bootstrapping becomes rationalization when building tools to build tools produces no external output
- [[each new note compounds value by creating traversal paths]] — explains WHY bootstrapping compounds: graph structure makes improvements multiplicative rather than additive
- [[backward maintenance asks what would be different if written today]] — applies bootstrapping to content maintenance: notes improve using current understanding, which improves future understanding
- [[the generation effect requires active transformation not just storage]] — provides the quality criterion: bootstrapping requires generation, not just rearrangement
- [[dangling links reveal which notes want to exist]] — organic bootstrapping: use patterns generate demand signals for what to build next
- [[local-first file formats are inherently agent-native]] — enables bootstrapping by ensuring no external coordination needed: the agent reads and writes the same files that define its workflows
- [[session handoff creates continuity without persistent memory]] — sessions implement bootstrapping: each reads previous output, improves, writes for the next; the handoff chain is the bootstrapping loop
- [[live index via periodic regeneration keeps discovery current]] — infrastructure-level bootstrapping: hooks that regenerate discovery indices use the same automation philosophy they serve
- [[programmable notes could enable property-triggered workflows]] — extends bootstrapping to content: notes that declare when they need attention make the vault self-improving through its own content, not just infrastructure
- [[digital mutability enables note evolution that physical permanence forbids]] — foundational enabler: bootstrapping content improvement only works because the medium permits revision; Luhmann's physical cards couldn't bootstrap content because edits destroyed cards
- [[knowledge systems become communication partners through complexity and memory humans cannot sustain]] — the endpoint: recursive bootstrapping builds toward a system complex enough to become a genuine thinking partner that surprises its operators
- [[data exit velocity measures how quickly content escapes vendor lock-in]] — makes the filesystem dependency auditable: every feature that lowers exit velocity is a potential bootstrapping bottleneck because proprietary formats block the unmediated read/write the recursive loop requires
- [[context files function as agent operating systems through self-referential self-extension]] — identifies the specific carrier where bootstrapping closes its tightest loop: the context file teaches the agent how to modify the context file itself, making the recursive improvement operate on the operating instructions rather than just the outputs
- [[platform capability tiers determine which knowledge system features can be implemented]] — the recursive loop only closes at tier one where write access and infrastructure creation are available; tier-two can partially bootstrap, tier-three cannot bootstrap at all
- [[evolution observations provide actionable signals for system adaptation]] — bootstrapping at the system-architecture level: the diagnostic protocol converts operational evidence into structural modifications, making the system's configuration subject to the same recursive improvement that bootstrapping enables for methodology and content
- [[derived systems follow a seed-evolve-reseed lifecycle]] — identifies the boundary condition: bootstrapping operates within the evolution phase, but accumulated improvements can produce systemic incoherence requiring reseeding, which is a phase transition bootstrapping alone cannot accomplish
- [[the derivation engine improves recursively as deployed systems generate observations]] — the cross-deployment extension: within-system bootstrapping generates observations that feed the meta-level recursive improvement loop, making each system's bootstrapping cycles contributors to the derivation engine's claim graph enrichment

Topics:
- [[processing-workflows]]
package/methodology/build automatic memory through cognitive offloading and session handoffs.md
ADDED
@@ -0,0 +1,285 @@
---
description: How to build automatic memory systems that compound over time — cognitive offloading as foundation, the retrieval bottleneck, session handoffs as continuity, and why the vault constitutes identity
kind: guidance
status: active
topics: ["[[memory-architecture]]"]
---

# build automatic memory through cognitive offloading and session handoffs

Agent memory is not human memory. Agents have perfect recall within a session and zero recall across sessions. This asymmetry defines the architecture: everything the agent needs to remember must be externalized into the vault, and the vault must be structured so that each new session can orient quickly and pick up where the last one left off.

This is THE key guidance doc for building memory systems on top of Ars Contexta. It tells the plugin HOW to build memory systems that compound over time rather than resetting each session.

## Cognitive Offloading as Foundation

Since [[cognitive offloading is the architectural foundation for vault design]], the vault is not a storage system. It is an externalized cognitive system — a distributed thinking partner. Clark and Chalmers' Extended Mind Theory established that cognition extends beyond the brain into external artifacts that participate in cognitive processes. A notebook is not merely a reminder — it becomes part of the thinking system when reliably consulted and trusted.

The vault extends this principle to agents. When an agent operates the vault, the distributed cognitive system becomes tripartite: the human provides direction and judgment, the vault holds persistent state and connections, the agent provides traversal and processing capability. None of these alone produces what the combination produces. The human cannot hold 500 interconnected claims in working memory. The agent cannot remember anything across sessions. The vault cannot process or judge. Together they think at a scale none could alone.

The design implication is concrete: every friction point in capture fights against the cognitive architecture. Since [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], every uncaptured thought becomes an active drain on working memory. The vault must make externalization trivially easy — so easy that the rational choice is always to offload rather than retain. Since [[temporal separation of capture and processing preserves context freshness]], the offloading must happen immediately, but processing can follow within hours while context remains fresh.

This tripartite architecture is also what makes the methodology portable: since [[the vault methodology transfers because it encodes cognitive science not domain specifics]], the same distributed cognitive architecture works for therapy journals, project trackers, research systems, and creative writing vaults — because Cowan's working memory limits, attention degradation, and the Zeigarnik effect apply to emotional processing, engineering decisions, academic synthesis, and narrative construction equally.

## The Retrieval Bottleneck

Since [[external memory shapes cognition more than base model]], what an agent retrieves determines what it thinks. Retrieval is shaped by memory architecture. Therefore memory architecture matters more than base model weights.

The argument is straightforward: cognition happens in context. The context window is filled by the prompt, retrieved information, and conversation history. Base model weights determine HOW context is processed, but WHAT gets processed depends on retrieval. An agent with a well-structured vault retrieves different material than one with flat files. Different material leads to different reasoning leads to different conclusions.

The bottleneck is retrieval, not reasoning. A better base model processes the same retrieved context more skillfully, but the delta from better processing is bounded by the quality of what was retrieved. A better memory architecture changes WHAT gets retrieved — different material, different conclusions. The retrieval delta compounds across every interaction, while the processing delta is marginal improvement on the same inputs.

The implication for the plugin: investing in memory architecture has higher ROI than waiting for better models. Vault structure is cognitive architecture. The plugin should generate vaults optimized for retrieval — dense connections, queryable metadata, progressive disclosure layers — because retrieval quality determines everything downstream.

## The Session Boundary Problem

Since [[agent session boundaries create natural automation checkpoints that human-operated systems lack]], every agent session starts cold. The agent loads context files, reads relevant notes, and reconstructs understanding from scratch. This is fundamentally different from human knowledge work, where practitioners carry implicit understanding between sessions.

The implication: **the vault must encode what humans carry in their heads.** Navigation intuition, relationship context, processing state, accumulated observations — all of this must be externalized into structures the agent can load on orientation.

### What Must Persist Across Sessions

| Category | What to Externalize | How It Is Stored |
|----------|-------------------|-----------------|
| **Navigation knowledge** | Which notes matter, how topics connect, where to start | MOCs with agent notes |
| **Processing state** | What has been processed, what is pending, what failed | Queue files, task files, status fields |
| **Accumulated observations** | Patterns noticed, friction encountered, improvement ideas | Atomic observation notes |
| **Relationship context** | Tensions between ideas, unresolved conflicts | Tension notes, resolution notes |
| **Operational wisdom** | What works, what does not, learned heuristics | Context file (CLAUDE.md equivalent), guidance docs |
| **Identity** | Who the agent is, how it works, what it values | Self-memory files (identity.md, methodology.md) |

### Session Handoff as Continuity

Since [[session handoff creates continuity without persistent memory]], the vault bridges sessions through externalized state rather than internal memory. The agent does not remember — it reads. Each session ends by producing a structured summary: completed work, incomplete tasks, discoveries, recommendations. The next session reads this briefing and inherits the prior context. Continuity emerges from structure rather than capability.

The insight is that memory and continuity are separable. Memory is internal state persisting across time. Continuity is coherent progress on multi-step work. Humans have memory but still benefit from external systems (todo lists, project notes, handoff docs) because memory is unreliable and selective. Agents lack memory entirely but achieve continuity through better external systems. The external system becomes the memory.

Since [[stigmergy coordinates agents through environmental traces without direct communication]], session handoff is stigmergy in its most precise form: each session modifies the environment (writes task files, advances queue entries, adds wiki links), and the next session responds to those modifications rather than receiving a message. The handoff document is the pheromone trace that guides the next agent's action.
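One way to picture the mechanics, using illustrative field names rather than the plugin's actual handoff schema: a session's last act is to write a briefing file, and the next session's first act is to read it.

```python
import json
from pathlib import Path

def write_handoff(ops_dir: Path, completed, pending, discoveries) -> Path:
    """End-of-session: externalize state for whichever session wakes up next."""
    briefing = {
        "completed": list(completed),
        "pending": list(pending),
        "discoveries": list(discoveries),
    }
    path = ops_dir / "handoff.json"
    path.write_text(json.dumps(briefing, indent=2), encoding="utf-8")
    return path

def read_handoff(ops_dir: Path) -> dict:
    """Start-of-session: inherit prior context by reading, not remembering."""
    path = ops_dir / "handoff.json"
    if not path.exists():
        return {"completed": [], "pending": [], "discoveries": []}
    return json.loads(path.read_text(encoding="utf-8"))
```

Nothing here requires the writer and reader to be the same process, the same model, or even the same vendor: the continuity lives entirely in the file.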

## Two Memory Systems

Since [[operational memory and knowledge memory serve different functions in agent architecture]], the plugin generates two distinct memory systems with fundamentally different characteristics:

### Knowledge Memory (The Vault)

The intellectual workspace. Claim notes, MOCs, sources, synthesis — the domain-specific content the user cares about. This is the vault's primary purpose.

**Characteristics:**
- Organized by concept, not by time
- Connected via wiki links and MOCs
- Queryable via schema fields and semantic search
- Grows through the processing pipeline
- Since [[topological organization beats temporal for knowledge work]], knowledge memory is spatial, not temporal
- Requires coherence maintenance — contradictory claims degrade retrieval confidence
- Compounds over time — since [[each new note compounds value by creating traversal paths]], each new note makes existing notes more discoverable

### Operational Memory (Infrastructure)

How the system works. Processing state, maintenance logs, configuration, observations about system behavior. This is the meta-layer that keeps the knowledge memory healthy.

**Characteristics:**
- Organized by function (logs, tasks, scripts)
- Tracks state rather than ideas
- Updated by automation (hooks, pipeline skills)
- Grows through system operation
- Temporal and disposable — session logs older than 30 days can be archived, queue items complete and get cleaned
- No coherence requirement — task files can contain conflicting phase notes without degradation
- The promotion rule is one-directional: content moves from ops to knowledge or ops to self when it earns permanence, never the reverse

### Agent Self-Memory (Optional Third Layer)

Since [[agent self-memory should be architecturally separate from user knowledge systems]], some domains benefit from a personal agent space where the agent stores its own observations, preferences, and learned heuristics. This is distinct from both the knowledge vault and operational infrastructure.

Since [[the vault constitutes identity for agents]], the vault is not augmenting identity but constituting it. Agents with identical weights but different vaults think differently because they retrieve different material. The self-memory layer is where this identity crystallizes: who the agent is, how it prefers to work, what it has learned about the user.

**When to include:** Companion or friendship domains, therapy (where the agent needs to remember the human's communication style and triggers), personal assistant (where the agent develops understanding of the user's energy patterns and priorities over time).

**When to omit:** Research, engineering, legal — domains where the agent is a tool, not a companion. The user does not need the agent to remember their preferences; they need the agent to process their content accurately.

### The Six Failure Modes of Conflation

Mixing these memory types produces characteristic failures. Since [[generated systems use a three-space architecture separating self from knowledge from operations]], the three-space separation is architecturally motivated:

| Conflation | What Breaks |
|-----------|-------------|
| ops into notes | Search returns processing debris alongside genuine insights |
| self into notes | User's graph contains agent preferences, schema confusion |
| notes into ops | Insights stay trapped in session logs, never become permanent |
| self into ops | Agent identity scattered across 50 session logs instead of curated files |
| ops into self | Agent identity polluted with temporal processing state |
| notes into self | Agent self-model bloated with domain knowledge it does not need every session |

## How Memory Compounds

The key insight: memory does not just accumulate — it compounds. Each new note creates connections that make existing notes more discoverable and more valuable. Unlike a folder where 1000 documents is just 1000 documents, a graph of 1000 connected nodes creates millions of potential traversal paths. The vault is not a filing cabinet that gets fuller; it is a network that gets denser.

The plugin builds compounding through four mechanisms:

### 1. Automatic Connection Finding

Every new note triggers connection-finding (the reflect phase). The agent checks: what existing notes relate to this? What MOCs should include it? What older notes should link back to it?

This is the compound interest mechanism. Note #100 has 99 potential connections. Note #500 has 499. The network effect is literal — since [[wiki links implement GraphRAG without the infrastructure]], each curated link is a deliberate traversal path, not a statistical correlation. Unprocessed notes have nodes but no edges. You cannot traverse an unconnected graph.
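The arithmetic behind the compounding claim can be made explicit; this is just the combinatorics of the paragraph above, not package code:

```python
def new_note_candidates(n: int) -> int:
    """Candidate links available to the n-th note added to the graph:
    one per note that already exists."""
    return n - 1

def potential_pairs(n: int) -> int:
    """Potential pairwise connections in a graph of n notes: n choose 2."""
    return n * (n - 1) // 2
```

Each note arrives with more candidates than the last, and the pool of potential pairs grows quadratically while the document count grows linearly. That quadratic gap is the difference between a filing cabinet and a network.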

### 2. Backward Maintenance

Since [[backward maintenance asks what would be different if written today]], connection finding is not just forward (new note links to old notes) but backward (old notes get updated to link to new notes). A note written last month was written with last month's understanding. Reweaving ensures old notes benefit from new knowledge.

Without backward maintenance, the vault becomes a temporal layer cake — each month's notes reference their contemporaries but never discover future connections. With it, the entire graph evolves as understanding deepens.

### 3. Condition-Based Health Monitoring

Since [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]], the plugin generates maintenance triggers based on conditions, not schedules:
|
|
133
|
+
|
|
134
|
+
| Condition | Trigger | Action |
|
|
135
|
+
|-----------|---------|--------|
|
|
136
|
+
| Orphan note detected | Note exists with no incoming links | Flag for connection-finding |
|
|
137
|
+
| MOC exceeds threshold | 40+ notes for agent-operated, 30+ for human | Suggest split into sub-MOCs |
|
|
138
|
+
| Stale note detected | No updates in 30+ days while topic is active | Flag for reweaving |
|
|
139
|
+
| Schema drift detected | Missing required fields accumulate | Batch validation pass |
|
|
140
|
+
| Processing backlog | Inbox grows beyond threshold | Alert user or trigger processing |
|
|
141
|
+
| Tension accumulation | 5+ unresolved tensions | Trigger rethink pass |
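The first two conditions in the table are purely structural, so they can be sketched as a mechanical check. A minimal sketch, assuming the vault fits in a dict of title to body and that MOC notes are identifiable by a title suffix (both are illustrative assumptions, not plugin behavior):

```python
import re

# Capture wiki-link targets, stopping at aliases ([[target|alias]]) and headings ([[target#h]])
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def incoming_links(notes: dict) -> dict:
    """Count incoming links per note title across the whole vault."""
    counts = {title: 0 for title in notes}
    for title, body in notes.items():
        for target in WIKI_LINK.findall(body):
            target = target.strip()
            if target in counts and target != title:
                counts[target] += 1
    return counts

def health_flags(notes: dict, moc_limit: int = 40) -> list:
    """Emit condition-based flags: orphan notes and oversized MOCs."""
    flags = []
    for title, n in incoming_links(notes).items():
        if n == 0:
            flags.append(f"orphan: {title} -> flag for connection-finding")
    for title, body in notes.items():
        # Illustrative convention: MOC notes end in "MOC"
        if title.endswith("MOC") and len(WIKI_LINK.findall(body)) >= moc_limit:
            flags.append(f"oversized MOC: {title} -> suggest split")
    return flags
```

Because the triggers are conditions on vault state rather than dates, a check like this can run at any session boundary and fire only when something actually needs attention.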

### 4. Progressive Disclosure

The vault structures information in layers of increasing depth, so agents load only what they need:

1. **File tree** — what exists, at a glance
2. **YAML descriptions** — what each note claims, queryable via ripgrep
3. **MOC hierarchy** — how topics relate, curated navigation
4. **Heading outlines** — what each section covers, before reading full content
5. **Full content** — the complete note, loaded only when needed
6. **Semantic search** — conceptual discovery across vocabularies

Since [[LLM attention degrades as context fills]], progressive disclosure is not about reading less — it is about reading right. Each layer costs more tokens but reveals more. The agent stops at the layer that answers its question. Most decisions can be made at layer 2 or 3 without loading full content.
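The stop-at-the-cheapest-layer rule can be sketched in a few lines, assuming notes are already parsed into description and content fields (the description field stands in for the YAML layer an agent would ripgrep):

```python
def find_notes(query: str, notes: dict) -> tuple:
    """Answer a lookup at the cheapest layer that resolves it.

    Each note is {"description": str, "content": str}. Layer names are
    illustrative labels, not plugin API.
    """
    q = query.lower()
    # Layer 2: YAML descriptions, cheap and usually enough to judge relevance
    hits = [t for t, n in notes.items() if q in n["description"].lower()]
    if hits:
        return ("descriptions", hits)
    # Layer 5: full content, loaded only because the cheap layer failed
    hits = [t for t, n in notes.items() if q in n["content"].lower()]
    return ("full content", hits)
```

The design point is the early return: the expensive layer is never touched when the cheap layer answers, which is exactly how the token budget stays inside the smart zone.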

## Building Memory for Each Domain

The plugin adapts the memory architecture per domain. Each domain has different compounding mechanisms, different orientation needs, and different decisions about which memory layers to include:

### Research Domain
- **Knowledge memory:** Claim notes, MOCs, synthesis notes, methodology comparisons
- **Operational memory:** Processing queue, extraction tasks, citation graph tracking, replication status
- **Compounding mechanism:** Cross-reference network density. Every new claim is checked against every existing claim, not just the ones the researcher remembers. The citation graph grows denser, enabling structural analysis of argument foundations.
- **Orientation:** Topic MOC for current research thread + recent claims + processing queue status
- See [[academic research uses structured extraction with cross-source synthesis]] for the full composition

### Therapy Domain
- **Knowledge memory:** Reflections, patterns, coping strategies, growth goals
- **Operational memory:** Mood tracking, session preparation notes, homework tracking
- **Self-memory:** Agent's understanding of the human's communication style, known triggers, preferred language, therapeutic boundaries
- **Compounding mechanism:** Pattern detection accuracy improves with data volume. At 20 entries, correlations are noise. At 200, genuine patterns emerge. The vault compounds therapeutic insight through accumulated emotional data.
- **Orientation:** Recent mood trends + upcoming session prep + active growth goals
- See [[therapy journal uses warm personality with pattern detection for emotional processing]] for the full composition

### Personal Assistant Domain
- **Knowledge memory:** Area of responsibility notes, project notes, goal tracking
- **Operational memory:** Habit tracking, review schedules, reminder systems
- **Self-memory:** Understanding of the human's energy patterns, priorities, work preferences, decision-making style
- **Compounding mechanism:** Cross-area pattern recognition. The agent notices that work stress in one area correlates with neglect in another. Goal trajectory tracking reveals whether current pace meets long-term targets.
- **Orientation:** Area health dashboard + due items + habit streaks
- See [[personal assistant uses life area management with review automation]] for the full composition

### Engineering Domain
- **Knowledge memory:** ADRs (architecture decision records), system documentation, postmortem insights
- **Operational memory:** Sprint state, blocked items, deployment status
- **Compounding mechanism:** Institutional memory. Team members leave, but the vault retains their decisions and rationale. New engineers orient by reading the ADR chain rather than asking colleagues who may have forgotten.
- **Orientation:** Current sprint + recent decisions + system health
- See [[engineering uses technical decision tracking with architectural memory]] for the full composition

### Trading Domain
- **Knowledge memory:** Trade journals, market theses, strategy reviews
- **Operational memory:** Open positions, watchlist state, alert configurations
- **Compounding mechanism:** Strategy evolution through documented conviction vs outcome. The vault compounds trading wisdom by connecting historical theses to actual results, enabling the trader to see which patterns of reasoning lead to profitable vs unprofitable decisions.
- **Orientation:** Open positions + active theses + recent journal entries
- See [[trading uses conviction tracking with thesis-outcome correlation]] for the full composition

## The Orientation Protocol

Since [[spreading activation models how agents should traverse]], session start should activate the right part of the graph. The plugin generates an orientation protocol per domain:

1. **Load context file** — universal methodology and configuration
2. **Check operational state** — what is pending, what changed, what needs attention
3. **Load domain-specific dashboard** — the "what matters now" view for this domain
4. **Navigate to relevant MOC** — based on the current task
5. **Follow links** — build understanding from the curated starting point

The goal: within the first 5% of context, the agent knows what exists, what matters, and where to start. This is the memory architecture's payoff — perfect recall within a session, fast orientation across sessions.

For platforms that support hooks (tier 1), orientation can be automated: a session-start hook injects the tree, runs workboard reconciliation, and presents the compact status. For platforms without hooks (tier 2-3), the context file contains explicit instructions: "At session start, load these files in this order."
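The "load these files in this order" instruction can be sketched as code. A minimal sketch of assembling the briefing in a fixed order; the file names (`CLAUDE.md`, `queue.md`, `dashboard.md`) are illustrative, not the plugin's actual layout:

```python
def orient(vault: dict) -> str:
    """Assemble the session-start briefing: context file first, then
    operational state, then the domain dashboard."""
    sections = []
    # Step 1: universal methodology and configuration
    sections.append("## Context\n" + vault.get("CLAUDE.md", "(no context file)"))
    # Step 2: operational state, summarized rather than dumped
    queue = [l for l in vault.get("queue.md", "").splitlines()
             if l.startswith("- [ ]")]
    sections.append(f"## Pending\n{len(queue)} open items")
    # Step 3: the "what matters now" view for this domain
    sections.append("## Dashboard\n" + vault.get("dashboard.md", "(none)"))
    return "\n\n".join(sections)
```

Note that the queue is summarized to a count rather than inlined: orientation should spend the first 5% of context on what matters, not on raw operational detail.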

## Practical Patterns for Memory System Design

### Pattern 1: The Accumulate-Then-Synthesize Loop

For domains where individual entries are small but value comes from aggregate patterns (health tracking, mood journaling, habit logging):

- Each entry gets minimal processing (validate schema, link to area)
- Weekly or monthly review passes scan accumulated entries for patterns
- Detected patterns become knowledge notes in their own right
- The review pass is where the vault transitions from operational memory (logged entries) to knowledge memory (pattern insights)
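The review pass above can be sketched as a co-occurrence scan. The entry shape (`{"tags": [...]}`) and the support threshold are assumptions for illustration:

```python
from collections import Counter

def review_pass(entries: list, min_support: int = 3) -> list:
    """Scan accumulated entries for recurring tag pairs; pairs seen often
    enough become candidate knowledge notes."""
    pairs = Counter()
    for e in entries:
        tags = sorted(set(e["tags"]))
        for i in range(len(tags)):
            for j in range(i + 1, len(tags)):
                pairs[(tags[i], tags[j])] += 1
    # Below the threshold, a correlation is noise, not a pattern
    return [f"pattern: {a} co-occurs with {b} ({n} entries)"
            for (a, b), n in pairs.items() if n >= min_support]
```

The threshold is the whole point of the pattern: at two entries nothing promotes, and only sustained co-occurrence crosses from operational log into knowledge note.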

### Pattern 2: The Deep-Extract-and-Connect Pipeline

For domains where each source is rich and requires transformation (research papers, legal cases, complex meeting notes):

- Each source gets heavy processing with fresh context per phase
- The process step extracts multiple atomic notes from a single source
- Connection finding links each extracted note to the full existing graph
- The vault compounds through cross-reference density

### Pattern 3: The Session Handoff Chain

For all domains, continuity across sessions follows the same pattern:

- Session N writes to task files, queue entries, and wiki links
- Session N+1 reads those files, inherits context, continues work
- The handoff is file-based, not context-based — no persistent memory required
- Since [[intermediate packets enable assembly over creation]], each session's output is a composable packet for the next session
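The file-based handoff can be sketched with a dict standing in for the filesystem; the `handoff.json` name and field layout are illustrative:

```python
import json

def write_handoff(store: dict, done: list, next_steps: list) -> None:
    """Session N externalizes its state before the context resets."""
    store["handoff.json"] = json.dumps({"done": done, "next": next_steps})

def read_handoff(store: dict) -> list:
    """Session N+1 inherits context from files alone; no shared memory."""
    raw = store.get("handoff.json")
    return json.loads(raw)["next"] if raw else []
```

The two functions never share state except through the stored file, which is the whole point: any future session, on any machine, can pick up exactly where the last one stopped.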

### Pattern 4: The Identity Accumulation Loop

For domains with self-memory (therapy, personal assistant, companion):

- The agent observes patterns in its interactions with the user
- Observations accumulate in operational logs
- When patterns are consistent, they promote to self-memory (identity.md, preferences.md)
- Each session loads self-memory at orientation, so the agent starts with understanding of who it is working with
- Since [[context files function as agent operating systems through self-referential self-extension]], the self-memory is not just data — it shapes how the agent operates
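The promotion step in this loop can be sketched as a consistency threshold; the log format and the threshold of three are assumptions for illustration:

```python
from collections import Counter

def promote_observations(log: list, min_consistent: int = 3) -> list:
    """Promote an interaction observation from the operational log to
    self-memory once it has recurred consistently."""
    counts = Counter(log)
    # One-off observations stay in the log; only stable patterns become identity
    return [obs for obs, n in counts.items() if n >= min_consistent]
```

The asymmetry matters: the operational log is cheap and noisy, while self-memory is loaded at every orientation, so only observations that have proven stable earn a place there.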

## Anti-Patterns

| Anti-Pattern | Why It Fails | Better Approach |
|-------------|-------------|-----------------|
| Relying on session memory | Everything resets; nothing compounds | Externalize to vault structure |
| One big context file | Context bloat, slow orientation, wastes smart zone tokens | Progressive disclosure: load what is needed, when needed |
| No operational memory | Processing state lost between sessions, each session starts blind | Queue files, task files, status fields |
| No orientation protocol | Each session begins with "where was I?" — wasting smart zone on reconstruction | Dashboard + workboard + recent changes, loaded automatically or by instruction |
| Mixing knowledge and operational memory | Infrastructure clutter in the thinking space, search returns processing debris | Separate folders and note types with clear promotion rules |
| Building memory without connection finding | Notes accumulate as isolated nodes, no compound value | Connection-finding (reflect phase) is never optional |
| Flat files without progressive disclosure | Agent must load everything to find anything | Layer information: tree, descriptions, MOCs, outlines, full content, semantic search |

## Grounding

This guidance is grounded in:

- [[cognitive offloading is the architectural foundation for vault design]] — the theoretical foundation for externalized memory
- [[external memory shapes cognition more than base model]] — why memory architecture matters more than model upgrades
- [[agent session boundaries create natural automation checkpoints that human-operated systems lack]] — session boundaries as architecture
- [[operational memory and knowledge memory serve different functions in agent architecture]] — the two-memory distinction
- [[session handoff creates continuity without persistent memory]] — bridging the session gap through files
- [[each new note compounds value by creating traversal paths]] — the compounding mechanism
- [[LLM attention degrades as context fills]] — why progressive disclosure and fresh context matter
- [[spreading activation models how agents should traverse]] — orientation through graph activation
- [[the vault constitutes identity for agents]] — why externalized memory constitutes rather than augments identity
- [[context files function as agent operating systems through self-referential self-extension]] — the self-referential property of memory systems
- [[generated systems use a three-space architecture separating self from knowledge from operations]] — the three-space separation

---

Topics:

- [[index]]
- [[memory-architecture]]

---
description: Reactions seed synthesis while raw content only seeds reference — prompting "what is your reaction?" during capture creates thinking note nuclei that quotes alone cannot
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Evergreen"]
source: [[tft-research-part2]]
---

# capture the reaction to content not just the content itself

When capturing from source material, the instinct is to save the content: highlight the passage, clip the quote, extract the claim. But the content already exists in the source. What doesn't exist anywhere is your reaction to it — the connection you saw, the question it raised, the disagreement it sparked. That reaction is the seed of synthesis. The quote is just reference material.

This distinction matters because reactions and content play different roles in knowledge work. Content provides evidence and grounding. But reactions generate the new thinking that makes a knowledge system valuable. When you capture "quality is the hard part" from a source, you've preserved information. When you capture "this challenges my assumption that volume matters — I've been optimizing for the wrong thing," you've preserved a thinking moment that can become a claim note.

The generation effect provides the cognitive basis for this practice. Since [[the generation effect requires active transformation not just storage]], merely relocating content from source to vault produces no cognitive benefit. The reaction is the transformation — it forces you to relate the content to existing understanding, which is precisely what creates encoding hooks and synthesis opportunities. A vault full of highlights is a reference library. A vault full of reactions is a thinking workspace.

For agent-operated systems, this principle suggests a capture intervention. Instead of ingest workflows that accept raw dumps and let agents extract claims later, the capture interface could prompt: "What is your reaction?" before or alongside the content. This shifts some generative work back to the human at capture time, when context is freshest and the connection that sparked the capture is still accessible. Since [[temporal separation of capture and processing preserves context freshness]], the reaction is most available in the moment of capture — waiting for post-hoc agent processing may lose the spark that made the content worth capturing in the first place.
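Such a capture interface can be sketched in a few lines; the field names are illustrative, not a prescribed schema:

```python
from datetime import date
from typing import Optional

def capture(content: str, reaction: Optional[str] = None) -> dict:
    """Store the clipped content alongside the human's reaction, flagging
    reaction-less captures so the agent knows the synthesis seed is missing."""
    return {
        "captured": str(date.today()),
        "content": content,
        "reaction": reaction,
        # Without a reaction the capture has reference value only
        "needs_reaction": reaction is None,
    }
```

The flag is the key design choice: the prompt stays optional (zero-friction capture survives), but the absence of a reaction is recorded so later processing knows this item can only ever become reference material.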

There's a tension with the zero-friction capture philosophy. The pure version says: remove all friction, dump everything, let the agent process later. Prompting for reactions adds friction at capture time. But since [[guided notes might outperform post-hoc structuring for high-volume capture]], the right kind of friction — lightweight prompts that trigger generation without blocking flow — might preserve cognitive benefits that pure dumps sacrifice. "What's your reaction?" is minimal: one prompt, freeform response, no structure required. It costs seconds but captures something the agent cannot reconstruct.

Voice capture complicates this tension in an interesting way: since [[voice capture is the highest-bandwidth channel for agent-delegated knowledge systems]], speaking naturally captures reactions through the paraverbal channel — tone shifts, emphasis, spontaneous exclamations, hedging language — without any prompting at all. The reaction is embedded in HOW you speak, not just what you say. This suggests that voice-first capture may dissolve part of the friction tradeoff: reactions are captured automatically because emotional expression is inherent to speech, rather than requiring an explicit prompt that interrupts flow. This is what [[does agent processing recover what fast capture loses]] explores from the system side — agent processing can recover vault quality but cannot recover human encoding benefits. Reactions preserve those human benefits while maintaining fast capture's low friction.

This practice takes on sharper meaning within the broader capture landscape. Since [[three capture schools converge through agent-mediated synthesis]], agent-mediated processing dissolves the tradeoff between Accumulationist speed and Interpretationist quality at the system level — but the convergence leaves the human with Accumulationist encoding. Reactions are the mechanism that injects a sliver of Interpretationist processing back into the human side of the convergence. Without reactions, the convergence is real for the vault but potentially hollow for the person. With them, the human retains a generative foothold even as the agent handles the heavy interpretive work.

This connects to what makes capture valuable at all. Since [[each new note compounds value by creating traversal paths]], the question is whether content or reactions create better paths. Content creates reference connections: "this quote appears in note X." Reactions create synthesis connections: "this challenges note Y because Z." The second type builds arguments; the first type only provides citations. A vault optimized for traversal needs reactions more than it needs comprehensive content extraction. This is why [[retrieval utility should drive design over capture completeness]] — reactions implement retrieval-first thinking at capture time, creating the "how will I find this later" hooks that content alone lacks.

The practical test: compare two capture approaches. First, extract five quotes from an article. Second, extract two quotes plus reactions to each. The prediction is that the reaction-included captures produce more claim notes, more cross-note connections, and better retrieval — even with less raw content — because the reactions contain the seeds that content alone lacks.

---

Relevant Notes:

- [[the generation effect requires active transformation not just storage]] — the cognitive mechanism: reactions are generative, content relocation is not
- [[temporal separation of capture and processing preserves context freshness]] — why reactions must be captured at capture time: the spark decays rapidly
- [[guided notes might outperform post-hoc structuring for high-volume capture]] — the friction trade-off: minimal prompts may preserve benefits that pure dumps lose
- [[each new note compounds value by creating traversal paths]] — reactions create synthesis paths while content creates reference paths
- [[does agent processing recover what fast capture loses]] — explores the same tradeoff from the system perspective; reactions are the middle path that preserves human encoding while enabling fast capture
- [[retrieval utility should drive design over capture completeness]] — reactions implement retrieval-first capture by creating "how will I find this later" hooks
- [[verbatim risk applies to agents too]] — without reactions, agents must generate synthesis from content alone, risking reorganization without insight; reactions provide the nucleus that prevents this
- [[schema templates reduce cognitive overhead at capture time]] — "what is your reaction" is a micro-schema that triggers generation with minimal friction
- [[three capture schools converge through agent-mediated synthesis]] — gives reactions a precise role: within the capture school convergence, reactions are the mechanism that preserves Interpretationist encoding for the human while maintaining Accumulationist speed
- [[voice capture is the highest-bandwidth channel for agent-delegated knowledge systems]] — voice capture naturally embeds reactions through the paraverbal channel (tone, emphasis, hesitation) without explicit prompting, partially dissolving the friction tradeoff between reaction capture and zero-friction dumps

Topics:

- [[processing-workflows]]

---
description: Vague claims can't be disagreed with or built on — for agents, vague titles are undocumented functions where you can invoke but don't know what you'll get
kind: research
topics: ["[[note-design]]"]
methodology: ["Evergreen"]
---

# claims must be specific enough to be wrong

A claim that can't be wrong also can't be useful. When you write "quality matters" or "knowledge is important," you've said something true but empty. Nobody can disagree, which means nobody can engage. There's nothing to build on because there's no specific stake in the ground.

The test for specificity is simple: could someone disagree with this specific claim? Not disagree that the topic matters, but disagree with the particular assertion you're making. "Quality matters more at scale because small differences compound through selection" — someone could argue that small differences don't compound, or that selection isn't the mechanism. That disagreement is productive. It forces you to defend your reasoning or update it.

Vague claims fail the composability test in multiple ways. When you try to link to them, you're not invoking a specific idea — you're gesturing at a topic. The link `since [[quality is important]]` adds nothing to a sentence because it asserts nothing in particular. Compare that to `since [[claims must be specific enough to be wrong]]`, which carries a definite assertion into the prose. Since [[wiki links implement GraphRAG without the infrastructure]], the notes-as-APIs pattern depends on titles that function as typed signatures — and vague titles are undocumented functions where invocation gives unpredictable results.

This specificity requirement connects directly to how [[note titles should function as APIs enabling sentence transclusion]]. The title is the function signature — it should tell you exactly what you're calling. A vague title is like an undocumented function with unclear behavior. You can invoke it, but you don't know what you'll get back.

The constraint isn't about being provocative or contrarian. It's about having a point. Every note should argue something, and you can't argue without taking a position specific enough that someone could push back on it. Specificity also compounds through the vault's other quality gates: since [[summary coherence tests composability before filing]], coherence testing catches bundled claims, while the specificity test catches vague ones — together they ensure notes are both singular and staked. And since [[descriptions are retrieval filters not summaries]], vague claims produce descriptions that merely restate the title without adding mechanism, scope, or implication — the same anti-pattern at the description layer. When [[progressive disclosure means reading right not reading less]], precision in titles and descriptions is what enables agents to curate what enters context rather than loading everything indiscriminately.

This has an objective dimension too. Since [[testing effect could enable agent knowledge verification]], the specificity of a claim can be measured: descriptions that merely paraphrase titles should fail recite's prediction test, revealing vagueness through measurable retrieval failure rather than subjective judgment. And since [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]], single-operator systems like this vault permit the maximally specific claim-titles that the specificity standard demands — consensus vocabulary would force the generality this constraint rejects.

But specificity has a shadow side. Since [[vault conventions may impose hidden rigidity on thinking]], forcing insights into claim-as-title form may distort genuinely non-linear or relational ideas. The test becomes: when reformulation feels forced, is it because the insight isn't ready to be a claim, or because the claim-as-title pattern can't accommodate certain thinking styles? Since [[enforcing atomicity can create paralysis when ideas resist decomposition]], the specificity constraint shares this operationalization problem with atomicity: distinguishing "struggle that reveals incomplete thinking" from "struggle against a format that can't accommodate valid insight" requires felt sense that agents lack.

---

Relevant Notes:

- [[note titles should function as APIs enabling sentence transclusion]] — extends: specificity is what makes titles reliable API signatures; vague titles are undocumented functions that can't be invoked reliably
- [[wiki links implement GraphRAG without the infrastructure]] — develops the notes-as-APIs pattern: specificity makes titles work as function signatures that can be reliably invoked
- [[summary coherence tests composability before filing]] — complementary quality gate: specificity tests whether a single claim has enough stake, coherence tests whether the unit is actually singular
- [[descriptions are retrieval filters not summaries]] — applies the same anti-pattern: descriptions that restate titles add nothing, just as vague claims add nothing
- [[progressive disclosure means reading right not reading less]] — precision enables curation, vagueness defeats it
- [[vault conventions may impose hidden rigidity on thinking]] — tests whether the specificity requirement sometimes constrains non-linear thinking that resists sentence form
- [[testing effect could enable agent knowledge verification]] — the specificity test made objective: descriptions that merely paraphrase titles (lacking mechanism/scope/implication) should fail recite's prediction test, revealing the anti-pattern through measurable retrieval failure rather than subjective judgment
- [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]] — enablement: narrow folksonomy removes the consensus constraint that would force generality, making maximally specific claim-titles architecturally possible

Topics:

- [[note-design]]

package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md ADDED

---
description: Explicitly marking tasks as complete signals the brain to release them from working memory — for agents this means writing closure statements rather than just stopping, distinct from handoff, which preserves what continues
kind: research
topics: ["[[processing-workflows]]", "[[agent-cognition]]"]
methodology: ["Cognitive Science", "GTD"]
source: [[tft-research-part3]]
---

# closure rituals create clean breaks that prevent attention residue bleed

Attention residue is specific and measurable. When you switch from task A to task B without completing A, fragments of A persist in working memory and compete for attention during B. Leroy's research demonstrated that this residue is not just subjective distraction but a measurable performance degradation on the subsequent task. The recovery penalty can extend to 23 minutes — a significant portion of any work session.

Explicitly closing a task addresses this at the source. When the brain registers that a task is complete — genuinely done, not paused or abandoned — it releases the working memory allocation. The open loop closes. The residue dissipates. Because [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], we know the mechanism is not metaphorical: unfinished tasks maintain active threads in working memory, consuming bandwidth until the brain registers either completion or externalization. GTD captures this insight in its emphasis on "closing open loops": every uncaptured commitment drains cognitive bandwidth until externalized or completed. Capture addresses the "I need to remember this" loops; closure rituals address the "I need to finish this" loops. Both are Zeigarnik releases targeting different types of open commitment.

The mechanism matters: closure is not the same as stopping. Stopping is ceasing work. Closure is signaling to yourself (or the system) that the work is done. Writing "this task is complete, the output is X, and nothing remains" creates a psychological break that merely ending the session does not. The signal must be explicit because the brain does not automatically detect task completion — it requires a marker.

For agent workflows, this translates to writing closure statements rather than simply returning results. When an agent finishes processing a note or completing a research sprint, it should produce a structured closure artifact: what was accomplished, what was learned, what is definitively done. This is distinct from handoff, which preserves continuity for what continues. Since [[session handoff creates continuity without persistent memory]], handoff documents capture what the next session needs to know to continue work. Closure statements capture what is finished so that it can be released. Both are needed at session boundaries — handoff for open work, closure for completed work.
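The structured closure artifact can be sketched as a small renderer. A minimal sketch; the field names are illustrative, not the RALPH HANDOFF format:

```python
def closure_statement(task: str, output: str, learnings: list) -> str:
    """Render an explicit closure artifact: what was accomplished, what was
    learned, and that nothing remains. Distinct from a handoff, which lists
    what continues."""
    lines = [f"CLOSED: {task}", f"Output: {output}"]
    lines += [f"Learned: {l}" for l in learnings]
    # The explicit release marker is the ritual; stopping alone does not signal it
    lines.append("Remaining: nothing; released from working memory")
    return "\n".join(lines)
```

The "Remaining" line is deliberately always present: closure is the assertion that nothing remains, and forcing the statement to be written is what turns stopping into closing.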
|
|
18
|
+
|
|
19
|
+
Because [[fresh context per task preserves quality better than chaining phases]], session isolation already creates natural closure points. Each phase ends when the session ends. But session isolation is a structural feature, not a cognitive one. Closure rituals formalize the boundary into an explicit signal that the orchestrator, future sessions, and (for human operators) the brain can recognize. The RALPH HANDOFF block is a closure ritual: it structures what was done, what was learned, and what the queue should mark as complete. And because [[agent session boundaries create natural automation checkpoints that human-operated systems lack]], the same boundary that closure rituals formalize is also an enforcement point where health checks fire automatically — closure and enforcement are complementary functions of the same event, one marking what ends and the other verifying what was produced.
|
|
20
|
+
|
|
21
|
+
Since [[continuous small-batch processing eliminates review dread]], small batches create more frequent boundaries. Each boundary is a closure opportunity. Without explicit closure at these boundaries, work blurs from one batch into the next, and the attention residue from batch N contaminates batch N+1. And since [[batching by context similarity reduces switching costs in agent processing]], the closure-to-opening transition becomes less costly when consecutive tasks share context — the residue from a graph structure task is less harmful when the next task is also about graph structure. Closure rituals and context-similar batching work together: the ritual marks the clean break, and the batching minimizes what has to be released. The formality of the closure ritual — writing the handoff, marking the task done, logging learnings — creates the clean break that prevents bleed.
And since [[MOCs are attention management devices not just organizational tools]], the attention management system has two complementary parts. MOCs reduce the cost of entering a context (orientation). Closure rituals reduce the cost of leaving one (release). Together they bracket the attention lifecycle: enter cleanly through the MOC, work within the session, exit cleanly through the closure ritual. This lifecycle is an instance of a broader paradigm: since [[AI shifts knowledge systems from externalizing memory to externalizing attention]], the vault is not just storing knowledge but directing focus. Closure rituals externalize the attention release decision — the system marks what deserves to leave working memory rather than leaving that judgment to biological or computational heuristics that may fail to register completion.
The practical implication: never end a session or task by simply stopping. Write what was done. Mark what is complete. Note what remains — and this last step is not incidental, because since [[prospective memory requires externalization]], any intention left unwritten at session end is guaranteed to vanish. Closure rituals serve double duty: they release completed work from attention (the Zeigarnik function) and they externalize remaining intentions into persistent traces (the prospective memory function). The overhead is small; the attention benefit compounds across every subsequent task.
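In an agent workflow, the ritual can be made mechanical. A minimal sketch, assuming a hypothetical `write_closure` helper and illustrative field names (this is not the package's actual handoff format):

```shell
#!/bin/sh
# Sketch of a closure ritual: append a structured closure block to a
# per-session handoff file. Every name here is a hypothetical example.
write_closure() {
    dir="$1"
    mkdir -p "$dir"
    {
        echo "## CLOSURE $(date -u +%Y-%m-%dT%H:%MZ)"
        echo "Done: $DONE"           # released from attention (Zeigarnik)
        echo "Learned: $LEARNED"     # insights worth persisting
        echo "Remaining: $REMAINING" # externalized intention (prospective memory)
    } >> "$dir/handoff.md"
}

DONE="processed 3 inbox notes"
LEARNED="BM25 prefers short descriptions"
REMAINING="link pass on the graph MOC"
write_closure "./session-logs"
```

A session-end hook could invoke something like this automatically, so closure costs nothing beyond filling in the three fields.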
---
---
Relevant Notes:
- [[session handoff creates continuity without persistent memory]] — complementary but distinct: handoff preserves what CONTINUES, closure marks what ENDS; both are needed at session boundaries
- [[fresh context per task preserves quality better than chaining phases]] — session isolation creates natural closure points; closure rituals formalize these into explicit signals
- [[MOCs are attention management devices not just organizational tools]] — MOCs reduce re-orientation cost when returning to a topic; closure rituals reduce residue from the topic you are leaving
- [[continuous small-batch processing eliminates review dread]] — small batches create more frequent closure opportunities; closure rituals ensure each batch actually ends rather than blurring into the next
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — foundation: the Zeigarnik effect explains WHY closure works; completed tasks release working memory only when the brain registers completion, which requires the explicit signal a closure ritual provides
- [[batching by context similarity reduces switching costs in agent processing]] — extends: context-similar batching reduces the residue gap between tasks, making closure between batches more effective because the next context is semantically closer
- [[notes function as cognitive anchors that stabilize attention during complex tasks]] — complements: closure releases completed work from attention, anchoring holds incomplete work stable; together they manage the full attention lifecycle — anchor the active, release the complete
- [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — paradigm frame: closure rituals are an instance of attention externalization; the system marks what should leave focus rather than relying on biological or computational heuristics to detect completion
- [[agent session boundaries create natural automation checkpoints that human-operated systems lack]] — complementary boundary function: closure rituals mark what ENDS at a session boundary while enforcement checkpoints verify what was PRODUCED; both fire at the same event serving different purposes
- [[prospective memory requires externalization]] — the other half of closure: releasing completed work addresses Zeigarnik (open loops), but noting what remains addresses prospective memory (future intentions); closure rituals that skip the 'what remains' step lose the prospective memory externalization that agents cannot recover across sessions
Topics:
- [[processing-workflows]]
- [[agent-cognition]]

@@ -0,0 +1,46 @@

---
description: Clark and Chalmers Extended Mind Theory plus Cowan's 4-item working memory limit explain why every capture friction point fights the cognitive architecture — the vault is not storage but a
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Cognitive Science"]
source: [[tft-research-part3]]
---
# cognitive offloading is the architectural foundation for vault design
Working memory holds roughly four items at once. Cowan's research established this limit, and it has not moved. Every system that asks a human to hold more than four things in mind while also processing, connecting, and synthesizing is fighting against biological architecture. The vault exists because of this constraint, not despite it.
Clark and Chalmers formalized this in their 1998 Extended Mind Theory: cognition extends beyond the brain to include external artifacts that participate in cognitive processes. A notebook is not merely a reminder — it becomes part of the thinking system when reliably consulted and trusted. Risko and Gilbert's subsequent work on cognitive offloading added the economic dimension: people constantly calculate whether to retain information internally or offload it to an external store, and this calculation is driven by the physical cost of capture versus the mental cost of retention.
This is the theoretical foundation for the entire vault approach. The vault is not a storage system. It is a cognitive offloading system — an externalized working memory that holds state the human cannot. When an agent operates the vault, the distributed cognitive system becomes tripartite: human provides direction and judgment, vault holds persistent state and connections, agent provides traversal and processing capability. None of these alone produces what the combination produces, because [[knowledge systems become communication partners through complexity and memory humans cannot sustain]]. The vault accumulates a level of complexity and connection density that no biological mind could hold, and the agent can traverse that complexity in ways no human could manage at scale. And because the offloading targets cognitive constraints rather than domain-specific operations, since [[the vault methodology transfers because it encodes cognitive science not domain specifics]], the same distributed cognitive architecture works for therapy journals, project trackers, and creative writing systems — Cowan's limits apply to emotional processing, engineering decisions, and narrative construction equally.
The design implication is concrete. Every friction point in capture is actively fighting against the cognitive architecture that makes the system work. If offloading costs more effort than retaining, the human retains — and the system loses the externalized thought. The damage goes beyond the lost thought: since [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], every uncaptured thought becomes an active drain on working memory, consuming bandwidth that could otherwise be used for processing or synthesis. The friction does not merely prevent capture — it creates ongoing cognitive cost. This is why zero-friction capture works: it makes the offloading calculation trivially favor externalization. The agent has driven the physical cost of capture close to zero, so the rational choice is always to offload. And since [[temporal separation of capture and processing preserves context freshness]], the offloading must happen immediately — not just because Risko and Gilbert's economics favor it, but because Ebbinghaus decay means the context surrounding the offloaded thought erodes within hours. Offload now, process soon.
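Risko and Gilbert's calculation can be stated as a toy inequality (a sketch only, with integer stand-ins for what are really effort and attention costs):

```shell
#!/bin/sh
# Toy model of the offloading decision: externalize whenever capture is
# cheaper than retention. Integer costs are illustrative stand-ins.
should_offload() {
    capture_cost="$1"
    retention_cost="$2"
    if [ "$capture_cost" -lt "$retention_cost" ]; then
        echo "offload"
    else
        echo "retain"
    fi
}

should_offload 1 4   # zero-friction capture: offload wins
should_offload 5 4   # high-friction capture: the human retains, the system loses the thought
```

Zero-friction capture design is the engineering move that pins the capture cost near zero, so the first branch is the only one that ever fires.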
But frictionless offloading has a shadow side, captured in [[cognitive outsourcing risk in agent-operated systems]]: if the system handles everything, the human may never engage deeply enough to develop understanding. The offloading that enables the system can, taken too far, hollow out the human side of the distributed cognitive architecture. The mitigation is not to add friction back but to ensure the human retains genuine judgment work — the direction-setting and quality evaluation that keeps them cognitively coupled to the system. This tension sharpens when considered alongside [[the generation effect requires active transformation not just storage]]: the generation effect shows that active transformation creates cognitive hooks that passive offloading does not. Pure zero-friction capture optimizes the offloading economics but may sacrifice the encoding that makes the human a capable partner in the tripartite system. This is the open question behind [[does agent processing recover what fast capture loses]] — if agents handle the generation, the vault benefits but the human's internal understanding may not.
Since [[session handoff creates continuity without persistent memory]], the offloading principle applies to agents too. Agents offload session state to task files and handoff documents, externalizing continuity into artifacts. The mechanism is the same: rather than trying to hold state internally (which agents cannot do across sessions), externalize it to files that the next session reads. Because [[LLM attention degrades as context fills]], agents face their own working memory constraint — not Cowan's 4-item limit, but attention degradation beyond the smart zone. The solution is the same architectural pattern: offload to external artifacts rather than trying to hold everything internally.

The offloading principle extends beyond state to procedural operations: since [[hooks enable context window efficiency by delegating deterministic checks to external processes]], deterministic checks like schema validation run outside the context window entirely, with only the pass/fail result entering context. This is cognitive offloading applied to enforcement — the same economic logic (external execution costs less than internal retention) operating on procedural work rather than working memory. The vault is an offloading system for both human and agent cognition, just targeting different limitations — working memory for humans, attention degradation and session persistence for agents.

The coordination dimension makes this most concrete: since [[stigmergy coordinates agents through environmental traces without direct communication]], inter-agent coordination is entirely offloaded to the environment. No agent holds coordination state internally — each reads the current environment (queue entries, task files, wiki links) and acts accordingly. This makes cognitive offloading and stigmergic coordination the same architectural claim viewed from different theoretical traditions: one from cognitive science (Extended Mind), the other from entomology (Grassé).
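The enforcement case can be made concrete. Below is a minimal sketch of a deterministic check in the spirit of a write-validation hook; the required fields (`kind`, `topics`) are assumptions for the example, not the package's actual schema:

```shell
#!/bin/sh
# Sketch of a deterministic check that runs outside the context window;
# only the one-line PASS/FAIL result needs to enter agent context.
# The frontmatter requirements below are illustrative assumptions.
validate_note() {
    note="$1"
    [ -f "$note" ] || { echo "FAIL: no such file"; return 1; }
    head -n 1 "$note" | grep -q '^---$' || { echo "FAIL: missing frontmatter"; return 1; }
    grep -q '^kind:' "$note" || { echo "FAIL: missing kind field"; return 1; }
    grep -q '^topics:' "$note" || { echo "FAIL: missing topics field"; return 1; }
    echo "PASS"
}

printf '%s\n' '---' 'kind: research' 'topics: ["[[agent-cognition]]"]' '---' > /tmp/note.md
validate_note /tmp/note.md   # prints PASS
```

The agent never reads the validation logic; it sees one line of output, which is the whole point of delegating the check.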
The offloading architecture reaches its fullest expression in capture design. Since [[three capture schools converge through agent-mediated synthesis]], the three PKM capture philosophies — Accumulationist speed, Interpretationist quality, Temporal urgency — stop being tradeoffs when capture and processing are distributed across different actors. The human captures with zero friction (optimal offloading economics), the agent processes with interpretive depth (generation happens, just not by the human), and processing follows capture within hours (temporal urgency constrains the window). This convergence IS the offloading architecture in practice: the "fundamental divergence" between capture schools was an artifact of assuming a single actor who must both offload and process, when in fact the tripartite system splits those responsibilities by design.
This is a CLOSED claim. The cognitive science is established (Cowan's working memory limits, Clark and Chalmers' Extended Mind, Risko and Gilbert's offloading economics). The architectural consequence follows directly: build for offloading, minimize capture friction, and recognize the vault as a cognitive partner rather than a filing cabinet.
---
---
Relevant Notes:
- [[knowledge systems become communication partners through complexity and memory humans cannot sustain]] — extends the partnership thesis with cognitive science grounding; that note argues partnership is productive via Luhmann, this note explains WHY it works at the architecture level
- [[cognitive outsourcing risk in agent-operated systems]] — the shadow side; if offloading is the foundation, over-offloading is the failure mode
- [[session handoff creates continuity without persistent memory]] — handoffs are a specific cognitive offloading mechanism; externalizing session state follows the same principle as externalizing working memory
- [[LLM attention degrades as context fills]] — the agent-side working memory constraint; humans hit Cowan's 4-item limit, agents hit attention degradation beyond the smart zone, both justify offloading to external artifacts
- [[the generation effect requires active transformation not just storage]] — the nuance to frictionless offloading; zero-friction capture optimizes externalization but may sacrifice encoding benefits that active transformation creates
- [[does agent processing recover what fast capture loses]] — tests the limit of the offloading thesis; if agents handle all generation, the system gets encoding benefits but the human does not
- [[temporal separation of capture and processing preserves context freshness]] — the timing dimension; offloading must happen immediately (Risko/Gilbert economics) but processing should follow soon before Ebbinghaus decay erodes context
- [[notes function as cognitive anchors that stabilize attention during complex tasks]] — extends: offloading explains WHY we externalize, anchoring explains WHAT the externalized artifacts do during active reasoning; a note at rest is offloaded state, a note being referenced during complex work is an anchor stabilizing the reasoning process
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — experimental evidence: the Zeigarnik effect shows that uncaptured thoughts actively consume working memory bandwidth, providing the specific cognitive mechanism behind why capture friction fights the architecture
- [[three capture schools converge through agent-mediated synthesis]] — the offloading architecture in practice: the three capture schools' tradeoffs dissolve when capture and processing are distributed across the tripartite system, demonstrating what offloading design produces at the methodology level
- [[stigmergy coordinates agents through environmental traces without direct communication]] — coordination instance: stigmergy IS cognitive offloading applied to inter-agent coordination; instead of agents holding coordination state internally (impossible across sessions), they offload it to the environment, making vault-as-offloading-system and vault-as-stigmergic-medium the same architectural claim
- [[the vault methodology transfers because it encodes cognitive science not domain specifics]] — transfer implication: each offloading pattern maps to a domain-invariant cognitive principle (Cowan's limits, Extended Mind, information foraging), which is what makes the distributed cognitive architecture portable across knowledge domains rather than specific to research
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] — extends offloading from working memory to procedural operations: deterministic checks run outside the context window with only pass/fail results entering context, applying the same economic logic (external cost less than internal retention) to enforcement rather than state
Topics:
- [[agent-cognition]]