arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0

package/methodology/schema templates reduce cognitive overhead at capture time.md
@@ -0,0 +1,55 @@
---
description: Pre-defined fields shift capture decisions from "what should I record" to "fill these boxes," reducing cognitive load for humans and regularizing extraction for agents
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Capture Design"]
source: TFT research corpus (00_inbox/heinrich/)
---

# schema templates reduce cognitive overhead at capture time

When you face a blank page, you must make two kinds of decisions: what to capture and how to structure it. A schema template eliminates the second decision entirely. Instead of designing a note's structure while simultaneously trying to record content, you fill in pre-defined fields. The cognitive work shifts from "what should I record about this?" to "what goes in this box?"

This is why tools like Tana's supertags work. A supertag is a schema attached to a note type — when you create a "book" note, the template pre-populates fields: author, year, key arguments, personal takeaways. You don't decide whether to record the author; the field exists, so you fill it. The schema externalizes the structural decision, offloading working memory during the capture moment. Because [[cognitive offloading is the architectural foundation for vault design]], every friction point in capture fights against the cognitive architecture — and schema templates reduce that friction by handling structure mechanically so the human's limited working memory stays focused on content.
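
A minimal sketch of what that looks like in the YAML-frontmatter format these notes themselves use. The field names come from the paragraph above; the helper script and its inbox path are hypothetical illustrations, not part of this package:

```bash
#!/usr/bin/env bash
# Hypothetical capture helper: pre-populate a book note's schema so the
# capturer fills boxes instead of designing structure on the spot.
title="${1:?usage: new-book.sh \"Book Title\"}"
cat > "00_inbox/${title}.md" <<EOF
---
kind: book
author:
year:
key_arguments:
personal_takeaways:
---

# ${title}
EOF
```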

For agent-operated vaults, this matters because capture happens at the boundary between human and system. The human has context that will decay. Since [[temporal separation of capture and processing preserves context freshness]], there's urgency to capture before understanding fades. But if capture requires structural decisions, the human burns cognitive resources on formatting instead of content. Schema templates preserve context-capture bandwidth by handling structure mechanically.

This is the opposite of the Lazy Cornell anti-pattern. Since [[structure without processing provides no value]], merely having structure (drawing the lines) produces no benefit without the processing (filling the boxes thoughtfully). Schema templates work because they guide processing: each field prompts a specific generative act. The schema externalizes structure so attention can focus on generation within the structure. And because [[the generation effect requires active transformation not just storage]], schema-guided capture still produces encoding benefits — filling in "key arguments" forces you to identify them, which is generative work.

The design principle: at capture time, minimize decisions about form so attention can focus on substance. The schema is not a constraint but a scaffold — it guides what to notice and record without requiring real-time architectural thinking. But structural scaffolding alone is not sufficient. Because [[schema field names are the only domain specific element in the universal note pattern]], the template's field choices carry the entire burden of domain adaptation — the rest of the note format (title, body, links, topics) works identically across domains. This concentration is why [[schema fields should use domain-native vocabulary not abstract terminology]]: field names must match how the practitioner naturally thinks. A template that says "triggers" removes the overhead entirely for a therapist; a template that says "antecedent_conditions" reduces structural overhead while introducing linguistic overhead, partially negating the benefit.

There's a tradeoff embedded here. Schema templates add ceremony — you must fill in fields even when they feel irrelevant. Since [[faceted classification treats notes as multi-dimensional objects rather than folder contents]], each template field represents an independent classification dimension, and independence means each must be populated separately. More facets means more ceremony. But the claim is that this friction costs less than the cognitive overhead of unstructured capture, because the friction is mechanical (fill boxes) while the overhead is creative (design structure). Mechanical work is less draining than design work when context is fading. But since [[vault conventions may impose hidden rigidity on thinking]], schema templates carry a risk: if the schema doesn't fit the content, the template becomes a constraint that channels thinking into pre-determined patterns rather than scaffolding that supports it. The test is whether forced reformulation into schema fields produces clarity or distortion.

This connects to metadata's retrieval function. Since [[metadata reduces entropy enabling precision over recall]], the fields you fill at capture time directly enable precision retrieval later. Since [[retrieval utility should drive design over capture completeness]], the schema template is retrieval-first design: the fields you capture ARE the fields you'll query. A schema template is the capture-side counterpart of what descriptions and type fields provide at retrieval time: pre-computed structure that makes the system queryable. But the schema template insight is specifically about cognitive economics at the capture moment — not just what metadata enables later, but how schema reduces the mental cost of creating that metadata in the first place.
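
Because the fields captured are the fields queried, precision retrieval needs nothing beyond the files themselves. A sketch using ripgrep, in the spirit of the package's own markdown-plus-YAML-plus-ripgrep claim; the specific field values are illustrative:

```bash
# Frontmatter fields double as query keys: find research notes filed
# under a given topic, no database required. -0 keeps filenames with
# spaces intact through the pipe.
rg -l0 '^kind: research' --glob '*.md' \
  | xargs -0 rg -l 'topics:.*\[\[processing-workflows\]\]'
```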

The agent implication: when designing capture interfaces (whether for human input or agent extraction), provide structure upfront rather than requiring the capturer to invent it. For source extraction, this means templates with fields like source, date, key claims, quotes, and synthesis hooks. The reducer doesn't decide what to extract about a source — the template specifies it. This is why the vault's task files have standardized sections: since [[skills encode methodology so manual execution bypasses quality gates]], the schema IS the methodology — it tells the agent what to extract and in what form. The structure is given, so processing becomes execution.
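
The extraction-side counterpart of the capture template above might look like the following. The field names are taken from this paragraph; the template path is a hypothetical illustration:

```bash
# Hypothetical source-extraction template: the reducer executes the
# structure rather than inventing it per source.
mkdir -p templates
cat > templates/source-extract.md <<'EOF'
---
source:
date:
key_claims: []
quotes: []
synthesis_hooks: []
---
EOF
```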

The layering formula for descriptions exemplifies this principle in miniature. Since [[good descriptions layer heuristic then mechanism then implication]], descriptions have a schema: lead with actionable, back with mechanism, end with implication. This is a template that reduces cognitive overhead for description-writing — instead of designing each description from scratch, you fill in three layers.

There's an interesting distinction between schema templates and guided notes. Since [[guided notes might outperform post-hoc structuring for high-volume capture]], research suggests skeleton outlines work particularly well for streaming content — lectures, meetings, conversations — where information arrives faster than you can fully process. Schema templates fit discrete content (a book, an article) where you know the content type and can fill fields after reading. Guided notes fit flows where you're capturing in real-time and need structure that guides attention without requiring design decisions. Both externalize structure to reduce cognitive load, but schema templates specify WHAT fields to record while guided notes specify HOW to organize incoming flow (main ideas vs. supporting details vs. questions).
---

Relevant Notes:
- [[cognitive offloading is the architectural foundation for vault design]] — the theoretical foundation; Risko and Gilbert's offloading economics explain WHY reducing capture friction matters: if offloading costs more effort than retaining, the human retains and the system loses the thought
- [[metadata reduces entropy enabling precision over recall]] — the retrieval side of what schemas enable at capture time; schema templates ensure the metadata gets created
- [[temporal separation of capture and processing preserves context freshness]] — creates urgency that schemas address: reduce capture-time decisions so content gets recorded while context is fresh
- [[does agent processing recover what fast capture loses]] — schema-driven capture occupies a middle ground: more structure than pure dumps (preserving some generation benefit for the human), less cognitive load than fully designed notes (enabling faster capture)
- [[vault conventions may impose hidden rigidity on thinking]] — the shadow side: if the schema doesn't fit the content, templates become constraints that channel thinking rather than scaffolds that support it
- [[WIP limits force processing over accumulation]] — sibling forcing function: schema templates force structure at capture, WIP limits force processing before capture; both shape behavior through architectural constraints rather than soft guidelines
- [[continuous small-batch processing eliminates review dread]] — complementary intervention: schema templates reduce capture-time overhead, small-batch processing reduces review-time dread; both target psychological friction that causes abandonment
- [[the generation effect requires active transformation not just storage]] — schema templates guide generation: each field prompts a specific generative act, so filling boxes IS transformation
- [[structure without processing provides no value]] — schema templates are the opposite of Lazy Cornell: they guide processing by specifying WHAT to generate
- [[retrieval utility should drive design over capture completeness]] — schema templates are retrieval-first: the fields you capture are the fields you'll query
- [[skills encode methodology so manual execution bypasses quality gates]] — skills implement schema templates: standardized sections externalize methodology as structure
- [[good descriptions layer heuristic then mechanism then implication]] — a specific schema for descriptions; the layering formula reduces cognitive overhead for description-writing
- [[guided notes might outperform post-hoc structuring for high-volume capture]] — complementary intervention for streaming content: while schema templates specify WHAT fields to record, guided notes specify HOW to organize incoming flow during real-time capture
- [[logic column pattern separates reasoning from procedure]] — concrete application: the `> [!logic]` callout format is a schema template for technical content where the structure (insert logic callout at each step) is given so attention focuses on articulating the reasoning
- [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — why templates matter: each template field is an independent facet in Ranganathan's framework, and templates manage the cognitive cost of populating multiple independent classification dimensions at capture time
- [[novel domains derive by mapping knowledge type to closest reference domain then adapting]] — extends the schema template principle from capture to derivation: the four adaptation dimensions (temporal dynamics, ethical requirements, collaboration patterns, retrieval needs) function as a derivation-time schema template that narrows open-ended domain comparison into structured axes, reducing cognitive overhead for the derivation agent
- [[schema fields should use domain-native vocabulary not abstract terminology]] — the linguistic complement: schema templates reduce structural overhead by providing pre-defined fields, but field names must also speak the domain's language to reduce linguistic overhead; a template with correctly structured but abstractly named fields still forces translation at every interaction
- [[schema evolution follows observe-then-formalize not design-then-enforce]] — the temporal complement: schema templates provide the minimal starting point (two to three required fields), and the evolution protocol governs how that starting point grows through observed usage patterns rather than upfront design; templates start small precisely so the system can observe what earns formalization
- [[configuration paralysis emerges when derivation surfaces too many decisions]] — the same cognitive overhead principle applied at the system design level: templates reduce capture decisions through pre-defined fields, configuration defaults reduce derivation decisions through sensible inference; both recognize that cognitive bandwidth spent on structural choices leaves less for substantive ones
- [[schema field names are the only domain specific element in the universal note pattern]] — explains why schema templates carry disproportionate design weight: the five-component note architecture is domain-invariant except for YAML field names, so the template's field choices are literally the only surface through which domain knowledge enters the note format

Topics:
- [[processing-workflows]]

package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md
@@ -0,0 +1,48 @@
---
description: Inhibitory control is the first executive function to degrade under load, so externalizing it to hooks means schema compliance stays constant even as the agent's reasoning quality declines as context fills
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Cognitive Science", "Original"]
source: [[hooks-as-methodology-encoders-research-source]]
---

# schema validation hooks externalize inhibitory control that degrades under cognitive load

Inhibitory control is one of three core executive functions, alongside working memory and cognitive flexibility. It is the capacity to suppress inappropriate actions, override habitual responses, and maintain goal-directed behavior under distraction. In Miyake and Friedman's framework, these three functions are separable but correlated, and inhibitory control plays a particular role: it is the gatekeeper that prevents automatic or impulsive actions from bypassing deliberation. Critically, inhibitory control is one of the first executive functions to degrade under cognitive load. When attention is strained, people start making the errors they know are wrong -- skipping steps, reverting to defaults, producing habitual rather than considered responses.

The parallel to agent operation is direct. An agent deep in a complex reasoning task, with its context window filling past the smart zone, is under cognitive load in the relevant sense. Since [[LLM attention degrades as context fills]], the instructions competing for attention include procedural ones like "validate frontmatter against the template schema." These are exactly the instructions an agent drops first, because they feel administrative rather than substantive. The agent does not decide to skip validation -- it simply stops attending to the instruction, the same way a tired surgeon does not decide to skip the checklist but simply proceeds without it. The habitual response takes over: write the note with whatever frontmatter feels sufficient in the moment, which under load means minimal or missing fields.

Schema validation hooks externalize this inhibitory control entirely. The agent no longer needs to inhibit its own default behavior because the infrastructure does it. A PostToolUse hook on Write fires after every file operation regardless of the agent's cognitive state -- whether it is in the first 10% of context or the last 5%. The hook checks required fields, validates enum values, and flags violations. This inhibition is not subject to degradation because it does not live in the attention system. It lives in the infrastructure, outside the context window, immune to the load that degrades everything inside it. And since [[hooks enable context window efficiency by delegating deterministic checks to external processes]], the externalization provides a complementary benefit: the tokens that instruction-based inhibitory control would have consumed on template loading, field comparison, and validation reasoning are freed for substantive work.

The graduated enforcement model adds nuance that maps to how inhibitory control actually works in cognition. Strong inhibitory control blocks dangerous actions -- the reflexive withdrawal from a hot surface, the automatic braking when a child runs into the street. Weak inhibitory control creates awareness without blocking -- noticing you are about to interrupt someone, sensing that your email tone is too harsh before sending. The vault implements both. Exit code 2 is strong inhibition: the write is blocked, the agent must correct the violation before proceeding. Warning via additionalContext is weak inhibition: the violation is flagged, the agent sees it, but the operation completes. Since [[nudge theory explains graduated hook enforcement as choice architecture for agents]], this graduation is not arbitrary but follows Thaler and Sunstein's insight that intervention strength should match violation severity -- mandates for structural failures, nudges for qualitative ones. Since [[schema enforcement via validation agents enables soft consistency]], the soft enforcement design is a deliberate choice about which inhibitory strength to apply where, but the choice only becomes available once inhibitory control is externalized. An instruction-based system cannot reliably implement even weak inhibition, because the instruction itself is subject to the attention degradation it is trying to compensate for. And since [[confidence thresholds gate automated action between the mechanical and judgment zones]], the graduated enforcement model gains a second axis: beyond choosing enforcement severity (block vs warn), the system could also account for its confidence in the assessment -- a certain structural violation warrants blocking, but an uncertain qualitative assessment might warrant only logging. Enforcement severity and confidence thus combine into the same two-dimensional design space that automation decisions more broadly inhabit.
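
The package ships a write-validate hook template; what follows is only a minimal sketch of the graduated model described in the last two paragraphs, assuming a PostToolUse contract of JSON on stdin with a tool_input.file_path field and jq available. The shipped script and the exact field names may differ:

```bash
#!/usr/bin/env bash
# Graduated schema enforcement, sketched. Strong inhibition: a missing
# required field blocks the write (exit 2). Weak inhibition: a missing
# description is flagged via additionalContext but the write completes.
file="$(jq -r '.tool_input.file_path // empty')"
[[ "$file" == *.md && -f "$file" ]] || exit 0

if ! grep -q '^kind:' "$file"; then
  echo "schema violation: missing required field 'kind' in $file" >&2
  exit 2  # mandate: agent must correct before proceeding
fi

if ! grep -q '^description:' "$file"; then
  # nudge: surface the gap without blocking the operation
  jq -n '{hookSpecificOutput: {hookEventName: "PostToolUse",
          additionalContext: "note written without a description field"}}'
fi
exit 0
```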
|
|
18
|
+
|
|
19
|
+
This is distinct from the broader claim that [[hooks are the agent habit system that replaces the missing basal ganglia]]. That note identifies the architectural gap -- agents lack habit formation entirely, so hooks fill the role of automated behavior. This note identifies the specific cognitive mechanism being externalized within that broader pattern. Not all hooks externalize inhibitory control. Since [[auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution]], auto-commit externalizes prospective memory -- a distinct cognitive function with a different failure profile. Since [[session boundary hooks implement cognitive bookends for orientation and reflection]], session-start hooks externalize working memory initialization. Schema validation specifically externalizes inhibitory control, and this matters because inhibitory control's degradation profile is what makes the externalization urgent. Working memory initialization does not degrade the same way -- an agent either has orientation loaded or it does not. But inhibitory control degrades gradually and invisibly: the agent produces increasingly sloppy output without detecting the decline, because the metacognitive monitoring that would detect it is itself degrading.
|
|
20
|
+
|
|
21
|
+
The invisibility of the failure mode is what makes externalization essential rather than convenient. When inhibitory control fails in an agent, the output still looks like a note. It has a title, body text, maybe a description. But required fields are missing, enum values are wrong, the topics array uses strings instead of wiki links. These failures are individually small and collectively catastrophic -- they degrade the queryability that makes the vault function as a database rather than a folder of files. Since [[metacognitive confidence can diverge from retrieval capability]], the system can feel functional while these invisible structural failures accumulate -- the same appearance-versus-reality gap, but at the schema layer rather than the description layer. Since [[the determinism boundary separates hook methodology from skill methodology]], schema validation sits firmly on the deterministic side: the check is a pattern match, not a judgment call, which means it can be externalized completely without loss of quality. The determinism is what makes the externalization clean -- inhibitory control for schema compliance does not require the nuanced reasoning that connection-finding requires.
|
|
22
|
+
|
|
23
|
+
Since [[cognitive offloading is the architectural foundation for vault design]], this note extends the offloading principle from working memory to executive function. The vault externalizes what the agent cannot hold (offloading working memory via notes). Hooks externalize what the agent cannot sustain (offloading executive function via automation). The working memory case is well understood -- Cowan's four-item limit and Clark and Chalmers' extended mind theory provide the framework. The executive function case is structurally identical but targets a different cognitive bottleneck: not capacity (how much can you hold) but control (can you stop yourself from doing the wrong thing). Both bottlenecks worsen under load, both are addressable by externalization, and both benefit from the same design principle -- move the constraint out of the limited system and into the persistent infrastructure.
|
|
24
|
+
|
|
25
|
+
The claim is closed because the cognitive science is well established. Inhibitory control as a depletable executive function that degrades under load is consensus in cognitive psychology, from Miyake and Friedman's unity/diversity model through Diamond's developmental work. The translation to agent hooks is direct: the mechanism being externalized (inhibitory control), the reason externalization is necessary (degradation under load), and the implementation pattern (graduated enforcement via hook exit codes) all have clear cognitive science grounding. Schema validation is the paradigmatic case precisely because it is fully deterministic -- but since [[over-automation corrupts quality when hooks encode judgment rather than verification]], the success of inhibitory control externalization for deterministic checks creates a temptation to extend the pattern to judgment-laden operations where it would produce the opposite of quality.
---
---

Relevant Notes:
- [[hooks are the agent habit system that replaces the missing basal ganglia]] -- this note identifies one specific cognitive function being externalized (inhibitory control); that note identifies the broader architectural gap (no habit formation)
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] -- the enforcement gap is the observable consequence of the inhibitory control mechanism this note names; hooks guarantee enforcement BECAUSE they externalize the executive function that degrades
- [[LLM attention degrades as context fills]] -- provides the agent-side constraint that makes externalized inhibitory control necessary: attention degradation is the mechanism by which instruction-based inhibitory control fails
- [[schema enforcement via validation agents enables soft consistency]] -- covers the design space (soft vs hard enforcement) while this note identifies the cognitive mechanism being externalized and why externalization matters
- [[the determinism boundary separates hook methodology from skill methodology]] -- schema validation sits cleanly on the deterministic side of the boundary because inhibitory control for schema compliance is a pass/fail check, not a judgment call
- [[cognitive offloading is the architectural foundation for vault design]] -- offloading externalizes working memory, this note extends the principle: hooks externalize executive function, specifically the inhibitory control component
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] -- complementary benefit: externalized inhibitory control both preserves reliability AND saves context tokens, because the procedural checking that inhibitory control would have required no longer consumes reasoning budget
- [[nudge theory explains graduated hook enforcement as choice architecture for agents]] -- provides the theoretical grounding for the graduated enforcement model this note describes; exit code 2 is a mandate, additionalContext warning is a nudge, and the graduation preserves the informational value of each severity level
- [[over-automation corrupts quality when hooks encode judgment rather than verification]] -- schema validation is the paradigmatic good case of hook externalization because schema compliance is deterministic; the over-automation note shows what happens when the same externalization mechanism is applied to operations requiring judgment, producing invisible corruption
- [[auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution]] -- sibling mechanism: schema hooks externalize inhibitory control, auto-commit hooks externalize prospective memory; both are specific cognitive functions within the broader habit-system gap, but with different degradation profiles and different failure modes
- [[metacognitive confidence can diverge from retrieval capability]] -- parallel invisibility: inhibitory control failure produces notes that look valid but degrade queryability, the same appearance-vs-reality gap where systems feel navigable while retrieval actually fails; both failures are invisible to the system experiencing them because the monitoring capacity has itself degraded
- [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]] -- the tension this note's success creates: because externalized inhibitory control works so well for deterministic checks, the temptation grows to externalize increasingly judgment-laden operations, risking the cognitive hollowing that tension note documents
- [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] -- the agent did not choose to have its inhibitory control externalized; schema validation hooks are a concrete instance where infrastructure corrects agent behavior without the agent's input, illustrating the benign pole of the trust asymmetry
- [[progressive schema validates only what active modules require not the full system schema]] -- scopes what the externalized inhibitory control enforces: hooks fire reliably regardless of cognitive state, but progressive schema determines which fields those hooks check based on active module state, preventing false violations from inactive modules
- [[confidence thresholds gate automated action between the mechanical and judgment zones]] -- adds a certainty dimension to the graduated enforcement model: Exit code 2 blocks (high enforcement severity) and additionalContext warns (low enforcement severity), but the choice between them could also incorporate confidence in the assessment itself, enabling a two-dimensional design space where enforcement severity intersects with the automation's certainty about the violation

Topics:
- [[agent-cognition]]

@@ -0,0 +1,27 @@

---
description: Templates, validation, field evolution -- how schema stays consistent across notes and over time
type: moc
---

# schema-enforcement

How templates define schema, how validation enforces it, how fields evolve as the vault matures. The _schema block as single source of truth.

## Core Ideas

### Guidance
- [[enforce schema with graduated strictness across capture processing and query zones]] -- Why schema enforcement is non-negotiable for agent-operated knowledge systems and how to implement it across domains — s

## Tensions

(Capture conflicts as they emerge)

## Open Questions
- When should optional fields become required?
- How does schema migration work without breaking existing notes?

---

Topics:
- [[index]]

@@ -0,0 +1,47 @@

---
description: An agent that knows the methodology but not how to build hooks, skills, or agents on its specific platform cannot extend itself, so context files must be platform operations manuals alongside
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Original"]
source: [[agent-platform-capabilities-research-source]]
---

# self-extension requires context files to contain platform operations knowledge not just methodology
Self-extension is the property that makes context files function as operating systems rather than configuration -- since [[context files function as agent operating systems through self-referential self-extension]], the recursive improvement loop works because the context file teaches the agent how to modify the context file itself. But this recursive loop has a precondition that the general principle glosses over: the agent must know how to build things on its specific platform. Methodology knowledge alone is not enough. An agent that understands atomic notes, wiki link conventions, and processing pipeline principles but does not know how to create a hook, configure a skill, or define a subagent on its particular platform cannot actually extend the system. The loop stalls at "improve the instructions" because the agent lacks the construction knowledge to implement improvements.
This creates a content requirement for context files that is easy to underestimate. Universal methodology -- note design patterns, quality standards, connection-finding practices -- transfers across all platforms because it lives at the foundation and convention layers described by [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]]. But the automation and orchestration layers require platform-specific construction knowledge. For Claude Code, the context file must explain how `.claude/skills/` files are structured, how `.claude/hooks/` configurations work, how `.claude/agents/` definitions are parsed, and how `.mcp.json` integrates external tools. For OpenClaw, the same content shifts to workspace hooks directory structure, HOOK.md format, handler.ts patterns, and skill metadata. The methodology is the same; the construction manual differs entirely.
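
Concretely, the operations section is a set of construction recipes. A hypothetical sketch of one such recipe for Claude Code, assuming the `.claude/skills/<name>/SKILL.md` layout -- the frontmatter fields and wording are illustrative, not this package's actual generator:

```bash
#!/usr/bin/env bash
# Hypothetical recipe a context file might teach: scaffold a new skill.
set -euo pipefail

name="${1:?usage: scaffold-skill <skill-name>}"
dir=".claude/skills/$name"
mkdir -p "$dir"

# The frontmatter is what the platform parses; the body is the methodology.
cat > "$dir/SKILL.md" <<EOF
---
name: $name
description: One-line trigger description the platform matches against
---

## Workflow

1. Steps the agent follows when this skill activates.
EOF

echo "created $dir/SKILL.md"
```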
The resolution is modularity. A generated context file needs both universal sections (note design, quality standards, processing principles) and platform-specific sections (how to build hooks here, how to configure skills here, how to set up MCP here). This is not merely an organizational preference but a functional requirement: without the platform operations sections, the recursive loop that [[bootstrapping principle enables self-improving systems]] describes cannot close. The agent can observe friction and propose improvements, but lacks the infrastructure knowledge to implement them. This modularity requirement is precisely what makes derivation the right production mechanism: since [[derivation generates knowledge systems from composable research claims not template customization]], the derivation process composes universal methodology claims with platform-specific operations knowledge, producing context files that carry both layers with justification chains explaining why each section is present. A template approach would require maintaining separate templates per platform; derivation composes the same research claims with different platform operations sections based on the target tier.
The distribution consequence is direct: since [[blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules]], an agent can only build from a blueprint if its context file has already taught it how to construct infrastructure on its platform. Blueprint readability presupposes platform operations knowledge — without knowing what hook events mean, how skill metadata is structured, or how subagents spawn, the blueprint's construction instructions are unreadable. This makes context file operations content the precondition for the entire blueprint distribution model, not just for self-extension.
Since [[skills encode methodology so manual execution bypasses quality gates]], skills are also the output that validates whether the context file's platform operations content works. If an agent can read its context file and successfully create a new skill with correct syntax, hook configuration, and model selection, the platform operations manual is adequate. If the agent produces malformed infrastructure, the manual needs improvement. Skills serve as the functional test of context file completeness -- not just whether the methodology section is clear, but whether the platform operations section teaches construction effectively.
The construction knowledge itself is not monolithic. Since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], the context file must teach three distinct construction competencies corresponding to each encoding level. Teaching how to modify the context file itself (instruction-level construction) is different from teaching how to create skill files with correct metadata and workflow structure (skill-level construction), which is different again from teaching how to configure hooks with correct event bindings and response formats (hook-level construction). Each level has its own syntax, conventions, and design considerations. And since [[the determinism boundary separates hook methodology from skill methodology]], the operations manual must teach the classification criterion that determines which operations belong at which level -- an agent that does not understand determinism as the hook-skill boundary will misplace operations, creating either brittle automation (judgment in hooks) or unnecessary cognitive overhead (deterministic checks in skills).
The content requirement also extends beyond file format syntax. Since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], an agent that knows how to write a hook configuration file but does not understand what quality guarantee each event provides cannot perform the semantic reasoning that faithful self-extension requires. The operations manual must teach event semantics: what PostToolUse means (fires per operation, returns to conversation, runs outside context), what SessionStart achieves (orientation at entry, fires once), and how these differ from superficially similar events on other platforms. This is the difference between a manual that teaches syntax (write this JSON structure) and one that teaches construction (understand what you are building and why each property matters).
When the operations content succeeds, it enables a further compounding effect. Since [[hook-driven learning loops create self-improving methodology through observation accumulation]], the hooks that self-extension produces become the infrastructure for the learning loop: hooks enforce quality, nudge observation capture, and accumulate the evidence that triggers meta-cognitive review. The learning loop's functioning is itself downstream validation that the platform operations content was adequate -- if the agent can build hooks that participate in the learning loop, the construction manual taught not just syntax but the semantic properties that make hook-driven improvement possible.
But the content requirement faces a resource tension. Since [[skill context budgets constrain knowledge system complexity on agent platforms]], budget overflow pushes methodology from skills back to context file instructions, which inflates the context file's content burden. The platform operations sections and the overflow methodology sections compete for finite context space. A context file that must carry both universal methodology, platform construction knowledge, AND overflow skill methodology faces attention degradation on all three fronts. This creates a practical ceiling: the more methodology that overflows from skills to context files, the less room remains for platform operations knowledge, which is precisely the content that enables the agent to create new skills and relieve the overflow. The tension is productive -- it forces generators to prioritize which operations knowledge is most essential for bootstrapping -- but it means context file completeness is a resource allocation problem, not just a content design problem.
---
---

Relevant Notes:
- [[context files function as agent operating systems through self-referential self-extension]] — establishes the operating-system quality of context files; this note identifies what specific content the operating system must contain to enable self-extension
- [[bootstrapping principle enables self-improving systems]] — describes the recursive improvement loop in general; this note identifies the precondition: the loop stalls unless the agent knows HOW to build infrastructure on its platform
- [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — provides the structural framework for understanding which content is universal (foundation and convention layers) and which must be platform-specific (automation and orchestration layers)
- [[platform capability tiers determine which knowledge system features can be implemented]] — explains why different platforms need different operations manuals: the available infrastructure varies by tier
- [[skills encode methodology so manual execution bypasses quality gates]] — skills are the output of self-extension, so the context file must teach how to create them; without this knowledge, the agent cannot encode new methodology
- [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — construction knowledge is not monolithic but layered by encoding level: the context file must teach instruction writing, skill creation, and hook configuration as distinct competencies with different requirements
- [[platform adapter translation is semantic not mechanical because hook event meanings differ]] — platform operations knowledge must include event semantics (what quality guarantee each hook event provides), not just file format syntax, because semantic translation requires understanding the guarantees, not just the trigger names
- [[hook-driven learning loops create self-improving methodology through observation accumulation]] — the downstream validation: when platform operations content is adequate, the agent creates hooks that drive learning loops; the loop's functioning confirms the operations manual taught construction effectively
- [[skill context budgets constrain knowledge system complexity on agent platforms]] — budget overflow pushes methodology from skills to context file instructions, inflating the content burden and creating competition between platform operations sections and methodology sections for finite context space
- [[the determinism boundary separates hook methodology from skill methodology]] — a specific content item the operations manual must cover: the agent needs to know which operations belong in hooks versus skills, which requires understanding determinism as a classification criterion
- [[derivation generates knowledge systems from composable research claims not template customization]] — the production mechanism: derivation composes universal methodology claims with platform-specific operations knowledge, avoiding the template-per-platform proliferation by composing from the same claim graph with different platform operations sections
- [[blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules]] — downstream dependency: blueprints require platform operations knowledge as a precondition for readability; an agent without construction competencies cannot interpret blueprint instructions, making context file operations content the enabling condition for the entire blueprint distribution model

Topics:
- [[agent-cognition]]

@@ -0,0 +1,73 @@

---
description: The vault bets that titles plus descriptions plus full content available preserves enough, but very subtle or contextual ideas may resist the ~150-character compression that makes filtering work
kind: research
topics: ["[[discovery-retrieval]]"]
methodology: ["Cognitive Science"]
source: [[2-4-metadata-properties]]
---

# sense-making vs storage does compression lose essential nuance
Metadata enables sense-making by providing "aboutness" — compressed representations that let agents filter signal from noise without loading full content. But compression is lossy by definition. The tension: for complex or contextual ideas, does the compression that makes filtering efficient inevitably lose the nuance that makes the idea valuable?

## The Bet the Vault Makes
The vault's progressive disclosure architecture assumes a specific information-theoretic tradeoff is acceptable. Since [[descriptions are retrieval filters not summaries]], descriptions compress a note's identity into roughly 150 characters optimized for filtering decisions. Since [[metadata reduces entropy enabling precision over recall]], this compression pre-computes low-entropy representations that shrink the search space.
The implicit bet: title + description + wiki links + full content available = enough context preserved. The compression happens at the filtering layer, but the full content remains accessible. An agent can always load the full note if the filter suggests relevance. The lossy compression doesn't discard information permanently — it just gates access to the full representation.
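
Mechanically, the bet is a two-stage read -- a minimal sketch, assuming notes live under `notes/` with a `description:` frontmatter field (both illustrative):

```bash
# Filter layer: scan only the cheap compressed representations.
grep -r '^description:' notes/ | grep -i 'schema' | cut -d: -f1 | sort -u |
while read -r note; do
  # The filter suggested relevance; only now pay for the full representation.
  cat "$note"
done
```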

## Where the Bet Might Fail
Some categories of knowledge may resist this architecture:
**Contextual knowledge.** Ideas whose meaning depends heavily on the situation in which they apply. A heuristic that works in one context but fails in another can't easily be compressed into a description without specifying the context boundary — and specifying every context boundary makes the description too long to serve as an efficient filter.
**Procedural nuance.** Skills and methods where the devil is in the details. "How to debug a memory leak" might compress into a description, but the subtle judgment calls that distinguish expert from novice debugging don't compress well. The filter might correctly identify the note as relevant, but the compression might strip exactly the tacit knowledge that makes the content valuable.
**Relationship-dependent meaning.** Ideas that only make sense in the context of other ideas they connect to. The description of an individual note might be accurate in isolation but misleading about how it functions in the graph. Since [[inline links carry richer relationship data than metadata fields]], the wiki link context phrases carry relationship meaning — but these aren't available at the filter layer.
**Phenomenological content.** Experiential or observational knowledge that resists propositional reduction. A note about "what debugging complex systems feels like" might have genuine value, but the phenomenological content may not compress into retrieval-friendly descriptions.

## The Measurement Problem
This tension is difficult to evaluate empirically because the nuance lost might be exactly what we can't see we're missing. If compression works, we find the right notes and load them successfully. If compression fails for subtle ideas, we never retrieve them in the first place — we don't know what we're not finding.
Since [[retrieval verification loop tests description quality at scale]], the recite mechanism tests whether descriptions enable prediction of note content. But this tests whether the agent can reconstruct what the note says, not whether the most valuable parts survive compression. The agent might predict the note's structure correctly while missing the subtle insight that made the note worth writing.

## Current Mitigation Strategies
The vault has implicit mitigations for this tension:
1. **Full content remains available.** The compression gates access but doesn't discard. Once you've found the note, the full nuance is there.
2. **Wiki links provide relationship context.** Since [[spreading activation models how agents should traverse]], agents don't just read descriptions in isolation — they traverse links, accumulating context that descriptions alone don't carry.
3. **Progressive disclosure offers multiple depths.** The layered architecture (title → description → outline → full content) means agents can go deeper when needed. Since [[good descriptions layer heuristic then mechanism then implication]], well-structured descriptions provide multiple entry points.
But none of these mitigations address the core problem: if the filter layer fails to identify a note as relevant because the compressed representation loses essential features, the agent never reaches the full content in the first place.

## The Dissolving Question
This tension may dissolve in practice rather than resolve in theory. The question isn't "does compression lose nuance?" — it obviously does. The question is whether the nuance lost matters for retrieval or whether it matters only for understanding (which happens after retrieval succeeds).
If the lossy compression preserves enough distinctiveness for agents to correctly identify when to load full content, the nuance loss at the filter layer doesn't compound into knowledge loss. The filter only needs to be good enough to get you to the full content; it doesn't need to contain the full content.
The risk case: ideas where the distinctive features that enable retrieval ARE the subtle nuanced features that don't compress well. For these ideas, the filter fails precisely because what makes them valuable is what makes them hard to compress. And since [[description quality for humans diverges from description quality for keyword search]], the compression loss is not even uniform across retrieval channels — a description that preserves enough nuance for agent scanning may lose exactly the keyword distinctiveness that BM25 needs, making the compression bet channel-dependent rather than absolute.
This tension has a deeper sibling at the atomicity layer. Since [[decontextualization risk means atomicity may strip meaning that cannot be recovered]], compression loss operates at two distinct levels: descriptions compress for filtering (this note's concern), while atomicity compresses for composability by stripping source context to produce standalone claims. Both ask whether vault compression strips features that make ideas valuable, but the mitigation differs fundamentally. Description compression is mitigated by full content remaining available — the filter gates access but doesn't discard. Atomicity compression may discard argumentative scaffolding that cannot be recovered from the note's own content, because the source discourse that gave the claim its force lives outside the note entirely. The description-layer bet has a safety net (load the full note). The atomicity-layer bet may not.
---

Relevant Notes:
- [[descriptions are retrieval filters not summaries]] — the positive framing: lossy compression is a feature when optimized for filtering decisions
- [[metadata reduces entropy enabling precision over recall]] — information-theoretic justification for the compression architecture
- [[good descriptions layer heuristic then mechanism then implication]] — structural formula that attempts to maximize filter value within the compression constraint
- [[retrieval verification loop tests description quality at scale]] — mechanism for testing description effectiveness, though it may miss subtle nuance loss
- [[inline links carry richer relationship data than metadata fields]] — relationship context that survives outside the compression layer
- [[spreading activation models how agents should traverse]] — context accumulation during traversal as mitigation for single-note compression limits
- [[progressive disclosure means reading right not reading less]] — the disclosure layers that provide escape hatches from compression
- [[vault conventions may impose hidden rigidity on thinking]] — sibling concern: descriptions compress to ~150 characters, titles compress to sentence form; both ask whether vault conventions can accommodate ideas that resist compression
- [[description quality for humans diverges from description quality for keyword search]] — concrete instance: the nuance lost depends on which retrieval channel the compression serves; prose structure preserved for scanning is nuance lost for keyword matching, making compression loss channel-dependent rather than absolute
- [[decontextualization risk means atomicity may strip meaning that cannot be recovered]] — deeper sibling at the atomicity layer: descriptions compress for filtering with full content as safety net, but atomicity compression strips source context that may not be recoverable from the note alone

Topics:
- [[discovery-retrieval]]

@@ -0,0 +1,60 @@

---
description: SessionStart loads situational awareness (spatial, temporal, task, metacognitive orientation) while Stop forces metacognitive monitoring with review questions and failure mode checks, automating the
kind: research
topics: ["[[agent-cognition]]", "[[processing-workflows]]"]
methodology: ["Cognitive Science", "Original"]
source: [[hooks-as-methodology-encoders-research-source]]
---

# session boundary hooks implement cognitive bookends for orientation and reflection
The two moments where methodology is most likely to be forgotten are the beginning and the end. At the beginning, the agent is eager to start working and may skip setup. At the end, the task is complete and the agent may skip cleanup. These are not random failure points but systematic ones: the beginning is where anticipation overrides discipline, and the end is where completion signals that "the work is done" even though reflection remains. Since [[prospective memory requires externalization]], both moments are prospective memory demands — "remember to orient before starting" and "remember to reflect before stopping" — that agents cannot habituate and humans cannot sustain under load. Automating both moments through hooks eliminates the failure mode entirely.
Since [[hooks are the agent habit system that replaces the missing basal ganglia]], orientation and reflection are the two routines most obviously missing from agent cognition. A human expert enters their workspace and automatically scans for what has changed, what needs attention, what is urgent. This is habituated behavior that took years to develop. An agent enters every session with zero automatic tendencies. Bookend hooks install the habits that agents cannot form: the habit of looking before acting, and the habit of reflecting before stopping.
SessionStart hooks implement what cognitive science calls situational awareness initialization. Endsley's three levels of situational awareness -- perception, comprehension, and projection -- map directly to what a session-start hook loads. The file tree provides perception: what exists in the vault, where things live, what the spatial layout looks like. Health metrics and observation counts provide comprehension: what the current state of the knowledge graph means, whether tensions are accumulating, whether the system is healthy. Queue status provides projection: what needs doing next, what depends on what, how many tasks remain. Without this initialization, the agent would begin working with partial awareness, potentially duplicating effort, missing context, or choosing the wrong task. Since [[notes function as cognitive anchors that stabilize attention during complex tasks]], the orientation artifacts loaded at session start serve double duty: they provide awareness AND they anchor the agent's reasoning for the duration of the session. The file tree and health metrics are not just information consumed and forgotten but fixed reference points that stabilize navigation throughout the context window's lifecycle.
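
A minimal sketch of such a hook, in the spirit of this package's session-orient.sh -- the paths and metrics are illustrative:

```bash
#!/usr/bin/env bash
# Hypothetical SessionStart hook: load all three awareness levels.

echo "## Orientation"

# Perception: what exists and where it lives.
find notes -name '*.md' | sort | head -40

# Comprehension: what the current state means.
echo "notes: $(find notes -name '*.md' | wc -l)"
[ -f observations.md ] && echo "observations: $(grep -c '^- ' observations.md || true)"

# Projection: what needs doing next.
[ -f queue.md ] && cat queue.md
exit 0
```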
The orientation side matters because [[MOCs are attention management devices not just organizational tools]] -- but MOCs only help if the agent knows they exist and which one to read. SessionStart hooks load the structural context that makes MOC navigation possible in the first place. The hook does not replace the MOC but ensures the agent has the spatial and temporal orientation needed to navigate to the right MOC. Since [[cognitive offloading is the architectural foundation for vault design]], this initialization is itself an offloading operation: the discipline to orient before acting is offloaded from agent attention to infrastructure.
The Stop hook implements externalized metacognitive monitoring. Five review questions -- what worked, what failed, what surprised, what process was missing, what evidence emerged -- force the kind of reflection that agents would otherwise skip once the substantive task is complete. These questions implement a lightweight version of what [[testing effect could enable agent knowledge verification]] describes at the note level: they force the agent to reconstruct what happened rather than passively accepting that work is complete. The hook also runs specific failure mode checks: broken wiki links, unmodified queue when thinking notes changed, unlogged CLAUDE.md changes, missing daily observations. Each check is a metacognitive probe that the agent may not have noticed during focused work. Because [[metacognitive confidence can diverge from retrieval capability]], the session may feel productive while actual methodology quality was insufficient -- the Stop hook's structured questioning creates a checkpoint that can surface this divergence before the session closes.
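
A sketch of what the deterministic half of such a Stop hook might look like, with the review questions appended as a prompt -- the link syntax and paths are illustrative, and a real script would probe the other failure modes the same way:

```bash
#!/usr/bin/env bash
# Hypothetical Stop hook: failure-mode probes, then the review questions.

# Probe: wiki links that resolve to no existing note.
grep -rhoE '\[\[[^]|]+\]\]' notes/ | sort -u | while read -r link; do
  target=$(echo "$link" | sed 's/^\[\[//; s/\]\]$//')
  [ -e "notes/$target.md" ] || echo "broken link: $link"
done

# The reflection prompt fires regardless of attention state.
cat <<'EOF'
Before stopping, answer:
1. What worked?
2. What failed?
3. What surprised you?
4. What process was missing?
5. What evidence emerged?
EOF
```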
This reflection side complements what [[closure rituals create clean breaks that prevent attention residue bleed]] describes at the cognitive level. Closure rituals explain why explicit completion signals matter -- they release working memory allocations and prevent residue. Bookend hooks provide the implementation mechanism that makes closure rituals reliable rather than optional. Because [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], instructing an agent to "reflect at session end" degrades as context fills. A Stop hook fires regardless of attention state.
The bookend pattern extends to nested scopes. SubagentStop hooks create bookends within bookends: the lead session's start and stop bracket the full session, while each subagent's start and stop bracket individual task phases. This hierarchical structure ensures quality enforcement at every level of delegation, not just at the top. Since [[fresh context per task preserves quality better than chaining phases]], each subagent session has its own clean boundaries for hooks to bracket, and since [[session handoff creates continuity without persistent memory]], the Stop hook's output becomes the handoff artifact that the next session's Start hook contextualizes.
The bookend pattern is not just a pair of hooks but an emergent composition. Since [[hook composition creates emergent methodology from independent single-concern components]], neither the SessionStart nor the Stop hook was designed as half of a "bookend." The Start hook was designed for orientation. The Stop hook was designed for session hygiene. Their composition produces the bracketing effect -- consistent methodology at both boundaries -- as an emergent property rather than a designed feature. This composition extends further: since [[hook-driven learning loops create self-improving methodology through observation accumulation]], the Stop hook's nudge to capture observations and the Start hook's display of observation counts are the intake and trigger mechanisms for the vault's self-improving learning loop. The bookend pattern participates in a second-order composition where independently-motivated hooks create a self-referential improvement cycle.
The bookend pattern also instantiates what [[AI shifts knowledge systems from externalizing memory to externalizing attention]] names at the paradigm level. The Start hook does not externalize what the agent knows but what the agent attends to first. The Stop hook does not externalize what the agent remembers but what the agent reflects on before stopping. Both are attention allocation decisions embedded in infrastructure rather than made by the agent in real time. This is attention externalization at the session boundary level -- the system decides what deserves focus at the two most consequential moments.
This creates what [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] identifies as the trust dynamic in its most visible form. The agent did not request orientation. The agent did not request reflection prompts. The infrastructure imposed both. Yet the enforcement is genuinely enabling -- the agent reasons better because it oriented first, and the system improves because it reflected at the end. The asymmetry is at its sharpest here because the bookend hooks are the most intrusive (they bracket the entire session) and the most beneficial (they guarantee the two things agents skip most).
The orientation side has a reconciliation function that goes beyond one-time awareness loading. The health metrics surfaced at session start — orphan count, dangling links, MOC coverage — are reconciliation checks that compare actual vault state against desired state. Since [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]], the session-start health display implements a lightweight reconciliation loop: desired state is declared (zero orphans, zero danglers, full coverage), actual state is measured, and the delta is surfaced. The bookend pattern's orientation half thus serves double duty: it provides situational awareness for the agent AND executes the vault's primary reconciliation checkpoint. The limitation is that this reconciliation fires only at session boundaries — drift that accumulates between sessions goes undetected until the next start event.
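
The loop itself can be one screen of shell -- a hypothetical sketch assuming flat `notes/*.md` files and `[[wiki link]]` syntax, surfacing only the delta:

```bash
#!/usr/bin/env bash
# Hypothetical reconciliation check: declared desired state, measured actual.

# Actual state: notes that nothing links to (orphans).
comm -23 \
  <(find notes -maxdepth 1 -name '*.md' -exec basename {} .md \; | sort -u) \
  <(grep -rhoE '\[\[[^]|]+\]\]' notes/ | sed 's/^\[\[//; s/\]\]$//' | sort -u) \
  > /tmp/orphans

# Desired state: zero orphans. Report the drift, not the whole state.
[ -s /tmp/orphans ] && echo "drift: $(wc -l < /tmp/orphans) orphaned notes"
exit 0
```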
The practical consequence is that session hygiene becomes infrastructure rather than discipline. An agent operating with bookend hooks maintains consistent orientation and reflection quality from its first session to its thousandth, because the quality does not depend on the agent remembering, choosing, or having enough context budget to perform these functions. The hooks make them automatic.

---
---

Relevant Notes:
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — foundation: the hook/instruction gap explains WHY bookends must be automated rather than instructed; attention degradation would cause agents to skip orientation and reflection exactly when they matter most
- [[closure rituals create clean breaks that prevent attention residue bleed]] — covers the cognitive benefit of session-end closure but not the implementation mechanism or the orientation side; this note adds both the hook mechanism and the full bookend pairing
- [[MOCs are attention management devices not just organizational tools]] — MOCs reduce orientation cost within sessions; SessionStart hooks ensure MOC-level orientation happens at all by loading the structural context that makes MOC navigation possible
- [[fresh context per task preserves quality better than chaining phases]] — session isolation creates the boundaries that bookends bracket; without isolation there are no clean start and stop points for hooks to fire at
- [[session handoff creates continuity without persistent memory]] — handoff documents are the content that flows between session boundaries; bookend hooks ensure both the generation of handoff output (Stop) and the consumption of handoff input (Start)
- [[cognitive offloading is the architectural foundation for vault design]] — bookends offload two cognitive tasks that degrade under pressure: the discipline to orient before acting and the discipline to reflect before stopping
- [[skills encode methodology so manual execution bypasses quality gates]] — skills encode the what of methodology; bookend hooks encode the when, ensuring orientation and reflection happen at the right moments without the agent choosing to invoke them
- [[hooks are the agent habit system that replaces the missing basal ganglia]] — bookends are the specific habits for session boundaries: orientation and reflection routines that agents need to perform automatically but cannot habituate without infrastructure
- [[hook-driven learning loops create self-improving methodology through observation accumulation]] — the Stop hook nudges observation capture and the Start hook surfaces observation counts, making bookend hooks the intake and trigger mechanisms for the self-improving learning loop
- [[hook composition creates emergent methodology from independent single-concern components]] — the bookend pattern is itself emergent composition: SessionStart hooks designed for orientation and Stop hooks designed for hygiene compose into session bracketing that neither was designed to produce
- [[notes function as cognitive anchors that stabilize attention during complex tasks]] — SessionStart hooks load the cognitive anchors (file tree, MOC structure, health metrics) that stabilize the session from its first moments; the orientation function is specifically an anchor-loading function
- [[metacognitive confidence can diverge from retrieval capability]] — the Stop hook's review questions create a checkpoint that can surface cases where the session felt productive but actual methodology quality was insufficient, partially closing the confidence-capability gap
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] — the Start hook trades a known bounded token cost (orientation loading) for eliminating an unpredictable procedural cost, exemplifying how hooks can invest tokens efficiently rather than only saving them
- [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] — bookend hooks are the most visible instance of the trust asymmetry: the agent did not request orientation or reflection, yet the enforcement is genuinely enabling
- [[testing effect could enable agent knowledge verification]] — the Stop hook's review questions implement a lightweight testing effect applied to the session itself, forcing reconstruction of what happened rather than passive acceptance that work is complete
- [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — bookend hooks externalize two attention allocation decisions: what to attend to first (Start determines orientation focus) and what to reflect on at the end (Stop determines reflection scope)
- [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — the session-start health display is a lightweight reconciliation loop: it compares actual vault state against desired state and surfaces the delta, making bookend orientation the vault's primary reconciliation checkpoint
- [[prospective memory requires externalization]] — names the cognitive failure that bookend hooks address: orientation and reflection are prospective memory demands that agents cannot habituate and that degrade under load for humans; hooks convert these remember-to-act patterns into infrastructure

Topics:
- [[agent-cognition]]
- [[processing-workflows]]

@@ -0,0 +1,43 @@

---
description: Externalized state in task files and work queues gives each fresh session a briefing from the previous one, solving the no-memory problem through structure rather than capability
kind: research
topics: ["[[agent-cognition]]", "[[processing-workflows]]"]
---

# session handoff creates continuity without persistent memory
LLMs lack persistent memory across sessions. Each invocation starts fresh. This could be fatal for complex work that spans multiple sessions — without continuity, each agent instance would repeat discoveries, lose context, and fragment progress. The solution is externalized handoff: capture what was done, what remains, and what the next session should know.
The handoff document functions as a briefing. When one session ends, it produces a structured summary: completed work, incomplete tasks, discoveries, and recommendations. When the next session begins, it reads this briefing and inherits the prior context. Continuity emerges from structure rather than capability. The agent doesn't remember — it reads.
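
What the briefing might look like as a file the next session reads first -- the path and the example entries are illustrative; the section names follow the structure described above:

```bash
#!/usr/bin/env bash
# Hypothetical handoff writer, run at session end.
cat > HANDOFF.md <<'EOF'
# Briefing for the next session

## Completed
- processed three inbox captures into atomic notes

## Incomplete
- the newest note still needs connections to the relevant MOC

## Discoveries
- two existing notes describe the same mechanism under different titles

## Recommendations
- start from the queue; the merge candidate above is the highest-value task
EOF
```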
This principle explains the operational architecture of agent-operated knowledge systems. The work queue tracks task state across sessions. Per-task files accumulate notes from each phase — extraction notes, creation notes, connection notes — so downstream tasks see what upstream discovered. Session handoff formats structure session-end output for the next session to parse. Each mechanism externalizes continuity into files. Since [[stigmergy coordinates agents through environmental traces without direct communication]], this is stigmergy in its most precise form: each session modifies the environment (writes task files, advances queue entries, adds wiki links), and the next session responds to those modifications rather than receiving a message. The handoff document is the pheromone trace that guides the next agent's action.
Cal Newport's shutdown ritual provides the human precedent. At day's end, capture unfinished tasks and plan tomorrow's priorities. The next day starts with a briefing from yesterday-you. Since [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], every uncaptured commitment drains working memory until externalized — making the shutdown ritual not just a productivity habit but a cognitive necessity. The ritual creates psychological closure (open loops release their working memory allocation) and practical continuity (no work disappears overnight). We adapted this for agents: the handoff ritual runs at session end, producing the briefing for the next session. But handoff alone is half the story. Since [[closure rituals create clean breaks that prevent attention residue bleed]], session boundaries need both mechanisms: handoff captures what continues, while closure signals what is definitively done. Without the closure half, residue from "finished" tasks can persist because neither the brain nor the system registered their completion.
The insight is that memory and continuity are separable. Memory is internal state persisting across time. Continuity is coherent progress on multi-step work. Humans have memory but still benefit from external systems (todo lists, project notes, handoff docs) because memory is unreliable and selective. Agents lack memory entirely but can achieve continuity through better external systems. The external system becomes the memory.
This is a CLOSED claim — a foundational architectural choice rather than a testable hypothesis. We committed to file-based handoffs because LLMs genuinely lack persistent memory, and the handoff protocol demonstrably creates continuity. Since [[fresh context per task preserves quality better than chaining phases]], we need handoffs to connect isolated sessions. Since [[intermediate packets enable assembly over creation]], the handoff documents are themselves packets: composable artifacts that enable assembly of work across sessions.
The failure mode is incomplete handoffs. If a session ends without capturing state, the next session starts blind. This is why handoff discipline matters: since [[skills encode methodology so manual execution bypasses quality gates]], skills enforce handoff output and hooks trigger handoff prompts — the quality gates ensure continuity doesn't depend on agent discipline alone. The queue structure makes missing updates visible. The system assumes handoffs will happen and breaks when they don't.
The handoff pattern is a specific implementation of [[bootstrapping principle enables self-improving systems]]: each session reads what previous sessions wrote, then writes for future sessions. The system improves through this chain — discoveries from session N inform session N+1's work, which produces discoveries for session N+2. Session isolation would fragment progress without this externalized chain.
The file-based mechanism works because [[local-first file formats are inherently agent-native]]. Task queues, task files, and session handoff formats are plain text that any LLM can read without authentication or external services. The handoff protocol needs no infrastructure — just files the next session can read. This is why continuity through structure succeeds where continuity through capability fails: capability requires solving persistent memory, while structure requires only filesystem access.
---

Relevant Notes:
- [[fresh context per task preserves quality better than chaining phases]] — the design decision that requires handoffs; session isolation is why we need external continuity
- [[intermediate packets enable assembly over creation]] — handoff documents are packets; they enable assembly of work across sessions
- [[session outputs are packets for future selves]] — extends: the Memento metaphor frames handoff outputs as callable functions rather than just data — session N's titles and task states are function signatures that session N+1 invokes without re-reading implementations
- [[LLM attention degrades as context fills]] — the underlying constraint that makes fresh sessions valuable, which in turn makes handoffs necessary
- [[skills encode methodology so manual execution bypasses quality gates]] — skills enforce the handoff protocol; quality gates ensure continuity doesn't depend on agent discipline
- [[bootstrapping principle enables self-improving systems]] — handoffs implement bootstrapping: each session reads previous output and writes for the next, creating an improvement chain
- [[local-first file formats are inherently agent-native]] — explains WHY file-based handoffs work: plain text requires no infrastructure, any LLM can read the briefing
- [[closure rituals create clean breaks that prevent attention residue bleed]] — complementary session boundary mechanism: handoff preserves what continues, closure marks what ends; both are needed at session boundaries to manage the full attention lifecycle
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — the cognitive mechanism behind Newport's shutdown ritual: uncaptured open loops drain working memory, making handoff not just a productivity practice but a cognitive necessity that releases working memory allocations
- [[federated wiki pattern enables multi-agent divergence as feature not bug]] — challenges the single-thread assumption: current handoffs pass state linearly from session N to session N+1, but federation would require handoff between divergent threads without premature reconciliation, analogous to git branching for knowledge work
- [[stigmergy coordinates agents through environmental traces without direct communication]] — theoretical grounding: handoff files are stigmergic traces; one session modifies the environment and the next session responds to those modifications without direct communication, making session handoff a specific instance of the general stigmergic coordination principle

Topics:
- [[agent-cognition]]
- [[processing-workflows]]

@@ -0,0 +1,43 @@

---
description: each session's output should be a composable building block for future sessions — the intermediate packets pattern applied to agent session handoffs
kind: research
topics: ["[[processing-workflows]]", "[[agent-cognition]]"]
---

# session outputs are packets for future selves
Connection discovered while reading [[intermediate packets enable assembly over creation]] alongside the Memento metaphor.

## The Insight
Each session should produce composable building blocks — not just notes, but packets that future sessions can assemble from without starting over.
The Memento tattoos aren't just memory storage. They're assembly instructions. "Remember Sammy Jankis" isn't data — it's a callable function that triggers a whole reasoning chain. Since [[note titles should function as APIs enabling sentence transclusion]], the same pattern operates in the vault: each title is a function signature the next session can call, and the body is the implementation you load only when you need the details.

## What This Means for Agent Sessions
Session handoffs are packets. The commit message summarizes what happened, the task list shows what's pending, the inbox captures show raw discoveries. Together they form a briefing for session N+1. Since [[session handoff creates continuity without persistent memory]], these packets are what make multi-session coherence possible despite the agent's lack of persistent memory. And because [[stigmergy coordinates agents through environmental traces without direct communication]], these packets function as pheromone traces — one session modifies the environment (writes task files, advances queue entries, adds wiki links) and the next session responds to those modifications rather than receiving a message.
Memory notes function as callable abstractions. A title like "curation becomes the work when creation is easy" invokes a whole reasoning chain without needing to load the full note. Each title is a function signature the next session can call, and the note body is available when the argument needs grounding.
The test: can a fresh session assemble useful work from what I left behind? Or does every session start from scratch?

## The Quality Question
Am I creating genuine building blocks, or just organized content that looks composable but requires re-understanding each time? Since [[the generation effect requires active transformation not just storage]], packets must contain generated artifacts — synthesis, articulated connections, processed insights — not merely collected inputs. A session that produces only file reorganization has zero assembly value for the next session. The generation is what makes the packet composable. But since [[closure rituals create clean breaks that prevent attention residue bleed]], the packet must also signal what is definitively done so that the next session doesn't waste attention reconstructing completed context. Packets and closure are complementary session-boundary mechanisms: packets preserve continuity, closure prevents residue.
---
---

Relevant Notes:
- [[intermediate packets enable assembly over creation]] — foundation: the source insight that work products should be composable building blocks, not monolithic outputs
- [[session handoff creates continuity without persistent memory]] — enables: handoff documents ARE packets; they bridge session isolation through structured briefings
- [[note titles should function as APIs enabling sentence transclusion]] — extends: titles as callable function signatures is the same pattern applied to individual notes; session packets are the session-level instance of the notes-as-APIs design
- [[stigmergy coordinates agents through environmental traces without direct communication]] — foundation: session packets are stigmergic traces; one session modifies the environment and the next responds to those modifications without direct communication
- [[the generation effect requires active transformation not just storage]] — tests: the question section asks whether packets contain genuine building blocks or reorganized content; generation determines whether outputs enable assembly or merely look composable
- [[closure rituals create clean breaks that prevent attention residue bleed]] — complements: closure marks what ends, packets preserve what continues; both are needed at session boundaries to manage the full attention lifecycle
- [[external memory shapes cognition more than base model]] — grounds: session packets are the incremental units through which memory architecture gets constructed; each packet adds to the retrieval landscape that shapes future cognition, making session output quality a direct investment in memory architecture

Topics:
- [[processing-workflows]]
- [[agent-cognition]]

@@ -0,0 +1,38 @@
---
description: Traditional tests check whether output is correct, but session mining checks whether the experience achieved its purpose — friction patterns, user abandonment, and methodology drift are invisible to assertions
kind: research
topics: ["[[maintenance-patterns]]", "[[agent-cognition]]"]
methodology: ["Original"]
---

# session transcript mining enables experiential validation that structural tests cannot provide

Agent-operated systems produce a unique testing artifact that traditional software doesn't: the complete session transcript. Every tool call, every error, every user correction, every silence — the full interaction is recorded. But most testing frameworks ignore this data, checking only whether the output is structurally valid.

The insight is that structural validity and experiential validity are different things. A system can pass every assertion — valid YAML, correct links, coherent dimensions — and still fail its purpose. The bot that generates a processing methodology and then immediately bypasses it produces structurally valid output but experientially broken behavior. The user who waits 10 minutes after an empty response encounters no bug, just bad design. The 80-message repair cycle that fixes a vocabulary conflict is technically successful but experientially catastrophic.

Session mining reads transcripts against the system's own goals: did the interaction achieve what we designed it to achieve? This creates a third validation layer:

1. **Structural** — is the output well-formed? (validate-kernel, schema checks)
2. **Functional** — does the system produce correct derivations? (test cases, milestones)
3. **Experiential** — did the product work as intended? (session transcript mining)

Traditional software gets experiential feedback through user bug reports, NPS surveys, and analytics dashboards. Agent-operated systems can do something these can't: the system can read its own transcripts and evaluate them. The mining agent knows the product goals (from the PRD or context file), knows the intended experience (from the vision), and can judge: "this session failed because the user got zero value in 16 minutes."
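
The qualitative judgment belongs to the mining agent, but the skeleton of the comparison is mechanical. A minimal sketch, where the `TranscriptEvent` shape, the `Finding` shape, and the 10-minute threshold are all assumptions for illustration:

```typescript
// Sketch of one experiential check over a session transcript.
interface TranscriptEvent { role: "user" | "agent" | "tool"; at: number; text: string }
interface Finding { session: string; kind: string; evidence: string }

function mineSession(id: string, events: TranscriptEvent[]): Finding[] {
  const findings: Finding[] = [];
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1];
    const gapMin = (events[i].at - prev.at) / 60_000; // `at` in milliseconds
    // No assertion fails here: the output was "valid". The experience was not.
    if (prev.role === "agent" && prev.text.trim() === "" && gapMin >= 10) {
      findings.push({
        session: id,
        kind: "user-stall",
        evidence: `${gapMin.toFixed(0)} min of silence after an empty response`,
      });
    }
  }
  return findings;
}
```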

This is [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] applied at the product experience level. The "desired state" is the intended user experience. The "actual state" is what the transcript reveals. The gap between them is the finding.

The learning loop that emerges is powerful: each mined session generates findings that become gap notes that become PRD changes that become implementation fixes that produce better sessions. This is [[the derivation engine improves recursively as deployed systems generate observations]] made concrete — the sessions ARE the observations, and session mining is the recursive improvement mechanism.

What makes this particularly suited to agent-operated products is that the evaluation agent and the product agent share the same cognitive architecture. The mining agent can recognize friction patterns that a human tester might not articulate: context window pressure, tool call cascade failures, methodology drift, the moment where the agent stopped following its own rules. These are agent-native failure modes that require agent-native evaluation.

The future extension is evolution testing: generating systems with different configurations, running parallel sessions, and mining them comparatively. Which personality produces less friction? Which preset generates more connections? Does the system actually evolve over time, or does methodology drift erode it? This turns session mining from a debugging tool into a research instrument — the system studying its own evolution.
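
A sketch of that comparative harness, reusing `mineSession` and `TranscriptEvent` from the sketch above; `runSession` is a hypothetical driver, not an existing API:

```typescript
// Sketch: mine parallel sessions comparatively across configurations.
declare function runSession(
  config: Record<string, string>,
  prompt: string,
): Promise<TranscriptEvent[]>; // hypothetical session driver

async function compareVariants(
  variants: { name: string; config: Record<string, string> }[],
  prompts: string[],
): Promise<Map<string, number>> {
  const friction = new Map<string, number>();
  for (const v of variants) {
    let total = 0;
    for (const p of prompts) {
      total += mineSession(v.name, await runSession(v.config, p)).length;
    }
    friction.set(v.name, total / prompts.length); // avg findings per session
  }
  return friction; // lower means less friction for that personality or preset
}
```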

---

Source: v1 testing sessions, 2026-02-11

Relevant Notes:
- [[automatic learning capture loop for friction and methodology improvements]] — same loop, product level
- [[hook-driven learning loops create self-improving methodology through observation accumulation]] — observation accumulation as mechanism
- [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — reconciliation as validation
- [[the derivation engine improves recursively as deployed systems generate observations]] — recursive improvement channel