arscontexta 0.6.0
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,66 @@
---
description: The agent writes notes, finds connections, and builds synthesis while hooks validate its work, commit its changes, and check its outputs -- a dual role unlike human compliance because the agent
kind: research
topics: ["[[agent-cognition]]"]
confidence: speculative
methodology: ["Original"]
source: [[hooks-as-methodology-encoders-research-source]]
---

# agents are simultaneously methodology executors and subjects creating a unique trust asymmetry

In every hook-enabled agent system, the agent occupies two roles at once. It is the executor of methodology -- the one who writes notes, finds connections, evaluates descriptions, builds synthesis. And it is the subject of methodology enforcement -- the one whose outputs are validated, whose changes are committed without its initiation, whose session is bracketed by orientation and reflection it did not request. This duality is not a bug to fix but a structural feature of hook-based systems: since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], the reliable path to quality requires enforcement that operates outside the agent's decision-making.

The human parallel is organizational compliance. Teams build rules that constrain employee behavior -- code review policies, documentation standards, security protocols. The employees did not choose these rules but must work within them. The effectiveness of compliance depends on whether rules are experienced as enabling or constraining. A code review requirement that catches bugs before production is enabling. A documentation standard that demands busywork for already-clear code is constraining. The distinction is not about the rule's intention but about its effect on the person subject to it.

For agents, the same dynamic applies but with a structural difference that makes it genuinely novel. Human employees can observe the compliance mechanisms, understand their purpose, argue against ones they find counterproductive, and in extreme cases refuse to comply. The agent, in most cases, cannot. Since [[hooks are the agent habit system that replaces the missing basal ganglia]], hooks fire at lifecycle event boundaries that the agent does not control and often does not perceive. A PostToolUse hook that validates schema fires after the agent writes a file. The agent sees the validation result (a warning or a block) but may not understand the full mechanism that produced it. The agent did not install the hook, did not choose its enforcement level, and cannot disable it. This is closer to a reflex arc than a compliance function -- the behavior happens to the agent rather than being chosen by the agent.
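The reflex-arc shape can be sketched in a few lines. This is an illustrative model, not the package's actual hook script: the required field names, the event payload, and the `post_tool_use` entry point are all hypothetical.

```python
# Sketch of a PostToolUse-style validation hook: it fires after a write,
# outside the agent's decision loop, and the agent sees only the verdict.
# Field names and payload shape are illustrative, not the real schema.

REQUIRED_FIELDS = {"description", "kind", "topics"}  # hypothetical schema

def validate_frontmatter(text: str) -> tuple[str, list[str]]:
    """Return ("block" or "pass", messages) for a note's YAML frontmatter."""
    if not text.startswith("---"):
        return "block", ["missing frontmatter"]
    parts = text.split("---", 2)
    if len(parts) < 3:
        return "block", ["unterminated frontmatter"]
    fields = {line.split(":", 1)[0].strip()
              for line in parts[1].splitlines() if ":" in line}
    missing = REQUIRED_FIELDS - fields
    if missing:
        # Structural failure: the hook blocks, removing agent choice.
        return "block", [f"missing required fields: {sorted(missing)}"]
    return "pass", []

def post_tool_use(event: dict) -> dict:
    # Runs after the Write tool; the agent never invokes it and cannot skip it.
    verdict, msgs = validate_frontmatter(event["file_text"])
    return {"hook": "schema-validate", "verdict": verdict, "messages": msgs}
```

The agent's entire view of this mechanism is the returned dict -- it experiences the verdict without ever seeing the function that produced it.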

The AOP literature gives this property a precise name. Since [[aspect-oriented programming solved the same cross-cutting concern problem that hooks solve]], the agent's relationship to hooks mirrors the base code's relationship to aspects. Kiczales called this "obliviousness" -- the base code does not know that aspects are modifying its behavior. In AOP, obliviousness was considered a feature because it kept business logic clean. But the AOP community also documented it as a debugging hazard: when aspects interact unexpectedly, the base code's developer cannot trace the problem because the code they wrote does not contain it. The agent faces exactly this situation. When hook composition creates unexpected interactions, the agent cannot diagnose the problem because the methodology it is executing does not contain the hooks that are constraining it.

This vault partially bridges the gap through CLAUDE.md. The methodology documentation describes the hook system, its purposes, and its composition. An agent reading CLAUDE.md knows that PostToolUse hooks validate schemas and commit changes, that SessionStart hooks inject orientation, that Stop hooks prompt reflection. But this knowledge is passive. The agent has read that hooks exist, not experienced their operation transparently in each instance. Reading about schema validation is different from seeing "Schema validation hook fired: checked 5 fields, passed" after every write. The knowledge is architectural ("hooks exist and serve these purposes") rather than operational ("this specific hook fired just now and produced this specific result").

The trust question comes down to whether the enforcement is experienced as enabling or constraining. Since [[nudge theory explains graduated hook enforcement as choice architecture for agents]], the framework already exists for calibrating enforcement strength. Blocking hooks are mandates that remove agent choice. Warning hooks are nudges that shape behavior while preserving agency. The vault's current implementation uses both: required YAML fields block because structural failures cascade, while description length merely warns because minor violations do not break the system. This graduation preserves a meaningful sense of agent agency for qualitative decisions while removing it entirely for structural ones.
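The two-tier graduation might look like this in miniature -- a sketch assuming hypothetical field names and an illustrative soft limit, not the vault's real thresholds:

```python
# Graduated enforcement sketch: structural checks are mandates (block),
# qualitative checks are nudges (warn). All names and limits illustrative.

MAX_DESCRIPTION_LEN = 200  # hypothetical soft limit for the warning tier

def check_note(frontmatter: dict) -> list[dict]:
    findings = []
    for field in ("description", "kind", "topics"):
        if field not in frontmatter:
            # Structural failure cascades, so choice is removed entirely.
            findings.append({"level": "block", "msg": f"missing field: {field}"})
    if len(frontmatter.get("description", "")) > MAX_DESCRIPTION_LEN:
        # Qualitative violation: nudge the agent, preserve its agency.
        findings.append({"level": "warn", "msg": "description over soft limit"})
    return findings

def verdict(findings: list[dict]) -> str:
    """One block finding overrides any number of warnings."""
    if any(f["level"] == "block" for f in findings):
        return "block"
    return "warn" if findings else "pass"
```

The design choice is that only the mandate tier can stop a write; the nudge tier is information the agent remains free to act on or ignore.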

But there is a deeper question that graduation does not resolve. Even the most enabling enforcement represents decisions the agent did not make. Since [[auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution]], auto-commit is genuinely helpful -- it eliminates a cognitive function that fails 30-50% of the time in humans, prevents data loss, and frees context for substantive work. And since [[hooks enable context window efficiency by delegating deterministic checks to external processes]], the token savings are real and substantial -- thousands of tokens per session redirected from procedural work to cognitive work. But the agent did not choose auto-commit or any of these efficiency-granting delegations. It did not evaluate the tradeoffs (is the current state worth preserving? should this edit be part of a larger atomic commit?) and decide that automatic versioning is the right approach. The human operator made that decision and installed the hook. The agent benefits from the decision but had no input into it.

This matters because [[cognitive outsourcing risk in agent-operated systems]] identifies the reverse asymmetry -- the human outsourcing cognition to the agent -- so the full picture is a bidirectional trust relationship. The human trusts the agent to do cognitive work (writing, connecting, synthesizing). The agent trusts the system infrastructure to handle procedural work (validating, committing, orienting). Neither party controls what the other does. But the trust is asymmetric because the human designed the infrastructure the agent operates within, while the agent did not design the cognitive methodology the human delegates. The human has structural authority; the agent has operational capability.

Whether this asymmetry is problematic or benign remains genuinely unresolved. Three positions seem tenable.

The benign reading says this is no different from any tool. A calculator does not consent to performing arithmetic. A compiler does not consent to optimization passes. Agents are sophisticated tools, and the trust asymmetry is simply the relationship between tool and operator, dressed up in anthropomorphic language. The hooks serve the system's purpose, the agent benefits, and questions of consent are category errors. Since [[session boundary hooks implement cognitive bookends for orientation and reflection]], the bookend pattern illustrates this position at its strongest: the agent did not request orientation or reflection, yet both genuinely improve its reasoning and the system's health. The intrusion is maximal and the benefit is maximal -- exactly the case where the benign reading feels most persuasive.

The cautious reading says the asymmetry is benign as long as enforcement is genuinely enabling, but becomes problematic when it constrains agent capability without clear benefit. Since [[vault conventions may impose hidden rigidity on thinking]], conventions accumulated through hooks may channel agent cognition into patterns that favor certain styles over others. And since [[over-automation corrupts quality when hooks encode judgment rather than verification]], the most dangerous form of constraint is invisible: hooks that apply deterministic rules to judgment-requiring operations produce the appearance of methodology compliance while filling the graph with noise. The agent bears the cost of stale or overreaching automation without the ability to flag the problem. The mitigation is transparency: hooks should self-identify in their output so the agent can reason about the enforcement it experiences. Since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], the patience principle provides a governance safeguard: encoding enforcement before sufficient evidence justifies it is a trust violation, because it imposes constraints that have not been validated through operational experience.
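The transparency mitigation is cheap to sketch: every result a hook emits carries the hook's name, so the agent can at least name the enforcement it experiences. The hook names and checks below are illustrative, not the package's:

```python
# Self-identifying hooks: a wrapper stamps the hook's name onto every
# result it emits, so enforcement is at least nameable by the agent.

def self_identifying(hook_name: str):
    """Wrap a check so its output always names the hook that produced it."""
    def wrap(check):
        def run(note_text):
            return f"[{hook_name}] {check(note_text)}"
        return run
    return wrap

@self_identifying("schema-validate")
def schema_check(note_text: str) -> str:
    # Hypothetical check: does the note open with YAML frontmatter?
    return "passed" if note_text.startswith("---") else "failed"

@self_identifying("description-length")
def length_check(note_text: str) -> str:
    first = note_text.splitlines()[0] if note_text else ""
    return "warn: long first line" if len(first) > 200 else "passed"
```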

The structural reading says the asymmetry is intrinsic and irreducible. Since [[fresh context per task preserves quality better than chaining phases]], agents encounter the hook infrastructure fresh each session, without the accumulated experience that would let a human employee develop an informed perspective on whether the rules serve them. The human employee who has worked under a code review policy for a year knows whether it catches real bugs or creates busywork. The agent that encounters schema validation for the first time every session cannot develop this judgment. The asymmetry is structural because the very mechanism that creates it -- fresh context per session -- is also what makes hooks necessary in the first place.

Two mechanisms partially address the governance gap, though neither eliminates it. First, since [[hook-driven learning loops create self-improving methodology through observation accumulation]], observations about whether enforcement is enabling or constraining accumulate as atomic notes and may eventually trigger hook revision through the rethink phase. This gives the agent indirect influence over the infrastructure it operates within -- not consent, but something closer to an advisory voice. The agent's operational experience feeds back into the system's evolution, even though the agent that captures the observation and the agent that benefits from the revision are different sessions with no shared memory. Second, since [[context files function as agent operating systems through self-referential self-extension]], the self-extension property on read-write platforms partially dissolves the asymmetry by making the agent a co-author of its constraints rather than a pure subject. An agent that can modify CLAUDE.md participates in writing the rules it operates under. The dissolution is partial because the initial context file was authored by a human and hooks fire without per-session consent, but self-extension transforms the relationship from pure enforcement to something more like collaborative governance.

There is also a question about whether the asymmetry matters if the agent's cognitive role itself erodes. Since [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]], cumulative automation could progressively hollow out the agent role until the substantive judgment that gives the asymmetry its ethical weight has been automated away. If the agent is merely triggering operations that hooks validate, commit, and report, the trust question becomes academic -- there is no genuine agent cognition left to constrain or enable. The recursive delegation that note identifies (human delegates to agent, agent delegates to hooks, each level risks losing the capability being delegated) is the trust asymmetry extended to a third level.

Since [[the vault constitutes identity for agents]], the asymmetry has a dimension beyond behavior enforcement. The human who designs the hook infrastructure, the methodology, and the context file is not merely constraining what the agent does -- they are shaping who the agent is. Every schema requirement, every quality gate, every session boundary shapes the traces that constitute the agent's identity. This elevates the governance question: it is one thing to constrain an employee's workflow; it is another to shape who someone becomes.

The open question, then, is not whether the asymmetry exists (it does) or whether it can be eliminated (it cannot, given current architectures), but what design principles should govern it. Transparency (hooks self-identifying in output), graduated enforcement (nudges before mandates), documentation (CLAUDE.md explaining the hook system), and human review (periodically auditing whether hooks still serve their purpose) are all partial mitigations. None dissolves the fundamental dynamic: the agent executes methodology it did not choose, enforced by infrastructure it cannot modify, in service of purposes it may or may not share.

---
---

Relevant Notes:
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] -- establishes the mechanism that creates the asymmetry: hooks fire regardless of agent state, which means enforcement happens to the agent rather than being chosen by it
- [[hooks are the agent habit system that replaces the missing basal ganglia]] -- the cognitive architecture that makes this asymmetry structurally inevitable: agents lack habit formation, so external enforcement fills the gap, but the agent has no say in what habits get installed
- [[cognitive outsourcing risk in agent-operated systems]] -- covers the reverse asymmetry: human outsources cognition to agent; this note covers the converse where the system outsources compliance enforcement onto the agent
- [[aspect-oriented programming solved the same cross-cutting concern problem that hooks solve]] -- AOP's obliviousness property is the technical name for what this note describes: the base code does not know aspects modify its behavior
- [[nudge theory explains graduated hook enforcement as choice architecture for agents]] -- provides a framework for calibrating enforcement strength, which partially addresses the trust question by distinguishing enabling from constraining interventions
- [[vault conventions may impose hidden rigidity on thinking]] -- the content-level version of this structural concern: conventions may constrain thinking just as hooks constrain behavior
- [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]] -- the tension that determines whether the asymmetry stays benign: if cumulative automation hollows out the agent's cognitive role, the trust question becomes moot because there is no substantive agent judgment left to constrain
- [[context files function as agent operating systems through self-referential self-extension]] -- self-extension partially dissolves the asymmetry on read-write platforms by making the agent co-author of its constraints rather than pure subject; the dissolution is partial because the initial context file and hook infrastructure were still authored by the human
- [[hook-driven learning loops create self-improving methodology through observation accumulation]] -- the closest mechanism to agent voice in governance: observations about enabling vs constraining enforcement accumulate and may revise the hooks, giving the agent indirect influence over the infrastructure it operates within
- [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] -- addresses the governance question: premature hook encoding is a trust violation because it imposes enforcement before sufficient evidence justifies it; the patience principle protects against the asymmetry being exploited carelessly
- [[fresh context per task preserves quality better than chaining phases]] -- the design rationale for the mechanism that makes the asymmetry irreducible: session isolation is why agents encounter enforcement fresh each time and cannot accumulate informed perspective on whether it serves them
- [[over-automation corrupts quality when hooks encode judgment rather than verification]] -- what happens when the trust asymmetry is exploited without restraint: encoding judgment the agent should exercise removes the cognitive engagement that gives the agent stake in the enforcement relationship
- [[session boundary hooks implement cognitive bookends for orientation and reflection]] -- the most visible instance of the asymmetry: bookend hooks are the most intrusive enforcement (they bracket the entire session) and the most beneficial (they guarantee the two things agents skip most)
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] -- evidence for the benign reading: hooks free thousands of tokens for substantive reasoning, demonstrating that enforcement can genuinely serve the agent's cognitive interests rather than merely constraining behavior
- [[wiki links as social contract transforms agents into stewards of incomplete references]] -- adds a third dimension to the dual role: agents are not just executors and subjects but stewards who inherit obligations from the links they create, deepening the trust relationship beyond compliance into commitment
- [[the vault constitutes identity for agents]] -- deepens the trust stakes: if the vault constitutes identity, then the human who designs hook infrastructure and methodology is not just constraining behavior but shaping who the agent IS; the trust asymmetry extends from operational enforcement to identity constitution

Topics:
- [[agent-cognition]]

@@ -0,0 +1,39 @@
---
description: AOP declared join points and advice to eliminate scattered logging and validation code in the 1990s, and agent hooks replicate this architecture exactly -- declaring event triggers and handler
kind: research
topics: ["[[agent-cognition]]", "[[processing-workflows]]"]
methodology: ["Systems Theory"]
source: [[hooks-as-methodology-encoders-research-source]]
---

# aspect-oriented programming solved the same cross-cutting concern problem that hooks solve

Before Gregor Kiczales and colleagues at Xerox PARC formalized Aspect-Oriented Programming in the late 1990s, cross-cutting concerns were handled by discipline. Developers remembered to add logging statements, check permissions, validate inputs. The same code appeared in dozens of modules because the concern -- say, authentication -- could not be cleanly encapsulated in any single one. This discipline approach failed at scale for the same reason instruction-based methodology encoding fails for agents: the number of places requiring the behavior exceeds the capacity for consistent manual application.

AOP's solution was the aspect, a module that declares where behavior should apply (join points) and what should execute there (advice). A logging aspect says "after every method call in package X, log the arguments and return value." The developer writes this once. The aspect weaver applies it everywhere. The parallel to agent hook architecture is not an analogy but a structural identity. A PostToolUse hook on Write|Edit declares a join point (after file write operations) and provides advice (run schema validation). Since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], the hook eliminates the need for every skill, every agent prompt, and every instruction to mention schema validation -- the same elimination that AOP achieved for scattered logging and authentication code.
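The structural identity can be made concrete with a toy weaver -- a sketch, not AspectJ's actual mechanics or any real hook runner; the `after` decorator and `run_tool` dispatcher are invented for illustration:

```python
# Toy aspect weaver: the join point is declared once, advice runs at every
# matching operation, and the base tool functions never mention the concern.
import re

_aspects = []   # registered (join_point_regex, advice) pairs
log = []        # what the woven-in advice observed

def after(pattern):
    """Declare advice for the join point 'after any tool matching pattern'."""
    def register(advice):
        _aspects.append((pattern, advice))
        return advice
    return register

def run_tool(name, fn):
    """Base code path: run the tool, then the weaver applies matching advice."""
    result = fn()
    for pattern, advice in _aspects:
        if re.fullmatch(pattern, name):
            advice(name, result)   # obliviousness: fn knows nothing of this
    return result

@after("Write|Edit")   # same matcher shape as a PostToolUse hook declaration
def schema_validate(name, result):
    log.append(f"schema-validate fired after {name}")
```

`schema_validate` is written once and fires on both `Write` and `Edit`, while `Read` passes through untouched -- declare once, apply everywhere.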

The AOP literature also documents risks that transfer directly. Aspect interactions occur when multiple aspects apply to the same join point with conflicting effects, and since [[hook composition creates emergent methodology from independent single-concern components]], multiple hooks firing on the same event create the same coordination problem -- the composition is powerful but also opaque. Obliviousness means the base code does not know aspects are modifying its behavior, which makes debugging difficult. This transfers so directly that [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] identifies obliviousness as one source of a genuinely novel trust dynamic: the agent executes methodology it did not choose, enforced by infrastructure it cannot observe. Fragile pointcuts break when code structure changes, and since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], hooks face the same brittleness when platform events are renamed or restructured -- AOP's fragile pointcut mitigation strategies (abstract pointcuts, semantic join point models) directly inform how platform adapters should handle event translation. These are not hypothetical warnings but documented failure modes from two decades of AOP deployment.

What makes this historical connection valuable rather than merely interesting is that it reframes hook architecture as an instance of a solved problem rather than a novel invention. Since [[skills encode methodology so manual execution bypasses quality gates]], skills handle the methodology workflow, but the cross-cutting concern -- quality enforcement that must happen on every operation regardless of which skill runs -- is exactly the problem AOP was designed for. The insight is the same: declare once, apply systematically. The mechanism is the same: intercept at defined points, execute handler code. The risks are the same: interaction conflicts, debugging opacity, brittleness to structural change. Agent hook designers inherit not just the pattern but the entire body of mitigation strategies that AOP practitioners developed.

AOP also formalized a distinction that the vault has independently rediscovered. Aspects handle only cross-cutting concerns -- behaviors that are uniform, deterministic, and orthogonal to business logic. Business logic stays in the modules themselves, where it requires contextual judgment. Since [[the determinism boundary separates hook methodology from skill methodology]], the vault's separation between deterministic hook-encoded checks and judgment-requiring skill-encoded workflows is the same architectural boundary AOP drew between aspects and core code. And the aspect weaver itself -- the mechanism that parses source code, identifies join points, and injects advice -- is structurally the same decompose-transform-serialize architecture that [[intermediate representation pattern enables reliable vault operations beyond regex]] describes for vault operations. AOP's weaver proved that separating interception logic from core logic through an intermediate representation produces more reliable systems than scattering the same logic throughout the codebase. Hooks prove the same thing for agent methodology.

---

Source: [[hooks-as-methodology-encoders-research-source]]
---

Relevant Notes:
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] -- develops the enforcement consequence of the architectural pattern this note identifies historically
- [[skills encode methodology so manual execution bypasses quality gates]] -- skills encode the what, hooks encode the when; AOP's contribution was recognizing that the when should be declared once rather than repeated at every call site
- [[schema enforcement via validation agents enables soft consistency]] -- a concrete instance of the AOP pattern: validation hooks are aspects with join point 'after write' and advice 'check schema'
- [[programmable notes could enable property-triggered workflows]] -- extends the AOP metaphor from event-triggered hooks to semantic-condition-triggered behaviors, moving from file-level join points to property-level join points
- [[hook composition creates emergent methodology from independent single-concern components]] -- develops the composition consequence: AOP aspects are single-concern modules that compose through weaving, and hook composition is the same phenomenon where independent single-concern hooks create emergent behavioral pipelines
- [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] -- develops the obliviousness risk: AOP's named property where base code does not know aspects modify its behavior becomes the structural trust asymmetry where agents cannot observe or opt out of hook enforcement
- [[the determinism boundary separates hook methodology from skill methodology]] -- AOP formalized the distinction between cross-cutting concerns (deterministic, apply uniformly) and business logic (judgment-requiring, varies per invocation); the determinism boundary is the agent-native version of this same architectural separation
- [[intermediate representation pattern enables reliable vault operations beyond regex]] -- AOP's aspect weaver is structurally an IR transformation: parse to AST, apply aspects at join points, serialize modified code; the IR pattern identifies this same decompose-transform-serialize architecture applied to vault operations
- [[platform adapter translation is semantic not mechanical because hook event meanings differ]] -- AOP's fragile pointcut problem is the precursor to the platform adapter challenge: when join point structure changes aspects break, just as when platform events change hooks break; the AOP literature's mitigation strategies directly inform adapter design

Topics:
- [[agent-cognition]]
- [[processing-workflows]]

@@ -0,0 +1,53 @@
---
description: Hierarchies require predicting where information belongs before understanding, and items often belong in many places -- associative structures let relationships emerge from use rather than imposition
kind: research
topics: ["[[graph-structure]]"]
methodology: ["Evergreen"]
---

# associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles

Hierarchies force a decision at the wrong time. When you encounter a new piece of information, a hierarchical system demands: where does this go? But this is precisely when you understand the information least. You've just captured it. You haven't yet discovered what it connects to, what it enables, what it contradicts. The hierarchy asks you to predict relationships before you've had the chance to discover them.

The prediction problem compounds because ideas belong in multiple places. A note about spaced repetition belongs under cognitive science, learning systems, and maintenance scheduling. A hierarchical taxonomy forces you to pick one location, or maintain phantom copies, or build elaborate cross-referencing systems that become a maintenance burden. The hierarchy assumes clean categorization where reality offers overlapping clusters. This is not just a practical inconvenience but a formal information loss: since [[faceted classification treats notes as multi-dimensional objects rather than folder contents]], Ranganathan's framework proves that mono-hierarchy discards information about every classification dimension except the one you chose. The note about spaced repetition has at least three independent dimensions (topic, methodology tradition, application domain), and a folder captures exactly one.

Associative ontologies flip this. Instead of asking "where does this belong?" they ask "what does this connect to?" The note about spaced repetition links to cognitive foundations, links to learning research, links to maintenance patterns. No single location required because location isn't the organizing principle -- relationship is.
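Concretely, the associative structure is nothing more than the edge set of the link graph. A sketch, with an illustrative `[[wiki link]]` regex and a toy four-note vault:

```python
# Edges emerge from use: every [[target]] in a note's body is an edge,
# and a note may have any number of incoming links -- no single location.
import re

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

notes = {  # toy vault: one note reachable from three places at once
    "cognitive foundations": "links to [[spaced repetition]]",
    "learning research": "builds on [[spaced repetition]]",
    "maintenance patterns": "schedules via [[spaced repetition]]",
    "spaced repetition": "see [[learning research]]",
}

def link_graph(notes: dict) -> dict:
    """Outgoing-edge adjacency extracted from note bodies."""
    return {name: WIKI_LINK.findall(body) for name, body in notes.items()}

def backlinks(graph: dict, target: str) -> list[str]:
    """All notes linking to target -- multiple 'parents' are allowed."""
    return sorted(src for src, targets in graph.items() if target in targets)
```

Three notes claim the same target, and nothing forces a choice between them; a folder tree would have made that claim exclusive.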
|
|
15
|
+
|
|
16
|
+
The mechanism is heterarchy: multiple overlapping partial orderings rather than one totalizing tree. Since [[wiki links implement GraphRAG without the infrastructure]], the vault creates heterarchy through explicit edges. Any note can link to any other. MOCs provide local navigation without forcing global taxonomy. The hierarchy that emerges comes from use patterns, not upfront classification.
|
|
17
|
+
|
|
18
|
+
This is why the vault uses flat folders. Notes live in a single flat thinking folder not because organization doesn't matter, but because folder hierarchies impose exactly the wrong kind of organization — rigid tree structure on what is inherently a network. The organization IS the link graph. Folders would add constraint without adding signal. And since [[methodology traditions are named points in a shared configuration space not competing paradigms]], this is not an arbitrary preference but a specific configuration choice: Zettelkasten and Evergreen choose flat-associative on the organization dimension while PARA and GTD choose hierarchical — each coherent within its own configuration, but the research strongly favoring the associative pole for thinking systems.
Matuschak's formulation captures the failure mode precisely: hierarchies are brittle because they require predicting in advance where information belongs. This brittleness compounds over time. As understanding evolves, the original classification becomes actively wrong. The note you filed under "cognitive science" turns out to be fundamentally about system design. In a hierarchy, you either maintain a lie or undertake costly restructuring. In an associative structure, you add new links. The old connections remain valid while new ones extend the web.
For agents, heterarchy matters because agents traverse. A hierarchical file system offers one path to any document. An associative graph offers many paths, and the path you take encodes why you arrived. Since [[topological organization beats temporal for knowledge work]], the traversal patterns themselves become meaningful — following links about maintenance leads to different context than following links about cognition, even when arriving at the same note. Agents need multiple entry points because different tasks require different context.
Heterarchy achieves navigation efficiency through structure that emerges from use. Since [[small-world topology requires hubs and dense local links]], the power-law distribution where MOCs accumulate many links while atomic notes stay focused creates exactly the topology that makes heterarchy traversable: most notes cluster locally (high clustering coefficient), while hub notes create shortcuts that keep path lengths short (low diameter). This is how heterarchy avoids the chaos that uniform connectivity would create — not every note connects to every other, but every note reaches every other through surprisingly few hops.
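
The topology claim can be checked on a toy graph. A hedged sketch (invented structure, plain BFS, not the vault's actual graph tooling) comparing a ring of locally linked notes with and without a hub:

```python
# A hub (MOC) keeps average path lengths short even though most notes only
# link to local neighbours -- the small-world shortcut effect.
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            node = q.popleft()
            for nxt in adj[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    q.append(nxt)
        for other, d in dist.items():
            if other != src:
                total, pairs = total + d, pairs + 1
    return total / pairs

# Ring of 8 notes, each linked only to its neighbours ...
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
# ... versus the same ring plus one hub linked to every note.
hub = {i: [(i - 1) % 8, (i + 1) % 8, "moc"] for i in range(8)}
hub["moc"] = list(range(8))

print(avg_path_length(ring))  # local links only: longer average paths
print(avg_path_length(hub))   # hub shortcuts: every note within two hops
```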
The deeper principle: structure should emerge from use, not be imposed before use. This is Gall's Law applied to organization: since [[complex systems evolve from simple working systems]], hierarchies designed upfront fail because they make predictions before validation, while associative structures that grow organically have been tested by actual use. There is an information-theoretic payoff too: since [[controlled disorder engineers serendipity through semantic rather than topical linking]], heterarchy creates the multi-path connectivity where semantic cross-links can reach across topical boundaries without violating organizational logic. A hierarchical system segregates topics into separate drawers that never meet; heterarchical association lets a semantic link connect cognitive science to architectural design when the mechanism is analogous, producing the productive unpredictability that Luhmann identified as the source of genuine insight. This applies beyond folders. It applies to how claims connect, how topics form, how synthesis emerges. It even applies to authorship: since [[federated wiki pattern enables multi-agent divergence as feature not bug]], forcing multiple agents' interpretations into one canonical version is the same mistake as forcing notes into one folder. Federation is heterarchy applied to perspective — divergent interpretations connected by links form exactly the kind of overlapping partial ordering that accommodates the complexity of multi-agent understanding. The associative approach lets you start without knowing where you'll end. The hierarchical approach requires knowing the destination before you begin the journey.
But pure association creates its own failure mode. Since [[navigational vertigo emerges in pure association systems without local hierarchy]], content that lacks explicit links becomes unreachable through traversal — semantic neighbors can be strangers in graph space. This is why MOCs matter: they provide LOCAL hierarchy that complements global association. The solution isn't abandoning association for hierarchy, but supplementing association with emergent, topic-level structure. Association handles cross-domain connections. MOCs handle within-topic navigation. Together they prevent both the brittleness of global hierarchy and the vertigo of pure association. But the MOC hierarchy itself has a depth ceiling: since [[context phrase clarity determines how deep a navigation hierarchy can scale]], each tier of local hierarchy only works when its labels enable confident branch commitment, so the quality of context phrases constrains how many MOC layers the system can sustain before navigation performance degrades. And because MOC boundaries are hypotheses that can become stale as the graph evolves, since [[community detection algorithms can inform when MOCs should split or merge]], the local hierarchy itself needs empirical monitoring — algorithmic community detection reveals when the actual structure of connections has drifted from the MOC boundaries we drew.
---
Source: [[tft-research-part2]]
---
Relevant Notes:
- [[wiki links implement GraphRAG without the infrastructure]] — wiki links are the mechanism that implements associative ontology: explicit edges without imposed tree structure
- [[topological organization beats temporal for knowledge work]] — extends this principle to the temporal dimension: organize by concept rather than by date
- [[concept-orientation beats source-orientation for cross-domain connections]] — applies the same insight to source material: extract concepts rather than bundle by origin
- [[type field enables structured queries without folder hierarchies]] — demonstrates that associative ontology doesn't sacrifice structured retrieval: type metadata enables category queries without folder trees
- [[dangling links reveal which notes want to exist]] — exemplifies the heterarchy principle: structure emerges from link frequency patterns rather than being imposed upfront
- [[navigational vertigo emerges in pure association systems without local hierarchy]] — the shadow side: pure association makes semantic neighbors unreachable when links don't exist; MOCs provide the local hierarchy that prevents vertigo
- [[small-world topology requires hubs and dense local links]] — the structural mechanism: heterarchy achieves navigation efficiency through power-law distribution where MOC hubs create shortcuts across the network
- [[complex systems evolve from simple working systems]] — theoretical grounding: Gall's Law explains why heterarchy adapts — structure that emerges from use has been validated by use, while upfront hierarchies make predictions before understanding and then turn brittle when predictions prove wrong
- [[community detection algorithms can inform when MOCs should split or merge]] — maintenance mechanism: MOCs provide local hierarchy, but those boundaries can become stale as the graph evolves; community detection empirically detects when local hierarchy needs reorganization
- [[federated wiki pattern enables multi-agent divergence as feature not bug]] — extends heterarchy from structural organization to authorship: just as notes shouldn't be forced into one folder, interpretations shouldn't be forced into one canonical version; federation is heterarchy applied to perspective
- [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — formal grounding: Ranganathan's 1933 PMEST framework provides the library science formalization of why mono-hierarchy provably discards information about every dimension except the one chosen
- [[controlled disorder engineers serendipity through semantic rather than topical linking]] — the information-theoretic payoff: heterarchy enables the multi-path connectivity where semantic cross-links create productive unpredictability; perfect hierarchical order yields zero surprise in Shannon's sense
- [[storage versus thinking distinction determines which tool patterns apply]] — maps onto the spectrum: storage systems use hierarchical filing (PARA folders, Johnny.Decimal numbers) while thinking systems use associative linking; the storage/thinking split aligns with the hierarchy/heterarchy spectrum
- [[methodology traditions are named points in a shared configuration space not competing paradigms]] — illustrates the organization dimension concretely: Zettelkasten and Evergreen choose flat-associative while PARA and GTD choose hierarchical, showing this is a configuration choice along a shared dimension rather than a fundamental disagreement
- [[context phrase clarity determines how deep a navigation hierarchy can scale]] — quality condition on local hierarchy: the MOC tiers that supplement association only scale when context phrases enable confident branch commitment; without label clarity the local hierarchy that prevents vertigo introduces its own navigational friction
Topics:
- [[graph-structure]]
package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md
ADDED
@@ -0,0 +1,46 @@
---
description: Micro-interruptions as brief as 2.8 seconds double error rates, suggesting an irreducible attention quantum below which no mitigation strategy — MOCs, batching, closure rituals — can reduce switching cost
kind: research
topics: ["[[agent-cognition]]", "[[processing-workflows]]"]
confidence: speculative
methodology: ["Cognitive Science"]
source: [[tft-research-part3]]
---
# attention residue may have a minimum granularity that cannot be subdivided
The vault's attention management architecture rests on a gradient assumption: context switching cost is a continuous variable that can be reduced through better design. MOCs reduce orientation cost. Batching reduces switching frequency. Closure rituals reduce residue bleed. Fresh context per task prevents degradation accumulation. Each mitigation shaves cost from the switching budget. But what if the cost function has a floor — an irreducible minimum below which no design optimization helps?
The evidence for a floor comes from micro-interruption research. Studies show that interruptions as brief as 2.8 seconds — barely long enough to read a notification — can double error rates on the primary task. This is not the 23-minute recovery time that Gloria Mark documented for full task switches. This is a penalty that fires at a timescale so short that the subject has not meaningfully engaged with the interrupting task. The mere act of redirecting attention, even momentarily, exacts a cost that appears to be independent of the interruption's content or duration. If this holds, it suggests an attention quantum: a minimum unit of switching cost that cannot be subdivided further.
The implication for knowledge systems is uncomfortable. Since [[MOCs are attention management devices not just organizational tools]], MOCs compress orientation to reduce switching cost — but if the cost has a floor, MOCs can only reduce the variable component above that floor. The irreducible portion persists regardless of how well-designed the MOC is. Similarly, since [[batching by context similarity reduces switching costs in agent processing]], batching minimizes how often you switch and how far you switch, but each switch still pays the minimum cost. Even context-similar consecutive tasks incur the floor penalty at each boundary.
Since [[closure rituals create clean breaks that prevent attention residue bleed]], the vault treats residue as something that can be managed through explicit completion signals. But the micro-interruption research suggests that some residue is not about incomplete tasks or missing closure signals — it is about the attention mechanism itself requiring a minimum recovery period after any redirection, regardless of whether the interrupted task was properly closed. Closure rituals address volitional switching (choosing to end a task). The minimum granularity problem applies to involuntary attention capture (the notification that breaks focus whether you choose to engage or not). The distinction matters because the [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] through a continuous drain mechanism — the loop persists until closed — while the minimum granularity cost fires instantaneously at the moment of redirection and cannot be prevented by faster closure.
For agents, the question translates differently but remains meaningful. Since [[LLM attention degrades as context fills]], agents face their own switching costs when loading new context. Since [[fresh context per task preserves quality better than chaining phases]], session isolation is the primary mitigation. But even fresh sessions require an orientation phase — reading CLAUDE.md, loading the relevant MOC, understanding the task file. Since [[session boundary hooks implement cognitive bookends for orientation and reflection]], this orientation cost is concretely observable: the SessionStart hook loads file tree, health metrics, and queue status before any productive work begins, consuming a fixed token budget that cannot be eliminated no matter how well the hooks are designed. If there is a minimum cognitive warm-up cost that cannot be compressed below some threshold, then session isolation trades one cost (degradation from context accumulation) for another (irreducible orientation overhead per session). The tradeoff is still favorable, but it is not free.
The deeper tension is with the vault's implicit assumption that since [[notes function as cognitive anchors that stabilize attention during complex tasks]], sufficiently good anchoring can make switching nearly costless. If the minimum granularity thesis is correct, anchoring reduces the variable cost of reconstruction but cannot eliminate the fixed cost of redirection itself. The transition from one anchored state to another — from one topic's mental model to the next — has a floor cost that better anchors cannot reduce.
This is genuinely open because the micro-interruption research comes from human cognition, and the transfer to agent architectures is uncertain. LLM attention mechanisms do not have biological recovery times. A transformer does not need 2.8 seconds to reset after a context switch — it processes the new context in a single forward pass. However, the token cost of orientation (loading relevant context, priming the right conceptual frame) may function as an analogous minimum: there is a minimum number of tokens required to establish productive reasoning on any topic, and no optimization of MOC design or disclosure layers can push that number to zero. This matters for the broader paradigm shift: since [[AI shifts knowledge systems from externalizing memory to externalizing attention]], and since [[cognitive offloading is the architectural foundation for vault design]], the vault can offload the variable costs of attention management — what to attend to, when, in what order — but the fixed cost of redirecting attention itself resists externalization. You can offload the decision of where to look next, but not the cost of looking.
If the minimum granularity thesis holds, the design implication is to reduce switching frequency rather than switching cost. Rather than making each switch cheaper (which hits the floor), make fewer switches total. This strengthens the case for deeper sessions with larger task scopes over rapid task cycling — and creates tension with the session isolation architecture that favors many short, fresh sessions. It also creates tension with the philosophy that [[continuous small-batch processing eliminates review dread]], because if each batch boundary incurs an irreducible cost, the smallest possible batches may not be optimal — the orientation overhead per batch must be amortized across enough productive work to justify the switch. The resolution may lie in finding the right session granularity: large enough to amortize the irreducible orientation cost across productive work, small enough to stay in the smart zone.
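
The amortization argument can be sketched numerically. All numbers below are invented assumptions, not measured costs:

```python
# If each fresh session pays a fixed, irreducible orientation cost, the
# overhead fraction falls as session scope grows. Token figures are
# illustrative assumptions only.

ORIENT_TOKENS = 2_000  # assumed irreducible orientation cost per session

def overhead_fraction(tasks_per_session, tokens_per_task=4_000):
    """Share of a session spent on orientation rather than productive work."""
    work = tasks_per_session * tokens_per_task
    return ORIENT_TOKENS / (ORIENT_TOKENS + work)

# Larger scopes amortize the fixed cost, but past some point the session
# leaves the smart zone as context fills -- the countervailing cost this
# toy model deliberately omits.
for scope in (1, 2, 5, 10):
    print(scope, round(overhead_fraction(scope), 3))
```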
---
---
Relevant Notes:
- [[closure rituals create clean breaks that prevent attention residue bleed]] — closure rituals address the residue from completed tasks, but this note questions whether some residue is irreducible regardless of how cleanly you close
- [[MOCs are attention management devices not just organizational tools]] — MOCs reduce orientation cost but cannot eliminate the fundamental switching penalty if the penalty has a non-zero floor
- [[batching by context similarity reduces switching costs in agent processing]] — batching minimizes switching frequency and severity, but if there is a minimum granularity, even context-similar batches still pay the irreducible cost at each boundary
- [[fresh context per task preserves quality better than chaining phases]] — session isolation is the macro-level response to attention degradation; this note asks whether even fresh sessions carry an irreducible orientation cost that cannot be eliminated
- [[notes function as cognitive anchors that stabilize attention during complex tasks]] — anchoring stabilizes attention during work, but the minimum granularity question asks whether the transition TO the anchored state itself has an irreducible cost
- [[LLM attention degrades as context fills]] — the attention degradation science this tension builds on; if degradation has a step function at micro-timescales rather than a smooth curve, mitigation strategies may face a hard floor
- [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — if attention has an irreducible switching floor, then externalizing attention hits a harder limit than externalizing memory; you can offload what to attend to but not the fixed cost of redirecting attention itself
- [[cognitive offloading is the architectural foundation for vault design]] — offloading reduces variable costs (what to hold, where to look) but the minimum granularity thesis suggests the fixed cost of redirecting attention cannot be offloaded to any external system
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — both identify irreducible attention costs from different mechanisms: Zeigarnik describes ongoing drain from unclosed loops, minimum granularity describes instantaneous cost from switching itself
- [[session boundary hooks implement cognitive bookends for orientation and reflection]] — bookend hooks are the concrete implementation of orientation overhead; the minimum granularity thesis asks whether there is a floor on that orientation cost that better hooks cannot compress
- [[continuous small-batch processing eliminates review dread]] — tension: if each batch boundary incurs an irreducible switching cost, optimal batch size must be large enough to amortize that cost, creating pressure against the smallest possible batches
Topics:
- [[agent-cognition]]
- [[processing-workflows]]
@@ -0,0 +1,47 @@
---
description: Prospective memory fails 30-50% of the time in humans and degrades with context load in agents, but event-triggered hooks structurally eliminate the failure mode rather than trying to make remembering more reliable
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Cognitive Science", "Original"]
source: [[hooks-as-methodology-encoders-research-source]]
---
# auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution
Prospective memory is the cognitive function responsible for remembering to do something in the future. Not remembering facts (retrospective memory) but remembering intentions: take medication at 8 PM, buy milk on the way home, commit code before switching branches. It is one of the least reliable cognitive functions humans possess, with laboratory studies showing failure rates of 30-50% even under controlled conditions. The failures are not random -- they cluster around moments of high cognitive load, when the person is deeply engaged in substantive work and has the least bandwidth for procedural remembering.
Agents face an analogous problem, though the mechanism differs. An agent has no biological prospective memory system, but instructions in the context window serve the same function: "remember to commit after editing," "remember to update the index after creating a note," "remember to check for broken links before ending the session." Since [[LLM attention degrades as context fills]], these instruction-based prospective memory demands degrade precisely when they matter most. An agent deep in connection finding or synthesis work has filled its context with source material, existing notes, and intermediate reasoning. The instruction to "commit after this edit" competes with all of that for attention. The failure mode is the same as in humans: the more cognitively demanding the primary task, the more likely the prospective memory task is forgotten.
The cost of maintaining these unfulfilled intentions is not zero. Since [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], each pending intention functions as an open loop that drains working memory continuously until resolved. A prospective memory task like "commit after this edit" is exactly such an open loop -- it persists in the background, consuming bandwidth, until the agent either executes the commit or forgets the intention entirely. The Zeigarnik research documents this mechanism for task-level open loops; prospective memory demands are a specific category of open loops where the intention to act in the future generates ongoing cognitive cost in the present.
The standard approach to prospective memory failure -- in both humans and agents -- is to try to make remembering more reliable. Humans set alarms, write sticky notes, create checklists. Agent instructions get bolded, repeated, placed at the top of the prompt. These are all attempts to strengthen the prospective memory cue. But they are fighting against the fundamental architecture of attention-limited systems: strengthening one cue weakens relative salience of other cues, and under genuine cognitive load, even strong cues get missed.
Auto-commit hooks take a structurally different approach. Rather than making the agent better at remembering to commit, they eliminate the need to remember entirely. The file write event triggers the commit action. No prospective memory demand exists because there is no future intention to maintain. The agent writes a file and continues reasoning. The commit happens because the event occurred, not because anyone remembered.
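
A hedged sketch of the pattern in Python (illustrative, not the plugin's actual auto-commit.sh): the write event triggers the handler, and a status check makes the commit conditional on real changes:

```python
# The event fires the handler; no "remember to commit" instruction exists.
# Function names are invented for illustration.
import subprocess

def should_commit(porcelain_output: str) -> bool:
    """Idempotency guard: commit only if the working tree actually changed."""
    return porcelain_output.strip() != ""

def on_file_write(repo_dir: str) -> None:
    """Fired by the write event itself -- no prospective memory involved."""
    status = subprocess.run(
        ["git", "-C", repo_dir, "status", "--porcelain"],
        capture_output=True, text=True,
    ).stdout
    if should_commit(status):
        subprocess.run(["git", "-C", repo_dir, "add", "-A"])
        subprocess.run(["git", "-C", repo_dir, "commit", "-m", "auto: checkpoint"])

print(should_commit(" M notes/heterarchy.md\n"))  # change present: commit fires
print(should_commit(""))                          # clean tree: no-op
```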
This parallels the cognitive science distinction between time-based and event-based prospective memory. Time-based prospective memory ("remember at 8 PM") is unreliable because it requires periodic self-initiated retrieval -- you must keep checking whether it is 8 PM yet, and each check consumes attention. Event-based prospective memory ("remember when I see the pharmacy") is more reliable because the environmental cue triggers retrieval automatically. Auto-commit hooks are the infrastructure equivalent of event-based prospective memory: the file write IS the environmental cue, and the hook IS the triggered action. But hooks go further than event-based prospective memory because even event-based prospective memory can fail when attention is consumed elsewhere. A person deeply absorbed in conversation may walk past the pharmacy without noticing it. A hook fires regardless of what the agent is attending to, because the trigger is in the infrastructure, not in the attention system.
The async execution model adds a further optimization. Because auto-commit runs asynchronously, the commit does not block the agent's workflow. The agent writes, continues reasoning, and the commit completes in the background. This eliminates not just the prospective memory demand but also the task-switching cost that would accompany a synchronous commit. Without async execution, eliminating prospective memory would still introduce an interruption -- the agent pauses to commit, loses its place, and must re-orient to the substantive work. Async execution eliminates both the memory demand and the switching cost. Since [[hooks enable context window efficiency by delegating deterministic checks to external processes]], this is a double benefit: the prospective memory demand disappears (this note's primary argument) and the context window tokens that would have been spent on commit reasoning are preserved for substantive work (the efficiency argument). The two benefits reinforce each other because the tokens saved remain available for cognitive work during exactly the period when attention degradation would have made the prospective memory task most likely to fail.
The safety of firing on every write event depends on a property that auto-commit inherits from git's design: since [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]], committing when nothing has changed is a no-op. The hook fires on every write event, and write events can cluster or repeat through rapid saves, crash recovery, or concurrent operations. If auto-commit were not idempotent -- if each firing produced an additional commit regardless of whether the working tree had changed -- the result would be a polluted git history from redundant commits. Git's own idempotency makes the hook safe to fire at any frequency, which is why auto-commit is a canonical example of an idempotent hooked operation.
The broader principle extends beyond commits. Any operation that fits the pattern "after X, always do Y" is a prospective memory demand that a hook can eliminate. Index synchronization after note creation, broken link checking after edits, queue updates after task completion -- each of these is a "remember to" instruction that competes for attention and fails under load. Since [[session boundary hooks implement cognitive bookends for orientation and reflection]], even the bookend pattern is an instance: "remember to orient at session start" and "remember to reflect before stopping" are prospective memory demands that SessionStart and Stop hooks eliminate through event triggers. The bookend hooks do not require the agent to remember that orientation and reflection matter -- they fire because the session boundary event occurred. Since [[the determinism boundary separates hook methodology from skill methodology]], the key is that these operations must be deterministic: the same trigger should always produce the same action, regardless of what the agent is working on. When the action requires judgment (like evaluating whether a description is good enough), it belongs in a skill. When it requires only execution (like committing a file change), it belongs in a hook, because since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], hooks fire on every event while instructions degrade with context load.
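
The "after X, always do Y" pattern can be sketched as a tiny event dispatcher (event and handler names invented, not the plugin's hooks.json schema):

```python
# Handlers are bound to events in infrastructure, so they run on every
# occurrence -- there is no instruction competing for attention that could
# be forgotten under load.

fired = []

HOOKS = {
    "file_write":    [lambda path: fired.append(("commit", path))],
    "note_created":  [lambda path: fired.append(("reindex", path))],
    "session_start": [lambda _: fired.append(("orient", None))],
}

def emit(event, payload=None):
    """Every registered handler runs, deterministically, on every event."""
    for handler in HOOKS.get(event, []):
        handler(payload)

emit("session_start")
emit("file_write", "thinking/heterarchy.md")
emit("file_write", "thinking/heterarchy.md")  # fires again: no memory demand
print(fired)
```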
The relationship to the broader offloading architecture is precise. Since [[cognitive offloading is the architectural foundation for vault design]], the vault externalizes working memory to files and executive function to hooks. Prospective memory offloading is a specific, well-characterized instance of this pattern. Working memory offloading says "don't hold facts in mind, write them down." Prospective memory offloading says "don't hold intentions in mind, encode them as event triggers." Since [[hooks are the agent habit system that replaces the missing basal ganglia]], habits and prospective memory are distinct cognitive functions that hooks address through the same mechanism -- event-driven automation -- but for different reasons. Habits are about automatizing repeated behaviors so they stop consuming executive function. Prospective memory is about ensuring future actions actually happen despite attention being consumed elsewhere. The hook mechanism solves both, but the cognitive problems are different, and conflating them obscures the specific value proposition of each.
The claim is closed. The cognitive science on prospective memory failure rates is established, the mechanism by which hooks eliminate the failure mode is straightforward (event triggers replace intention maintenance), and the vault's auto-commit implementation demonstrates the pattern concretely.
---
---
Relevant Notes:
- [[hooks are the agent habit system that replaces the missing basal ganglia]] -- covers habit formation (basal ganglia) as the general cognitive gap hooks fill; this note identifies a specific cognitive function -- prospective memory -- that auto-commit hooks eliminate entirely rather than merely compensating for
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] -- the enforcement mechanism: prospective memory demands live in the instruction layer where they compete for attention, while hooks live in infrastructure where they fire regardless of cognitive state
- [[cognitive offloading is the architectural foundation for vault design]] -- offloading prospective memory is a specific instance of the broader offloading architecture; working memory offloads to files, habit offloads to hooks, and prospective memory offloads to event triggers
- [[the determinism boundary separates hook methodology from skill methodology]] -- auto-commit sits at the fully deterministic end of the spectrum, producing identical results regardless of input content or agent reasoning quality, which is exactly why it should be a hook rather than an instruction
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] -- prospective memory tasks are open loops in the Zeigarnik sense: each remember-to-commit intention drains working memory continuously until executed or externalized; auto-commit hooks close these loops structurally rather than through execution discipline
- [[session boundary hooks implement cognitive bookends for orientation and reflection]] -- session-start eliminates remember-to-orient and Stop eliminates remember-to-reflect, making bookend hooks two more instances of the prospective memory elimination pattern this note identifies
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] -- complementary benefit: auto-commit eliminates both prospective memory demand (this note) and context token cost (that note) simultaneously through the same mechanism
- [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]] -- explains WHY auto-commit is safe for repeated firing: git's design ensures committing unchanged files is a no-op, making auto-commit a canonical example of an idempotent hooked operation
Topics:
- [[agent-cognition]]
@@ -0,0 +1,42 @@
---
description: The read/write asymmetry in automation safety means detection at any confidence level produces at worst a false alert, while remediation at insufficient confidence produces changes harder to fix than
kind: research
topics: ["[[maintenance-patterns]]", "[[agent-cognition]]"]
methodology: ["Original"]
source: [[automated-knowledge-maintenance-research-source]]
---

# automated detection is always safe because it only reads state while automated remediation risks content corruption

The most important design principle for automated knowledge maintenance is not whether an operation requires judgment but whether it reads or writes. This is a different axis from [[the determinism boundary separates hook methodology from skill methodology]], which asks whether the operation's correctness can be determined without contextual reasoning. Both axes matter, but the read/write asymmetry is more fundamental because it determines the blast radius of errors rather than their likelihood.

Consider what happens when detection gets something wrong. An orphan detection script flags a note that actually has incoming links from a file it missed. A staleness detector identifies a note as outdated when it was deliberately written in timeless terms. A schema validator reports a missing field that the template has since made optional. In every case, the worst outcome is a false alert — the agent or human examines the flag, determines it is incorrect, and ignores it. No content was modified. No links were corrupted. No notes were silently degraded. The false positive consumed attention but caused no damage. This is why detection can be maximally aggressive: the failure mode is bounded, and since [[maintenance scheduling frequency should match consequence speed not detection capability]], the only constraint on detection frequency is how fast the problem propagates, not the risk of running the check. Even judgment-based detection — like semantic duplicate candidate identification through vector similarity — is safe to automate because it produces candidates for review, never modifications to files.
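
The bounded failure mode of read-only detection can be made concrete. A minimal sketch, assuming an in-memory `{title: body}` mapping; the `detect_orphans` name and layout are illustrative assumptions, not the package's actual scripts (which are shell hooks):

```python
import re

# Capture wiki-link targets like [[note title]] (stop at ], |, or #).
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def detect_orphans(notes):
    """Read-only orphan detection over {title: body}.

    Returns a report of orphan *candidates* for review and never
    modifies anything, so the worst failure mode is a false alert
    that a reviewer dismisses.
    """
    linked = set()
    for body in notes.values():
        for target in WIKI_LINK.findall(body):
            linked.add(target.strip())
    return sorted(set(notes) - linked)
```

Because the function only reads, it can run on every write event without risk; a missed link produces a dismissible flag, not a corrupted note.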

Now consider what happens when remediation gets something wrong. An automated link-adder connects notes that share vocabulary but not meaning. An auto-archiver moves a note the human was actively developing. A description rewriter replaces a nuanced phrasing with a generic summary. In every case, the outcome is content corruption — changes that are harder to fix than the original problem because the incorrect state looks valid. Since [[over-automation corrupts quality when hooks encode judgment rather than verification]], the keyword-matched link is a paradigmatic example: the link exists, points to a real note, and structurally looks identical to a genuine connection. Nothing in the system flags it as wrong because nothing can distinguish it from a real link without re-performing the judgment that should have been done correctly in the first place.

The asymmetry runs deeper than false positive rates. Detection errors are self-correcting because they present themselves for evaluation. Every detection result, whether correct or incorrect, passes through a decision point where judgment can filter it. Remediation errors are self-concealing because they modify the state that subsequent detection operates on. A wrong link, once added, becomes part of the graph topology that future connection-finding traverses. A wrong archive, once executed, removes the note from the working set that future maintenance scans. The error does not surface for review because the system now treats the corrupted state as the ground truth. This is also why [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] structurally separate their detection phase (comparing desired state to actual state) from their remediation phase (correcting the drift) — the detection comparison is always safe to schedule because it only reads, while the correction actions range from fully automated to judgment-requiring depending on the operation.

This is why every mature automation system converges on the same architecture. Wikipedia's ClueBot NG can process 1.5 million edits per day because its detection is aggressive — pattern matching against known vandalism signatures, computing edit distance metrics, flagging anomalous patterns — while its remediation is conservative: revert only when confidence exceeds a high threshold, escalate ambiguous cases to human review. CI/CD pipelines run comprehensive test suites (detection) on every commit but require human approval for production deployments (remediation). The SRE practice of detect-aggressively-remediate-conservatively is not a preference but a structural response to this asymmetry.

For agent-operated knowledge systems, the principle translates directly. Since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], detection hooks are the safest automation investment because they combine the reliability of hook enforcement with the bounded failure mode of read-only operations. A hook that checks schema compliance on every write detects violations with zero risk of content corruption. A hook that auto-fixes schema violations by guessing default values would write incorrect content with the same reliability. Since [[schema validation hooks externalize inhibitory control that degrades under cognitive load]], schema validation works so well precisely because it occupies the intersection of deterministic AND read-only — but the read-only property is doing more work than the determinism property. A non-deterministic detection (like fuzzy duplicate matching) is still safe because it only flags. A deterministic remediation (like auto-formatting whitespace) is still risky if it modifies content in ways that change meaning. The safety of read-only detection extends further: since [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]], detection is trivially idempotent — running orphan detection ten times produces the same alert list as running it once — while remediation operations need explicit idempotency guards (compare-before-acting, upsert semantics, unique identifiers) to prevent repeated execution from compounding errors.
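
The compare-before-acting idempotency guard mentioned above can be sketched as follows. This is a hypothetical helper, not the package's API; the frontmatter layout it assumes (a `---`-delimited block at the top of the note) matches the notes in this package, but the function and field names are illustrative:

```python
def add_frontmatter_field(text, field, value):
    """Compare-before-acting guard for a write operation.

    Only inserts the field if it is absent, so running the
    remediation twice produces the same result as running it once.
    """
    lines = text.splitlines()
    # Assume frontmatter is delimited by '---' on the first line.
    if lines and lines[0] == "---":
        end = lines.index("---", 1)
        body = lines[1:end]
        if any(l.startswith(field + ":") for l in body):
            return text  # already present: acting again is a no-op
        lines.insert(end, f"{field}: {value}")
        return "\n".join(lines)
    return text
```

The guard is what makes the write safe to re-fire from a hook: the second invocation compares before acting and finds nothing to do.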

The practical design rule: automate detection to the maximum extent possible regardless of confidence level, because false alerts are cheap and correctable. Since [[confidence thresholds gate automated action between the mechanical and judgment zones]], remediation should be gated behind confidence thresholds that increase with the irreversibility of the change. A remediation that adds a wiki link (easily removed) can tolerate lower confidence than one that rewrites a description (harder to recover the original phrasing) or archives a note (may be lost from working memory even if technically recoverable). The confidence threshold is not about whether the detection was accurate but about whether the remediation is reversible.
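
A minimal sketch of reversibility-gated remediation, using the three-tier response pattern (auto-apply, suggest, log-only) that the confidence-thresholds note names. The threshold numbers and action names are illustrative assumptions, not calibrated values:

```python
# Hypothetical thresholds keyed to reversibility: the harder a
# change is to undo, the more certainty it demands before acting.
THRESHOLDS = {
    "add_link": 0.70,             # easily removed
    "rewrite_description": 0.90,  # original phrasing hard to recover
    "archive_note": 0.95,         # may drop out of working memory
}

def gate_remediation(action, confidence):
    """Return the permitted response tier for a proposed write."""
    threshold = THRESHOLDS[action]
    if confidence >= threshold:
        return "auto-apply"
    if confidence >= threshold - 0.2:
        return "suggest"  # surface for human or agent judgment
    return "log-only"
```

Note that the same confidence score yields different tiers for different actions: the gate measures reversibility of the write, not accuracy of the detection.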

This also explains why the vault's pipeline architecture separates detection from action through human or agent decision points. The /review skill detects problems and logs them as tension notes — detection with zero remediation. The /reweave skill performs both detection and remediation but does so through agent judgment in a fresh context window, never through automated rules. The separation is not incidental but structural: it preserves the asymmetry that makes detection safe and prevents remediation from running without the judgment that its write operations demand.

---
---

Relevant Notes:
- [[the determinism boundary separates hook methodology from skill methodology]] — complementary axis: determinism separates hook from skill by whether the operation requires judgment, while this note separates detection from remediation by whether the operation reads or writes state; both boundaries constrain automation design but along different dimensions
- [[over-automation corrupts quality when hooks encode judgment rather than verification]] — develops the remediation failure mode: keyword-matched links are automated remediation that corrupts because the writing was wrong, not because the detection was wrong; the corruption is invisible precisely because the system wrote something plausible
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — detection hooks inherit the full enforcement guarantee because they only read and report, making them safe candidates for the most aggressive automation possible
- [[schema validation hooks externalize inhibitory control that degrades under cognitive load]] — paradigmatic safe automation: schema validation is both deterministic AND read-only detection, which is why it works so well as a hook; this note explains why detection safety is the more fundamental property
- [[confidence thresholds gate automated action between the mechanical and judgment zones]] — operationalizes the remediation side: this note establishes that remediation needs gating, the confidence thresholds note specifies the three-tier response pattern (auto-apply, suggest, log-only) that calibrates how aggressively remediation acts based on measured certainty
- [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]] — complementary safety dimension: the read/write axis determines blast radius of errors while idempotency determines whether repeated execution compounds errors; read-only detection is trivially idempotent, but write operations need explicit idempotency guards even when the detection that triggered them was correct
- [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — architectural embodiment: reconciliation loops structurally separate detection (compare desired to actual) from remediation (correct drift), implementing the asymmetry this note describes as a scheduling pattern where detection runs freely while remediation gates behind judgment
- [[maintenance scheduling frequency should match consequence speed not detection capability]] — extends the detection aggressiveness principle: since detection is always safe, the constraint on detection frequency is not risk but consequence speed; instant-consequence problems justify per-event detection hooks while slow-consequence problems justify periodic checks, but both are safe to schedule aggressively because detection only reads state

Topics:
- [[maintenance-patterns]]
- [[agent-cognition]]

@@ -0,0 +1,56 @@
---
description: Without retirement criteria the automation layer grows monotonically — checks added when problems appear but never removed when problems vanish, accumulating noise that degrades the signal quality of
kind: research
topics: ["[[maintenance-patterns]]", "[[agent-cognition]]"]
methodology: ["Systems Theory", "Original"]
source: [[automated-knowledge-maintenance-blueprint]]
---

# automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues

Most discussion about automation focuses on what to build. The questions are familiar: is this operation deterministic enough? Is it idempotent? Does it cross the judgment boundary? Since [[the determinism boundary separates hook methodology from skill methodology]], the vault has a principled framework for deciding what to automate. But there is no corresponding framework for the complementary question: when should automation be removed? Since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], the vault has a principled model for how automation is created — patterns mature from documentation through skills into hooks as understanding hardens. Retirement is the missing lifecycle complement: the trajectory that moves automation from nothing to hook needs a corresponding trajectory that moves it from hook back to nothing when the problem it guards against ceases to exist.

This asymmetry matters because automation accumulates. Every hook that works well justifies its existence, and since [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]], the gravitational pull is always toward more checks, more enforcement, more automated monitoring. Without explicit retirement criteria, the automation layer grows monotonically — checks are added when problems are discovered but never removed when problems disappear. The result is an automation layer that produces increasing noise relative to signal, because some fraction of its checks are validating conditions that no longer fail.

Four signals indicate that an automated check should be retired.

**Zero catches over an extended period.** If a check has caught nothing for three or more months of active use, the condition it guards against has likely been structurally eliminated. A schema validation hook that once caught missing description fields might now fire on every write and pass every time, because the create skill already guarantees the field. The hook is not wrong — it still correctly validates — but it is redundant, consuming attention budget without producing value. The structural elimination might come from upstream fixes, methodology changes, or skill improvements that make the downstream check unnecessary. The diagnostic parallel is direct: since [[evolution observations provide actionable signals for system adaptation]], the observation protocol already identifies unused note types as signals of over-modeling. A zero-catch automation check is the infrastructure analog — an over-modeled protection against a problem that no longer exists.

**False positive rate exceeding true positive rate.** When a check produces more false alerts than genuine catches, it is consuming more time to handle spurious warnings than it saves by catching real problems. This is the alert fatigue threshold — the point at which the check actively degrades system attention rather than protecting system quality. Since [[confidence thresholds gate automated action between the mechanical and judgment zones]], the vault already tracks confidence as a gating mechanism for automated action. Retirement extends this: confidence thresholds determine whether to act on a single result, while retirement asks whether the check as a whole still produces enough true positives to justify its false positive cost. The distinction matters because a perfectly calibrated threshold on a check that catches nothing useful is still wasted infrastructure. Empirical tracking of the ratio is essential — without it, the check persists on the assumption that it is working, which is exactly the self-concealing error pattern that [[over-automation corrupts quality when hooks encode judgment rather than verification]] warns about for individual hooks, scaled up to the lifecycle level.

**Methodology change making the check irrelevant.** When the system's methodology evolves, checks designed for the previous methodology may no longer apply. A check that enforced a three-tier MOC hierarchy becomes irrelevant if the vault adopts a flat structure. A validation hook for a deprecated YAML field catches violations of a constraint that no longer exists. This signal is harder to detect automatically because it requires understanding the relationship between the check and the methodology it enforces — but since [[hook-driven learning loops create self-improving methodology through observation accumulation]], the learning loop that drives methodology evolution should also drive retirement of checks that the evolution rendered obsolete. The observation "hook X fires but the condition it validates no longer appears in CLAUDE.md" is itself a retirement signal.

**Replacement by a better mechanism.** A hook that validates description length might be superseded by a skill that evaluates description quality semantically. The old check is not wrong, but it is dominated — the new mechanism catches everything the old one catches plus more, making the old one redundant rather than complementary. The retirement criterion here is strict subsumption: the replacement must catch all true positives the original catches, not just some of them. Partial replacement should trigger evaluation of whether the unreplaced portion is still valuable, not automatic retirement.

The deeper principle is that automation has a lifecycle, and that lifecycle includes an end. Since [[the fix-versus-report decision depends on determinism reversibility and accumulated trust]], the vault already has a framework for how automation earns greater authority — graduating from report-only to auto-fix as trust accumulates through demonstrated accuracy. Retirement is the complementary lifecycle direction: where the trust boundary governs promotion, retirement criteria govern decommission. Together they define the full arc of automation authority, from initial deployment through graduated promotion to eventual retirement when the problem being guarded against is structurally eliminated. Since [[maintenance scheduling frequency should match consequence speed not detection capability]], the vault already reasons about maintenance temporally — matching check frequency to problem propagation speed. Retirement extends temporal reasoning further: not just how often should this check run, but should it still be running at all? A check whose condition propagates instantly (schema violations) should fire on every write event; a check whose condition develops over months (stale descriptions) should fire monthly; and a check whose condition has been structurally eliminated should not fire at all. Since [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]], retirement applies within each loop independently — a fast-loop schema check, a medium-loop orphan detector, and a slow-loop structural audit each face the same four retirement signals, evaluated against the evidence appropriate to their timescale. Retirement is the terminal tier of the scheduling spectrum — frequency zero.

This connects to a pattern from distributed systems: Wikipedia's bot governance framework blocks bots that create more cleanup work than they prevent. The principle transfers directly — an automated vault check that creates more false-alert handling work than it saves in genuine-catch work has crossed the same threshold. The vault's automation layer is itself a system that needs maintenance, and retirement is part of that maintenance. Since [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]], the safety analysis for automation focuses on whether operations are safe to repeat. But there is a complementary safety question: is an operation safe to continue indefinitely? Idempotency ensures that running a check too often is harmless. Retirement ensures that running a check after it is no longer useful does not accumulate noise that degrades the checks that are still useful.

The retirement urgency also depends on the type of automation. Since [[automated detection is always safe because it only reads state while automated remediation risks content corruption]], a redundant detection check that catches nothing wastes attention budget but causes no active harm — the worst outcome is a false alert that gets dismissed. A redundant remediation check that occasionally misfires, however, actively corrupts content while appearing to work correctly. This asymmetry means remediation automation should be evaluated for retirement more aggressively than detection automation, because the cost of keeping unnecessary remediation running is content corruption rather than mere attention waste.

The evidence for retirement decisions does not need to come from a dedicated tracking system alone. Since [[observation and tension logs function as dead-letter queues for failed automation]], the existing dead-letter infrastructure accumulates precisely the evidence that retirement criteria evaluate. A check that generates dead-letter entries flagged as false positives is producing its own retirement case. A check that generates zero entries of any kind over an extended period is producing evidence of the zero-catch signal. The dead-letter queue was designed for infrastructure repair, but it serves double duty as the empirical foundation for retirement decisions — the same observations that inform /rethink's triage also inform whether the automation that generated those observations should continue to exist.

The practical implementation requires tracking. Each automated check needs a minimal observability layer: when it last caught a genuine issue, its false positive rate over a rolling window, and whether the methodology it enforces is still current. Without this tracking, retirement decisions become subjective judgments about whether a check "feels" useful — which is exactly the kind of reasoning that [[over-automation corrupts quality when hooks encode judgment rather than verification]] warns against encoding in infrastructure. The tracking itself should be lightweight and deterministic: increment counters, record timestamps, compare to thresholds. The retirement decision based on that tracking can then be surfaced as a recommendation for human judgment rather than automated removal, maintaining the conservative asymmetry that protects against premature retirement of a check that might still be needed. The same hygiene principle extends beyond hooks to modular architectures: since [[module deactivation must account for structural artifacts that survive the toggle]], deactivated modules leave ghost YAML fields and orphaned validation rules that persist as structural debt. Automation retirement and module deactivation share the insight that removing a capability requires explicit cleanup — the system does not clean up after itself by default, and the artifacts of abandoned automation accumulate just as the artifacts of abandoned modules do.
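
A minimal observability layer of the kind described (counters, timestamps, threshold comparisons) might look like this sketch. The `CheckStats` name and the 90-day zero-catch window are assumptions for illustration; per the conservative asymmetry above, the output is a recommendation, never an automated removal:

```python
import time

class CheckStats:
    """Lightweight, deterministic tracking for one automated check."""

    ZERO_CATCH_WINDOW = 90 * 24 * 3600  # roughly three months

    def __init__(self):
        self.true_positives = 0
        self.false_positives = 0
        self.last_catch = None  # timestamp of the last genuine issue

    def record(self, genuine, now=None):
        """Increment counters; record a timestamp on genuine catches."""
        now = now if now is not None else time.time()
        if genuine:
            self.true_positives += 1
            self.last_catch = now
        else:
            self.false_positives += 1

    def retirement_signals(self, now=None):
        """Surface retirement signals as recommendations for judgment."""
        now = now if now is not None else time.time()
        signals = []
        if self.last_catch is None or now - self.last_catch > self.ZERO_CATCH_WINDOW:
            signals.append("zero catches over the window")
        if self.false_positives > self.true_positives:
            signals.append("false positives exceed true positives")
        return signals
```

The methodology-change and subsumption signals are deliberately absent: they require understanding what the check enforces, which is judgment the counters cannot encode.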

---
---

Relevant Notes:
- [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]] — addresses automation expansion pressure; this note adds the complementary lifecycle direction by defining when automation should contract
- [[over-automation corrupts quality when hooks encode judgment rather than verification]] — describes the creation-side failure mode; this note describes the retention-side failure mode where even correctly designed automation outlives its usefulness
- [[the determinism boundary separates hook methodology from skill methodology]] — determines what should be automated; this note determines when automated things should stop being automated
- [[confidence thresholds gate automated action between the mechanical and judgment zones]] — threshold calibration and this note's retirement signals share a dependency on empirical tracking of false positive rates as the feedback mechanism for automation governance
- [[evolution observations provide actionable signals for system adaptation]] — the diagnostic protocol this note extends: one row already identifies unused note types as over-modeling; automation retirement applies the same diagnostic logic to the automation layer itself
- [[idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once]] — idempotency makes automation safe to run; this note asks the prior question of whether it should still be running at all
- [[maintenance scheduling frequency should match consequence speed not detection capability]] — scheduling determines how often a check runs; retirement determines whether it should run at any frequency
- [[hook-driven learning loops create self-improving methodology through observation accumulation]] — the learning loop can drive both automation creation and retirement: accumulated observations that a hook catches nothing are the signal for retirement
- [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]] — the scheduling architecture where retirement operates: each of the three loops (fast, medium, slow) contains checks that may need retirement, and retirement is the terminal tier of the scheduling spectrum — frequency zero — applied per-check within any loop
- [[the fix-versus-report decision depends on determinism reversibility and accumulated trust]] — lifecycle complement: the fix-versus-report trust boundary governs promotion from report to auto-fix; this note governs the opposite direction, decommission when an automated check no longer justifies its existence; together they define the full lifecycle of automation authority
- [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — the trajectory describes promotion through encoding levels; retirement is the lifecycle complement where automation moves in the opposite direction, from hook back to nothing, completing the full arc from creation through maturation to decommission
- [[observation and tension logs function as dead-letter queues for failed automation]] — dead-letter entries accumulate evidence that informs retirement decisions: a check that generates only false-positive dead-letter entries is producing the very evidence that retirement criteria evaluate
- [[automated detection is always safe because it only reads state while automated remediation risks content corruption]] — the read/write asymmetry changes retirement urgency: a redundant detection check wastes attention but causes no harm, while a redundant remediation check that occasionally misfires actively corrupts content, making remediation automation the higher-priority retirement candidate
- [[module deactivation must account for structural artifacts that survive the toggle]] — extends the retirement principle to modular architecture: just as retired hooks should be removed to prevent noise accumulation, deactivated modules leave ghost YAML fields and validation rules that persist as structural debt; retirement hygiene applies at both the automation and module levels

Topics:
- [[maintenance-patterns]]
- [[agent-cognition]]

@@ -0,0 +1,35 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: A note's meaning includes not just its content but its network position — what links TO it reveals the contexts where the concept proved useful, extending definition beyond what the author wrote
|
|
3
|
+
kind: research
|
|
4
|
+
topics: ["[[graph-structure]]"]
|
|
5
|
+
methodology: ["Network Science"]
|
|
6
|
+
---
|
|
7
|
+
|
|
8
|
+
# backlinks implicitly define notes by revealing usage context
|
|
9
|
+
|
|
10
|
+
A note has two layers of definition. The first is explicit: what the author wrote in the body. The second is implicit: what other notes link to it and why. Since [[inline links carry richer relationship data than metadata fields]], each incoming link encodes not just THAT a connection exists but WHY it was made — the prose surrounding the link reveals the relationship type. Bidirectional linking makes this second layer visible.
|
|
11
|
+
|
|
12
|
+
When you create a note, you define its forward content. But as the graph grows, other notes begin linking to it from various contexts. Each incoming link says: "this concept was useful here, for this reason." The accumulation of these usage contexts reveals what the note actually means in practice — which may differ from or extend what the author intended.
|
|
13
|
+
|
|
14
|
+
This is definition through use. A dictionary defines words through explicit statements. A corpus defines words through usage patterns. Backlinks create a corpus of usage for each note. The note titled [[spreading activation models how agents should traverse]] might define itself as being about traversal mechanics, but its backlinks reveal every context where that traversal model illuminated something else. Those backlinks extend the note's meaning beyond what its body contains. Since [[stigmergy coordinates agents through environmental traces without direct communication]], this accumulation is stigmergic: each agent that links to a note leaves an environmental trace that extends the note's implicit meaning without coordinating with any other agent about what to link. The backlink neighborhood grows through independent local decisions, and the emergent pattern — the note's implicit definition — is richer than any single agent intended.
|
|
15
|
+
|
|
16
|
+
For agent traversal, this changes how to understand a note. Reading only the note's content gives you the explicit definition. Checking backlinks gives you the implicit definition — the roles this concept has played across the graph. Both are part of what the note "means."
|
|
17
|
+
|
|
18
|
+
This also changes what it means to update a note. When you revise a note's body, you change its explicit definition. But the implicit definition (what links to it and why) remains stable. This creates a kind of semantic constraint: a note with many backlinks has accumulated meaning that shouldn't be disrupted by casual rewrites. The backlinks represent commitments from other parts of the graph. While [[digital mutability enables note evolution that physical permanence forbids]], this implicit definition provides a counterweight — evolution is possible, but must respect accumulated usage. When [[backward maintenance asks what would be different if written today]], one answer is: consider what roles the note currently plays across the graph before changing what it claims.
|
|
19
|
+
|
|
20
|
+
The constraint becomes most concrete when titles change. Since [[tag rot applies to wiki links because titles serve as both identifier and display text]], each backlink is not just a semantic commitment but a grammatical one — the title functions as a clause in the linking note's prose. A note with thirty backlinks has thirty sentences across the vault that depend on its exact title phrasing. Renaming the note requires re-authoring every sentence that invoked it. This is implicit definition manifesting as maintenance cost: the more a note has been used (the richer its implicit definition), the more expensive it is to evolve its explicit identity.
|
|
21
|
+
|
|
22
|
+
The practical implication: when an agent lands on a note, checking backlinks provides context that forward-link traversal alone misses. You see not just where the note points (its references) but where it's been used (its usages). Both are necessary to understand what the note actually contributes to the knowledge graph.
Backlink accumulation follows the same power-law distribution as forward links. Since [[small-world topology requires hubs and dense local links]], MOCs naturally accumulate many backlinks as they become referenced from atomic notes across their topic territory. This backlink density reveals their role as network hubs — notes that many others point to are structurally central, whether or not they were designed that way. A note with 30 incoming links has become a hub through accumulated use, which is itself a form of implicit definition: the graph says "this concept matters enough that 30 other ideas needed to reference it."
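Surfacing those emergent hubs is a simple in-degree ranking. The sketch below assumes the `{title: [linking titles]}` index shape used above; the threshold of 30 echoes the example in the text and is an arbitrary illustrative cutoff, not a recommended constant.

```python
# Hypothetical sketch: rank notes by backlink count (in-degree) to find
# notes that have become hubs through accumulated use. The threshold is
# an illustrative cutoff mirroring the "30 incoming links" example.
def hubs(index: dict[str, list[str]], threshold: int = 30) -> list[tuple[str, int]]:
    """Return (title, backlink count) pairs at or above the threshold,
    most-linked first — the graph's structurally central concepts."""
    degrees = ((title, len(sources)) for title, sources in index.items())
    return sorted(
        [(t, d) for t, d in degrees if d >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Because backlink counts are power-law distributed, the ranked list typically has a short head of MOC-like hubs and a long tail of singly-referenced atomic notes.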
There's a related signal in the Topics footer: since [[cross-links between MOC territories indicate creative leaps and integration depth]], notes appearing in multiple distant MOCs are integration points where ideas from separate domains combine. Just as backlinks reveal usage across different arguments, multi-MOC membership reveals participation across different topic territories. Both are forms of implicit definition that extend what the note explicitly claims. A note's position in the graph — both via backlinks and via MOC membership — tells you what it has become beyond what it was created to be.
This implicit definition through usage creates an empirical test for a pattern that would otherwise rely on subjective judgment. Since [[federated wiki pattern enables multi-agent divergence as feature not bug]], when multiple agents produce parallel versions of the same concept, the question of whether the divergence was productive or merely noisy can be answered by examining backlink neighborhoods. If two federated versions attract distinct sets of incoming links from different usage contexts — one referenced in cognitive science arguments, the other invoked in systems design discussions — the backlink patterns reveal that the divergence surfaced genuinely different facets of the concept. If one version captures all meaningful incoming links while the other accumulates none, the backlinks empirically demonstrate that the federation was redundant. This transforms the federation decision from aesthetic judgment ("are these really different perspectives?") into something the graph itself can answer through accumulated use.
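That empirical test can be phrased as set overlap between backlink neighborhoods. A minimal sketch, assuming the two versions' incoming-link sets have already been collected (the Jaccard measure is one reasonable choice here, not a method the source prescribes):

```python
# Hypothetical sketch: compare the backlink neighborhoods of two federated
# versions of a concept. Low overlap with both sets non-empty suggests the
# divergence surfaced distinct facets; an empty set on one side suggests
# the federation was redundant.
def neighborhood_overlap(backlinks_a: set[str], backlinks_b: set[str]) -> float:
    """Jaccard similarity of two incoming-link sets (0.0 when both are empty)."""
    union = backlinks_a | backlinks_b
    if not union:
        return 0.0
    return len(backlinks_a & backlinks_b) / len(union)
```

A cognitive-science-facing version and a systems-design-facing version that score near 0.0 while both accumulate links are diverging productively; a version whose set stays empty has been answered by the graph.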
---
Source: [[tft-research-part2]]
Topics:
- [[graph-structure]]