arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0

@@ -0,0 +1,48 @@
---
description: Three tiers (full automation, partial automation, minimal infrastructure) create a ceiling for features like pipelines, hooks, and semantic search, while core markdown conventions work universally
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Original"]
source: [[agent-platform-capabilities-research-source]]
---

# platform capability tiers determine which knowledge system features can be implemented

Not all agent platforms offer the same infrastructure, and this asymmetry directly constrains which knowledge system features can function on each platform. A useful framing organizes platforms into three capability tiers, each creating a different ceiling for what a knowledge system built on that platform can actually do.

The first tier provides full automation infrastructure: context files with read-write access, lifecycle hooks at multiple event boundaries, skills with context forking and model selection, subagent spawning with independent context windows, and MCP integration for external tools. Claude Code exemplifies this tier. A knowledge system built here can implement automated session orientation, processing pipelines with fresh context per phase, validation on every file write, and semantic search. The full Ars Contexta methodology -- skill-encoded quality gates, progressive disclosure via hooks, recursive self-extension -- operates at this tier because, per [[context files function as agent operating systems through self-referential self-extension]], self-extension specifically requires the platform to grant write access to the context file that governs agent behavior.

The second tier provides partial automation: context files and some skill support but limited hooks, no native subagent spawning, and basic MCP integration. Cursor, Gemini CLI, and Codex fit here with varying feature profiles. A knowledge system on this tier can implement note templates, wiki links, MOCs, and YAML schemas, plus some skill-driven workflows, but loses the processing pipeline's isolation guarantee. Since [[fresh context per task preserves quality better than chaining phases]], the inability to spawn subagents means later pipeline phases run on degraded attention -- a quality loss that cannot be papered over with instruction-level workarounds.

The third tier provides minimal infrastructure: the agent can read instructions and execute file operations but lacks hooks, skills, and subagents. Any LLM with filesystem access fits here. The knowledge system reduces to core markdown conventions -- note templates, wiki links, MOCs, YAML frontmatter -- because, per [[local-first file formats are inherently agent-native]], these features are just files and text that any agent reads without infrastructure.

The critical design insight is that these tiers create a composability requirement, not a binary adoption decision. Core features (yaml-schema, wiki-links, atomic-notes, mocs) work at every tier because they are markdown conventions requiring no platform capabilities. Advanced features (processing-pipeline, hooks, validation, semantic-search) require specific platform infrastructure. A knowledge system generator must detect the platform tier and offer only features the platform can actually support, rather than degrading gracefully from a full-feature assumption. The tier framework becomes more precise when crossed with [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]]: tiers describe what platforms CAN do, layers describe what features NEED, and the intersection produces a capability matrix that maps exactly which features work where. The sharpest boundary in this matrix is the convention-to-automation layer transition, because, per [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], the jump from instruction compliance to hook enforcement is a categorical change in what the system can guarantee.
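
A minimal sketch of what that capability matrix could look like inside a generator, assuming illustrative capability flags and feature names rather than the package's actual module identifiers:

```ts
// Hypothetical sketch: gate feature offers by detected platform capabilities.
// Tier logic and feature names are illustrative, not arscontexta's real module IDs.
type Tier = 1 | 2 | 3;

interface PlatformCapabilities {
  hooks: boolean;        // lifecycle hooks at event boundaries
  skills: boolean;       // skill files with context forking
  subagents: boolean;    // independent context windows
  contextWrite: boolean; // read-write access to the context file
}

// Foundation/convention features need no infrastructure; automation features do.
const FEATURE_REQUIREMENTS: Record<string, (p: PlatformCapabilities) => boolean> = {
  "atomic-notes":        () => true,
  "wiki-links":          () => true,
  "yaml-schema":         () => true,
  "mocs":                () => true,
  "write-validation":    (p) => p.hooks,
  "processing-pipeline": (p) => p.subagents,
  "self-evolution":      (p) => p.contextWrite && p.hooks,
};

function detectTier(p: PlatformCapabilities): Tier {
  if (p.hooks && p.skills && p.subagents && p.contextWrite) return 1;
  if (p.hooks || p.skills) return 2;
  return 3;
}

// Offer only what the platform can actually support.
function offerableFeatures(p: PlatformCapabilities): string[] {
  return Object.entries(FEATURE_REQUIREMENTS)
    .filter(([, supported]) => supported(p))
    .map(([name]) => name);
}

// Example: a tier-2 platform with skills but no hooks or subagents.
const cursorLike = { hooks: false, skills: true, subagents: false, contextWrite: true };
console.log(detectTier(cursorLike), offerableFeatures(cursorLike));
// => 2 [ "atomic-notes", "wiki-links", "yaml-schema", "mocs" ]
```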

This is the design consequence: since [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]], the generator's job is not replicating a fixed design but parameterized production. Because [[eight configuration dimensions parameterize the space of possible knowledge systems]], the tiers directly constrain at least two dimensions — automation level (which IS the tier distinction) and processing intensity (which requires subagent infrastructure for heavy processing). The same methodology manifests differently at each tier because the infrastructure determines which positions along these dimensions are viable. But parameterization must be combined with restraint: since [[complex systems evolve from simple working systems]], the generator should target the minimum viable configuration for each tier rather than the maximum features the tier could theoretically support. A tier-one platform that could run twelve skills, five hooks, and three-phase parallel processing should still start simple and let friction determine what to add.

Even at tier one, capability is not unlimited. Since [[skill context budgets constrain knowledge system complexity on agent platforms]], the skill description budget (2% of context or 16,000 characters) caps active modules at roughly fifteen to twenty, meaning a tier-one platform cannot simply encode all methodology as skills -- the budget forces prioritization among workflows, and methodology that exceeds the budget falls back to instruction-level encoding with its weaker enforcement guarantees. The budget is a first-tier constraint: a problem you encounter precisely because you have the skill infrastructure, not despite it.
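
A rough sketch of the budget arithmetic, assuming an average per-skill description length (the 800-character figure is an assumption for illustration, not a measured value):

```ts
// Rough budget math: how many skill descriptions fit in the metadata budget.
const CONTEXT_WINDOW_CHARS = 800_000; // assumed: ~200k tokens at ~4 chars per token
const BUDGET_CHARS = Math.min(CONTEXT_WINDOW_CHARS * 0.02, 16_000); // 2% of context, capped at 16k
const AVG_DESCRIPTION_CHARS = 800;    // assumed: name + description + metadata per skill

const maxActiveSkills = Math.floor(BUDGET_CHARS / AVG_DESCRIPTION_CHARS);
console.log(maxActiveSkills); // => 20, in line with the fifteen-to-twenty ceiling
```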

Even within a single tier, platforms differ in ways that matter. Since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], translating automation-layer features between platforms at the same nominal tier requires decomposing quality guarantees into constituent properties rather than mechanically mapping event names. This maps to [[data exit velocity measures how quickly content escapes vendor lock-in]]: tier-universal features align with high exit velocity (plain text, portable conventions), while tier-specific features introduce platform dependencies that lower exit velocity. The tension is productive: advanced features are genuinely valuable -- since [[skills encode methodology so manual execution bypasses quality gates]], losing skill infrastructure means losing the methodology itself, not just convenience. And since [[schema enforcement via validation agents enables soft consistency]], features like automated schema validation require tier-one hooks to guarantee enforcement -- at lower tiers, the same validation degrades to instruction-based compliance that provably drifts as context fills. But designing for universality means the core knowledge graph (notes, links, structure) survives platform transitions even when the automation layer does not. The self-improvement loop adds another dimension: since [[bootstrapping principle enables self-improving systems]], only tier-one platforms where the agent has write access to its own context file and infrastructure can close the recursive improvement loop. Lower tiers can operate knowledge systems but cannot evolve them -- the system stays as it was configured rather than adapting to discovered friction.

---
---

Relevant Notes:
- [[context files function as agent operating systems through self-referential self-extension]] — identifies the read-write vs read-only distinction that creates the sharpest tier boundary: self-extension requires write access to context files
- [[local-first file formats are inherently agent-native]] — explains why core features work at every tier: plain text with embedded metadata needs no platform infrastructure
- [[skills encode methodology so manual execution bypasses quality gates]] — illustrates what gets lost at lower tiers: not just automation convenience but the encoded quality gates that skills carry
- [[data exit velocity measures how quickly content escapes vendor lock-in]] — the portability metric that favors tier-universal features: high exit velocity maps to tier-independent functionality
- [[fresh context per task preserves quality better than chaining phases]] — a first-tier feature that cannot degrade gracefully: without subagent spawning, the processing pipeline loses its quality preservation mechanism
- [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — the complementary decomposition: tiers describe what platforms CAN do, layers describe what features NEED; crossing them produces a capability matrix for mapping what works where
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — the hook-instruction gap defines the sharpest boundary between tiers one and two: the jump from suggested to guaranteed enforcement
- [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] — the design consequence: tiers create the parameterization requirement because the same methodology manifests differently at each tier
- [[complex systems evolve from simple working systems]] — constrains how to start at each tier: Gall's Law says target the minimum viable configuration, not the maximum features the platform could theoretically support
- [[schema enforcement via validation agents enables soft consistency]] — a concrete automation-layer feature that requires tier-one hooks to guarantee enforcement; at lower tiers it degrades to instruction-based compliance that provably drifts
- [[platform fragmentation means identical conceptual operations require different implementations across agent environments]] — the implementation cost: even platforms at the same nominal tier implement capabilities differently enough that code sharing is minimal
- [[bootstrapping principle enables self-improving systems]] — the self-improvement loop only closes at tier one where the agent has write access to its own context file and can create infrastructure; lower tiers cannot bootstrap
- [[platform adapter translation is semantic not mechanical because hook event meanings differ]] — reveals that even within a tier, event semantics differ enough that translating automation-layer features between platforms requires decomposing quality guarantees rather than mechanical event-name mapping
- [[eight configuration dimensions parameterize the space of possible knowledge systems]] — the tiers directly constrain at least two of the eight dimensions: automation level IS the tier distinction, and processing intensity requires subagent infrastructure that only tier-one platforms provide
- [[skill context budgets constrain knowledge system complexity on agent platforms]] — first-tier constraint: the description budget caps active skills at 15-20, so even tier-one platforms cannot encode unlimited methodology as skills, creating a resource allocation problem within the most capable tier

Topics:
- [[agent-cognition]]

@@ -0,0 +1,44 @@
---
description: The same operation -- validate schema on write, orient at session start, enforce processing pipelines -- needs different code for Claude Code, OpenClaw, Codex, Gemini CLI, and every other platform.
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Original"]
source: [[agent-platform-capabilities-research-source]]
---

# platform fragmentation means identical conceptual operations require different implementations across agent environments

The agent platform landscape in early 2026 presents a straightforward problem that resists straightforward solutions. Claude Code, OpenClaw, Codex, Cursor, Gemini CLI, Cline, Windsurf, Aider -- these platforms all enable similar conceptual operations but implement them through incompatible interfaces, file formats, naming conventions, and event models. The result is that building a knowledge system that works across platforms is not a matter of writing the logic once and deploying everywhere. Every piece of automation touches platform-specific infrastructure that must be reimplemented per environment.

Consider a concrete operation: "validate note schema after every file write." Since [[schema enforcement via validation agents enables soft consistency]], the validation logic itself is universal -- parse YAML frontmatter, check required fields against a template schema, report violations. That logic is maybe fifty lines of code that works identically everywhere. But the trigger mechanism is entirely platform-specific. On Claude Code, a PostToolUse hook fires after every tool invocation and surfaces warnings back to the agent in the same conversation turn. On OpenClaw, there is no per-operation hook -- `command:new` fires at session start, so the same guarantee requires a fundamentally different enforcement strategy. On Gemini CLI, hooks exist but with different event boundaries and response formats. On platforms without hooks at all, the validation must fall back to instruction-based reminders in the context file, which, per [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], degrades the guarantee from enforcement to suggestion.
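
The universal half of that operation is small enough to sketch. A minimal illustration assuming Node and illustrative required-field names; the package's actual write-validate script may differ:

```ts
// Minimal sketch of the platform-independent half: check that a note's YAML
// frontmatter contains the fields its template requires.
import { readFileSync } from "node:fs";

const REQUIRED_FIELDS = ["description", "kind", "topics"]; // illustrative field names

function validateFrontmatter(path: string): string[] {
  const text = readFileSync(path, "utf8");
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return [`${path}: missing YAML frontmatter block`];

  const body = match[1];
  const violations: string[] = [];
  for (const field of REQUIRED_FIELDS) {
    // Top-level key presence only; nested structure is left to a real YAML parser.
    if (!new RegExp(`^${field}\\s*:`, "m").test(body)) {
      violations.push(`${path}: missing required field "${field}"`);
    }
  }
  return violations;
}

// The trigger is what fragments: a per-write hook would call this after each
// file write on Claude Code; a session-start hook would sweep recent files elsewhere.
```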

This is not a packaging problem. Since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], you cannot write a mechanical translation layer that maps event names across platforms. PostToolUse on Claude Code fires per-operation, returns results to the same conversation turn, and runs outside the context window. No other platform provides exactly these three properties simultaneously. Translating the operation means decomposing the guarantee into its constituent properties and reconstructing each one through whatever mechanisms the target platform offers -- even if the reconstruction looks nothing like the original.

The fragmentation has an uneven topology. Since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], the foundation layer (files, wiki links, YAML schemas) and convention layer (naming patterns, quality standards in context files) are largely immune to fragmentation because, per [[local-first file formats are inherently agent-native]], they are just text that any LLM with filesystem access can implement. But the automation layer (hooks, skills, MCP) and orchestration layer (pipelines, subagent coordination, team processing) are where fragmentation creates real cost. Each platform implements these layers differently enough that code sharing is minimal.

Emerging standards partially address this. The AgentSkills standard defines a common SKILL.md format -- YAML frontmatter with name, description, and optional metadata followed by markdown instructions -- that works across twenty-plus platforms. The AGENTS.md standard provides a universal context file format. But both standards deliberately scope to the common denominator. AgentSkills omits Claude Code's `context:fork` for subagent execution, Cursor's Background Agents for async processing, and OpenClaw's hook-based skill discovery. The standard captures what any agent can read while leaving platform-differentiating capabilities outside its scope. This is a reasonable design choice -- universality requires leaving out what is not universal -- but it means the standards reduce fragmentation at the convention layer while leaving the automation and orchestration layers fragmented.

The practical consequence for knowledge system generation is an implementation multiplier. Since [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]], a generator must produce platform-specific output for every automation feature. The methodology stays constant (process through isolated phases, validate on every write, orient at session start) but the implementation differs per platform. This is not an argument against cross-platform generation -- it is an argument for understanding the cost structure. The foundation and convention layers are write-once. The automation and orchestration layers are write-per-platform. A generator that underestimates this cost produces systems that look complete but lack the enforcement layer that actually guarantees quality.

The N-platforms times M-operations multiplier has a structural parallel worth noting. As [[intermediate representation pattern enables reliable vault operations beyond regex]] describes, Pandoc solves the N-formats times M-formats conversion problem with N+M implementations by routing everything through a canonical AST. The fragmentation problem is the same shape: N platforms times M automation operations currently demands N*M adapters. An intermediate operations language -- a canonical representation of what a hook should achieve, decomposed into guarantee properties like timing, scope, and enforcement level -- could let a generator compile from that representation to platform-specific implementations. This would not eliminate the platform-specific code, but it would reduce the design problem from reimagining each operation per platform to writing one compiler per platform. Whether this abstraction is achievable in practice depends on whether the guarantee properties decompose cleanly, which, per [[platform adapter translation is semantic not mechanical because hook event meanings differ]], is genuinely uncertain -- but the architectural direction is sound.
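
A hypothetical sketch of what such an intermediate operations language could look like, with guarantee properties as data and one compiler per platform; the property names and platform outputs are assumptions, not an existing arscontexta format:

```ts
// Hypothetical intermediate representation for an automation operation:
// what the hook should achieve, decomposed into guarantee properties.
interface OperationIR {
  name: string;
  timing: "per-write" | "per-session" | "periodic"; // when the check must run
  scope: "single-file" | "whole-vault";             // what it must cover
  enforcement: "blocking" | "warning" | "reminder"; // how strongly it is guaranteed
}

// One compiler per platform (N + M, not N * M). Each compiler reconstructs the
// guarantee from whatever mechanisms the platform actually offers.
type Compiler = (op: OperationIR) => string;

const compilers: Record<string, Compiler> = {
  "claude-code": (op) =>
    op.timing === "per-write"
      ? `PostToolUse hook -> ${op.name} (${op.enforcement})`
      : `SessionStart hook -> ${op.name}`,
  openclaw: (op) =>
    // No per-operation hook: fall back to a session-start sweep over the vault.
    `command:new hook -> sweep ${op.scope} for ${op.name}`,
  minimal: (op) =>
    // No hooks at all: the guarantee degrades to an instruction in the context file.
    `context-file reminder -> ${op.name} (reminder only)`,
};

const validateSchema: OperationIR = {
  name: "validate-note-schema",
  timing: "per-write",
  scope: "single-file",
  enforcement: "warning",
};

console.log(compilers["claude-code"](validateSchema));
```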

The fragmentation topology also explains why, per [[data exit velocity measures how quickly content escapes vendor lock-in]], exit velocity drops sharply at the automation boundary. Foundation and convention features have high exit velocity -- they are portable text. But since [[operational memory and knowledge memory serve different functions in agent architecture]], operational coordination (queues, hooks, pipelines) inherits the fragmentation of the automation layer, making operational memory platform-locked while knowledge memory remains portable. A knowledge system that migrates platforms preserves its claims and connections but loses its processing infrastructure, which must be rebuilt from scratch on the new platform.

---
---

Relevant Notes:
- [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] -- provides the analytical framework for WHERE fragmentation bites: foundation and convention layers escape it, automation and orchestration layers suffer it
- [[platform adapter translation is semantic not mechanical because hook event meanings differ]] -- explains WHY fragmentation resists simple solutions: the same event name on two platforms carries different timing, scope, and enforcement semantics
- [[platform capability tiers determine which knowledge system features can be implemented]] -- the complementary analysis: tiers describe what platforms CAN do, fragmentation describes how differently they DO it even at the same tier
- [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] -- the design consequence: fragmentation forces parameterized generation rather than fixed replication
- [[the AgentSkills standard embodies progressive disclosure at the skill level]] -- partial solution: standards like AgentSkills reduce fragmentation at the skill metadata level but deliberately omit platform-specific capabilities
- [[intermediate representation pattern enables reliable vault operations beyond regex]] -- structural parallel: the IR note describes how Pandoc solves N*M format conversions with N+M implementations via a canonical AST; fragmentation is the same N*M problem applied to platform adapters, suggesting an intermediate operations language could reduce the implementation multiplier
- [[local-first file formats are inherently agent-native]] -- foundation: explains WHY foundation-layer features escape fragmentation entirely; plain text with embedded metadata requires no platform infrastructure, which is why the fragmentation topology is uneven rather than uniform
- [[schema enforcement via validation agents enables soft consistency]] -- concrete instance: the running example (validate schema on every write) IS the operation this note describes; the fragmentation note shows how this universal validation logic fragments across platform trigger mechanisms
- [[data exit velocity measures how quickly content escapes vendor lock-in]] -- measurement: the write-once vs write-per-platform distinction maps directly to exit velocity gradients; automation-layer features have low exit velocity precisely because they fragment across platforms
- [[operational memory and knowledge memory serve different functions in agent architecture]] -- memory topology: knowledge memory lives at foundation/convention layers and is portable across platforms, while operational memory requires automation/orchestration layers and inherits their fragmentation
- [[blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules]] — a distribution-format response to fragmentation: instead of pre-built per-platform code, blueprints ship semantic quality guarantees that each agent compiles into native implementations, reducing the N*M multiplier to N+M (one blueprint per operation, one compiler per platform)

Topics:
- [[agent-cognition]]

@@ -0,0 +1,45 @@
---
description: Derivation can produce systems with 12 hooks and 8 processing phases because the claim graph justifies them, but users abandon within weeks — a complexity budget constrains initial output to a minimum
kind: research
topics: ["[[design-dimensions]]"]
methodology: ["Systems Theory", "Original"]
source: [[knowledge-system-derivation-blueprint]]
---

# premature complexity is the most common derivation failure mode
|
|
10
|
+
|
|
11
|
+
When a derivation engine traverses a well-developed claim graph, it can justify remarkable sophistication. Since [[eight configuration dimensions parameterize the space of possible knowledge systems]], the research says atomic notes enable composability, so the engine specifies atomic granularity. Composability requires explicit linking, so it adds typed wiki links. Explicit links demand deep navigation, so it generates a three-tier MOC hierarchy. Deep navigation needs maintenance, so it adds reweaving cycles and schema validation hooks. Each step follows logically from the last, and since [[configuration dimensions interact so choices in one create pressure on others]], the cascade is genuinely justified — atomic granularity really does pressure toward heavy processing and deep navigation. The problem is that a system justified at every step can still be unjustified as a whole, because the user receiving it cannot absorb twelve interacting design decisions simultaneously.
|
|
12
|
+
|
|
13
|
+
This is the derivation-specific application of Gall's Law. Since [[complex systems evolve from simple working systems]], even a perfectly justified complex configuration will collapse if deployed all at once, because the micro-adaptations that make each component actually work — the habits of filing, the muscle memory of linking, the recognition of when a MOC needs splitting — can only develop through use. Because [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]], premature complexity has a concrete diagnostic beyond Gall's Law: it is deploying a higher regime's infrastructure into a lower regime's system. Automated community detection (Regime 3) in a 30-note vault (Regime 1) is not just premature in the abstract — it is premature in the precise sense that the mechanism only justifies itself at the scale it serves, and the operational experience that would make it valuable does not yet exist. A derivation engine that outputs the theoretically optimal system is like an architect who designs the perfect house but hands over blueprints that require living in the finished building to understand. The user needs to inhabit a simpler version first.
|
|
14
|
+
|
|
15
|
+
The failure signature is distinctive. Unlike organic over-engineering, which builds up gradually through [[PKM failure follows a predictable cycle]] (Stage 4), derivation-induced complexity arrives all at once. The user opens their new system and encounters a context file with 200 lines of methodology, five note types with different schemas, hooks that fire on every save, and a processing pipeline they don't understand. The abandonment timeline accelerates because there is no period of working simplicity to build investment. The user never develops the attachment that comes from a system that started simple and grew with them.
|
|
16
|
+
|
|
17
|
+
The concrete constraint is a complexity budget. Since [[ten universal primitives form the kernel of every viable agent knowledge system]], the budget has a floor — the kernel that every viable system needs regardless of domain — and the question is how far above that floor initial derivation should go. Initial derivation should be limited to the minimum viable configuration for the use case: two to three note types, two to four MOCs, four or fewer processing phases, and hooks only for the highest-value automation. This is not a suggestion to be overridden when the claim graph has more to say — it is a hard ceiling that reflects the reality that since [[derived systems follow a seed-evolve-reseed lifecycle]], the right time to add complexity is the evolution phase, where friction signals tell you exactly which elaboration is justified. The derivation engine's role at seed time is to produce a working kernel and embed enough context for the system to grow intelligently, not to front-load every insight the research graph contains. Since [[use-case presets dissolve the tension between composability and simplicity]], presets operationalize the complexity budget as curated module selections: a Research Vault preset activates thirteen modules (far fewer than the full catalog) because those thirteen are what the use case justifies. The preset author has already applied the complexity budget, and the user who selects the preset inherits that constraint without needing to understand or enforce it themselves.
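
As a rough illustration of how a derivation step could enforce this ceiling mechanically (the configuration shape, module names, and exact counts below are hypothetical, not the plugin's actual data model), the budget reduces to a handful of count checks:

```python
# Hypothetical sketch: check a derived seed configuration against the complexity budget.
# The ceilings mirror the limits stated above; the config structure is assumed, not real.
COMPLEXITY_BUDGET = {
    "note_types": 3,         # two to three note types
    "mocs": 4,               # two to four MOCs
    "processing_phases": 4,  # four or fewer processing phases
    "hooks": 2,              # hooks only for the highest-value automation
}

def budget_violations(config: dict) -> list[str]:
    """Return the ways a candidate seed exceeds the budget; empty list means deployable."""
    violations = []
    for dimension, ceiling in COMPLEXITY_BUDGET.items():
        count = len(config.get(dimension, []))
        if count > ceiling:
            violations.append(f"{dimension}: {count} specified, budget allows {ceiling}")
    return violations

# A claim graph can justify twelve hooks; the budget still cuts the seed back.
seed = {
    "note_types": ["claim", "source", "synthesis"],
    "mocs": ["research", "methods"],
    "processing_phases": ["capture", "reduce", "connect", "reflect"],
    "hooks": [f"hook-{i}" for i in range(12)],
}
print(budget_violations(seed))  # ['hooks: 12 specified, budget allows 2']
```

The point of the sketch is only that the ceiling is checkable at seed time; the elaborations it rejects are not discarded but deferred into the evolution guidelines discussed later in this note.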

What makes this the most common failure mode rather than just one risk among many is the structural incentive. Since [[derivation generates knowledge systems from composable research claims not template customization]], the derivation process rewards thoroughness — a richer claim graph produces more justified choices, and each justified choice adds apparent value. The agent performing derivation is optimizing for completeness and coherence, which are genuine virtues in research but liabilities in system deployment. There is no natural brake on complexity in the derivation process itself, which is why the complexity budget must be an external constraint rather than an emergent property.

The complexity budget also interacts with the dimension coupling that makes derivation non-trivial. Because dimension choices cascade — atomic granularity forces explicit linking forces deep navigation forces heavy maintenance — each choice beyond the minimum viable set amplifies across multiple dimensions. A system with three initial choices might involve six total dimension settings after cascading. A system with eight initial choices might involve twenty or more when interaction pressures are satisfied. The amplification means that small increases in initial complexity produce disproportionate increases in system-level complexity, making the budget more important than a linear count would suggest.

The shadow side is that the complexity budget risks under-derivation. A system so minimal that it provides no structural guidance fails in the opposite way — the user has to discover everything through friction rather than benefiting from what the research already knows. The concrete antidote to premature complexity is [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]], which provides measurable thresholds — add after five manual repetitions, split above 500-character descriptions, remove after three unused sessions, cap at fifteen to twenty active modules — that prevent the justified-at-every-step-but-overwhelming-as-a-whole failure by requiring demonstrated need before each addition. The resolution is that the budget constrains the deployed system, not the derivation agent's knowledge. The context file should contain evolution guidelines — "when you notice X friction, add Y structure" — that encode the claim graph's insights as conditional advice rather than upfront deployment. Since [[justification chains enable forward backward and evolution reasoning about configuration decisions]], the user can trace from experienced friction back through the chain to the specific claims that justify adding the deferred complexity, making the budget a postponement rather than a loss. The system starts simple but carries the intelligence to grow precisely where it needs to.

This note forms a trio of derivation anti-patterns with two siblings. Since [[configuration paralysis emerges when derivation surfaces too many decisions]], exposing too much of the claim graph during setup overwhelms the user before deployment even begins — a failure of decision presentation rather than system output. And since [[false universalism applies same processing logic regardless of domain]], reducing initial features to stay within the complexity budget can backfire if the remaining features default to one domain's processing patterns. The three anti-patterns constrain derivation from different directions: premature complexity means too much, false universalism means the wrong kind, and configuration paralysis means too many choices.

---

Relevant Notes:
- [[complex systems evolve from simple working systems]] — provides the theoretical foundation via Gall's Law, but this note adds the concrete derivation-time constraint: a complexity budget with specific limits rather than a general principle about evolutionary design
- [[PKM failure follows a predictable cycle]] — premature complexity maps to Stage 4 (Over-engineering) of the failure cascade, but applied at derivation time rather than during use; the derivation engine can inject Stage 4 conditions before the user even begins
- [[derivation generates knowledge systems from composable research claims not template customization]] — derivation's composability is both its strength and its risk: because the claim graph justifies each choice individually, the composed system can be locally justified but globally unsustainable
- [[derived systems follow a seed-evolve-reseed lifecycle]] — the complexity budget is the seeding constraint: minimum viable configuration at seed time, with evolution guidelines encoding when to add complexity rather than front-loading it
- [[configuration dimensions interact so choices in one create pressure on others]] — dimension coupling amplifies premature complexity because each added dimension choice cascades through others, so an initial system with many choices accumulates more interaction pressure than one with few
- [[the derivation engine improves recursively as deployed systems generate observations]] — the mechanism that calibrates the complexity budget over time: deployment observations teach the engine which elaborations users absorb and which overwhelm, so the budget becomes empirically grounded rather than theoretically estimated
- [[configuration paralysis emerges when derivation surfaces too many decisions]] — sibling anti-pattern at a different point in the derivation timeline: premature complexity deploys too much justified complexity all at once while configuration paralysis overwhelms users with too many choices during setup; both are failure modes of exposing too much of the claim graph
- [[false universalism applies same processing logic regardless of domain]] — complementary anti-pattern: premature complexity deploys too much of the right logic while false universalism deploys the wrong logic; avoiding one can trigger the other when reducing initial features defaults the remainder to one domain's processing patterns
- [[justification chains enable forward backward and evolution reasoning about configuration decisions]] — the mechanism that makes the complexity budget's shadow side manageable: evolution guidelines encode claim-graph insights as conditional advice, and justification chains enable users to trace from friction back to the specific claims that justify adding deferred complexity
- [[ten universal primitives form the kernel of every viable agent knowledge system]] — the kernel defines the floor of the complexity budget: minimum viable configuration cannot go below these ten primitives, and the budget constrains initial deployment to somewhere between the kernel and the fully justified but unsustainable maximum
- [[progressive schema validates only what active modules require not the full system schema]] — prevents the validation equivalent of premature complexity: without progressive scoping, enabling basic modules forces compliance with advanced schemas, creating daily-use friction from features the user never adopted
- [[use-case presets dissolve the tension between composability and simplicity]] — operationalizes the complexity budget: preset authors apply the budget once as curated module selections for each use case, so users inherit the constraint without needing to evaluate every module individually
- [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — the concrete antidote: measurable thresholds (5-repetition addition, 500-char split, 3-session removal, 15-20 module cap) prevent premature complexity by requiring demonstrated need before each module addition, operationalizing the complexity budget as calibrated checkpoints rather than a single constraint
- [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]] — regime-specific diagnostic: gives premature complexity a concrete frame beyond Gall's Law — deploying Regime 3 infrastructure (automated community detection, four-tier MOC hierarchy) into a Regime 1 system violates the complexity budget because the mechanisms only justify themselves at the scale they serve

Topics:
- [[design-dimensions]]
@@ -0,0 +1,336 @@
---
description: Domain-specific failure modes and prevention strategies — what breaks in knowledge systems across domains, why it breaks, and how the plugin prevents it
kind: guidance
status: active
topics: ["[[failure-modes]]"]
---

# prevent domain-specific failure modes through the vulnerability matrix

Knowledge systems fail in predictable ways. Since [[PKM failure follows a predictable cycle]], the plugin can anticipate and prevent most failures before they occur. This doc catalogs failure modes across domains and specifies prevention strategies the plugin implements.

This doc tells the plugin WHAT to watch for, per domain, and HOW to prevent it.

## The 7-Stage Failure Cascade

Knowledge system failure is not sudden. It follows a predictable cascade where each stage creates the conditions for the next. Early intervention prevents the later stages from materializing.

| Stage | Name | What Happens | Typical Onset |
|-------|------|-------------|---------------|
| 1 | **Collector's Fallacy** | Content enters faster than it's processed. Saving feels productive. Inbox grows. | Days to weeks after setup |
| 2 | **Structure Mirage** | User organizes the backlog into folders and tags. Feels like progress but produces no understanding. | Weeks after Stage 1 |
| 3 | **Orphan Drift** | New notes are created quickly without connection-finding. The graph fragments into temporal layers that don't reference each other. | Weeks to months |
| 4 | **Schema Erosion** | Without enforcement, notes gradually deviate from schema. Fields go missing, descriptions go stale, enum values drift. | Concurrent with Stage 3 |
| 5 | **Navigation Breakdown** | MOCs become stale. Since [[stale navigation actively misleads because agents trust curated maps completely]], agents follow outdated maps and miss recent notes. | Months after Stage 3 |
| 6 | **Retrieval Failure** | Users search for knowledge they know exists and cannot find it. Trust in the system drops. The vault feels unreliable. | Follows Stage 5 |
| 7 | **Abandonment** | The maintenance burden exceeds perceived value. The system is abandoned. Since [[premature complexity is the most common derivation failure mode]], this stage is almost always traceable to excessive complexity at Stage 0 (/setup). | Follows Stage 6 |

**The cascade is not inevitable.** Each stage has specific interventions. The critical insight: Stages 1-3 are low-cost to fix. Stages 5-7 require structural intervention (/reseed). The plugin's job is to detect and intervene at Stages 1-3 before the cascade progresses.

### Early-Stage Interventions

| Stage | Detection Signal | Intervention | Plugin Mechanism |
|-------|-----------------|-------------|------------------|
| 1 (Collector's) | Inbox count > 2x weekly processing capacity | Inbox pressure alert + processing pipeline activation | /health checks inbox depth |
| 2 (Structure Mirage) | Notes with tags/folders but zero wiki link connections | Flag "organized but unprocessed" notes | Orphan detection counts notes with topics but zero incoming links |
| 3 (Orphan Drift) | Orphan accumulation rate > 15% of new notes | Batch connection-finding pass | /recommend suggests reflect passes on unconnected notes |
| 4 (Schema Erosion) | Schema compliance < 80% on critical fields | Validation report with specific fix suggestions | /health runs schema compliance checks |
| 5 (Navigation Breakdown) | MOC links point to moved/deleted notes; MOCs not updated in 30+ days | MOC health audit with stale link detection | /architect proposes MOC restructuring |

**The principle:** prevention at Stage 0 (/setup) matters most. Since [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]], a minimal starting system prevents the premature complexity that makes the entire cascade more likely.

### Domain-Specific Cascade Manifestations

The same cascade looks different in each domain:

| Domain | Stage 1 (Collector's) Looks Like | Stage 5 (Navigation) Looks Like |
|--------|----------------------------------|----------------------------------|
| Research | Papers saved but unread, bibliography grows, highlights unextracted | Literature review cites sources whose underlying claims have changed |
| Therapy | Journal entries written but never reviewed for patterns | Pattern MOC lists triggers from 3 months ago, current triggers unlisted |
| PM | Meeting notes captured but decisions unextracted | Project MOC shows "active" decisions that were superseded |
| Creative | Worldbuilding ideas captured but never canonicalized | Character sheets contradict scenes written after the last canon update |
| Personal | Goals set but never reviewed, areas neglected | Life area MOC shows "active" status on goals abandoned months ago |
| Trading | Trade entries logged but thesis not tracked | Strategy MOC references market conditions that have changed |

## Agent-Specific Failure Modes

These failure modes are unique to agent-operated knowledge systems. They have no direct analogue in human PKM because they arise from the agent's relationship to the vault:

### Stale MOC Navigation

**The pattern:** MOCs are curated once then neglected. New notes are created but MOCs are not updated to include them. The agent trusts MOC-based navigation and treats unlisted notes as non-existent.

**Why it happens:** Since [[stale navigation actively misleads because agents trust curated maps completely]], agents navigate by MOC and never "stumble on" unfiled items the way humans browse folders.

**Prevention:**
- Pipeline enforcement: /reflect phase MUST update MOCs as part of note creation
- MOC health check: compare notes with matching `topics` against notes actually listed in the MOC's content
- Orphan detection distinguishes "no topics field" (schema failure) from "topics present but MOC doesn't list note" (stale navigation)

**Monitoring signal:** Count of notes with `topics: [[some-moc]]` that are NOT linked from that MOC's content.
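
A minimal sketch of that check, assuming the conventions used elsewhere in these notes (a `topics:` frontmatter line and `[[wiki links]]` in MOC bodies; the vault layout and helper names are hypothetical):

```python
# Hypothetical sketch: find notes whose topics name a MOC that never links back to them.
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def links_in(path: Path) -> set[str]:
    return {m.strip() for m in WIKILINK.findall(path.read_text(encoding="utf-8"))}

def stale_moc_entries(vault: Path) -> list[tuple[str, str]]:
    """Return (note, moc) pairs where the note claims a topic MOC that does not list it."""
    notes = {p.stem: p for p in vault.rglob("*.md")}
    stale = []
    for name, path in notes.items():
        text = path.read_text(encoding="utf-8")
        topics_line = next((line for line in text.splitlines() if line.startswith("topics:")), "")
        for moc in WIKILINK.findall(topics_line):
            moc_path = notes.get(moc.strip())
            if moc_path and name not in links_in(moc_path):
                stale.append((name, moc.strip()))
    return stale

# print(stale_moc_entries(Path("vault")))  # each pair is one unit of stale navigation
```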

### Cognitive Outsourcing

**The pattern:** The agent processes vault content, keeps the vault healthy, surfaces insights — but the human stops engaging with the knowledge. The human outsources understanding to the agent. The vault stays healthy but the human's comprehension atrophies.

**Why it happens:** Agent processing makes knowledge work feel effortless. The human asks "what should I think about X?" instead of thinking about X. The processing pipeline produces insights, but the human never internalizes them.

**Prevention:**
- Processing pipeline requires human judgment gates — the agent proposes, the human decides
- Since [[the architect does not auto-implement]], evolution recommendations go to the human for approval
- Periodic prompts requiring human interpretation: "I detected this pattern — what does it mean to you?"
- In therapy domains (see [[therapy journal uses warm personality with pattern detection for emotional processing]] ethical constraints), the agent surfaces patterns but explicitly withholds interpretation

**Monitoring signal:** Ratio of agent-initiated changes to human-initiated changes. If consistently > 10:1, the human may be outsourcing cognitive work.

### Context Degradation Across Sessions

**The pattern:** Each agent session starts with partial context. Important information from previous sessions is lost because the context file doesn't capture it, session logs aren't consulted, or the agent skips orientation.

**Why it happens:** Since [[LLM attention degrades as context fills]], agents work within context windows. Cross-session continuity depends on orientation artifact quality (context files, session logs, MOCs). Poor artifacts create agents that repeat work, miss connections, and contradict previous sessions.

**Prevention:**
- Context file generated with domain-specific orientation protocol
- Session logging captures decisions and discoveries, not just actions
- Start-of-session orientation reads context file + recent session log + relevant MOC
- Pipeline handoff files carry state between phases explicitly

**Monitoring signal:** Frequency of duplicate note creation or contradictory notes across sessions.

### Processing Bias Accumulation

**The pattern:** The agent's extraction and connection-finding develops systematic biases over time. It favors certain connection types, over-extracts from familiar claim patterns, and misses novel insights that don't match established patterns.

**Why it happens:** Skills encode specific extraction patterns. Over many processing cycles, these patterns become self-reinforcing: the agent finds what the skill tells it to look for, and what it finds confirms the skill's approach.

**Prevention:**
- Periodic /rethink passes that challenge processing assumptions
- Domain example diversity — multiple reference domains prevent single-pattern fixation
- Human review of extraction results to catch systematic blind spots
- Since [[derived systems follow a seed-evolve-reseed lifecycle]], reseed provides a principled reset when processing bias becomes systemic

**Monitoring signal:** Declining novelty in extracted claims — new sources produce claims that increasingly resemble existing ones.

## Monitoring Signal Summary

The plugin should track these signals across all domains. When signals cross thresholds, /recommend surfaces them as actionable recommendations.

| Signal | What It Measures | Healthy | Warning | Critical |
|--------|-----------------|---------|---------|----------|
| Inbox depth | Accumulation vs processing rate | < 10 items | 10-25 items | > 25 items |
| Orphan accumulation rate | Connection-finding pipeline health | < 10% of new notes | 10-20% | > 20% |
| Schema compliance | Metadata integrity | > 90% critical fields | 80-90% | < 80% |
| MOC staleness | Navigation currency | Updated within 14 days | 14-30 days | > 30 days |
| Description quality | Retrieval effectiveness | > 80% pass recite test | 60-80% | < 60% |
| Agent/human ratio | Cognitive outsourcing risk | < 5:1 | 5-10:1 | > 10:1 |
| Cross-session duplicates | Context degradation | 0 per month | 1-2 per month | > 2 per month |
| Claim novelty | Processing bias | Novel claims > 50% | 30-50% | < 30% |
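
A sketch of how /recommend might grade measured values against this table (the threshold numbers are copied from the rows above; the signal keys and the measurement step itself are assumptions, not the plugin's actual interface):

```python
# Hypothetical sketch: classify a measured signal using the warning/critical bounds above.
# For signals where higher is better (compliance, description quality, novelty) the bounds
# act as minimums; for the rest they act as maximums.
THRESHOLDS = {
    # signal: (warning_bound, critical_bound, higher_is_better)
    "inbox_depth":             (10, 25, False),
    "orphan_rate_pct":         (10, 20, False),
    "schema_compliance_pct":   (90, 80, True),
    "moc_staleness_days":      (14, 30, False),
    "description_quality_pct": (80, 60, True),
    "agent_human_ratio":       (5, 10, False),
    "cross_session_dupes":     (1, 2, False),
    "claim_novelty_pct":       (50, 30, True),
}

def grade(signal: str, value: float) -> str:
    warning, critical, higher_is_better = THRESHOLDS[signal]
    if higher_is_better:
        return "healthy" if value > warning else "warning" if value >= critical else "critical"
    return "healthy" if value < warning else "warning" if value <= critical else "critical"

print(grade("inbox_depth", 18))             # warning
print(grade("schema_compliance_pct", 75))   # critical
```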

## Universal Failure Modes

These affect every domain regardless of content type:

### 1. Accumulation Without Processing

**The pattern:** Content enters the system faster than it's processed. Inbox grows. Unprocessed items create guilt. The system becomes a dumping ground.

**Why it happens:** Since [[throughput matters more than accumulation]], capture is easy and getting easier (voice transcription, AI-assisted recording). Processing requires effort and judgment.

**Prevention:**
- Inbox pressure alerts when unprocessed items exceed threshold
- Processing pipeline that reduces effort per item
- WIP limits on inbox size — since [[WIP limits force processing over accumulation]]
- Visual indicators of processing debt

### 2. Structure Without Processing

**The pattern:** Notes are filed into folders, tagged, and organized — but never synthesized. The system looks organized but contains no original thinking.

**Why it happens:** Since [[structure without processing provides no value]], organizing feels like progress but produces no new understanding.

**Prevention:**
- Processing pipeline requires transformation, not just filing
- Since [[generation effect gate blocks processing without transformation]], the reduce phase demands active transformation
- Health metrics that measure synthesis (new connections created) not just accumulation (new notes filed)

### 3. Orphan Note Accumulation

**The pattern:** Notes exist but aren't connected to anything. No incoming links, no MOC membership. Effectively invisible.

**Why it happens:** Notes created without running the connect phase. Quick captures that never get integrated.

**Important reframe:** Orphan notes are seeds, not failures. A note without connections is potential awaiting integration, not garbage requiring deletion. The metric that matters is the orphan accumulation RATE, not the absolute count. A steady-state of 5-10% orphans is healthy (recent captures awaiting processing). A growing orphan rate signals pipeline breakdown.
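
A sketch of measuring the rate rather than the count, under the assumptions that "recent" means notes touched within the last processing window, that modification time is an acceptable proxy for creation time, and that incoming links can be read straight from the markdown:

```python
# Hypothetical sketch: orphan accumulation rate over recent notes, not absolute orphan count.
import re
import time
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def orphan_rate(vault: Path, window_days: int = 30) -> float:
    """Fraction of recent notes that no other note links to."""
    notes = list(vault.rglob("*.md"))
    linked = {t.strip() for p in notes for t in WIKILINK.findall(p.read_text(encoding="utf-8"))}
    cutoff = time.time() - window_days * 86400
    recent = [p for p in notes if p.stat().st_mtime >= cutoff]  # mtime as a proxy for creation
    if not recent:
        return 0.0
    orphans = [p for p in recent if p.stem not in linked]
    return len(orphans) / len(recent)

# A steady 0.05-0.10 is the healthy seed bed described above; a rising value is the signal.
# print(orphan_rate(Path("vault")))
```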

**Prevention:**
- Topics field required on every note (orphan prevention)
- Orphan detection in maintenance loop
- Reflect phase mandatory after note creation
- MOC update required as part of the note creation cycle

### 4. Schema Drift

**The pattern:** Notes gradually deviate from schema requirements. Fields go missing, enum values diverge, descriptions become stale.

**Why it happens:** Without enforcement, entropy wins. Each note cuts corners slightly.

**Prevention:**
- Schema enforcement (see [[schema-enforcement]] guidance doc)
- Periodic validation passes
- Progressive schema — only enforce what's actually used

### 5. System Abandonment

**The pattern:** Elaborate system set up, used enthusiastically for 2-3 weeks, then gradually abandoned.

**Why it happens:** Since [[premature complexity is the most common derivation failure mode]], the system's maintenance burden exceeds its perceived value.

**Prevention:**
- Start minimal — since [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]]
- Early value delivery — show insights from the first captured content
- Low-friction capture — since [[temporal separation of capture and processing preserves context freshness]]
- Condition-based maintenance, not scheduled reviews

## Domain-Specific Failure Modes

### Research & Academic

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Source-oriented thinking | Notes organized by paper, not by concept | Since [[concept-orientation beats source-orientation for cross-domain connections]], enforce concept-based note titles |
| Citation rot | Sources become unavailable | Periodic link checking on source notes |
| Confirmation bias | Only citing supporting evidence | Track "contradicts" relationships; flag topics with only confirming evidence |
| Synthesis stagnation | Many claims, no synthesis notes | Monitor claim-to-synthesis ratio; suggest synthesis when claims cluster |

### Therapy & Reflective

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Write-only journaling | Entries never reviewed for patterns | Automatic pattern detection across entries |
| Emotional avoidance | Consistently avoiding difficult topics | Detect topic avoidance patterns (topics mentioned then dropped) |
| Over-intellectualization | Analysis replaces feeling | Track emotional vocabulary density in entries |
| Strategy-trigger disconnect | Coping strategies exist but aren't linked to triggers | Enforce strategy-trigger relationships in schema |

### Project Management

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Decision amnesia | "Why did we decide this?" | Decision notes required with rationale field |
| Meeting action decay | Action items captured, never completed | Action item tracking with staleness alerts |
| Risk register staleness | Risks assessed once, never updated | Condition-based review triggers on risk notes |
| Retrospective theater | Same issues across retros | Cross-retrospective pattern detection |

### Creative Writing

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Continuity errors | Contradictions across chapters | Consistency graph with contradiction detection |
| Voice drift | Character speech patterns change | Character voice baseline tracking |
| Worldbuilding rabbit hole | Building worlds nobody reads about | Track worldbuilding notes vs scenes they appear in |
| Plot thread abandonment | Foreshadowing without payoff | Thread tracking with resolution status |

### Student & Learning

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Passive accumulation | Notes taken but never actively recalled | Spaced repetition integration |
| Prerequisite blindness | Advanced material attempted without foundations | Prerequisite graph validation |
| Course silo-ing | Concepts isolated within single courses | Cross-course concept connection detection |
| Illusion of competence | Material feels mastered but fails recall (since [[metacognitive confidence can diverge from retrieval capability]]) | Active retrieval testing via recite patterns |

### Personal Life Management

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Review fatigue | Weekly reviews abandoned | Condition-based reviews, not scheduled |
| Area neglect | Some life areas get no attention | Area health dashboard with neglect detection |
| Goal drift | Goals set but never updated | Quarterly goal review triggers |
| Productivity porn | System tinkering displaces actual work (since [[productivity porn risk in meta-system building]]) | Track time spent on system vs time spent doing work |

### Trading & Finance

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Strategy drift | Actual trades deviate from strategy rules | Strategy compliance checking per trade |
| Confirmation bias | Only seeking thesis-supporting evidence | Track counter-evidence for each thesis |
| Emotional trading | Decisions driven by fear/greed | Sentiment analysis on journal entries |
| Review avoidance | Trade journal not analyzed for patterns | Automatic pattern detection across trades |

### Health & Wellness

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Tracking without analysis | Data collected but no patterns detected | Automatic correlation analysis at data thresholds |
| Protocol inertia | Supplements/programs continued without reassessment | Protocol effectiveness review triggers |
| Symptom isolation | Symptoms tracked in isolation from lifestyle factors | Multi-dimensional correlation (sleep + nutrition + exercise + symptoms) |
| Plateau blindness | Progress stalled without detection | Trend analysis with plateau detection |

### Engineering Team

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Tribal knowledge | Critical info in people's heads only | Knowledge gap detection via undocumented system alerts |
| ADR decay | Decisions documented but never updated | ADR assumption monitoring |
| Postmortem theater | Incidents analyzed, action items ignored | Action item tracking with completion rates |
| Documentation rot | Docs diverge from implementation | Staleness detection via code change correlation |

### Product Management

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Feedback fragmentation | Customer voice scattered across tools | Feedback aggregation with theme detection |
| Experiment amnesia | A/B tests run, learnings not captured | Experiment log with mandatory learnings field |
| Roadmap-strategy disconnect | Features not traced to OKRs | Dependency tracking from features to strategic goals |
| Research staleness | User research cited years later | Research freshness tracking |

### Legal & Case Management

| Failure Mode | Symptom | Prevention |
|-------------|---------|------------|
| Precedent rot | Relying on overruled case law | Precedent chain monitoring for updates |
| Jurisdictional blindness | Compliance missed across jurisdictions | Multi-jurisdiction regulatory tracking |
| Clause inconsistency | Standard terms drift across contracts | Cross-contract clause comparison |
| Deadline miscalculation | Complex filing deadlines missed | Automated deadline tracking with jurisdictional rules |

## How the Plugin Uses This Doc

| Meta-Skill | How It Uses Failure Modes |
|------------|--------------------------|
| /setup | Configures prevention mechanisms based on selected domains |
| /recommend | Checks for active failure mode symptoms and suggests fixes |
| /architect | Ensures new extensions don't introduce known failure modes |
| /ask | References domain-specific failure modes when answering "what could go wrong?" |

## Domain Examples

Every domain composition includes agent-native prevention strategies for its specific failure modes:

- [[academic research uses structured extraction with cross-source synthesis]] — Concept-orientation enforcement prevents source-oriented thinking; citation graph monitoring detects citation rot
- [[therapy journal uses warm personality with pattern detection for emotional processing]] — Automatic pattern detection prevents write-only journaling; emotional vocabulary density tracking catches over-intellectualization
- [[project management uses decision tracking with stakeholder context]] — Decision notes with mandatory `rationale` field prevent decision amnesia; action item staleness alerts prevent meeting action decay
- [[creative writing uses worldbuilding consistency with character tracking]] — Consistency graph with contradiction detection prevents continuity errors; thread tracking with resolution status prevents plot thread abandonment
- [[student learning uses prerequisite graphs with spaced retrieval]] — Active retrieval testing via recite patterns prevents illusion of competence; prerequisite graph validation prevents prerequisite blindness
- [[health wellness uses symptom-trigger correlation with multi-dimensional tracking]] — Automatic correlation analysis at data thresholds prevents tracking without analysis; protocol effectiveness review triggers prevent protocol inertia
- [[engineering uses technical decision tracking with architectural memory]] — Knowledge gap detection via undocumented system alerts prevents tribal knowledge; ADR assumption monitoring prevents ADR decay
- [[legal case management uses precedent chains with regulatory change propagation]] — Precedent chain monitoring for updates prevents precedent rot; multi-jurisdiction regulatory tracking prevents jurisdictional blindness

## Grounding

This guidance is grounded in:
- [[PKM failure follows a predictable cycle]] — the 7-stage cascade above operationalizes this cycle
- [[premature complexity is the most common derivation failure mode]] — the leading cause of abandonment (Stage 7 traces to Stage 0)
- [[structure without processing provides no value]] — Stage 2 (Structure Mirage) accumulates organization without synthesis
- [[throughput matters more than accumulation]] — processing velocity prevents Stage 1 (Collector's Fallacy)
- [[false universalism applies same processing logic regardless of domain]] — domain-specific failures need domain-specific prevention
- [[stale navigation actively misleads because agents trust curated maps completely]] — agent-specific failure mode: stale MOCs
- [[derived systems follow a seed-evolve-reseed lifecycle]] — reseed as escape valve when cascade reaches late stages
- [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — minimal starting systems prevent cascade onset

---

Topics:
- [[failure-modes]]
- [[index]]
@@ -0,0 +1,57 @@
---
description: Just-in-time processing on retrieval beats just-in-case front-loading because most captured notes are never revisited
kind: research
topics: ["[[processing-workflows]]"]
---

# processing effort should follow retrieval demand

Tiago Forte's research reveals a striking fact: most captured notes are never revisited. This means heavy upfront processing at capture time wastes effort on content that will never be used.

The alternative is JIT (just-in-time) processing: minimal work at capture, investment only on retrieval. When something actually gets retrieved, that's when you process it deeply. This creates a demand-driven queue where attention follows demonstrated value. The cognitive grounding: since [[LLM attention degrades as context fills]], front-loading fills context with low-probability content, degrading attention for the content that actually matters.

The temporal caveat: since [[temporal separation of capture and processing preserves context freshness]], JIT processing has limits for inbox content. Ebbinghaus decay means the human context that made something worth capturing fades within 24 hours. Pure demand-driven processing risks waiting so long that even when retrieval finally signals value, the original understanding is unrecoverable. The reconciliation: JIT still applies, but inbox items have a time-sensitive urgency that other content doesn't. Old inbox items need processing regardless of retrieval signal.

The implication for vault operations is clear: capture quickly and move on. Let retrieval patterns identify what deserves deeper processing. Track retrieval counts to identify high-value content that earns additional synthesis work. And since [[intermediate packets enable assembly over creation]], the deep processing triggered by retrieval naturally produces composable packets — the session's output becomes a building block that future retrieval can assemble from. JIT processing and packet production reinforce each other: retrieval triggers processing, processing produces packets, packets enable future assembly.

But JIT processing requires efficient filtering. Since [[progressive disclosure means reading right not reading less]], the goal isn't minimal loading but curated loading — finding what deserves deep attention before committing full context. The discovery layers provide this filtering mechanism: [[descriptions are retrieval filters not summaries]] enables agents to identify which notes are relevant without loading everything. Scanning 50 descriptions costs fewer tokens than reading 5 full notes to find the relevant one. Aggregated descriptions become a pre-computed low-entropy representation that enables demand-driven retrieval to be efficient.
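
A sketch of that filtering pass, assuming descriptions live on a `description:` frontmatter line as they do in the notes above (the scoring is a deliberately crude keyword overlap, just to show the shape of scan-then-load):

```python
# Hypothetical sketch: scan frontmatter descriptions to decide which notes earn a full read.
from pathlib import Path

def description_of(path: Path) -> str:
    for line in path.read_text(encoding="utf-8").splitlines():
        if line.startswith("description:"):
            return line[len("description:"):].strip()
    return ""

def shortlist(vault: Path, query: str, top_n: int = 5) -> list[Path]:
    """Rank notes by keyword overlap between the query and their descriptions."""
    terms = set(query.lower().split())
    scored = []
    for path in vault.rglob("*.md"):
        overlap = len(terms & set(description_of(path).lower().split()))
        if overlap:
            scored.append((overlap, path))
    return [path for _, path in sorted(scored, reverse=True)[:top_n]]

# Only the shortlisted notes get loaded in full; every other description costs a scan, not a read.
# shortlist(Path("vault"), "orphan notes maintenance rate")
```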

This inverts the traditional knowledge management instinct to "process properly" at capture time. That instinct optimizes for the wrong thing. Since [[throughput matters more than accumulation]], the constraint isn't capture quality but capture volume — and volume demands selective investment based on actual use, not predicted importance.

## Demand signals in practice

How do you actually measure retrieval demand? Since [[dangling links reveal which notes want to exist]], one concrete signal emerges from wiki link patterns. When multiple notes independently reference the same concept, that accumulated demand justifies creation of the note. The frequency of dangling links IS demand measurement — the graph is voting with its references.

This connects to traversal patterns too. Since [[spreading activation models how agents should traverse]], frequently traversed nodes accumulate more activation and show up in more search results. The spreading activation mechanism creates its own demand signal: nodes that get visited often during retrieval are demonstrably high-value.

Together, these provide objective demand metrics:
- **Dangling link frequency** — how often does the graph reference this concept?
- **Traversal frequency** — how often do agents visit this node during retrieval?
- **Backlink count** — how many notes invoke this one?

Processing investment should flow to nodes that score high on these metrics. Let demand emerge from use, not from predictions about importance.
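
Two of the three metrics fall directly out of the link graph. A sketch, assuming wiki links resolve by note filename and leaving traversal frequency to whatever retrieval logging the agent already keeps:

```python
# Hypothetical sketch: dangling-link frequency and backlink counts as demand signals.
import re
from collections import Counter
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def demand_signals(vault: Path) -> tuple[Counter, Counter]:
    """Return (dangling, backlinks): reference counts for notes that don't exist yet,
    and incoming-reference counts for notes that do."""
    notes = {p.stem for p in vault.rglob("*.md")}
    references = Counter()
    for path in vault.rglob("*.md"):
        for target in WIKILINK.findall(path.read_text(encoding="utf-8")):
            references[target.strip()] += 1
    dangling = Counter({t: n for t, n in references.items() if t not in notes})
    backlinks = Counter({t: n for t, n in references.items() if t in notes})
    return dangling, backlinks

# dangling.most_common(5) lists the notes the graph is asking for;
# backlinks.most_common(5) lists the existing notes that have earned deeper processing.
```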

## The blind spot

Demand-driven processing has a structural limitation: notes that SHOULD receive attention but don't generate retrieval signals fall outside the demand loop. If maintenance attention follows the same power-law distribution as link density, the bottom 80% of notes may receive minimal attention regardless of need. [[random note resurfacing prevents write-only memory]] tests whether random selection — explicitly anti-JIT — addresses this blind spot by giving every note equal probability of maintenance attention independent of demand signals. A complementary approach emerges from [[spaced repetition scheduling could optimize vault maintenance]], which tests whether interval-based scheduling (frequent early review, sparse later) adds a proactive dimension to the demand-following principle — some maintenance effort anticipates need based on note age rather than responding to retrieval signals or random selection.

For experiments specifically, since [[maintenance targeting should prioritize mechanism and theory notes]], demand signals should be semantic rather than observed. The question "what mechanism does this test?" predicts higher-value reweave targets than retrieval frequency or topic proximity. Experiments need their theory notes, not their MOC neighbors.

---

Relevant Notes:
- [[topological organization beats temporal for knowledge work]] — JIT retrieval is only efficient when the system is organized for semantic access; date-based organization would force chronological scanning regardless of retrieval demand
- [[progressive disclosure means reading right not reading less]] — the philosophy underlying efficient filtering: curated loading over minimal loading
- [[queries evolve during search so agents should checkpoint]] — both patterns recognize that processing should respond to runtime information, not front-loaded predictions
- [[dangling links reveal which notes want to exist]] — provides the demand measurement mechanism: dangling link frequency signals where processing investment pays off
- [[spreading activation models how agents should traverse]] — traversal patterns create demand signals: frequently visited nodes demonstrate high retrieval value
- [[descriptions are retrieval filters not summaries]] — provides the filtering mechanism that makes JIT retrieval efficient: scan descriptions to identify what deserves full loading
- [[throughput matters more than accumulation]] — provides the success metric that demand-driven processing serves: velocity from capture to synthesis, not archive size
- [[LLM attention degrades as context fills]] — cognitive grounding for JIT: front-loading degrades attention for the content that actually matters
- [[temporal separation of capture and processing preserves context freshness]] — temporal constraint on JIT: inbox content has time-sensitive urgency because human context decays
- [[random note resurfacing prevents write-only memory]] — tests whether anti-JIT random selection addresses the blind spot where notes lacking demand signals accumulate neglect
- [[spaced repetition scheduling could optimize vault maintenance]] — tests interval-based scheduling as a third allocation strategy alongside demand-driven (JIT) and random selection; proactive rather than reactive
- [[maintenance targeting should prioritize mechanism and theory notes]] — refines demand-following for experiments: semantic demand (what mechanism is being tested) predicts value better than observed demand (retrieval frequency)
- [[intermediate packets enable assembly over creation]] — connects processing outputs to composability: JIT processing triggered by retrieval naturally produces packets that future work can assemble from

Topics:
- [[processing-workflows]]