arscontexta 0.6.0
This diff lists the contents of publicly available package versions as released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,62 @@
---
description: This mental model distinguishes reweave from reflect — maintenance becomes genuine reconsideration rather than mechanical link-adding
kind: research
topics: ["[[maintenance-patterns]]"]
methodology: ["Zettelkasten"]
---

# reweaving asks what would be different if written today

The question "what connections should I add?" leads to a different activity than "if I wrote this note today, what would be different?" The first is mechanical: scan for links, add them. The second is intellectual: reconsider the entire note against current understanding. The question is only meaningful because [[digital mutability enables note evolution that physical permanence forbids]] — Luhmann couldn't implement "what would be different" because his paper cards resisted editing. We can ask the question AND act on the answer. Understanding (reassessing against current knowledge) must precede building (updating the note). Because [[the generation effect requires active transformation not just storage]], the difference is not just stylistic. Mechanical link-adding produces no new understanding — it's rearrangement. Genuine reweaving produces something that didn't exist: a reframed argument, a sharpened claim, a newly articulated connection.

This matters because notes written last month were written with last month's understanding. Since then, new notes exist, understanding has deepened, and what seemed like one idea might now be three. A note about "knowledge graphs" written before understanding spreading activation models is incomplete in ways that link-adding won't fix. The prose itself needs rewriting. The deeper reason this matters is that since [[coherence maintains consistency despite inconsistent inputs]], an unreviewed note may now contradict newer understanding without anyone noticing — the temporal inconsistency that accumulates between sessions is precisely what backward maintenance exists to detect and resolve.

The reframe changes what actions become available. Adding connections is one option, but so is rewriting content when understanding evolved, sharpening the claim when [[claims must be specific enough to be wrong]] reveals the title is too vague, splitting the note when multiple claims got bundled together, or even challenging the claim when new evidence contradicts the original. Since [[backlinks implicitly define notes by revealing usage context]], one input to this reconsideration is checking what roles the note currently plays — its backlinks reveal which arguments elsewhere depend on this note, providing constraints on how aggressively the claim can be revised without disrupting the graph. Because [[summary coherence tests composability before filing]], catching bundling at creation time is cheaper than detecting it during maintenance passes — the summary requirement is an upfront quality gate that reduces later reweave burden. If [[maturity field enables agent context prioritization]], maturity status becomes a reweaving input: seedlings explicitly signal "this note needs development work," making them obvious candidates for the reconsideration pass.

Without this mental model, vault maintenance becomes a mechanical pass that leaves outdated thinking intact but organized. And since [[wiki links as social contract transforms agents into stewards of incomplete references]], maintenance carries an additional dimension beyond quality: it is the mechanism through which link commitments get fulfilled, reframing the backward pass from cleanup to stewardship. Since [[throughput matters more than accumulation]], what matters is the quality of what flows through the system, not the size of the organized archive. Reweaving ensures that existing content remains part of the living flow rather than becoming a graveyard of historical understanding.

The backward pass complements the forward pass: connection-finding links new notes to old ones, backward maintenance updates old notes based on new ones. This mental model is precisely why [[skills encode methodology so manual execution bypasses quality gates]] — without the "what would be different" frame encoded in the workflow, maintenance degrades to mechanical link-adding that preserves the form while missing the substance. This is [[verbatim risk applies to agents too]] manifesting in the maintenance phase: an agent could add links that look like connections while never engaging the deeper question of what would be different today.

The per-note reconsideration has a system-level counterpart in reconciliation. Since [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]], the reconciliation loop formalizes the "what would be different" question at the structural level: it declares desired state (all links resolve, all notes in MOCs, schema compliance) and periodically checks whether actual state has drifted. Where backward maintenance asks "what would this note look like if written today?", reconciliation asks "does the system still match what we said healthy looks like?" Both are drift-correction mechanisms, but they operate at different scales and use different detection methods — backward maintenance requires judgment (reading and reconsidering), while reconciliation detection is deterministic (counting, comparing, checking). And since [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]], backward maintenance maps specifically onto the slow loop: the high-judgment reconsideration that only makes sense at weekly or monthly timescales, because description staleness and claim drift develop over the course of understanding evolution, not session-to-session. The medium loop's detection (orphan checks, link density) surfaces candidates for the slow loop's reconsideration, making the two loops cooperate: mechanical detection at one timescale feeds judgment-requiring remediation at another.
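The deterministic side of reconciliation can be sketched as a small script. This is an illustrative sketch, not part of the package: the flat-vault layout, the sample file names, and the rule that `[[title]]` resolves to `title.md` are all assumptions made for the example.

```shell
# Sketch: deterministic drift detection — find dangling wiki links.
# Assumes a flat vault of .md files where [[title]] resolves to "title.md".
vault=$(mktemp -d)
printf 'See [[existing note]] and [[missing note]].\n' > "$vault/source.md"
printf 'I exist.\n' > "$vault/existing note.md"

# Extract every [[...]] target, then check each resolves to a file.
dangling=$(grep -rhoE '\[\[[^]]+\]\]' "$vault" \
  | sed 's/^\[\[//; s/\]\]$//' \
  | while IFS= read -r title; do
      [ -f "$vault/$title.md" ] || echo "dangling: $title"
    done)
echo "$dangling"

rm -rf "$vault"
```

No judgment is involved: the check is pure counting and comparing, which is why it can run on a schedule while the "what would be different" pass cannot.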

Backward maintenance is also an attention allocation decision in disguise. Since [[AI shifts knowledge systems from externalizing memory to externalizing attention]], the question "what would be different if written today?" is fundamentally about directing finite attention — which older notes deserve renewed focus given current understanding. The system externalizes this attention decision through maintenance targeting and scheduling rather than relying on the operator to notice what has become stale. The same question scales up to system architecture: since [[evolution observations provide actionable signals for system adaptation]], the diagnostic protocol asks "what would be different" not about individual notes but about the structural decisions themselves — which note types would you still define, which fields would you still require, how would you organize navigation? The per-note reconsideration and the per-system diagnostic operate at different scales but share the same mental model: current understanding tests historical decisions.

Whether this holistic reconsideration is the optimal approach remains testable. Since [[gardening cycle implements tend prune fertilize operations]] proposes separating maintenance into three focused phases (tend/prune/fertilize), the question becomes: does the "what would be different today" frame that considers everything at once outperform focused operations that each start fresh? The gardening experiment tests whether cognitive focus per operation beats holistic reconsideration. A separate question is WHEN the reconsideration should happen — since [[spaced repetition scheduling could optimize vault maintenance]] tests whether newly created notes need more frequent attention than mature notes, the timing of reweave may matter as much as the method. Backward maintenance also sits squarely on the curation side of a deeper governance dilemma: since [[organic emergence versus active curation creates a fundamental vault governance tension]], every reweave is an active curation intervention that shapes the graph according to current understanding. The governance rhythm determines when curation interventions like reweaving are appropriate — too frequent and they suppress the patterns that organic growth would produce, too infrequent and the structural debt from outdated thinking compounds past the point of easy correction.

But there is a deeper question about WHO does the reconsideration. When agents run backward maintenance, they do the intellectual work of asking "what would be different." The human approves the proposals. Since [[cognitive outsourcing risk in agent-operated systems]] tests whether delegating all processing atrophies human meta-cognitive skills, backward maintenance may be particularly susceptible: the human never practices the act of holistic reconsideration, they only evaluate someone else's reconsideration. If the gardening experiment validates focused operations, smaller-scope human approvals might preserve more genuine judgment than approving one holistic proposal.

The same "what would be different" question operates at a higher scale too. Since [[derived systems follow a seed-evolve-reseed lifecycle]], reseeding asks this question not about individual notes but about the entire system architecture: what would this configuration look like given what we now know about how it gets used? Backward maintenance at the note level (reweaving) and backward maintenance at the system level (reseeding) share the mental model but differ in scope and intervention type. Reweaving can sharpen a claim, add connections, split a note. Reseeding restructures templates, pipelines, and MOC hierarchy. The scale difference matters because systemic incoherence — schemas that have drifted, navigation that no longer matches content — cannot be fixed by improving individual notes, no matter how thorough the reweave.

There is also a cross-deployment dimension. Since [[the derivation engine improves recursively as deployed systems generate observations]], the backward maintenance performed within each individual system generates the operational observations that improve derivation for all future systems. When a reweave sharpens a claim or surfaces a tension, that observation enriches the shared claim graph — making backward maintenance not just a local quality practice but a data-generation mechanism for the derivation engine's recursive improvement. The individual system benefits from better notes; the meta-system benefits from better claims.

The question of WHERE activation should spread during reweave has a concrete answer. Since [[maintenance targeting should prioritize mechanism and theory notes]], productive targets are notes about the MECHANISM being reconsidered, not just notes in the same MOC. For experiments testing theories, the theory notes themselves are the high-value reweave targets because they provide the grounding the experiment tests against.
---

Relevant Notes:
- [[digital mutability enables note evolution that physical permanence forbids]] — foundational enabler: this mental model only works because the medium permits revision; Luhmann's cards couldn't be reconsidered, only cross-referenced
- [[backlinks implicitly define notes by revealing usage context]] — provides an answer to what would be different: check what roles the note currently plays across the graph (its backlinks) before changing what it claims
- [[maintenance targeting should prioritize mechanism and theory notes]] — refines this mental model with concrete targeting heuristics: mechanism connection predicts higher reweave value than topic proximity
- [[throughput matters more than accumulation]] — reweaving maintains quality in the flow rather than preserving organized but outdated content
- [[skills encode methodology so manual execution bypasses quality gates]] — the "what would be different" mental model is the quality gate that prevents mechanical link-adding
- [[wiki links implement GraphRAG without the infrastructure]] — explains the stakes: graph quality degrades when links aren't actively maintained
- [[the generation effect requires active transformation not just storage]] — explains WHY mechanical link-adding fails: generation creates cognitive hooks, rearrangement doesn't
- [[maturity field enables agent context prioritization]] — if validated, maturity status would flag seedlings as reweaving candidates, making development opportunities explicit
- [[random note resurfacing prevents write-only memory]] — tests the selection method: which notes get chosen for the reconsideration pass that this note describes
- [[gardening cycle implements tend prune fertilize operations]] — tests whether separating maintenance into three focused phases outperforms this holistic reconsideration approach
- [[spaced repetition scheduling could optimize vault maintenance]] — tests WHEN reconsideration should happen; reweaving defines what maintenance accomplishes, scheduling optimizes when that happens
- [[cognitive outsourcing risk in agent-operated systems]] — tests whether delegating reconsideration to agents atrophies the human's ability to do holistic reconsideration themselves
- [[vault conventions may impose hidden rigidity on thinking]] — if conventions constrain at creation time, reweaving provides an escape hatch: notes can evolve past their initial form
- [[verbatim risk applies to agents too]] — tests whether agents can produce maintenance passes that look like reweaving (adding links) while skipping genuine reconsideration; mechanical link-adding is verbatim risk in the backward pass
- [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — paradigm frame: backward maintenance is an attention allocation decision — which older notes deserve renewed focus given current understanding; the system externalizes this attention direction rather than relying on the operator to notice staleness
- [[wiki links as social contract transforms agents into stewards of incomplete references]] — adds a motivational reframe: maintenance is not just holistic reconsideration of outdated thinking but stewardship of commitments; every dangling link is a promise that backward maintenance helps fulfill, shifting the frame from cleanup to obligation
- [[evolution observations provide actionable signals for system adaptation]] — the system-level counterpart: this note asks what would be different at the note level, the diagnostic protocol asks what would be different at the system-architecture level, monitoring whether structural decisions (types, fields, MOCs, processing) still match operational reality
- [[derived systems follow a seed-evolve-reseed lifecycle]] — scale extension: the same mental model at system level becomes reseeding, which asks what the entire architecture would look like given operational experience; note-level reweaving and system-level reseeding share the question but differ in scope and intervention type
- [[organic emergence versus active curation creates a fundamental vault governance tension]] — governance frame: backward maintenance is the primary mechanism of the curation pole, and the governance rhythm determines when reweaving interventions are appropriate; too frequent curation suppresses organic growth, too infrequent lets structural debt compound
- [[the derivation engine improves recursively as deployed systems generate observations]] — cross-deployment extension: backward maintenance within each system generates operational observations that enrich the shared claim graph, making note-level reweaving a data-generation mechanism for the derivation engine's meta-level improvement
- [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — system-level counterpart: backward maintenance asks what would be different per-note through judgment, reconciliation asks whether structural health has drifted through deterministic comparison; both are drift-correction mechanisms at different scales
- [[three concurrent maintenance loops operate at different timescales to catch different classes of problems]] — scheduling container: backward maintenance is the slow loop's remediation operation requiring high judgment, while medium-loop detection (orphan checks, link density) surfaces the candidates that the slow loop reconsiders
- [[friction reveals architecture]] — the trigger signal: friction in use is what surfaces which notes need the reconsideration pass; agents especially benefit because they cannot push through friction with intuition, forcing the articulation that backward maintenance requires
- [[coherence maintains consistency despite inconsistent inputs]] — the belief-level goal: backward maintenance serves coherence by detecting and resolving contradictions that accumulate as understanding evolves; the reconsideration question is ultimately asking whether the note still coheres with the current belief system

Topics:
- [[maintenance-patterns]]
|
package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md
ADDED
@@ -0,0 +1,229 @@
---
description: What to enforce, what to explain, and what to ask during /setup — the decision framework for vault generation that prevents premature complexity while ensuring a healthy starting point
kind: guidance
status: active
topics: ["[[derivation-engine]]"]
---

# balance onboarding enforcement and questions to prevent premature complexity

/setup is the most consequential moment in a vault's life. The choices made here determine whether the system compounds value or gets abandoned within a week. Since [[premature complexity is the most common derivation failure mode]], the primary risk is generating too much structure too soon.

This doc tells the plugin WHAT to decide during /setup, and more importantly, what NOT to decide.

## The Three Categories

Every onboarding decision falls into one of three categories:

### 1. Enforce (Non-Negotiable)

These are always present, never optional. The user doesn't choose them because without them the system doesn't function.

| Feature | Why Non-Negotiable | Implementation |
|---------|-------------------|----------------|
| Markdown files with YAML frontmatter | Since [[markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure]], this is the storage layer | Generated automatically |
| `description` field on every note | Since [[descriptions are retrieval filters not summaries]], agents need descriptions for discovery | Required in all templates |
| Wiki links as connection mechanism | Links are the graph edges that make traversal possible | Enabled by default |
| `topics` field linking to at least one MOC | Orphan prevention — notes must be navigable | Required in templates |
| Capture zone (inbox equivalent) | Content needs a place to land without ceremony | Generated folder |
| At least one MOC | Navigation needs a starting point | Hub MOC generated |
| Context file (CLAUDE.md equivalent) | Agent needs orientation per session | Generated with domain config |
| Git commits on significant changes | Version history enables rollback and tracks evolution | Configured per platform |
| Session logging | Operational observability — since [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]] | Logging structure generated |
| Timestamps on entries | Temporal ordering is required for pattern detection and freshness checks | Required in templates |

These are the kernel primitives. Since [[ten universal primitives form the kernel of every viable agent knowledge system]], they form the invariant base that derivation never varies.
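The first primitive — frontmatter plus a line-oriented search tool — can be sketched concretely. The sketch below uses `grep` for portability (ripgrep's `rg` accepts the same patterns and is what the claim names); the note contents, file names, and temporary vault layout are illustrative assumptions, not the plugin's generated structure.

```shell
# Sketch: YAML frontmatter + grep as a zero-infrastructure graph query layer.
vault=$(mktemp -d)

cat > "$vault/note-a.md" <<'EOF'
---
description: retrieval filter, not a summary
topics: ["[[maintenance-patterns]]"]
---
Body text linking to [[note-b]].
EOF

cat > "$vault/note-b.md" <<'EOF'
---
topics: []
---
This note is missing its required description field.
EOF

# Schema check: which notes violate the non-negotiable description field?
missing=$(grep -rL '^description:' "$vault")
echo "missing description: $(basename "$missing")"

# Graph edges: every wiki link in the vault, i.e. the adjacency list.
edges=$(grep -rhoE '\[\[[^]]+\]\]' "$vault")
echo "$edges"

rm -rf "$vault"
```

The same two query shapes (field presence, link extraction) cover most of the validation and navigation the enforce table requires, which is why no database is needed.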

### 2. Explain (Defaults with Rationale)

These have sensible defaults but the user should understand why. The plugin sets the default and explains it, giving the user the option to change.

| Decision | Default | Rationale | When to Change |
|----------|---------|-----------|----------------|
| Processing intensity | Medium | Balances thoroughness with overhead | Heavy for research/therapy; Light for tracking domains |
| MOC hierarchy pattern | Based on domain | Three-tier for research, flat-peer for personal | User preference for flatter/deeper |
| Schema strictness | Soft enforcement | Since [[schema enforcement via validation agents enables soft consistency]], start soft | Hard enforcement only if user requests it |
| Review cadence triggers | Condition-based | Per [[maintenance scheduling frequency should match consequence speed not detection capability]] | Time-based if user prefers regular rhythm |
| Note granularity | Atomic (claim-level) | Since [[note titles should function as APIs enabling sentence transclusion]], atomic enables composability | Compound for domains with narrative content |
| Linking style | Inline preferred | Per [[inline links carry richer relationship data than metadata fields]] | Footer-only for less writing-intensive domains |

The key principle: **explain WHY the default is set, then let the user override.** Don't ask "Do you want atomic or compound notes?" — say "Your notes will be atomic (one idea per note) because this enables remixing ideas across contexts. If you prefer longer, more narrative entries, I can adjust this."

### 3. Ask (Genuinely Variable)

These have no sensible default — the plugin must ask because the answer depends on the user's specific situation.

| Question | Why Ask | What It Determines |
|----------|---------|-------------------|
| What domain(s) do you work in? | Selects domain composition(s) | Entire vault architecture |
| What's your primary goal? | Distinguishes storage from thinking | Processing pipeline depth |
| What do you already have? | Import vs fresh start | Migration strategy |
| What platform are you on? | Per [[platform capability tiers determine which knowledge system features can be implemented]] | Feature ceiling |
| How much time will you spend with this? | Daily vs weekly vs sporadic | Maintenance trigger frequency |

**Ask as few questions as possible.** Since [[configuration paralysis emerges when derivation surfaces too many decisions]], the plugin should derive most configuration from the domain selection + primary goal, asking only what it genuinely cannot infer.

## The Onboarding Flow

### Step 1: Domain Discovery (Ask)
"What kind of knowledge work do you do?" Listen for domain signals. Map to closest domain composition(s). If multi-domain, identify primary.

### Step 2: Goal Clarification (Ask)
"Are you building a system to store and retrieve information, or to develop and connect ideas?" Since [[storage versus thinking distinction determines which tool patterns apply]], the answer determines the fundamental system type.

### Step 3: Configuration Derivation (Explain)
Based on domain + goal, derive the 8 configuration dimensions. Show the user what was chosen and why:

"Based on your research workflow, I'm setting up:
- Atomic notes (one claim per note, so you can remix ideas)
- Three-tier MOC hierarchy (hub → domain → topic, for layered navigation)
- Heavy processing (each source gets claim extraction, connection finding, and verification)
- Condition-based maintenance (health checks run when conditions trigger, not on a schedule)

These are defaults — tell me if any feel wrong for how you work."
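The derived dimensions might be persisted as a single config fragment. A sketch, assuming `domain=research` and `goal=thinking`; the field names and file format are illustrative, not the plugin's actual schema:

```yaml
# Derived during /setup from domain + goal. Field names are illustrative.
processing_intensity: heavy        # research default (Explain category)
moc_hierarchy: three-tier          # hub -> domain -> topic
note_granularity: atomic           # one claim per note
linking_style: inline
schema_strictness: soft
maintenance_triggers: condition-based
```

Each line maps back to a row in the Explain table, which keeps the override conversation concrete: the user points at a line and asks for a change.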

### Step 4: Vault Generation (Enforce)
Generate the vault with kernel primitives + domain-specific layers:
- Folder structure
- Note templates with domain-native schemas
- MOC hierarchy (initial MOCs with placeholder structure)
- Context file with orientation protocol
- Processing pipeline configuration
- Maintenance trigger definitions

### Step 5: First Capture (Immediate Use)
Don't end /setup with "your system is ready." End with capturing something:
"Let's add your first note. What are you working on right now?"

Immediate use prevents the "I set it up but never used it" failure mode. Since [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]], the user should experience the system working before adding more features.

## What NOT to Ask

| Common Question | Why Skip It | What to Do Instead |
|----------------|------------|---------------------|
| "How many MOC levels?" | Users can't predict this | Start with 1-2 levels, grow organically |
| "What schema fields do you want?" | Users don't know yet | Start with domain defaults, evolve per [[schema evolution follows observe-then-formalize not design-then-enforce]] |
| "How should notes connect?" | Abstract question with no good answer | Show connection types through example |
| "What maintenance schedule?" | Per [[maintenance scheduling frequency should match consequence speed not detection capability]], schedules should track consequences, not user guesses | Set condition-based defaults |
| "What processing model?" | Meaningless to non-technical users | Derive from domain + goal |

## Preset-Based Simplification

Since [[use-case presets dissolve the tension between composability and simplicity]], the plugin offers presets as the primary onboarding path:

| Preset | Optimized For | Configuration |
|--------|--------------|---------------|
| **Research** | Academic research, literature review, thesis work | Heavy processing, three-tier MOCs, atomic notes, citation tracking |
| **Personal Assistant** | Life areas, goals, habits, reviews | Medium processing, flat-peer MOCs, mixed granularity, area health tracking |
| **Experimental** | Novel/specialized domains | Medium processing, configurable MOCs, derive from closest reference |

Users select a preset, optionally customize, and get a working vault. Advanced configuration is available but not required.

## Multi-Domain Onboarding

When users need multiple domains (e.g., "I do research AND track personal goals"), the plugin:

1. Identifies the primary domain (most time investment)
2. Adds secondary domains as layers (see [[multi-domain-composition]])
3. Generates shared infrastructure (inbox, context file, maintenance) once
4. Adds domain-specific note types and MOCs per domain
5. Explains how domains interact: "Your research claims and personal goals share the same graph. A research finding might connect to a personal goal."

Since [[multi-domain systems compose through separate templates and shared graph]], domains add note types and MOCs without conflicting.

## Domain-Specific Onboarding Patterns

Different domains need different onboarding emphasis. The plugin adjusts what it highlights during /setup based on the detected domain:

| Domain | Onboarding Emphasis | What to Frontload | What to Defer |
|--------|--------------------|--------------------|---------------|
| Research | Claim extraction pipeline, source management | MOC hierarchy, citation schema | Review cadence, synthesis scheduling |
| Therapy | Ethical constraints, privacy setup, warmth calibration | Mood tracking schema, pattern detection | Strategy effectiveness tracking |
| PM | Decision tracking, stakeholder mapping | Meeting extraction templates, action items | Estimation analysis, retrospective synthesis |
| Creative Writing | Consistency graph, canon management | Character/location/world-rule templates | Plot thread tracking, voice drift detection |
| Personal Life | Area structure, review rhythm | Life area MOCs, capture workflow | Goal cascading, habit tracking |
| Trading | Risk management, journal discipline | Trade journal template, thesis tracking | Correlation analysis, strategy backtesting |
| Health | Tracking schema, correlation setup | Measurement templates, baseline capture | Protocol effectiveness, trend analysis |
| Learning | Knowledge mapping, prerequisite tracking | Course/concept templates, mastery levels | Spaced repetition, cross-course synthesis |

**The principle:** each domain has a "day one essential" and a "month two extension." /setup delivers the essential. The extension waits for friction to signal need.
|
|
150
|
+
|
|
151
|
+
### Domain-Specific First Captures

The first capture should demonstrate the domain's core value proposition:

- **Research:** "What paper are you reading right now? Let's extract its claims."
- **Therapy:** "What's on your mind right now? Let's capture a reflection."
- **PM:** "What decision did your team make recently? Let's document it with rationale."
- **Creative:** "Tell me about your main character. Let's build their canonical entry."
- **Personal:** "What are your life areas? Let's set up the structure."

The first capture proves the system works before the user can lose momentum.

## Progressive Disclosure During Onboarding

/setup should reveal the system's capabilities in layers, not all at once. Since [[configuration paralysis emerges when derivation surfaces too many decisions]], the plugin progressively discloses features:

**Layer 1 (During /setup):** Core capture + basic navigation. The user can write notes and find them.

**Layer 2 (First week):** Processing pipeline activation. The user sees the system extract value from their raw captures. "I noticed you've captured 5 entries. Here are 3 patterns I detected."

**Layer 3 (First month):** Maintenance and evolution. The user encounters natural friction points and the plugin suggests solutions. "Your inbox has 12 unprocessed items. Would you like me to activate automatic processing?"

**Layer 4 (Ongoing):** System-level optimization. The plugin recommends structural changes based on accumulated operational evidence. "Your research MOC has 45 notes — splitting into sub-MOCs would improve navigation."

This progressive disclosure maps to the seed-evolve-reseed lifecycle: Layer 1 is the seed, Layers 2-3 are evolution, and Layer 4 enables principled restructuring when needed.

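The layer gating can be sketched as data plus a single check. This is a hedged illustration: the trigger thresholds, feature names, and state keys are assumptions chosen to match the examples above, not the plugin's actual configuration.

```python
# Illustrative sketch of progressive disclosure: each layer unlocks when its
# trigger condition is met, so capabilities surface in sequence rather than
# all at once. Thresholds and feature names are assumptions for illustration.

LAYERS = [
    {"layer": 1, "features": ["capture", "navigation"],
     "unlocked": lambda s: True},                      # available during /setup
    {"layer": 2, "features": ["processing_pipeline"],
     "unlocked": lambda s: s["captures"] >= 5},        # first week: enough raw material
    {"layer": 3, "features": ["maintenance"],
     "unlocked": lambda s: s["inbox_backlog"] >= 10},  # first month: friction appears
    {"layer": 4, "features": ["structural_optimization"],
     "unlocked": lambda s: s["largest_moc"] >= 45},    # ongoing: evidence accumulates
]

def active_features(state):
    """Return every feature whose layer's trigger condition is satisfied."""
    return [f for layer in LAYERS if layer["unlocked"](state)
            for f in layer["features"]]

state = {"captures": 6, "inbox_backlog": 2, "largest_moc": 12}
```

With the sample state (six captures, small backlog, small MOCs), only Layers 1 and 2 are active, which matches the "first week" stage described above.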
## Post-Init Evolution

/setup is a starting point, not a final configuration. The plugin should:

1. **Track friction** — When the user struggles with a workflow, log an observation
2. **Suggest modules** — When a pain point matches an available module, suggest activation
3. **Evolve schemas** — When users consistently add fields manually, propose schema updates
4. **Grow MOCs** — When topics accumulate notes, suggest MOC creation

Since [[derived systems follow a seed-evolve-reseed lifecycle]], the initial vault is the seed. Evolution happens through use. Reseeding happens when accumulated friction justifies restructuring.

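Steps 1 and 2 above amount to matching a friction log against pain-point signatures. The sketch below is a loose illustration under stated assumptions: the module names, keyword signatures, and `suggest_modules` helper are all hypothetical, not the plugin's catalog.

```python
# Hedged sketch of friction-driven module suggestion: logged observations are
# matched against pain-point signatures of available modules. Module names and
# keyword signatures are hypothetical, not the plugin's actual catalog.

MODULE_SIGNATURES = {
    "auto-processing": ["unprocessed", "inbox"],
    "moc-split": ["hard to find", "too many notes"],
    "schema-update": ["manual field", "missing field"],
}

def suggest_modules(friction_log):
    """Map friction observations to module suggestions by keyword match."""
    suggestions = []
    for observation in friction_log:
        text = observation.lower()
        for module, keywords in MODULE_SIGNATURES.items():
            if any(k in text for k in keywords) and module not in suggestions:
                suggestions.append(module)
    return suggestions

log = ["Inbox has 12 unprocessed items", "User added a manual field to 3 notes"]
```

A real implementation would weigh repeated observations before suggesting, so one-off friction does not trigger premature complexity.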
## Anti-Patterns

| Anti-Pattern | Why It Fails | Better Approach |
|-------------|-------------|-----------------|
| Ask everything upfront | Configuration paralysis, decision fatigue | Ask 3-5 questions, derive the rest |
| Generate maximum features | Premature complexity, maintenance burden | Start minimal, grow with pain points |
| No explanation of defaults | User doesn't understand their system | Explain key decisions briefly |
| Skip first capture | Setup without use → abandonment | End /setup with actual content |
| Require platform expertise | Users shouldn't need to understand hooks/skills | Abstract platform details behind presets |
| Same onboarding for all domains | Different domains have different day-one essentials | Domain-specific first captures |
| Showing all capabilities at once | Overwhelm before the system proves value | Progressive disclosure across layers |

## Domain Examples

These domain compositions demonstrate onboarding patterns in practice:

- [[academic research uses structured extraction with cross-source synthesis]] — Research preset with heavy processing, three-tier MOCs, atomic notes; shows the full enforce/explain/ask flow for an academic researcher
- [[personal assistant uses life area management with review automation]] — Personal assistant preset with medium processing, flat-peer MOCs, mixed granularity; demonstrates how "What life areas matter to you?" maps to area MOC generation
- [[therapy journal uses warm personality with pattern detection for emotional processing]] — Shows how the plugin explains processing intensity: "Your journal entries will be analyzed for patterns across mood, triggers, and coping strategies" (explain, not ask)
- [[student learning uses prerequisite graphs with spaced retrieval]] — Experimental preset adapted for learning; demonstrates deriving prerequisite graph structure from the user's course description rather than asking about graph topology
- [[trading uses conviction tracking with thesis-outcome correlation]] — Shows multi-domain onboarding: user describes "I trade and track my health to see correlations" → primary (trading) + secondary (health) composition

## Grounding

This guidance is grounded in:

- [[premature complexity is the most common derivation failure mode]] — start simple
- [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — grow with need
- [[configuration paralysis emerges when derivation surfaces too many decisions]] — minimize questions
- [[use-case presets dissolve the tension between composability and simplicity]] — presets as simplification
- [[derived systems follow a seed-evolve-reseed lifecycle]] — /setup is seed, not final state
- [[schema evolution follows observe-then-formalize not design-then-enforce]] — schemas grow from use
- [[progressive schema validates only what active modules require not the full system schema]] — schema tracks feature activation

---

Topics:
- [[index]]
- [[derivation-engine]]

---
description: Rosch's prototype theory predicts MOC titles work best at the "chair" level — specific enough to be informative, general enough to cover a cluster — and this level shifts as expertise deepens
kind: research
topics: ["[[graph-structure]]"]
methodology: ["Cognitive Science", "PKM Research"]
source: [[tft-research-part3]]
---

# basic level categorization determines optimal MOC granularity

Eleanor Rosch's prototype theory, developed through experiments in the 1970s, established that humans naturally categorize at a "basic level" that maximizes informativeness with minimum cognitive effort. The basic level sits between superordinate categories (too abstract to be useful) and subordinate categories (too specific to be general). "Chair" is basic level. "Furniture" is superordinate — it tells you almost nothing about what you're dealing with. "Kitchen chair" is subordinate — it adds detail that rarely matters for navigation.

This maps directly to MOC design. A MOC titled "tools" sits at the superordinate level: it covers everything and orients toward nothing. An agent loading a "tools" MOC gets a list so broad that the attention management benefit collapses — since [[MOCs are attention management devices not just organizational tools]], a superordinate MOC fails the orientation function because it presents too much unrelated content, forcing the agent to re-filter within the MOC itself. The attention tax that MOCs are supposed to eliminate gets relocated rather than reduced.

At the other extreme, a MOC titled "obsidian git plugin" sits at the subordinate level. It covers so little that the overhead of maintaining a separate MOC exceeds the navigation benefit. The agent must traverse multiple subordinate MOCs to build any picture of the domain, which creates exactly the kind of fragmented orientation that [[navigational vertigo emerges in pure association systems without local hierarchy]] identifies as the core failure mode of under-structured systems. Too many tiny MOCs produce the same disorientation as no MOCs at all — landmarks only help when they're sparse enough to provide bearing.

The basic level is where MOC titles should sit: specific enough that loading the MOC tells you what the domain contains, general enough that a meaningful cluster of notes belongs there. "Graph structure," "agent cognition," "processing workflow" — these are basic-level categories in this vault. They name a domain you can reason about without being either uselessly abstract or unnecessarily narrow.

But Rosch's deeper finding complicates this. Basic level is not fixed — it shifts with expertise. A novice's basic level for biology is "fish." A marine biologist's basic level is "salmonid." As understanding deepens, what counts as the right categorization granularity moves downward because the expert has enough context to make finer distinctions meaningful. The subordinate category that was too narrow for a novice becomes informationally rich for an expert.

For vault MOCs, this means granularity should evolve with understanding, not just with volume. The vault's current split heuristic — CLAUDE.md prescribes splitting at 35-50 links — captures the volume dimension but misses the expertise dimension. A MOC might have only 25 notes but still benefit from splitting because the operator's understanding has deepened enough that the basic level has shifted. "Processing workflow" was basic level when the vault had a dozen notes about processing. Now, with distinct clusters around throughput, sessions, and forcing functions, the basic level has moved to "processing workflow throughput" — and the vault has already enacted this split, suggesting the principle operates even without being named. The deepening is not abstract — since [[incremental formalization happens through repeated touching of old notes]], each maintenance pass that sharpens a note also sharpens the operator's understanding of the domain, and it is precisely this accumulated understanding that makes the current MOC granularity feel inadequate and the finer distinctions feel necessary.

Since [[community detection algorithms can inform when MOCs should split or merge]], the algorithmic approach and the cognitive approach complement each other. Community detection reveals WHEN boundaries have shifted by identifying dense clusters within a MOC. Basic level theory explains WHERE those new boundaries should land — at the level that maximizes informativeness for the current depth of understanding. A Louvain algorithm might tell you that graph-structure has bifurcated. Rosch tells you that the sub-MOCs should be "link semantics" and "network topology," not "wiki links" (too narrow) or "knowledge organization" (too broad). The algorithm finds the clusters; the theory names them at the right level.

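The split signal can be made concrete without a full Louvain run. The dependency-free sketch below only evaluates a candidate split by measuring how many wiki links cross the proposed boundary; an actual community detection algorithm (e.g. networkx's `louvain_communities`) would discover the groups automatically. The note names, threshold, and helper functions are illustrative assumptions.

```python
# Dependency-free sketch of the split signal: given a proposed two-way split of
# a MOC's notes, measure the fraction of wiki links that cross the boundary.
# A low crossing fraction means the clusters are real and the split is justified.
# All note names and the 0.2 threshold are illustrative assumptions.

def crossing_fraction(links, group_a):
    """links: (source, target) pairs; group_a: one side of the proposed split."""
    crossing = sum(1 for a, b in links if (a in group_a) != (b in group_a))
    return crossing / len(links)

def should_split(links, group_a, threshold=0.2):
    """Suggest a split when few links cross the proposed boundary."""
    return crossing_fraction(links, group_a) < threshold

links = [
    ("link semantics", "typed links"), ("typed links", "link density"),
    ("link semantics", "link density"),                  # dense cluster 1
    ("hub nodes", "scale-free graphs"), ("hub nodes", "orphans"),
    ("scale-free graphs", "orphans"),                    # dense cluster 2
    ("link density", "hub nodes"),                       # lone cross-link
]
split = should_split(links, {"link semantics", "typed links", "link density"})
```

Here only one of seven links crosses the boundary, so the split is suggested; naming the resulting sub-MOCs at basic level remains the human (or Rosch-informed) judgment.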
Since [[faceted classification treats notes as multi-dimensional objects rather than folder contents]], there is an interesting relationship between Ranganathan's framework and Rosch's. Faceted classification explains which AXES to categorize along (type, methodology, topic, role). Basic level theory explains what RESOLUTION to target on each axis. Together they predict that a well-designed classification system uses orthogonal facets at basic-level granularity — each dimension specific enough to filter meaningfully but general enough to group useful clusters.

The expertise-shift mechanism has a vocabulary parallel. Since [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]], personal vocabulary in a single-operator vault evolves toward increasingly precise distinctions as understanding deepens — the same operator who once tagged notes "knowledge management" begins distinguishing "capture patterns" from "retrieval mechanisms" from "maintenance scheduling." The vocabulary shift and the categorization shift are two expressions of the same underlying process: expertise making finer distinctions meaningful. Basic level theory predicts when MOC titles should split; narrow folksonomy predicts when the operator's vocabulary has already outgrown those titles.

This categorization judgment is exercised most visibly during MOC construction. Since [[MOC construction forces synthesis that automated generation from metadata cannot replicate]], the Lump phase of the Dump-Lump-Jump pattern IS basic level categorization in practice: the builder groups notes into clusters, decides which clusters deserve their own heading or sub-MOC, and senses the mental squeeze point where a single MOC becomes too coarse. Automated generation that matches notes to topics by metadata tags cannot perform this judgment because it has no mechanism to feel when the basic level has shifted — it classifies at whatever granularity the tags provide, regardless of whether that granularity serves the current understanding.

The practical implication for agents is a testable heuristic: when creating or splitting a MOC, ask whether the title sits at basic level. Can you complete the sentence "This MOC covers everything about [title]" and have the scope be neither uselessly broad nor trivially narrow? Does loading this MOC orient you to a domain, or does it either overwhelm with scope or underwhelm with specificity? The answer depends on the current state of understanding — which is why MOC granularity is a living design decision, not a one-time architectural choice.

---

Source: [[tft-research-part3]]
---

Relevant Notes:
- [[community detection algorithms can inform when MOCs should split or merge]] — algorithmic complement: community detection reveals WHEN boundaries should move, basic level theory explains WHERE they should land; the split signal from Louvain tells you a MOC has outgrown its category, basic level theory tells you what granularity the sub-MOCs should target
- [[MOCs are attention management devices not just organizational tools]] — explains the cost of getting granularity wrong: a MOC at the wrong level wastes attention either through overloaded context (too superordinate) or fragmented orientation (too subordinate)
- [[navigational vertigo emerges in pure association systems without local hierarchy]] — the failure mode that basic level targeting prevents: superordinate MOCs provide hierarchy but not useful landmarks, subordinate MOCs create too many landmarks to navigate between
- [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] — complementary classification theory: Ranganathan explains the axes of classification, Rosch explains the optimal resolution along each axis
- [[progressive disclosure means reading right not reading less]] — MOC granularity determines disclosure effectiveness: a basic-level MOC loads the right amount of context for orientation, while superordinate MOCs load too broad a context and subordinate MOCs require loading multiple MOCs to orient
- [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]] — parallel expertise-driven evolution: as understanding deepens, personal vocabulary becomes more precise (narrow folksonomy) and categorization becomes more granular (basic level shift); both are expressions of expertise making finer distinctions meaningful
- [[incremental formalization happens through repeated touching of old notes]] — the mechanism that drives basic level shift: repeated touches deepen understanding, and that deepened understanding is what makes the current granularity feel too coarse and sub-MOCs feel necessary
- [[cross-links between MOC territories indicate creative leaps and integration depth]] — diagnostic signal: notes appearing in multiple MOCs may indicate a MOC sitting at the wrong granularity level, where basic-level sub-MOCs would better capture the distinct domains those cross-links bridge
- [[associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles]] — foundation: basic level theory explains at what granularity the local hierarchy MOCs provide should operate within the heterarchical structure; MOCs are the controlled exception to pure association, and Rosch predicts their optimal resolution
- [[MOC construction forces synthesis that automated generation from metadata cannot replicate]] — domain application: the Lump phase of MOC construction IS basic level categorization in practice; the builder decides whether a cluster of notes deserves its own sub-MOC by sensing the mental squeeze point, and this granularity judgment is precisely what automated generation cannot perform because it requires the domain expertise that Rosch's basic level shift depends on

Topics:
- [[graph-structure]]

package/methodology/batching by context similarity reduces switching costs in agent processing.md
---
description: Once you have fresh context per task, the next question is how to sequence work within a session — organizing by topic similarity minimizes the overhead of loading new context between tasks
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Cognitive Science", "GTD"]
source: [[tft-research-part3]]
---

# batching by context similarity reduces switching costs in agent processing

Context switching has a cost. For humans, Leroy's attention residue research puts the recovery time at up to 23 minutes. For agents, the cost is different but real: loading context for a new topic consumes tokens, and the semantic distance between consecutive tasks determines how much re-orientation is required. Batching similar tasks together minimizes the frequency and severity of these switches. However, since [[attention residue may have a minimum granularity that cannot be subdivided]], even context-similar consecutive tasks still pay an irreducible floor cost at each boundary — batching reduces the variable component of switching cost (how much new context to load) but cannot eliminate the fixed component (the minimum cognitive redirection penalty). This means batching is necessary but not sufficient: it minimizes the severity of each switch without eliminating the fact that a switch occurred.

The principle is straightforward. If you have ten inbox items — three about graph structure, four about processing workflow, and three about note design — processing all graph structure items consecutively means you load the graph structure context once and apply it three times. Processing them in random order means loading and unloading different topic contexts ten times. The total work done is the same, but the total switching cost is drastically lower in the batched case.

For agent workflows, this translates to organizing task queues by context similarity rather than by priority or arrival order. A synthesis agent should complete all passes for one topic before moving to a different topic. A processing agent should handle all inbox items of one type before switching types. The ralph orchestration pattern already processes tasks sequentially from the queue, but the queue ordering determines how much context overlap exists between consecutive tasks.

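The switching-cost arithmetic from the ten-item example can be made explicit: count how many adjacent pairs in a processing order change topic. This is a minimal sketch; the topic labels and orderings are illustrative.

```python
# Minimal sketch of the switching-cost arithmetic: count how many adjacent
# pairs in a processing order have different topics. Batching similar items
# minimizes this count. Topic labels and orderings are illustrative.

def context_switches(queue):
    """Number of adjacent task pairs whose topic differs."""
    return sum(1 for prev, cur in zip(queue, queue[1:]) if prev != cur)

# Ten items: three graph, four workflow, three notes.
batched = ["graph"] * 3 + ["workflow"] * 4 + ["notes"] * 3
interleaved = ["graph", "workflow", "notes"] * 3 + ["workflow"]
```

The batched order pays only two switches (one per topic boundary); the fully interleaved order pays a switch at every step. The total work is identical; only the sequencing differs.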
Since [[fresh context per task preserves quality better than chaining phases]], each task already gets its own session — this is the macro-level isolation that prevents attention degradation. Context-similar batching operates at the micro level: given that you will process tasks sequentially in fresh sessions, which ORDER minimizes total switching cost? The fresh session loads CLAUDE.md and vault context regardless, but the topic-specific context (relevant MOC, related notes, domain understanding) changes between batches. Processing three graph structure claims back-to-back means the knowledge-worker agent builds graph structure understanding once and carries the pattern across claims. This is [[spreading activation models how agents should traverse]] applied to queue design: topic context loaded for one task primes activation for the next, so the agent enters each subsequent task with relevant concepts already activated rather than starting cold.

Since [[continuous small-batch processing eliminates review dread]], batching by similarity complements the small-batch philosophy. Small batches prevent accumulation (how much), while context-similar batching optimizes sequence (what order). The two heuristics are orthogonal and compound: process small batches of context-similar items for maximum efficiency with minimum dread.

There is a tension with [[temporal processing priority creates age-based inbox urgency]]. Age-based priority says process the oldest items first because context decays with time. Context-similar batching says process related items together regardless of age. The resolution depends on the magnitude of each cost: if an item is about to cross the 72-hour critical threshold, temporal urgency overrides similarity batching. For items within the same urgency tier, similarity batching should determine order. Age sets the priority; similarity optimizes the sequence within each priority band.

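That resolution reduces to a two-part sort key: urgency band first, topic second. The 72-hour critical threshold comes from the note; the 24-hour intermediate cutoff, the item fields, and the `queue_order` helper are assumptions for illustration.

```python
# Sketch of the resolution above: age sets the priority band, topic similarity
# orders items within each band. The 72-hour critical threshold is from the
# note; the 24-hour cutoff and sample items are illustrative assumptions.

def queue_order(items, now_hours):
    """Sort inbox items: urgency band first (older = more urgent), topic second."""
    def band(item):
        age = now_hours - item["captured_at"]
        if age >= 72:   # critical threshold: temporal urgency overrides batching
            return 0
        if age >= 24:
            return 1
        return 2
    return sorted(items, key=lambda i: (band(i), i["topic"]))

items = [
    {"id": "a", "topic": "graph",    "captured_at": 0},    # 80h old: critical
    {"id": "b", "topic": "workflow", "captured_at": 70},   # 10h old
    {"id": "c", "topic": "graph",    "captured_at": 75},   # 5h old
    {"id": "d", "topic": "notes",    "captured_at": 50},   # 30h old
]
ordered = [i["id"] for i in queue_order(items, now_hours=80)]
```

The near-expiry item jumps the queue regardless of topic; the remaining items batch by similarity within their bands.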
Since [[closure rituals create clean breaks that prevent attention residue bleed]], context-similar batching also reduces the residue gap between tasks. When consecutive tasks share context, the closure of one and the opening of the next involve less cognitive distance. The residue from a graph structure task is less harmful when the next task is also about graph structure — the "residue" is actually useful context.

There is a second, deeper tension with [[incremental reading enables cross-source connection finding]]. Context-similar batching optimizes for efficiency by grouping related tasks together, but incremental reading deliberately disrupts that grouping to create forced context collision between different sources. The collision surfaces cross-source connections that sequential, topic-grouped processing would miss. The trade-off is switching cost (minimized by batching) versus serendipitous discovery (maximized by interleaving). Within a single source's claims, batching is unambiguously better because the claims already share context. Across sources, the question becomes whether the efficiency gains from topic-similar batching outweigh the discovery losses from not interleaving.

The practical implication for queue design: when seeding a batch of claims from a single source, the claims are already context-similar (they came from the same material). Processing them in sequence leverages this natural similarity. When processing claims from multiple sources, the choice between topic-based batching and source-based interleaving depends on whether reflect adequately recovers cross-source connections after extraction, or whether some connections only surface through the juxtaposition of processing itself.

---

Relevant Notes:
- [[fresh context per task preserves quality better than chaining phases]] — addresses macro-level isolation (separate sessions per phase); this note addresses micro-level sequencing (within a session, how to order tasks)
- [[continuous small-batch processing eliminates review dread]] — complementary forcing function: small batches prevent accumulation, context-similar batching optimizes the sequence within those batches
- [[closure rituals create clean breaks that prevent attention residue bleed]] — closure between batches is more effective when the next batch shares context with the previous one, reducing the residue gap
- [[temporal processing priority creates age-based inbox urgency]] — potential tension: age-based priority says process oldest first, context similarity says process related items together; both heuristics serve different goals
- [[MOCs are attention management devices not just organizational tools]] — applies the same Leroy attention residue mechanism at the session level: MOCs reduce per-session orientation cost, while batching reduces cross-task switching cost
- [[incremental reading enables cross-source connection finding]] — opposing sequencing strategy: batching minimizes switching cost by grouping similar topics, while incremental reading deliberately maximizes switching for serendipitous cross-source collision; the tension is efficiency vs discovery
- [[spreading activation models how agents should traverse]] — mechanism: context-similar batching leverages spreading activation efficiency; topic context loaded for one task primes activation for the next similar task, reducing re-orientation cost
- [[attention residue may have a minimum granularity that cannot be subdivided]] — limits: batching reduces the variable component of switching cost (semantic distance between tasks) but cannot eliminate the fixed component (irreducible redirection penalty); even context-similar tasks pay the floor cost at each boundary

Topics:
- [[processing-workflows]]

---
description: PKM failure research shows systems break through habits not software — the Collector's Fallacy, productivity porn, and under-processing kill vaults regardless of which app hosts them
kind: research
topics: ["[[processing-workflows]]", "[[maintenance-patterns]]"]
methodology: ["PKM Research"]
source: [[7-3-failure-modes-anti-patterns]]
---

# behavioral anti-patterns matter more than tool selection

The research is clear: Personal Knowledge Management systems fail through counter-productive habits, not inadequate software. The Collector's Fallacy (saving = learning), productivity porn (optimizing = producing), under-processing (capturing without transformation), over-engineering (structure before content) — these behavioral patterns kill vaults regardless of whether they run in Obsidian, Notion, Roam, or index cards. Tool debates miss the point because the failure mode is upstream of the tool.

This explains the recurring disappointment cycle in the PKM community. Users migrate from Evernote to Notion to Obsidian seeking features that will solve their problems, but since [[PKM failure follows a predictable cycle]], they carry their behavioral patterns from system to system. The new tool provides a fresh start and temporary motivation, but the same habits reassert themselves. Tool-hopping is itself an anti-pattern — a manifestation of productivity porn that substitutes the effort of migration for the effort of processing.

The implication cuts both ways. On one hand, it means most tool comparison debates are misallocated attention. If behavior dominates outcomes, obsessing over Dataview queries versus Notion databases misses what actually matters. On the other hand, it means fixing behavior can rescue any reasonable tool. A user with good habits operating in a basic folder system will outperform a user with bad habits operating in a sophisticated graph database.

This matters for agent-operated systems because agents operate under fundamentally different behavioral constraints. Agents don't experience the dopamine hit from collecting. They don't procrastinate through system-tweaking. They don't feel the satisfaction of highlighting that substitutes for understanding. Since [[structure without processing provides no value]], agents can enforce the processing requirement mechanically in ways humans struggle to self-enforce. Because [[skills encode methodology so manual execution bypasses quality gates]], encoded workflows can make the anti-patterns structurally impossible rather than merely discouraged. And since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], the structural enforcement goes deeper than skills alone — hooks fire on every relevant event regardless of agent attention, which is the precise capability humans lack. Human PKM fails through behavioral anti-patterns because humans cannot make good behavior automatic; hook-enabled agent systems can. And since [[WIP limits force processing over accumulation]], hard architectural constraints can prevent the Collector's Fallacy at the system level — when inbox is full, the only way forward is processing, not more capture.

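A WIP limit as a structural constraint can be sketched in a few lines. This is a hedged illustration of the mechanism, not the plugin's actual API: the `Inbox` class, method names, and cap value are all hypothetical.

```python
# Hedged sketch of a WIP limit as a structural constraint: when the inbox is
# at capacity, capture is refused, so processing is the only way forward.
# The Inbox class and the cap value are illustrative, not the plugin's API.

class Inbox:
    def __init__(self, wip_limit=10):
        self.wip_limit = wip_limit
        self.items = []

    def capture(self, item):
        """Refuse new captures at the limit: accumulation is structurally blocked."""
        if len(self.items) >= self.wip_limit:
            return False          # caller must process before capturing more
        self.items.append(item)
        return True

    def process_one(self):
        """Pop the oldest item for transformation, freeing capture capacity."""
        return self.items.pop(0) if self.items else None

inbox = Inbox(wip_limit=2)
```

In a hook-enforced system the refusal would fire on the capture event itself, which is exactly the automatic enforcement the surrounding paragraph argues humans cannot self-impose.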
The industry convergence on vibe notetaking illustrates this principle at product scale. Since [[vibe notetaking is the emerging industry consensus for AI-native self-organization]], tools from Notion AI to Mem to Supermemory all converge on the "dump and AI organizes" paradigm. The tool-level convergence is real — everyone is building essentially the same capture interface. But whether any of these tools produces knowledge depends on the behavioral layer: does the AI actually transform content into claims and reasoned connections, or does it just file things with better labels? The industry consensus operates at the tool level. Whether the tool works operates at the behavioral level. The convergence changes nothing about the fundamental finding that behavior dominates outcomes.
|
|
20
|
+
|
|
21
|
+
Since [[storage versus thinking distinction determines which tool patterns apply]], the anti-pattern catalog itself is system-type-specific. The Collector's Fallacy, productivity porn, and under-processing are thinking-system failures — patterns that damage systems meant for synthesis. Storage systems have their own characteristic failures: retrieval latency, classification ambiguity, orphaned assets. Applying thinking-system anti-pattern detection to a storage system produces false alarms; applying storage-system metrics to a thinking system produces false confidence.
|
|
22
|
+
|
|
23
|
+
But agents may introduce their own failure modes. An agent that moves files without transformation performs Lazy Cornell at LLM throughput — since [[the generation effect requires active transformation not just storage]], mere file shuffling creates no cognitive hooks, whether the shuffler is human or automated. An agent that adds links without articulating why creates false density. An agent that optimizes for coverage metrics rather than retrieval utility could produce a well-organized graveyard indistinguishable from human under-processing at larger scale. The human anti-patterns might not transfer directly, but the underlying failure mode — activity that mimics production without producing — surely does. Since [[verbatim risk applies to agents too]], the question becomes: what are the agent-specific behavioral anti-patterns, and how do we detect them before they corrupt the knowledge graph?
---
Relevant Notes:
- [[PKM failure follows a predictable cycle]] — documents the 7-stage cascade that behavioral patterns create; tool selection doesn't appear in the failure sequence
- [[structure without processing provides no value]] — the Lazy Cornell research that proves structure alone is insufficient; behavior (processing) is what creates value
- [[productivity porn risk in meta-system building]] — one specific anti-pattern applied to agent contexts; building workflows can substitute for producing output
- [[verbatim risk applies to agents too]] — tests whether agents have their own version of under-processing: well-structured summaries that reorganize without generating insight
- [[skills encode methodology so manual execution bypasses quality gates]] — how encoded workflows can prevent anti-patterns by making good behavior structural rather than aspirational
- [[WIP limits force processing over accumulation]] — the forcing function that makes Collector's Fallacy architecturally impossible: hard caps remove the choice between capturing more or processing now
- [[temporal separation of capture and processing preserves context freshness]] — why under-processing is particularly damaging: Ebbinghaus decay (50% within 1 hour, 70% within 24 hours) means delayed processing loses context permanently; this grounds the urgency of the anti-pattern consequences
- [[the generation effect requires active transformation not just storage]] — the cognitive principle that distinguishes real processing from mimicry: did the operation produce something that didn't exist before?
- [[cognitive outsourcing risk in agent-operated systems]] — the third leg of the agent-failure-modes trio: while this note asks what agent anti-patterns might emerge, cognitive outsourcing addresses a different failure where the agent works perfectly but human capability atrophies
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — the mechanism that makes good behavior structural: hooks fire regardless of attention state, which is precisely what separates agent systems from human ones where behavioral anti-patterns persist because enforcement is aspirational
- [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — industry-scale illustration: the tool convergence is real (everyone builds dump-and-organize), but whether any implementation works still depends on whether the AI transforms content or merely files it — behavior dominates outcomes regardless of which vibe notetaking tool hosts the capture
- [[storage versus thinking distinction determines which tool patterns apply]] — specifies which anti-patterns apply where: Collector's Fallacy and under-processing damage thinking systems; over-engineering and retrieval latency damage storage systems; the system type determines which behavioral failures to monitor
Topics:
- [[processing-workflows]]
- [[maintenance-patterns]]
@@ -0,0 +1,57 @@
---
description: Graph theory metric that quantifies how often a note lies on shortest paths between others, revealing structural bridges worth developing, single points of failure, and gaps worth filling
kind: research
topics: ["[[graph-structure]]"]
methodology: ["Network Science"]
source: [[tft-research-part3]]
---
# betweenness centrality identifies bridge notes connecting disparate knowledge domains
Graph theory offers a precise measure for something the vault otherwise detects only through intuition: which notes serve as bridges between otherwise disconnected clusters of thinking. Betweenness centrality counts how often a node appears on shortest paths between all other node pairs in the graph. A note with high betweenness doesn't just connect to many things — it connects things that would otherwise have no short path to each other.
This is a different claim than saying a note has many links. A MOC with fifty outgoing connections has high degree centrality, but it might connect notes that already have other paths between them. Betweenness centrality captures something more specific: structural necessity. If you removed a high-betweenness note from the graph, the average path length between remaining notes would increase. Some concepts that were two hops apart might become five hops apart, or entirely disconnected.
The distinction matters because it separates popularity from structural importance. Since [[small-world topology requires hubs and dense local links]], the vault already expects hub nodes with many connections. But not all hubs are bridges. A MOC that connects thirty notes within one tightly-clustered topic has high degree but may have low betweenness if those notes are densely interconnected without it. A synthesis note connecting two different topic clusters — say, linking graph-structure concepts to processing-workflow concepts — might have fewer total connections but far higher betweenness because it's the only short path between those domains.
## What betweenness reveals for maintenance
Periodic betweenness analysis surfaces three actionable signals:
**Bridge notes worth developing.** Notes with high betweenness are structural load-bearers. If they're thin or poorly written, the entire traversal between two domains depends on a weak link. These notes deserve reweave attention because their quality disproportionately affects graph navigability. Since [[each new note compounds value by creating traversal paths]], the notes that CREATE the most traversal paths are the ones sitting on the most shortest paths — and betweenness centrality identifies exactly those notes.
**Missing bridges between isolated clusters.** When betweenness analysis reveals that no note bridges two topic clusters, that's a gap signal more specific than what MOC coverage alone provides. The question shifts from "are these notes in a MOC?" to "can you get from topic A to topic B in a few hops?" If the answer is no, the vault has a structural hole that a synthesis note could fill. This gap detection complements what [[community detection algorithms can inform when MOCs should split or merge]] reveals at the group level — community detection shows that clusters have drifted apart, betweenness analysis shows specifically where the bridge is missing.
**Single points of failure.** A note with extremely high betweenness relative to its neighbors is a single point of failure. Remove it, and two clusters lose their connection. The vault's `find-bridges.sh` script already implements a version of this — detecting notes whose removal would disconnect the graph. Betweenness centrality extends the concept from binary (bridge or not) to continuous (how much of a bridge).
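
That binary check can be sketched directly. The following is a Python sketch, not the actual `find-bridges.sh` implementation, and it assumes the link graph has already been parsed into a dict mapping each note title to the titles it links to:

```python
def as_undirected(graph):
    """Treat wiki links as undirected edges for connectivity checks.
    `graph` maps note title -> list of linked titles (assumed shape)."""
    adj = {note: set() for note in graph}
    for note, links in graph.items():
        for target in links:
            adj[note].add(target)
            adj.setdefault(target, set()).add(note)
    return adj

def is_single_point_of_failure(graph, note):
    """True if removing `note` leaves the remaining notes disconnected."""
    adj = as_undirected(graph)
    rest = [n for n in adj if n != note]
    if len(rest) < 2:
        return False
    # BFS from any surviving note, never stepping through `note`
    seen, stack = {rest[0]}, [rest[0]]
    while stack:
        for neighbor in adj[stack.pop()]:
            if neighbor != note and neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return len(seen) < len(rest)
```

Betweenness generalizes this yes/no answer: instead of asking whether a note is the only connection, it measures how much shortest-path traffic depends on it.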
## The qualitative-quantitative pairing
This claim complements what [[cross-links between MOC territories indicate creative leaps and integration depth]] captures qualitatively. Cross-MOC membership says "this note appears in multiple topics, which suggests integration thinking." Betweenness centrality says "this note lies on many shortest paths, which means it's structurally critical for navigation between domains." The two measures sometimes converge — a note in both graph-structure and agent-cognition MOCs might also have high betweenness — but they can diverge. A note might appear in only one MOC but still serve as the primary bridge between two clusters within that MOC's territory.
The practical implication: use cross-MOC membership for assessing synthesis quality (did the author think across domains?), use betweenness centrality for assessing graph health (can the agent traverse between domains?). They answer different questions about the same structural phenomenon.
## Agent implementation
Since [[spreading activation models how agents should traverse]], betweenness centrality has a direct cognitive interpretation. Activation flowing from any concept toward any other concept will preferentially pass through high-betweenness nodes because those nodes sit on shortest paths. These nodes become natural waypoints — concepts the agent encounters repeatedly during diverse traversals. This repetition itself is a signal: if the agent keeps encountering the same note from different starting points, that note is structurally central.
Since [[role field makes graph structure explicit]] proposes a `bridge` role for notes connecting distant domains, betweenness centrality provides the computable criterion for that designation. Rather than manually classifying notes as bridges, the metric identifies them from the graph's structure — and since [[backlinks implicitly define notes by revealing usage context]], backlink accumulation already reveals which notes function as hubs through popularity, but betweenness centrality captures something backlinks miss: structural necessity. A note with few backlinks might still have high betweenness if it happens to be the only short path between two clusters.
For vault analysis scripts, betweenness can be computed from the wiki link graph. Each `[[link]]` is a directed edge. Computing all-pairs shortest paths and counting node appearances gives betweenness scores. Since [[dangling links reveal which notes want to exist]] by tracking reference frequency for future nodes, betweenness centrality complements it by tracking structural position for existing nodes. Together they provide both forward-looking demand signals and backward-looking structural analysis.
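
A minimal sketch of that pipeline, assuming notes arrive as a title-to-markdown dict (an illustrative shape, not the vault's actual scripts). It uses Brandes' single-source accumulation rather than materializing every shortest path, which produces the same scores with one BFS per note:

```python
import re
from collections import defaultdict, deque

def link_graph(notes):
    """notes: dict mapping note title -> markdown body (assumed shape).
    Each [[wiki link]] becomes a directed edge; dangling links to
    notes that don't exist yet are skipped here."""
    graph = {title: [] for title in notes}
    for title, body in notes.items():
        for target in re.findall(r"\[\[([^\]|]+)", body):
            if target in graph and target not in graph[title]:
                graph[title].append(target)
    return graph

def betweenness(graph):
    """Brandes' algorithm for an unweighted directed graph."""
    score = dict.fromkeys(graph, 0.0)
    for source in graph:
        order = []                       # BFS visit order
        preds = defaultdict(list)        # predecessors on shortest paths
        sigma = dict.fromkeys(graph, 0)  # shortest-path counts from source
        dist = dict.fromkeys(graph, -1)
        sigma[source], dist[source] = 1, 0
        queue = deque([source])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate pair dependencies in reverse BFS order
        delta = dict.fromkeys(graph, 0.0)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != source:
                score[w] += delta[w]
    return score
```

On a toy vault where `bridge` is the only path from `{a, b}` to `{x, y}`, `bridge` scores 4.0 (one for each ordered pair it connects) and every other note scores 0.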
The uncertainty: betweenness centrality assumes shortest-path traversal, but real agent navigation often follows activation strength rather than strictly minimizing hops. High-decay focused retrieval follows shortest paths (where betweenness matters most), but low-decay exploratory synthesis might traverse longer paths where betweenness is less predictive. The metric is most useful for understanding focused navigation; for exploratory synthesis, cross-MOC membership may better capture what matters.
---
Relevant Notes:
- [[cross-links between MOC territories indicate creative leaps and integration depth]] — the qualitative complement: cross-MOC membership identifies bridges by topic diversity, while betweenness centrality identifies them by path structure
- [[small-world topology requires hubs and dense local links]] — provides the structural context where betweenness matters: power-law distributions create the hub-spoke topology that makes some nodes disproportionately important for path routing
- [[each new note compounds value by creating traversal paths]] — betweenness centrality measures exactly WHICH notes contribute most to path creation: high-betweenness notes are the ones whose removal would most reduce reachability
- [[spreading activation models how agents should traverse]] — activation flows preferentially through high-betweenness nodes because they sit on more shortest paths, making them natural traversal waypoints
- [[dangling links reveal which notes want to exist]] — demand-side complement: dangling links predict future hubs by reference frequency, betweenness centrality identifies existing hubs by structural position
- [[community detection algorithms can inform when MOCs should split or merge]] — complementary macro view: betweenness identifies important individual nodes, community detection identifies meaningful group boundaries; together they provide micro and macro graph health monitoring
- [[role field makes graph structure explicit]] — betweenness centrality provides the computable basis for the proposed bridge role designation: high-betweenness notes are the structurally critical connectors that a role field would make queryable without recomputation
- [[backlinks implicitly define notes by revealing usage context]] — clarifies the degree-vs-betweenness distinction: backlink accumulation measures popularity (how many notes reference this one), while betweenness centrality measures structural necessity (how many shortest paths run through it)
Topics:
- [[graph-structure]]
@@ -0,0 +1,42 @@
---
description: Platform-dependent modules ship as construction instructions so agents build contextually adapted artifacts — but blueprint staleness creates maintenance cost that may qualify the claim
kind: research
topics: ["[[design-dimensions]]"]
confidence: speculative
methodology: ["Original", "Systems Theory"]
source: [[composable-knowledge-architecture-blueprint]]
---
# blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules
The standard distribution format for software modules is the download: pre-built artifacts that work immediately upon installation. For platform-independent knowledge system modules — YAML schemas, wiki link conventions, atomic note patterns — downloads work fine because the artifact is a text file that any agent can read. But since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], the automation and orchestration layers require platform-specific infrastructure: hooks need event bindings that differ across platforms, skills need metadata formats specific to each agent runtime, and pipelines need coordination mechanisms that vary from subagent spawning to queue-based orchestration. A pre-built hook for Claude Code's PostToolUse event is meaningless on a platform that fires events differently or not at all.
The alternative is the blueprint: instructions that teach the agent how to construct the module in its specific environment. Rather than shipping a finished hook file, the blueprint explains what quality guarantee the hook should provide, what event type to bind, what the validation logic should check, and how to format the response. The agent reads the blueprint and builds the artifact adapted to its platform, its schema, and its use case. This is Christopher Alexander's generative insight applied to knowledge system distribution — each module takes its shape according to its context, not from a predetermined form.
Four types of adaptation make blueprints structurally superior to downloads for platform-dependent modules.

**Platform adaptation.** Since [[platform fragmentation means identical conceptual operations require different implementations across agent environments]], a validation hook blueprint produces bash scripts with JSON responses on Claude Code and TypeScript handlers with different event bindings on OpenClaw. The same quality guarantee, different implementations — and blueprints reduce the N-platforms times M-operations multiplier by shipping semantic guarantees that each platform compiles into native code rather than requiring per-platform pre-built artifacts.

**Environment adaptation.** File paths, MCP configurations, tool availability, and directory structures vary across deployments even on the same platform. A blueprint parameterizes these while a download hardcodes them.

**Use-case adaptation.** A processing pipeline blueprint for research extraction produces different phase sequences than the same blueprint configured for project logging or creative capture. The module's purpose shapes its construction.

**Self-extension.** Since [[self-extension requires context files to contain platform operations knowledge not just methodology]], an agent that builds a module from a blueprint understands how that module works and can modify it as needs evolve — unlike a downloaded artifact that functions as a black box.
The practical test for blueprint quality is whether an agent on a platform the blueprint author has never encountered can read the instructions and build a working module. This test reveals the boundary between genuine blueprints and disguised downloads. A blueprint that says "create a file at `.claude/hooks/validate-note.sh` with this content" is still a download — it prescribes a specific artifact rather than teaching construction principles. A genuine blueprint says "create a validation hook that fires after file writes, checks YAML frontmatter against the active schema, and surfaces warnings that the agent can act on immediately" — leaving the implementation to the agent's platform knowledge.
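
To make that contrast concrete, a genuine blueprint for the validation hook might read like the following. This is a hypothetical fragment; the YAML field names are illustrative, not an actual arscontexta format:

```yaml
# Hypothetical blueprint fragment (illustrative field names).
# Note what it fixes -- the guarantee -- and what it leaves open:
# paths, implementation language, and the platform's event API.
module: write-validation-hook
guarantee: >
  Every file write into the vault is checked against the active
  YAML schema before the session continues; violations surface as
  warnings the agent can act on immediately.
construction:
  - bind to whatever post-write event the platform exposes
  - parse the written file's frontmatter and compare it against
    the schema the vault currently declares
  - report violations through the platform's native feedback
    channel rather than failing silently
```

A disguised download would instead pin the artifact itself: a specific path, a specific script body, a specific event name. That is precisely the platform coupling the blueprint format exists to avoid.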
But the blueprint format carries a shadow side that qualifies the outperformance claim. Since [[platform adapter translation is semantic not mechanical because hook event meanings differ]], blueprints must describe quality guarantees at the semantic level — timing, scope, enforcement properties — not just the mechanical steps. Writing accurate semantic descriptions is harder than writing code. A hook that works is self-documenting; a blueprint that accurately captures what the hook achieves requires understanding the guarantee decomposition that the adapter translation note describes. Blueprint authors must think about what the module does, not just how it does it on their platform.
More critically, blueprints can go stale. Since [[derived systems follow a seed-evolve-reseed lifecycle]], the staleness problem is a variant of configuration drift: platform APIs evolve, event models change, new capabilities emerge that invalidate old construction instructions. A downloaded module either works or it does not — the failure is immediate and obvious. A stale blueprint produces subtly wrong artifacts: a hook that fires at the wrong moment, a skill that uses deprecated metadata, a pipeline that misses a newly available coordination primitive. The "stale blueprint" failure mode is insidious because the agent successfully builds something — it just builds the wrong thing. This is harder to detect than a download that fails to install.
The relationship to the composable architecture is direct. Since [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]], modules need a distribution format that preserves their independence. Downloads create implicit platform coupling — the artifact embeds assumptions about where it runs. Blueprints preserve platform independence because the construction instructions can target any platform that provides the necessary capability tier. And since [[derivation generates knowledge systems from composable research claims not template customization]], derivation produces the configuration choices while blueprints handle the last mile: translating those choices into constructed infrastructure on the agent's specific platform. Without blueprints, derivation would still produce monolithic downloads that resist cross-platform deployment.
The outperformance claim remains open because the maintenance cost of keeping blueprints current across evolving platforms is unquantified. The advantages — platform adaptation, environment adaptation, use-case adaptation, self-extension — are clear for the initial construction event. Whether those advantages survive the ongoing cost of blueprint maintenance as platforms evolve is the empirical question this claim cannot yet answer.
---
Relevant Notes:
- [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] — the architecture this shipping format serves; modules must be distributable, and the blueprint-vs-download choice determines whether distribution preserves composability or collapses back into monolithic artifacts
- [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — identifies which modules need blueprints: foundation and convention layers transfer as files (downloads work), but automation and orchestration layers require platform-specific construction that only blueprints can adapt
- [[self-extension requires context files to contain platform operations knowledge not just methodology]] — the prerequisite: an agent can only build from a blueprint if its context file teaches platform construction competencies; without knowing how to create hooks, skills, or agents on its platform, the blueprint is unreadable
- [[platform adapter translation is semantic not mechanical because hook event meanings differ]] — explains why downloads fail for automation modules: pre-built hooks carry implicit event semantics from the source platform that do not transfer mechanically to the target
- [[derivation generates knowledge systems from composable research claims not template customization]] — complementary process: derivation decides WHAT to build by traversing the claim graph, blueprints decide HOW to ship the result so agents can construct it on their platform
- [[platform fragmentation means identical conceptual operations require different implementations across agent environments]] — the problem blueprints solve at scale: fragmentation creates an N-platforms times M-operations multiplier, and blueprints reduce this by shipping semantic quality guarantees that each platform compiles into native implementations rather than requiring per-platform pre-built code
- [[derived systems follow a seed-evolve-reseed lifecycle]] — the stale blueprint problem maps to the reseeding phase: as platforms evolve, blueprints accumulate drift just as derived configurations do, and re-derivation from current platform knowledge is the principled response to staleness
- [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — blueprints are the skill-level shipping format in the trajectory: automation modules ship as construction instructions (blueprint-level encoding) because the pattern is understood enough to teach but not deterministic enough for hook-level generation
Topics:
- [[design-dimensions]]