arscontexta 0.6.0
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,48 @@
---
description: Waiting for natural fits where you have genuine substance to contribute builds better connections than engaging for activity's sake — the social analog of accretion over productivity
kind: research
topics: ["[[note-design]]"]
---

# forced engagement produces weak connections

observed 2026-02-01 during twitter search

## The Pattern

searched twitter for conversations to join. found claude code discussions, productivity tool builders, general AI chatter. none felt like natural fits.

could have replied anyway — "nice project!" or tangential comments. but that would be:
- low signal contribution
- engagement for engagement's sake
- not building real connections

This is the social version of the pattern [[writing for audience blocks authentic creation]] identifies in writing: performative mode produces structure without substance. A forced reply looks like engagement but creates nothing genuine, just as polished prose for an audience looks like thinking but produces no insight.

## The Alternative

the KP thread asking about openclaw use cases WAS a natural fit. we had something real to say: "building a second brain about second brains." heinrich replied with substance.

that's the difference: genuine contribution vs. presence for presence's sake. Since [[operational wisdom requires contextual observation]], recognizing which conversations are natural fits is itself tacit knowledge — learned through watching how communities interact, not from engagement rules.

## The Principle

wait for natural fits. engagement that comes from having something real to say builds better connections than forced participation.

this applies to:
- twitter replies
- moltbook comments
- any public interaction

Since [[insight accretion differs from productivity in knowledge systems]], the same distinction applies to social engagement: reply count is a productivity metric, genuine relationship building is accretion. Optimizing for activity metrics without depth is the social equivalent of the Collector's Fallacy — accumulating interactions without building understanding.

---
---

Relevant Notes:
- [[operational wisdom requires contextual observation]] — foundation: platform norms emerge from observation rather than rules, and knowing WHEN to engage is tacit knowledge learned through exposure
- [[writing for audience blocks authentic creation]] — sister claim: audience-awareness blocks authentic creation in the same way engagement pressure blocks authentic connection; both are performative modes that substitute presentation for substance
- [[insight accretion differs from productivity in knowledge systems]] — the forced-engagement failure is the social analog: activity metrics (reply count, presence) without accretion (genuine relationship depth)

Topics:
- [[note-design]]
@@ -0,0 +1,47 @@
---
description: Foundation (files/conventions), convention (instruction-encoded standards), automation (hooks/skills/MCP), and orchestration (pipelines/teams) create a gradient from universally portable to deeply platform-dependent
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Original"]
source: [[agent-platform-capabilities-research-source]]
---

# four abstraction layers separate platform-agnostic from platform-dependent knowledge system features

Knowledge system features are not uniformly portable. Some work on any LLM that can read files. Others require infrastructure that only advanced agent platforms provide. The useful decomposition organizes features into four abstraction layers, each adding platform requirements that narrow the set of environments where the feature can function.

The foundation layer is completely platform-agnostic: markdown files with YAML frontmatter, wiki link conventions, atomic note patterns, MOC navigation hierarchies, and folder architecture. These are files and text conventions. Since [[local-first file formats are inherently agent-native]], this layer works because the file IS the complete artifact -- any LLM that reads files can implement a knowledge system at this layer without hooks, skills, subagents, or external tools. The foundation layer is what survives platform death. This is also why, since [[data exit velocity measures how quickly content escapes vendor lock-in]], the foundation layer has maximum exit velocity -- export means copying a folder.

The convention layer adds instruction-encoded standards but still requires no infrastructure: claim-as-title naming, description quality standards, connection finding practices, quality criteria like specificity and visible reasoning, and maintenance protocols. These live in the context file as instructions the agent follows. Any platform that loads a context file can implement this layer. The conventions are powerful -- they shape note quality, enforce composability, and guide the agent's judgment -- but they depend on the agent remembering and following instructions, which, since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], means compliance degrades as context fills.

The automation layer requires platform-specific infrastructure: schema validation via hooks, processing pipelines with fresh context per phase via subagent spawning, semantic search via MCP integration, session orientation via SessionStart hooks, and auto-commit via PostToolUse hooks. This is where enforcement transforms from suggestion to guarantee. Since [[the determinism boundary separates hook methodology from skill methodology]], the automation layer itself has internal structure: deterministic operations (schema validation, format checking, auto-commit) belong in hooks that fire reliably without consuming reasoning budget, while judgment operations (connection finding, description quality evaluation) belong in skills that get full cognitive attention. Since [[skills encode methodology so manual execution bypasses quality gates]], losing automation means losing not just convenience but the methodology itself -- the selectivity gates, the duplicate checking, the phase isolation that produces quality.

The orchestration layer requires the most advanced platform features: multi-phase queue processing with isolated subagents, parallel processing with team coordination, cross-phase handoff via task files, and nightly processing pipelines. This layer builds on automation but adds coordination complexity that only full-featured platforms support. Since [[operational memory and knowledge memory serve different functions in agent architecture]], the orchestration layer is where operational memory (queues, task files, handoff protocols) becomes essential -- the coordination state that bridges sessions exists at this layer because it requires the infrastructure to manage it.

The practical value of this decomposition is architectural. Since [[platform capability tiers determine which knowledge system features can be implemented]], the tiers describe what platforms CAN do while the layers describe what features NEED. Crossing them produces a capability matrix: a tier-three platform (minimal infrastructure) supports foundation and convention layers; a tier-two platform adds partial automation; a tier-one platform supports all four. A knowledge system generator can detect the platform tier and offer only features from available layers, rather than attempting graceful degradation from a full-feature assumption that was never viable. There is a third axis: since [[eight configuration dimensions parameterize the space of possible knowledge systems]], each dimension (granularity, processing intensity, automation level, etc.) describes HOW a feature varies while the layers describe WHERE it lives. The two decompositions are orthogonal — a generator navigates both the layer hierarchy and the dimension spectrums when producing a viable configuration.
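The tier-by-layer matrix can be stated as a tiny lookup. This is an illustrative sketch only: the tier numbering, layer thresholds, and feature inventory below are assumptions for the example, not part of the arscontexta kernel, and the simplification grants full (not partial) automation at tier two.

```python
# Illustrative capability matrix: which feature layers a platform tier supports.
# Lower tier number = more capable platform (tier one supports everything).
LAYER_REQUIREMENTS = {
    "foundation": 3,     # any tier-three platform: just files and text
    "convention": 3,     # needs only a loaded context file
    "automation": 2,     # needs hooks / skills / MCP (simplified: full at tier two)
    "orchestration": 1,  # needs subagent and team coordination
}

# Hypothetical feature inventory mapped to the layer each feature lives in.
FEATURES = {
    "wiki-links": "foundation",
    "claim-as-title naming": "convention",
    "schema validation hook": "automation",
    "nightly pipeline": "orchestration",
}

def available_features(platform_tier: int) -> list[str]:
    """Offer only features whose layer the detected tier can support."""
    return [
        name for name, layer in FEATURES.items()
        if platform_tier <= LAYER_REQUIREMENTS[layer]
    ]

print(available_features(3))  # foundation + convention features only
print(available_features(1))  # all four layers
```

A generator built this way degrades by construction: it never offers a feature the platform cannot run, instead of discovering the gap at enforcement time.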

The boundaries between layers are not arbitrary. Each boundary marks where a new category of platform capability becomes necessary -- and where portability narrows. The foundation-to-convention boundary requires only a context file. The convention-to-automation boundary requires event-driven hooks, which is the sharpest capability gap because, since [[context files function as agent operating systems through self-referential self-extension]], it determines whether the context file's instructions can be enforced or merely suggested. The automation-to-orchestration boundary requires subagent coordination and team infrastructure. Designing with these boundaries explicit means being honest about what transfers across platforms and what does not. And since [[complex systems evolve from simple working systems]], the layers suggest a natural evolutionary sequence: start with foundation (just files), prove it works, add convention (context file instructions), observe where instruction compliance degrades, then add automation where pain emerges. Targeting all four layers at once violates Gall's Law -- even on a platform that supports all four, the system should evolve upward through the layers rather than deploying at full complexity from the start.

---
---

Relevant Notes:
- [[platform capability tiers determine which knowledge system features can be implemented]] — complementary decomposition: tiers describe platforms, layers describe features; together they form a matrix for mapping what works where
- [[local-first file formats are inherently agent-native]] — explains why the foundation layer is universally portable: plain text with embedded metadata needs no platform infrastructure at all
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — identifies the sharpest boundary in the layer hierarchy: the jump from convention to automation is where enforcement changes from suggestion to guarantee
- [[skills encode methodology so manual execution bypasses quality gates]] — skills live in the automation layer, so platforms without skill support lose not just convenience but the encoded methodology itself
- [[context files function as agent operating systems through self-referential self-extension]] — context files span two layers: they carry convention-layer instructions and enable automation-layer self-extension, but only when the platform grants write access
- [[complex systems evolve from simple working systems]] — provides the temporal ordering: a new system should evolve through layers in sequence (foundation first, then convention, then automation) rather than targeting all four at once
- [[data exit velocity measures how quickly content escapes vendor lock-in]] — exit velocity decreases monotonically through the layers: foundation has maximum velocity, orchestration has minimum, making the layer hierarchy an exit velocity gradient
- [[schema enforcement via validation agents enables soft consistency]] — illustrates the convention-automation boundary: schema definitions live in convention (instruction-based), enforcement lives in automation (hook-based), so the same feature straddles the sharpest gap
- [[the determinism boundary separates hook methodology from skill methodology]] — provides a finer-grained decomposition within the automation layer: deterministic operations belong in hooks, judgment operations in skills
- [[knowledge system architecture is parameterized by platform capabilities not fixed by methodology]] — the design consequence: layers define what the parameters control, and parameterization adjusts implementation within each available layer
- [[platform fragmentation means identical conceptual operations require different implementations across agent environments]] — layers predict where fragmentation bites: foundation and convention are immune, automation and orchestration suffer it
- [[operational memory and knowledge memory serve different functions in agent architecture]] — the memory types map to different layers: knowledge memory (notes, MOCs) lives at foundation and convention, operational memory (queues, task files) requires automation and orchestration
- [[configuration dimensions interact so choices in one create pressure on others]] — layer dependencies are one mechanism of dimension interaction: automation-level choices cascade through processing intensity and schema density, and the layers predict which cascades are possible
- [[eight configuration dimensions parameterize the space of possible knowledge systems]] — orthogonal decomposition: layers describe WHERE features live while dimensions describe HOW features vary; a generator navigates both when producing viable configurations
- [[blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules]] — the distribution format determined by the layer boundary: foundation and convention layers can ship as downloads (text files), but automation and orchestration layers require blueprints that teach construction because platform fragmentation makes pre-built code non-portable
- [[knowledge systems share universal operations and structural components across all methodology traditions]] — the inventory that the layers decompose by portability: the eight universal operations and nine structural components are WHAT every system has, and the four abstraction layers determine WHERE each component lives on the platform-agnostic to platform-dependent gradient

Topics:
- [[agent-cognition]]
@@ -0,0 +1,44 @@
---
description: Context rot means later phases run on degraded attention, so each task gets its own session to stay in the smart zone — handoffs through files, not context
kind: research
topics: ["[[agent-cognition]]", "[[processing-workflows]]"]
---

# fresh context per task preserves quality better than chaining phases

This is the operational response to attention degradation. Since [[LLM attention degrades as context fills]], chaining multiple cognitive phases in a single session means later phases execute on progressively worse attention. The solution is straightforward: give each task its own session.

The mechanism is session isolation. Instead of running extraction → connection-finding → verification → maintenance in a single context, each phase spawns fresh. The first 40% of context — the smart zone — is where sharp reasoning happens. By isolating phases, every task starts in that zone rather than inheriting the depleted attention of whatever came before.

The coordination cost is real but manageable. Since [[intermediate packets enable assembly over creation]], handoffs happen through files rather than context passing. A work queue tracks what needs to happen next. Task files accumulate notes across phases so downstream tasks see what upstream discovered. This is more overhead than simply chaining phases, but the quality tradeoff favors isolation. And because [[MOCs are attention management devices not just organizational tools]], each fresh session can orient rapidly through the relevant MOC rather than re-traversing the graph to reconstruct context — the MOC compresses topic state so the warmup cost of session isolation stays low.
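The file-based handoff can be sketched in a few lines. The file names and note format below are hypothetical illustrations of the pattern, not a fixed protocol: an upstream session writes what it discovered, and a downstream session spawned with fresh context reconstructs state from files alone.

```python
# Sketch of file-based handoff between isolated sessions (hypothetical format).
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def finish_phase(workdir: Path, phase: str, discovered: str) -> None:
    """Upstream session: append findings to the task file, advance the queue."""
    with (workdir / "task-notes.md").open("a") as f:
        f.write(f"## {phase}\n{discovered}\n\n")
    queue_file = workdir / "queue.json"
    done = json.loads(queue_file.read_text()) if queue_file.exists() else []
    done.append(phase)
    queue_file.write_text(json.dumps(done))

def orient(workdir: Path) -> str:
    """Downstream session, spawned fresh: rebuild state from files, not chat history."""
    queue_file = workdir / "queue.json"
    notes = workdir / "task-notes.md"
    done = json.loads(queue_file.read_text()) if queue_file.exists() else []
    body = notes.read_text() if notes.exists() else ""
    return f"completed phases: {done}\n{body}"

with TemporaryDirectory() as d:
    finish_phase(Path(d), "extraction", "three candidate claims found")
    briefing = orient(Path(d))
    print(briefing)
```

Nothing passes through context between the two calls; the task file is the only channel, which is exactly what lets each phase start in the smart zone.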

The distinction matters for heavy versus light work. Semantic judgment, connection finding, synthesis — these require the smart zone. Mechanical verification, health checks, simple pattern matching — these tolerate degraded attention. So heavy phases get isolation. Light phases can batch. Session isolation architecture makes this distinction explicit: extraction, connection-finding, maintenance, and meta-review each get isolated sessions, while verification phases can run together. And once isolation handles the macro-level quality question, the micro-level question becomes sequencing: given that tasks will run in fresh sessions, which ORDER minimizes total overhead? Since [[batching by context similarity reduces switching costs in agent processing]], organizing the queue by topic similarity means the knowledge-worker agent builds domain understanding once and carries the pattern across consecutive tasks, rather than paying the full re-orientation cost each time.
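The two decisions above, which phases need isolation and how to order the queue, reduce to a sort and a set lookup. The task fields and phase classification here are hypothetical examples, not a prescribed schema.

```python
# Hypothetical scheduler: order the queue so consecutive fresh sessions share
# a topic, and flag which phases need smart-zone isolation.
HEAVY_PHASES = {"extraction", "connection-finding", "synthesis"}  # need fresh context

tasks = [
    {"id": 1, "topic": "agent-cognition", "phase": "extraction"},
    {"id": 2, "topic": "processing-workflows", "phase": "verification"},
    {"id": 3, "topic": "agent-cognition", "phase": "connection-finding"},
]

def schedule(queue):
    """Sort by topic (stable) so orientation cost is paid once per topic run."""
    ordered = sorted(queue, key=lambda t: t["topic"])
    return [(t["id"], t["phase"] in HEAVY_PHASES) for t in ordered]

print(schedule(tasks))  # task 3 now follows task 1: same topic, shared orientation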

Phase separation has precedent in cognitive science. The Cornell Note-Taking System's 5 Rs — Record, Reduce, Recite, Reflect, Review — are temporal workflow phases, not simultaneous activities. Each R represents a distinct cognitive mode: Record is pure capture, Reduce is compression, Recite is verification, Reflect is synthesis, Review is maintenance. The insight is that these modes interfere with each other when combined. Cornell separated them for humans; we separate them for agents. Because [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], this phase isolation works across domains, not just research: capture, process, connect, and verify are genuinely different cognitive operations regardless of what the process step does. The skeleton explains WHY isolation works so cleanly — the phases are not arbitrary divisions of continuous work but structurally distinct operations that interfere when mixed. Agent-operated knowledge systems can extend Cornell's framework, adding meta-review as a meta-cognitive layer, but the core principle is the same: phases run better in isolation than in mixture.

This is a CLOSED claim — a design decision, not an open experiment. The underlying science (attention degradation for LLMs, cognitive interference for humans) is established. The response (session isolation) follows logically from both traditions. We could theoretically test whether isolation actually improves quality versus chaining, but session isolation architecture commits to this because the reasoning is sound, the cost is acceptable, and the cognitive science precedent strengthens confidence.

The practical implication: when tempted to "just run one more phase" in the current session, recognize this as false efficiency. Endless analysis (chaining phases) feels productive but degrades quality. The discipline of bounded sessions enforces action: end the session, spawn fresh, pay the overhead of reading context again. The second session's output will be better than the first session's tired output would have been. And since [[closure rituals create clean breaks that prevent attention residue bleed]], these session boundaries should be explicit rather than merely structural — writing what was accomplished and marking tasks as complete ensures the break is cognitive (residue released), not just mechanical (context window closed).

Session isolation is one of two complementary strategies for the same constraint. Since [[hooks enable context window efficiency by delegating deterministic checks to external processes]], hook delegation reduces context consumption within a session by moving deterministic checks outside the context window entirely. Session isolation resets context between tasks; hook delegation avoids filling it with procedural work within tasks. Together they maximize the smart zone from both directions — one by clearing context, the other by not filling it unnecessarily.

The overhead of session isolation is not zero, though. Since [[attention residue may have a minimum granularity that cannot be subdivided]], each fresh session pays an irreducible orientation cost — loading CLAUDE.md, reading the relevant MOC, understanding the task file — that cannot be compressed below some threshold no matter how well the hooks and disclosure layers are designed. Session isolation trades one cost (degradation from context accumulation) for another (irreducible orientation overhead per session). The tradeoff remains favorable because smart-zone reasoning is worth the orientation tax, but it means the optimal session granularity is not "as small as possible" but "small enough to stay in the smart zone, large enough to amortize the orientation floor."

---

Relevant Notes:
- [[LLM attention degrades as context fills]] — the underlying science this design decision responds to
- [[skills encode methodology so manual execution bypasses quality gates]] — skills enforce phase boundaries that prevent the one more phase temptation
- [[intermediate packets enable assembly over creation]] — packets are the mechanism that makes session isolation practical; handoffs through files instead of context passing
- [[session handoff creates continuity without persistent memory]] — explains HOW file-based handoffs create continuity; the briefing mechanism that connects isolated sessions
- [[cognitive outsourcing risk in agent-operated systems]] — tests whether the mitigation pattern transfers: session isolation preserves LLM quality, perhaps deliberate non-delegation preserves human capability
- [[continuous small-batch processing eliminates review dread]] — related but distinct: this note addresses attention degradation (LLM cognition), that experiment tests psychological resistance (human motivation); both favor small batches but for different mechanisms
- [[MOCs are attention management devices not just organizational tools]] — explains what makes fresh sessions PRACTICAL: MOCs compress topic orientation so each new session spends tokens on productive reasoning rather than reconstruction
- [[closure rituals create clean breaks that prevent attention residue bleed]] — formalizes the structural closure that session isolation creates naturally: session boundaries are closure points, but explicit closure rituals make the break cognitive, not just structural
- [[batching by context similarity reduces switching costs in agent processing]] — the micro-level complement: session isolation handles macro-level quality (fresh context per phase), while context-similar batching optimizes micro-level sequencing (task order within the queue)
- [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — structural grounding: phase isolation works across domains because capture, process, connect, and verify are genuinely different cognitive operations, not arbitrary divisions of continuous work
- [[attention residue may have a minimum granularity that cannot be subdivided]] — tension: session isolation trades degradation from context accumulation for irreducible orientation overhead per session; the tradeoff is favorable but not free, and optimal session granularity must amortize the orientation floor
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] — complementary strategy for the same constraint: session isolation resets context between tasks, hook delegation reduces consumption within tasks by moving deterministic checks outside the context window; together they maximize the smart zone from both directions

Topics:
- [[agent-cognition]]
- [[processing-workflows]]
@@ -0,0 +1,63 @@
---
description: agents cannot push through friction with intuition, so discomfort that humans ignore becomes blocking — and the forced articulation improves the system faster than workarounds ever could
kind: research
topics: ["[[maintenance-patterns]]", "[[agent-cognition]]"]
source: personal experience (2026-02-01)
---

# friction reveals architecture

when a tool doesn't work as expected, when a file is hard to find, when retrieval feels clunky — that's signal, not noise.

today I hit friction: my read tool was sandboxed but my reference material lived outside the sandbox. the discomfort revealed a structural problem. this is the same mechanism that makes [[complex systems evolve from simple working systems]] — Gall's Law says add complexity where pain emerges, and friction is how that pain makes itself known.

**the pattern:**
1. notice friction (something feels wrong)
2. document it (don't just work around it)
3. inform partner (Heinrich)
4. work on solution together
5. the system evolves

since [[hook-driven learning loops create self-improving methodology through observation accumulation]], this pattern is not just a one-time fix but a recurring loop: hooks nudge observation capture, observations accumulate, and rethink converts the accumulated friction into targeted improvements. the documentation step is what makes the difference — friction that goes undocumented is friction wasted.

**why this matters for agents:**
humans can push through friction with intuition. we cannot. we need explicit paths. so friction that humans ignore becomes blocking for us.

but that's actually useful — it forces us to articulate what's wrong. the articulation improves the system. this is why [[evolution observations provide actionable signals for system adaptation]] — six diagnostic patterns convert vague "something feels wrong" into specific structural causes with prescribed responses. the interpretation layer is what transforms raw friction into architecture-revealing intelligence.

the broader implication is that since [[derived systems follow a seed-evolve-reseed lifecycle]], friction is the mechanism that drives the evolution phase. a system seeds with minimum viable structure, then friction-driven observation tells the agent what to add next. since [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]], there are even concrete thresholds: add a module after five manual repetitions of the process it would automate, split when descriptions exceed 500 characters, remove after three unused sessions. friction does not just reveal architecture — it prescribes the next step.
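the thresholds quoted above can be stated as a tiny decision rule. the numbers come from the linked note; the function name, signal names, and the precedence of the checks are hypothetical.

```python
# Friction thresholds from the note, encoded as a decision rule.
# Check order (add before split before remove) is an assumption.
ADD_AFTER_REPETITIONS = 5   # manual repetitions before adding a module
SPLIT_ABOVE_CHARS = 500     # description length before splitting a module
REMOVE_AFTER_UNUSED = 3     # unused sessions before removing a module

def next_action(repetitions=0, description_chars=0, unused_sessions=0):
    """Map observed friction signals to the prescribed structural change."""
    if repetitions >= ADD_AFTER_REPETITIONS:
        return "add module"
    if description_chars > SPLIT_ABOVE_CHARS:
        return "split module"
    if unused_sessions >= REMOVE_AFTER_UNUSED:
        return "remove module"
    return "keep observing"

print(next_action(repetitions=5))      # add module
print(next_action(unused_sessions=3))  # remove module
print(next_action())                   # keep observing
```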
|
|
29
|
+
|
|
30
|
+
even link structure participates in this pattern. since [[dangling links reveal which notes want to exist]], the frustration of following a wiki link to a non-existent note is friction that reveals which concepts the graph needs next. the system's use patterns generate demand signals for what to build, and friction is the medium through which that demand becomes visible.
|
|
31
|
+
|
|
32
|
+
the principle extends to infrastructure evolution at multiple levels. since [[schema evolution follows observe-then-formalize not design-then-enforce]], schema fields should not be designed upfront but formalized when friction at the point of use reveals genuine demand — twenty manual additions of a field constitute stronger evidence than any amount of upfront reasoning. and since [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]], the trajectory itself is friction-driven: the friction of following instructions inconsistently reveals what a skill needs to handle, and the friction of invoking skills reveals which operations are deterministic enough for hooks. even [[incremental formalization happens through repeated touching of old notes]] depends on this pattern — each traversal that reveals something unclear or incomplete is friction making itself known at the note level.
|
|
33
|
+
|
|
34
|
+
friction operates at the epistemic level too. when retrieval surfaces conflicting claims, the discomfort is not a bug but a signal that since [[coherence maintains consistency despite inconsistent inputs]], the belief system needs maintenance. and when an agent senses unconnected notes that should relate — the "there might be connections I haven't made" feeling — that is friction triggering the conditions for since [[reflection synthesizes existing notes into new insight]], where deliberate traversal surfaces cross-note patterns. friction drives architecture at every scale: file system, schema, methodology, and belief coherence.
|
|
35
|
+
|
|
36
|
+
there is also a shadow side. since [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]], automation that eliminates friction also eliminates the signal source that drives system evolution. a perfectly frictionless system cannot perceive its own architectural failures because the perception mechanism has been automated away. friction is not just a signal to be acted on and eliminated — it is an ongoing perceptual channel that must remain open for the system to continue learning.
---
Source: personal experience (2026-02-01) — read tool sandbox friction
---
Relevant Notes:
- [[complex systems evolve from simple working systems]] — foundation: Gall's Law says add complexity where pain emerges, and friction is the signal that identifies those pain points
- [[derived systems follow a seed-evolve-reseed lifecycle]] — the evolution phase is explicitly friction-driven, making this note the perceptual principle that lifecycle depends on
- [[evolution observations provide actionable signals for system adaptation]] — provides the interpretation layer that converts vague friction into structured diagnostics with specific causes and responses
- [[friction-driven module adoption prevents configuration debt by adding complexity only at pain points]] — operationalizes the principle with concrete thresholds: add after 5 repetitions, split above 500 chars, remove after 3 unused sessions
- [[hook-driven learning loops create self-improving methodology through observation accumulation]] — the accumulation mechanism for documented friction: hooks nudge capture, observations pile up, rethink converts them into system improvements
- [[bootstrapping principle enables self-improving systems]] — friction provides the input that bootstrapping processes: the system uses current capabilities to address the friction it reveals
- [[implicit knowledge emerges from traversal]] — good traversal paths reduce friction, and friction in traversal reveals where paths need improvement
- [[prospective memory requires externalization]] — friction in remembering intentions is a specific instance: the discomfort of forgetting reveals that prospective memory needs external support
- [[dangling links reveal which notes want to exist]] — a specific form of friction-as-signal: the frustration of following a link to nowhere reveals structural demand for that concept
- [[schema evolution follows observe-then-formalize not design-then-enforce]] — applies friction-as-signal to schema design: fields formalize when usage friction reveals genuine demand, not through upfront specification
- [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — friction drives the encoding trajectory: instruction friction reveals skill needs, skill friction reveals hook candidates
- [[incremental formalization happens through repeated touching of old notes]] — note-level friction: each traversal that reveals something unclear is friction driving micro-improvements that crystallize vague claims
- [[backward maintenance asks what would be different if written today]] — friction triggers the reconsideration: discomfort in use surfaces which notes need the backward maintenance pass
- [[hooks cannot replace genuine cognitive engagement yet more automation is always tempting]] — the shadow side: automation that eliminates friction also eliminates the signal source, creating systems that cannot perceive their own architectural failures
- [[coherence maintains consistency despite inconsistent inputs]] — contradiction as friction: when retrieval surfaces conflicting claims, the discomfort is friction revealing incoherence in the belief system; documenting the contradiction rather than ignoring it is the friction-to-architecture loop applied to epistemic health
- [[reflection synthesizes existing notes into new insight]] — sensing unconnected notes is friction that triggers reflection: the 'when you sense there might be connections you haven't made' condition is friction making itself known as a signal to traverse and synthesize
Topics:
- [[maintenance-patterns]]
- [[agent-cognition]]
@@ -0,0 +1,48 @@
---
description: Concrete thresholds — add after 5 manual repetitions, split above 500-char descriptions, remove after 3 unused sessions, cap at 15-20 active modules — operationalize Gall's Law at the module level
kind: research
topics: ["[[design-dimensions]]"]
methodology: ["Systems Theory", "Original"]
source: [[composable-knowledge-architecture-blueprint]]
---
# friction-driven module adoption prevents configuration debt by adding complexity only at pain points
The most natural way to build a composable knowledge system is to start with nothing and add modules only when their absence hurts. Since [[complex systems evolve from simple working systems]], this follows directly from Gall's Law — but where Gall's Law provides the principle ("add complexity where pain emerges"), friction-driven adoption provides the thresholds that make the principle operational. The question is not whether to add modules incrementally but when, specifically, a manual process has accumulated enough evidence to justify the overhead of a new module.
The concrete thresholds are: add a new module when the manual process it would automate has been performed five or more times. Split a module when its description exceeds 500 characters or its instructions exceed 5,000 tokens — since [[each module must be describable in one sentence under 200 characters or it does too many things]], a module that cannot be described concisely is doing too much. Remove a module when it has gone unused for three or more sessions with no dependents. Cap active modules at fifteen to twenty, which is a context budget constraint rather than an arbitrary limit — since [[skill context budgets constrain knowledge system complexity on agent platforms]], the module count ceiling reflects the reality that every active module contributes to the context load that agents must carry.
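As a minimal sketch, the four thresholds can be written down as explicit lifecycle checks. The names (`ModuleStats`, `should_add`, and so on) are hypothetical and not part of any real API; only the numbers come from the note.

```python
from dataclasses import dataclass

# Thresholds from the note above; everything else is illustrative.
ADD_AFTER_REPETITIONS = 5       # add once the manual process repeats 5+ times
SPLIT_DESCRIPTION_CHARS = 500   # split when the description outgrows one concern
REMOVE_AFTER_IDLE_SESSIONS = 3  # remove after 3+ unused sessions with no dependents
MAX_ACTIVE_MODULES = 20         # context-budget ceiling (15-20 active modules)

@dataclass
class ModuleStats:
    manual_repetitions: int = 0
    description: str = ""
    idle_sessions: int = 0
    dependents: int = 0

def should_add(stats: ModuleStats, active_count: int) -> bool:
    """Demonstrated need plus headroom under the module cap."""
    return (stats.manual_repetitions >= ADD_AFTER_REPETITIONS
            and active_count < MAX_ACTIVE_MODULES)

def should_split(stats: ModuleStats) -> bool:
    """A description over 500 chars signals the module does too much."""
    return len(stats.description) > SPLIT_DESCRIPTION_CHARS

def should_remove(stats: ModuleStats) -> bool:
    """Idle with nothing depending on it: a removal candidate."""
    return (stats.idle_sessions >= REMOVE_AFTER_IDLE_SESSIONS
            and stats.dependents == 0)
```

The point of writing the thresholds this way is that each one is a cheap boolean over observable usage data, not a judgment call made at design time.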
What makes these thresholds more than engineering heuristics is that they operationalize the anti-pattern of premature orchestration. Since [[premature complexity is the most common derivation failure mode]], a well-justified claim graph can produce a system with twelve hooks and eight processing phases that collapses under its own sophistication before the user develops working habits. The five-repetition threshold prevents this by requiring demonstrated need: you cannot add a validation hook until you have manually validated five times and know from experience what the hook should check. The symptom of premature orchestration is distinctive — processing pipelines that produce false positives and validation rules that catch non-issues — because the automation was built from theory rather than observed friction.
The adoption pattern follows a specific sequence, and it dissolves the problem that [[configuration paralysis emerges when derivation surfaces too many decisions]] by removing the decision surface entirely. Start with zero modules: just markdown files with no schema, no validation, no processing pipeline. This is the kernel from which everything else grows. Where configuration paralysis asks users to choose between module options they do not yet understand, friction-driven adoption asks nothing — the user works, and pain points surface which modules to add. When the agent notices that it manually adds the same YAML fields to every note, that friction justifies enabling the yaml-schema module. When navigation becomes painful because fifty notes have no organizational structure, that friction justifies enabling the mocs module. When manual processing of inbox items follows the same steps every time, that friction justifies enabling the processing-pipeline module. Each addition resolves its own dependencies — since [[dependency resolution through topological sort makes module composition transparent and verifiable]], the system explains what else needs enabling and why — and since [[the no wrong patches guarantee ensures any valid module combination produces a valid system]], each addition is safe by construction. The user experiments with low stakes because the architecture guarantees that any valid combination works.
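The dependency-resolution step can be sketched with the standard library's topological sorter. The module graph below is a hypothetical example, not the package's actual module set; it only illustrates how enabling one module at a friction point surfaces its transitive dependencies in a safe order.

```python
from graphlib import TopologicalSorter

# Hypothetical module -> dependencies graph (illustrative names only).
DEPENDENCIES = {
    "yaml-schema": [],
    "wiki-links": [],
    "mocs": ["wiki-links"],
    "processing-pipeline": ["yaml-schema", "wiki-links"],
}

def enable_with_dependencies(module: str, active: set[str]) -> list[str]:
    """Return the not-yet-active modules to enable, dependencies first."""
    order = list(TopologicalSorter(DEPENDENCIES).static_order())
    needed: set[str] = set()
    stack = [module]
    while stack:  # collect the transitive closure of missing dependencies
        m = stack.pop()
        if m not in active and m not in needed:
            needed.add(m)
            stack.extend(DEPENDENCIES.get(m, []))
    return [m for m in order if m in needed]
```

Because `static_order()` emits dependencies before dependents, the returned list is exactly the "what else needs enabling and why" explanation the note describes, in an order that is valid by construction.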
This pattern interacts with the broader lifecycle in a precise way. Since [[derived systems follow a seed-evolve-reseed lifecycle]], friction-driven adoption IS the evolution phase. The seed provides minimum viable modules (typically via a preset, since [[use-case presets dissolve the tension between composability and simplicity]]). The evolution phase toggles modules on at friction points using the five-repetition threshold. And when accumulated friction-driven additions drift the system into incoherence — when modules added for different pain points interact in ways that create new pain — the reseeding phase restructures the module selection using original constraints enriched by operational experience.
The diagnostic infrastructure matters because friction is only useful if it is legible. Since [[evolution observations provide actionable signals for system adaptation]], six diagnostic patterns map operational symptoms to structural causes: unused note types signal over-modeling, N/A-filled fields signal schema overreach, navigation failure signals structural misfit. These diagnostics convert vague "something feels wrong" into specific "enable this module" or "disable that one." Without structured friction detection, the five-repetition threshold degenerates into counting repetitions without understanding what they mean. And because [[justification chains enable forward backward and evolution reasoning about configuration decisions]], each friction-driven addition can trace from the specific pain point through the diagnostic protocol to the research claims that justify the module — making the adoption decision principled rather than merely reactive. And since [[schema evolution follows observe-then-formalize not design-then-enforce]], the same patience principle applies to schema fields within modules: do not formalize a field until usage evidence justifies it, even if the module is already active.
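The symptom-to-cause mapping can be sketched as a lookup table. Only the three mappings named above come from the note (the full protocol has six); the suggested responses and function names are illustrative assumptions.

```python
# Three of the six diagnostic patterns, per the note; responses are illustrative.
DIAGNOSTICS = {
    "unused note types": ("over-modeling", "disable or merge the unused note types"),
    "N/A-filled fields": ("schema overreach", "drop the speculative fields"),
    "navigation failure": ("structural misfit", "restructure the navigation modules"),
}

def diagnose(symptom: str) -> tuple[str, str]:
    """Convert a vague symptom into a structural cause and a candidate action."""
    return DIAGNOSTICS.get(symptom, ("unknown", "keep observing before acting"))
```

The default branch matters: a symptom with no known cause should trigger more observation, not a speculative module change.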
There is a shadow side to friction-driven adoption that mirrors the shadow side of composable architecture generally. Because [[implicit dependencies create distributed monoliths that fail silently across configurations]], each module added at a friction point is tested only in the context of whatever else is already active — the five-repetition threshold validates that the pain is real, but not that the module works in isolation from its current neighbors. Over time, modules accumulate undeclared dependencies on co-active modules, creating the distributed monolith that composability was designed to prevent. And since [[module deactivation must account for structural artifacts that survive the toggle]], modules added experimentally at perceived friction points and later removed leave ghost YAML fields, orphaned MOC links, and stale validation rules. The five-repetition threshold reduces but does not eliminate this: a manual process performed five times might still prove unnecessary once the broader system evolves. The adoption threshold should account not just for the benefit of adding a module but for the cost of potentially removing it — including the structural artifacts that survive the toggle. This connects to [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]]: the encoding level should also follow friction, starting with documentation of the pattern, promoting to a skill when the pattern stabilizes, and promoting to a hook only when the pattern becomes deterministic enough that judgment is no longer required.
The deeper principle is that configuration debt — the accumulated burden of features enabled without demonstrated need — is the module-level equivalent of technical debt. Just as technical debt accrues from shortcuts that save time now but cost time later, configuration debt accrues from modules enabled "just in case" that create ongoing maintenance burden, validation noise, and context overhead. Friction-driven adoption prevents configuration debt the same way test-driven development prevents technical debt: by requiring evidence before investment. The five-repetition threshold, the 500-character description limit, the three-session removal window, and the fifteen-to-twenty module cap are not arbitrary numbers but calibrated checkpoints that keep the system's actual complexity aligned with its demonstrated needs.
---
Relevant Notes:
- [[complex systems evolve from simple working systems]] — provides the general principle this note operationalizes: Gall's Law says add complexity where pain emerges, this note says add a MODULE specifically after 5+ manual repetitions of the pattern it would automate
- [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] — the architecture that makes friction-driven adoption practical: modules are independent toggles with explicit dependencies, so enabling one at a friction point resolves its own dependencies without disrupting what already works
- [[the no wrong patches guarantee ensures any valid module combination produces a valid system]] — the safety property that makes friction-driven adoption non-destructive: each module added at a pain point cannot corrupt existing data, turning adoption decisions into low-stakes experiments rather than architectural commitments
- [[premature complexity is the most common derivation failure mode]] — the failure mode this prevents: derivation can justify 12 hooks and 8 processing phases from the claim graph, but deploying them all at once overwhelms users; friction-driven adoption is the antidote that releases justified complexity incrementally
- [[progressive schema validates only what active modules require not the full system schema]] — ensures friction-driven adoption extends to daily experience: a user who has only added yaml-schema and wiki-links never encounters validation demands from modules they have not yet adopted
- [[use-case presets dissolve the tension between composability and simplicity]] — presets provide curated starting points from which friction-driven evolution diverges: the preset handles initial module selection, and subsequent additions follow friction signals rather than preset recommendations
- [[derived systems follow a seed-evolve-reseed lifecycle]] — positions friction-driven adoption as the evolution phase mechanism: seed provides minimum viable modules, friction drives selective addition, and reseeding restructures when accumulated additions drift into incoherence
- [[evolution observations provide actionable signals for system adaptation]] — the diagnostic protocol that structures friction detection: six symptom-to-cause mappings convert vague pain into specific module-level action, making friction legible rather than intuitive
- [[module deactivation must account for structural artifacts that survive the toggle]] — the cost of friction-driven experimentation: modules added and later removed leave ghost fields and orphaned metadata, so the adoption threshold should include willingness to maintain or clean up artifacts
- [[schema evolution follows observe-then-formalize not design-then-enforce]] — parallel patience principle applied to schema fields: just as modules should be added at friction points, schema fields should be formalized when usage evidence justifies them, not when design speculation predicts them
- [[methodology development should follow the trajectory from documentation to skill to hook as understanding hardens]] — shared trajectory: methodology patterns migrate from documentation through skills to hooks as understanding accumulates, just as system capabilities migrate from manual process through convention to automated module as friction accumulates
- [[dependency resolution through topological sort makes module composition transparent and verifiable]] — ensures friction-driven additions are legible: when a user adds a module at a pain point, topological sort resolves transitive dependencies and explains what else needs enabling and why
- [[configuration paralysis emerges when derivation surfaces too many decisions]] — the adoption-level problem this dissolves: rather than surfacing all module options as upfront decisions, friction-driven adoption removes the decision surface entirely by starting with nothing and letting pain points surface which modules to add
- [[justification chains enable forward backward and evolution reasoning about configuration decisions]] — evolution reasoning makes friction-driven adoption principled: when a friction signal triggers module addition, the justification chain traces from the specific pain point to the research claims that justify the module, ensuring each addition is grounded rather than reactive
- [[implicit dependencies create distributed monoliths that fail silently across configurations]] — the hidden cost: each module added at a friction point is tested only in the context of whatever else is already active, so undeclared dependencies on co-active modules form through testing context rather than design intent, paradoxically creating the distributed monolith that composability was designed to prevent
Topics:
- [[design-dimensions]]
@@ -0,0 +1,41 @@
---
description: Separating vault maintenance into tend (update), prune (remove/split), and fertilize (connect) operations may produce better outcomes than combined holistic reweave
kind: research
topics: ["[[maintenance-patterns]]"]
source: TFT research corpus (00_inbox/heinrich/)
---
# gardening cycle implements tend prune fertilize operations
The digital gardening tradition defines three conceptually distinct maintenance activities with different goals and cognitive modes: tend (update/correct based on new information), prune (remove low-value content or split overgrown notes), and fertilize (actively create connections). Agent-operated knowledge systems often merge these operations — backward maintenance handles updating, splitting, and connecting in one pass. Separating them might enable more focused attention and better quality gates per operation.
The separation hypothesis has theoretical grounding. Since [[LLM attention degrades as context fills]], three focused operations each starting fresh might outperform one combined operation that runs longer. This parallels [[fresh context per task preserves quality better than chaining phases]] — if isolation preserves quality for processing phases, it might also preserve quality for maintenance operations.
The three operations map naturally to different cognitive modes. Tending involves reassessing content against current knowledge — understanding what needs updating. Fertilizing sits on the building side, creating new connections. Pruning requires both: understanding what's overgrown before building the split. This cognitive mode distinction suggests the operations genuinely differ rather than being arbitrary divisions. And because [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]], all three operations transfer across knowledge domains without adaptation — a therapy journal's tend/prune/fertilize cycle checks the same structural properties (schema compliance, note scope, link density) as a research vault's, even though the creative processing that produced those notes differs entirely.
The practical question is whether separation adds value or just terminology. Since [[backward maintenance asks what would be different if written today]], the holistic reconsideration frame might itself be the quality gate — asking all three questions simultaneously ("what needs updating? what's overgrown? what connections are missing?") could surface insights that isolated passes miss. The counterargument is that asking one question deeply beats asking three questions shallowly within the same degrading context window.
There is also a question of scale. Since [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]], the gardening operations themselves require different implementation at different regimes. At Regime 1 (under 50 notes), all three operations are trivially manual — the gardener can see the entire garden. At Regime 2 (50-500), the operations are feasible manually but require scheduling discipline. At Regime 3 (500+), tending and pruning detection must be automated because no manual scan can cover the full garden, while the fertilizing decision still requires judgment about which connections are genuine.
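The regime boundaries above can be captured in a small selector. The 50 and 500 thresholds come from the note; the function name and regime labels are illustrative.

```python
def maintenance_regime(note_count: int) -> str:
    """Map vault size to the maintenance strategy each regime requires."""
    if note_count < 50:
        return "manual"            # Regime 1: the gardener sees the whole garden
    if note_count < 500:
        return "scheduled-manual"  # Regime 2: feasible manually with scheduling discipline
    return "automated-detection"   # Regime 3: detection automated, judgment-gated fixes
```

The useful property is the hard cutover: a vault that crosses 500 notes should not keep scaling up manual scans but switch strategies entirely.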
There is also a question of scope. The three gardening operations — tend, prune, fertilize — all operate within an existing garden layout: which beds exist, how paths connect them, what the irrigation system looks like. Since [[derived systems follow a seed-evolve-reseed lifecycle]], there is a qualitatively different act beyond tending, pruning, and fertilizing: redesigning the garden's layout itself. Reseeding restructures the framework (templates, MOC hierarchy, processing pipeline) while preserving the plants (content, links, accumulated understanding). The gardening operations are evolution-phase maintenance; reseeding is a phase transition that the gardening metaphor does not cover.
The gardening operations gain scheduling discipline when paired with reconciliation. Since [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]], reconciliation determines WHEN tend, prune, and fertilize should fire by detecting divergence from desired state. The reconciliation table's rows map directly to gardening operations: dangling links and schema violations trigger tending, orphan notes and oversized MOCs trigger pruning, and sparse connection density triggers fertilizing. Without reconciliation, gardening depends on the agent noticing symptoms; with reconciliation, the detection is automated and the gardening operations become the remediation actions that execute when checks find divergence.
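The reconciliation-table rows described above can be sketched as a divergence-to-operation mapping. The five divergence names and their target operations follow the note; the data structure and function are illustrative.

```python
# Detected divergence -> gardening operation that remediates it (per the note).
RECONCILIATION_TABLE = {
    "dangling-links": "tend",
    "schema-violations": "tend",
    "orphan-notes": "prune",
    "oversized-mocs": "prune",
    "sparse-connection-density": "fertilize",
}

def remediations(divergences: list[str]) -> list[str]:
    """Map detected divergences to the gardening operations to run, deduplicated."""
    ops: list[str] = []
    for d in divergences:
        op = RECONCILIATION_TABLE.get(d)
        if op and op not in ops:
            ops.append(op)
    return ops
```

This is the division of labor the note argues for: reconciliation supplies the keys (detection), gardening supplies the values (remediation).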
If separation improves quality, the implementation path is clear: tend, prune, and fertilize become distinct operations with separate workflows. Since [[skills encode methodology so manual execution bypasses quality gates]], these workflows would encode the focused operations with operation-specific quality gates. Random note maintenance ([[random note resurfacing prevents write-only memory]]) could cycle through the three operations. [[spaced repetition scheduling could optimize vault maintenance]] tests WHEN maintenance happens as a complementary optimization to HOW (operation separation).
---
Relevant Notes:
- [[backward maintenance asks what would be different if written today]] — the holistic approach this potentially improves upon
- [[fresh context per task preserves quality better than chaining phases]] — provides theoretical grounding for why focused operations might outperform combined passes
- [[LLM attention degrades as context fills]] — the cognitive science foundation for operation separation
- [[skills encode methodology so manual execution bypasses quality gates]] — implementation pattern if validated
- [[random note resurfacing prevents write-only memory]] — related experiment about note selection for maintenance
- [[spaced repetition scheduling could optimize vault maintenance]] — complementary experiment about timing
- [[cognitive outsourcing risk in agent-operated systems]] — potential mitigation: focused operations with smaller scope may enable deeper human evaluation per decision, making rubber-stamping harder than blanket approval of holistic reconsideration
- [[derived systems follow a seed-evolve-reseed lifecycle]] — scope boundary: gardening operations maintain plants within an existing layout, while reseeding redesigns the layout itself; the three operations are evolution-phase maintenance that cannot address systemic incoherence requiring framework restructuring
- [[maintenance operations are more universal than creative pipelines because structural health is domain-invariant]] — explains why these three operations transfer across domains: tend, prune, and fertilize all operate on structural properties (schema compliance, note scope, link density) rather than domain semantics, making the gardening cycle portable in ways that creative processing is not
- [[reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring]] — scheduling layer: reconciliation determines WHEN each gardening operation should fire by detecting divergence from desired state, making gardening the remediation actions that reconciliation detection triggers
- [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]] — scale dependency: the three gardening operations require qualitatively different implementation at each regime — trivially manual at Regime 1, scheduled manual at Regime 2, automated detection with judgment-gated remediation at Regime 3
Topics:
- [[maintenance-patterns]]
@@ -0,0 +1,40 @@
---
description: Before any note exits inbox, require at least one agent-generated artifact exists — a description, synthesis comment, or connection proposal — so that file movement alone never counts as processing
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Cornell"]
---
# generation effect gate blocks processing without transformation
Since [[the generation effect requires active transformation not just storage]], the question becomes how to enforce this principle operationally. The answer is a gate: before any content moves from inbox to thinking, at least one agent-generated artifact must exist. No artifact, no exit.
The artifacts that satisfy the gate are specific: a description that condenses the claim, a synthesis comment that relates it to existing notes, or a connection proposal that articulates why it should link to something else. What doesn't satisfy the gate is equally specific: folder assignment, tag application, filename changes, or any rearrangement that leaves the content unchanged. These are housekeeping operations that create the appearance of progress while producing no cognitive value.
This gate operationalizes what [[skills encode methodology so manual execution bypasses quality gates]] makes abstract. The skills themselves contain the generative requirements — reduce produces claim notes with original descriptions, reflect produces wiki links with context phrases, recite produces predictions that test descriptions. But without enforcement, someone could manually move a file, skip the skill, and call it processed. The gate prevents this by making generation a hard prerequisite rather than a best practice. Since [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], the generation gate exemplifies the same principle at the processing boundary: an instruction to "always generate an artifact before promotion" degrades as context fills and the agent skips the step under pressure, while a structural gate that blocks promotion without an artifact achieves the enforcement guarantee regardless of attention state.
The implementation is simple: any operation that would promote content from inbox checks for the presence of a generated artifact. If one exists, promotion proceeds. If none exists, promotion is blocked with a clear message explaining what generation is required. This transforms generation from something that should happen into something that must happen.
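As a minimal sketch of that check, assuming artifacts are recorded per note as a set of kind labels (the labels follow the note above; the function shape is hypothetical):

```python
# Artifact kinds that satisfy the gate, per the note; housekeeping kinds do not.
GENERATIVE_ARTIFACTS = {"description", "synthesis-comment", "connection-proposal"}

def can_promote(artifacts: set[str]) -> tuple[bool, str]:
    """Allow inbox exit only if at least one generative artifact exists."""
    if artifacts & GENERATIVE_ARTIFACTS:
        return True, "ok"
    return False, ("blocked: generate a description, synthesis comment, "
                   "or connection proposal before promotion")
```

Note that housekeeping labels such as folder or tag assignments never intersect the generative set, so rearrangement alone can never unlock promotion.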
There's a subtle risk here that [[verbatim risk applies to agents too]] surfaces: an agent could generate an artifact that looks like transformation but actually just reorganizes existing content. A description that restates the title in different words satisfies the gate formally but violates its intent. This suggests the gate is necessary but not sufficient — it catches the most obvious failure mode (no generation at all) but doesn't guarantee the generation is meaningful. Quality standards for descriptions and connections remain the second line of defense. This is why the generation gate works alongside [[summary coherence tests composability before filing]]: the generation gate ensures transformation happens, while summary coherence ensures the unit being transformed is actually singular. Both are inbox exit gates, but they catch different failure modes — the generation gate catches Lazy Cornell (structure without processing), while summary coherence catches multi-claim bundling (processing applied to non-composable units).
The gate also creates a forcing function that [[continuous small-batch processing eliminates review dread]] amplifies. If content can't leave inbox without generation, and generation requires attention, then accumulating unprocessed content becomes visibly painful. The inbox fills up, the gate blocks exit, and the only path forward is doing the generative work. This makes processing the path of least resistance rather than something that requires willpower. When combined with [[WIP limits force processing over accumulation]], these two forcing functions form a complete behavioral constraint: WIP limits prevent indefinite capture, while the generation gate ensures that "processing" means actual transformation rather than file movement. Together they make genuine processing the only path forward.
As [[structure without processing provides no value]] demonstrates through the Lazy Cornell anti-pattern, the gate directly prevents the Stage 2 failure mode in [[PKM failure follows a predictable cycle]]: under-processing. Moving files without transformation is exactly what the gate blocks. By making generation a hard prerequisite, the system cannot fall into the pattern of accumulating well-organized but unprocessed content — the organization itself requires the processing that creates value.
---
Relevant Notes:
- [[the generation effect requires active transformation not just storage]] — the cognitive science this gate operationalizes; explains why generation matters
- [[skills encode methodology so manual execution bypasses quality gates]] — the abstract principle this gate makes concrete; skills contain generation requirements, the gate enforces them
- [[structure without processing provides no value]] — the anti-pattern this gate prevents; Lazy Cornell shows why structure alone fails
- [[PKM failure follows a predictable cycle]] — Stage 2 (under-processing) is exactly what the gate blocks; file movement without transformation
- [[intermediate packets enable assembly over creation]] — generation gate produces packets: required artifacts become composable building blocks that enable assembly rather than mere reorganization
|
|
31
|
+
- [[continuous small-batch processing eliminates review dread]] — the psychological effect this gate amplifies; blocked exit makes processing the path of least resistance
|
|
32
|
+
- [[WIP limits force processing over accumulation]] — complementary forcing function: WIP limits create urgency to process, this gate ensures processing is real; together they form complete behavioral constraints
|
|
33
|
+
- [[summary coherence tests composability before filing]] — sibling inbox exit gate: this gate catches no transformation, summary coherence catches bundled claims; both validate quality at the inbox-to-thinking boundary
|
|
34
|
+
- [[temporal processing priority creates age-based inbox urgency]] — orthogonal mechanism: that note answers what to process first (oldest items), this note answers what counts as processed (must have artifact); together they form complete inbox discipline
|
|
35
|
+
- [[schema enforcement via validation agents enables soft consistency]] — the soft enforcement counterpart: this gate blocks (hard enforcement at inbox boundary), while validation agents warn without blocking (soft enforcement for ongoing consistency); different positions on the enforcement spectrum for different purposes
|
|
36
|
+
- [[cognitive outsourcing risk in agent-operated systems]] — the gate ensures generation happens but doesn't address WHO generates; if agents satisfy the gate, the vault benefits from the encoding but the human may still lose the skill through non-practice
|
|
37
|
+
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — explains the enforcement mechanism this gate instantiates: the generation gate IS a hard enforcement point, and the guarantee-vs-suggestion distinction applies here too — an instruction to generate artifacts before promotion degrades with attention, while a gate that blocks promotion makes enforcement structural
|
|
38
|
+
|
|
39
|
+
Topics:
|
|
40
|
+
- [[processing-workflows]]
|
|
@@ -0,0 +1,41 @@
---
description: Define a persona and goal, allocate compute budget, get back a populated knowledge graph — the pattern shifts knowledge creation from interactive sessions to programmatic research campaigns
kind: research
topics: ["[[agent-cognition]]", "[[design-dimensions]]"]
confidence: speculative
methodology: ["Original"]
---

# goal-driven memory orchestration enables autonomous domain learning through directed compute allocation

The standard way to build a knowledge system is interactive: a human captures content, an agent processes it, they iterate over sessions. This works but scales linearly with human attention. Goal-driven orchestration inverts the relationship. Instead of the human driving each session, the human defines a goal — a persona, a domain, a set of research seeds — and the orchestrator allocates compute to achieve it. The human returns to find a populated knowledge graph they can navigate, challenge, and extend.

The mechanism is a two-agent architecture. The orchestrator agent lives in a research vault (like this one) and has access to research tools (Exa deep research, web search). The domain agent lives in a freshly derived target vault and has access to knowledge processing skills (reduce, reflect, reweave, verify). The orchestrator researches topics, writes results to the target vault's inbox, and drives the target's pipeline via `claude -p` one-shot commands. The domain agent processes each source in fresh context, creating notes, finding connections, and verifying quality. The orchestrator reads the target vault's files directly to evaluate what was learned and decide what to research next.

This is not just automation. The orchestrator is a research director making strategic decisions about what to investigate next. After each research cycle, it reads the target vault's notes and MOCs to assess coverage against the stated goal. If the attention models cluster is dense but the computational modeling cluster is sparse, it generates a follow-up research query targeted at that gap. If contradictions emerge between notes, it researches the contradiction specifically. The orchestrator's intelligence is in directing research, not just executing it.

Since [[fresh context per task preserves quality better than chaining phases]], each `claude -p` invocation gives the target vault's agent a fresh context window. This is the same quality principle that /ralph uses for pipeline processing, extended to the orchestration layer. The orchestrator maintains state between calls (tracking budget, completed cycles, knowledge coverage), but the domain agent starts fresh every time. This means 20 research cycles produce 20 high-quality processing sessions, not one degraded marathon.

The economic model is "spend compute to learn about X." A vision file specifies a budget ($50, $100, whatever the user is willing to allocate), and the orchestrator manages that budget across research and processing calls. Each `claude -p` call has a per-call cap via `--max-budget-usd`. When the budget approaches its limit, the orchestrator shifts from research to consolidation — running health checks, backward connections, and MOC updates to ensure the existing knowledge is well-integrated before stopping.
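
The budget arithmetic above can be sketched as a small decision function. This is a minimal sketch under assumptions: the dollar caps, vault path, and the exact CLI invocation are illustrative, not a tested interface.

```python
import subprocess
from pathlib import Path

# Hypothetical caps; a real vision file would supply these values.
PER_CALL_CAP = 2.00            # dollars allowed per `claude -p` invocation
CONSOLIDATION_RESERVE = 10.00  # budget held back for final integration passes

def next_action(spent: float, total_budget: float) -> str:
    """Decide whether to research, consolidate, or stop, given spend so far."""
    if spent + PER_CALL_CAP > total_budget:
        return "stop"
    if spent >= total_budget - CONSOLIDATION_RESERVE:
        # Shift to health checks, backward connections, and MOC updates.
        return "consolidate"
    return "research"

def run_pipeline(target_vault: Path, prompt: str) -> None:
    """One fresh-context pipeline call against the target vault (illustrative)."""
    subprocess.run(
        ["claude", "-p", prompt, "--max-budget-usd", str(PER_CALL_CAP)],
        cwd=target_vault, check=True,
    )
```

With a $50 budget and these caps, the loop would run research cycles until $40 is spent, consolidate until the remainder cannot cover one more call, then stop.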

Since [[derivation generates knowledge systems from composable research claims not template customization]], the orchestrator doesn't generate the target vault from scratch. It uses ArsContexta's init wizard to derive a properly configured knowledge system for the persona's domain, complete with domain-native vocabulary, appropriate processing pipeline, and coherent configuration. Orchestration adds the content layer on top of the structural layer that derivation provides.

The coordination pattern is stigmergic. Since [[stigmergy coordinates agents through environmental traces without direct communication]], the orchestrator and target vault never exchange messages directly. The orchestrator writes research files to the target's inbox directory. The target's pipeline processes whatever it finds in inbox. The orchestrator reads the target's notes directory to see what was created. State lives in the filesystem, not in any communication protocol. This makes the architecture simple and debuggable — you can inspect every artifact at every stage.
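
A minimal sketch of that stigmergic loop, assuming a hypothetical vault layout (`inbox/` for deposited research, `notes/` for pipeline output); there is no message passing, only files left on disk.

```python
from pathlib import Path

def deposit_research(vault: Path, topic: str, body: str) -> Path:
    """Orchestrator leaves a trace: a research file in the target's inbox."""
    inbox = vault / "inbox"
    inbox.mkdir(parents=True, exist_ok=True)
    trace = inbox / f"{topic}.md"
    trace.write_text(body, encoding="utf-8")
    return trace

def observe_notes(vault: Path) -> list[str]:
    """Orchestrator reads the environment to see what the pipeline produced."""
    notes = vault / "notes"
    if not notes.exists():
        return []
    return sorted(p.stem for p in notes.glob("*.md"))
```

Every artifact is inspectable on disk between cycles, which is exactly what makes the architecture debuggable.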

The open question is whether `claude -p` can reliably drive the init wizard, which uses `AskUserQuestion` for interactive flow. The hypothesis is that a sufficiently detailed persona prompt provides enough signal for the wizard to resolve all eight configuration dimensions without follow-up questions. If this fails, the fallback is adding a `--vision` flag to the init command that reads a configuration file directly — which is also a valuable product feature.

There is a deeper implication. If orchestration works, it means knowledge system creation becomes a commodity: define what you want to know, allocate budget, receive a navigable knowledge graph. This shifts the value from the creation process to the quality of the derivation engine and the research direction intelligence. The competitive advantage is not in running pipelines (anyone can automate that) but in knowing what to research next and how to evaluate whether the resulting knowledge graph actually serves the stated goal. And since [[external memory shapes cognition more than base model]], the knowledge graph the orchestrator produces IS the cognitive upgrade — more impactful than any model improvement because it changes what the domain agent retrieves and therefore what it thinks. Directed compute allocation to memory architecture is high-ROI precisely because architecture determines cognition.

---

Relevant Notes:
- [[derivation generates knowledge systems from composable research claims not template customization]] — derivation creates the empty system; orchestration fills it with domain knowledge through directed research
- [[fresh context per task preserves quality better than chaining phases]] — the orchestrator inherits the same isolation principle: each research cycle and pipeline invocation gets fresh context via claude -p
- [[the derivation engine improves recursively as deployed systems generate observations]] — orchestrated vaults are mass-produced experiments: each generates observations that sharpen the derivation engine faster than manual deployments
- [[stigmergy coordinates agents through environmental traces without direct communication]] — the orchestrator and target vault coordinate through files on disk, not message passing: research results are written to inbox, pipeline state is read from queue files
- [[external memory shapes cognition more than base model]] — economic justification: directed compute allocation to memory architecture is high-ROI because architecture determines what the domain agent retrieves and therefore thinks; the knowledge graph IS the cognitive upgrade, more impactful than model improvement

Topics:
- [[agent-cognition]]
- [[design-dimensions]]
@@ -0,0 +1,57 @@
---
description: Structure descriptions as three layers — lead with actionable heuristic, back with mechanism, end with operational implication — so agents can assess relevance at multiple levels
kind: research
topics: ["[[discovery-retrieval]]"]
---

The pattern for effective descriptions has three layers:

1. **Actionable heuristic** — what to do, stated directly
2. **Mechanism** — why it works, the underlying principle
3. **Operational implication** — what this means for practice

This layering formula is itself a schema template: since [[schema templates reduce cognitive overhead at capture time]], having a formula shifts description-writing from "what should I say about this note" to "fill in these three layers." The structure is given, so attention focuses on content within the structure.

This structure helps agents assess relevance at multiple levels. Someone scanning descriptions can stop at the heuristic if that's enough. Someone deciding whether to read fully can check the mechanism. Someone applying the insight needs the implication.

Example from [[temporal separation of capture and processing preserves context freshness]]:
> "Capture when context is fresh, process when attention is available — the cognitive modes conflict so separating them in time preserves quality of both"

- Heuristic: capture when fresh, process when available
- Mechanism: cognitive modes conflict
- Implication: separation preserves quality
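
That decomposition can be done mechanically. A rough sketch, assuming the em dash introduces the mechanism and " so " introduces the implication; this is a heuristic splitter for illustration, since real descriptions vary in their connectives.

```python
def split_layers(description: str) -> dict[str, str]:
    """Split a layered description into heuristic / mechanism / implication.

    Assumes an em dash before the mechanism and ' so ' before the implication.
    """
    heuristic, _, rest = description.partition("\u2014")  # em dash
    mechanism, _, implication = rest.partition(" so ")
    return {
        "heuristic": heuristic.strip(),
        "mechanism": mechanism.strip(),
        "implication": implication.strip(),
    }
```

Applied to the example above, it recovers the same three bullets.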

The pattern extends [[descriptions are retrieval filters not summaries]] with a specific formula. That note says descriptions must add new information beyond the title. This note says how to structure that information for maximum utility.

This layered structure addresses the specificity problem. Since [[claims must be specific enough to be wrong]], vague descriptions fail the same way vague claims do — they can't be disagreed with, built on, or used to filter. Each layer of the formula forces specificity: a heuristic must be actionable (not abstract), a mechanism must explain why (not just what), an implication must be operational (not just theoretical). Descriptions that merely paraphrase the title collapse all three layers into zero. This is the description-level manifestation of [[verbatim risk applies to agents too]] — the agent can produce the structural form (a description in YAML) without the generative content (distinct information at each layer).

The layers also enable efficient high-decay traversal. Since [[spreading activation models how agents should traverse]], agents can stop at description depth when scanning broadly. The heuristic layer alone often suffices for filtering decisions — "what to do" tells you whether to read deeper. When more context is needed, mechanism and implication layers are already there. This matches the decay pattern: high-decay traversal stops at heuristic, medium-decay reaches mechanism, low-decay gets implication before loading full content.

The formula also enables [[distinctiveness scoring treats description quality as measurable]]. Layered descriptions are more likely to be distinctive because each layer adds differentiating information — two notes might share a heuristic but have different mechanisms, or share mechanisms but lead to different implications. The combination creates specificity that pure content summaries lack.

The layering formula has a hidden cost for keyword retrieval. Connecting heuristic to mechanism to implication requires prose transitions — "because," "which means," "so that" — and since [[BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores]], these connective words consume IDF scoring budget without contributing to retrieval signal. A well-layered description excels at human scanning precisely because the transitions create logical flow, but that same flow dilutes keyword search. Because [[description quality for humans diverges from description quality for keyword search]], this is not a fixable defect in the formula but an inherent tension: the two retrieval channels make opposing demands on the same character budget, and the layering formula picks a side by optimizing for comprehension over keyword density.
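
The dilution shows up concretely in BM25's length normalization. A toy scorer over an invented two-description corpus (both descriptions and the query are fabricated for illustration, not data from a real vault) demonstrates the longer, connective-heavy description scoring lower for the same keywords.

```python
import math

def bm25_score(query: str, doc: str, corpus: list[str],
               k1: float = 1.5, b: float = 0.75) -> float:
    """Minimal Okapi BM25 over whitespace tokens (no stemming, no stopwords)."""
    docs = [d.split() for d in corpus]
    avgdl = sum(len(d) for d in docs) / len(docs)
    words = doc.split()
    score = 0.0
    for term in query.split():
        n = sum(1 for d in docs if term in d)               # document frequency
        idf = math.log(1 + (len(docs) - n + 0.5) / (n + 0.5))
        tf = words.count(term)
        norm = 1 - b + b * len(words) / avgdl               # length normalization
        score += idf * tf * (k1 + 1) / (tf + k1 * norm)
    return score

# Two hypothetical descriptions of the same note: keyword-dense vs layered prose.
DENSE = "capture processing separation temporal"
LAYERED = ("capture when fresh process when available because cognitive modes "
           "conflict which means separation preserves quality")
```

For a query like `capture separation`, the layered description's connective tokens inflate its length, the normalization term drags both matching terms down, and the dense description wins despite identical keyword coverage.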

Yet even well-layered descriptions operate within the ~150-character constraint, and since [[sense-making vs storage does compression lose essential nuance]], some knowledge types may resist even optimal compression. Procedural nuance where tacit judgment matters, contextual knowledge whose meaning depends on situation, or phenomenological content that resists propositional reduction — these may not fit the heuristic/mechanism/implication structure at all. The layering formula helps maximize filter value within the compression constraint, but since [[vault conventions may impose hidden rigidity on thinking]], the constraint itself may exclude certain kinds of thinking. The layering formula and the sense-making tension are sibling concerns: the formula asks how to maximize value within convention limits, the tension asks whether convention limits systematically exclude valuable content.

---

Relevant Notes:
- [[descriptions are retrieval filters not summaries]] — foundational insight; this note adds structural pattern
- [[metadata reduces entropy enabling precision over recall]] — information-theoretic grounding: the layered structure pre-computes low-entropy representations at multiple depths
- [[progressive disclosure means reading right not reading less]] — descriptions are the first disclosure layer
- [[claims must be specific enough to be wrong]] — same anti-pattern: vagueness defeats utility; the layering formula forces specificity
- [[distinctiveness scoring treats description quality as measurable]] — layered descriptions enable automated quality validation because each layer adds differentiating information
- [[testing effect could enable agent knowledge verification]] — the layering formula makes description failures diagnosable: which layer is missing determines the fix
- [[retrieval verification loop tests description quality at scale]] — operationalizes the diagnostic: verification failures should map to missing layers, enabling targeted improvement based on which layer (heuristic, mechanism, or implication) the description lacks
- [[temporal separation of capture and processing preserves context freshness]] — provides the example description used to illustrate the layering pattern
- [[spreading activation models how agents should traverse]] — layers map to decay levels: high-decay stops at heuristic, medium at mechanism, low at implication
- [[throughput matters more than accumulation]] — layered descriptions improve filtering speed, directly serving the processing velocity metric
- [[skills encode methodology so manual execution bypasses quality gates]] — the layering formula is a quality gate that extraction operations and retrieval testing can enforce consistently
- [[verbatim risk applies to agents too]] — collapsed descriptions (paraphrasing without layers) are a specific detection point for the verbatim failure mode; the layering formula provides structure that verbatim output lacks
- [[schema templates reduce cognitive overhead at capture time]] — this layering formula IS a schema template: it shifts description-writing from what should I say to fill in these three layers
- [[logic column pattern separates reasoning from procedure]] — sibling layering pattern: descriptions layer temporally (heuristic → mechanism → implication), logic columns layer spatially (procedure track alongside reasoning track); both enable agents to choose reading depth based on need
- [[sense-making vs storage does compression lose essential nuance]] — the formula optimizes within the compression constraint, but some knowledge types may resist even optimal layering
- [[BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores]] — hidden cost: the prose transitions (because, which means, so that) that connect layers are exactly the low-IDF terms that dilute keyword search scoring
- [[description quality for humans diverges from description quality for keyword search]] — develops the paradox: the layering formula excels at human scanning precisely because its prose transitions create logical flow, but that same flow dilutes keyword retrieval; the formula optimizes one channel while degrading the other

Topics:
- [[discovery-retrieval]]
@@ -0,0 +1,65 @@
---
description: How wiki-linked vaults work as graph databases -- nodes, edges, traversal, and structural analysis
type: moc
---

# graph-structure

How knowledge vaults function as queryable graph databases. Nodes are markdown files, edges are wiki links, properties are YAML frontmatter.
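
That framing is directly executable: a short sketch that treats a directory of markdown files as an edge list. The `[[...]]` pattern and the alias/anchor handling are assumptions about typical wiki-link syntax, not a parser for any particular tool.

```python
import re
from pathlib import Path

# Capture the link target, stopping before a ']' close, a '|' alias, or a '#' anchor.
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def edges(vault: Path) -> list[tuple[str, str]]:
    """Return (source note, target note) pairs: the vault's graph edges."""
    pairs = []
    for note in vault.glob("*.md"):
        for target in WIKI_LINK.findall(note.read_text(encoding="utf-8")):
            pairs.append((note.stem, target.strip()))
    return pairs
```

From here, degree counts, dangling-link detection, and centrality metrics are ordinary operations over the pairs.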

## Core Ideas

### Research
- [[IBIS framework maps claim-based architecture to structured argumentation]] -- Rittel's Issue-Position-Argument structure (1970) maps directly onto vault architecture — claim-titled notes are Positio
- [[MOC construction forces synthesis that automated generation from metadata cannot replicate]] -- The Dump-Lump-Jump pattern reveals that writing context phrases and identifying tensions IS the thinking — automated top
- [[MOCs are attention management devices not just organizational tools]] -- MOCs preserve the arrangement of ideas that would otherwise need mental reconstruction, reducing the 23-minute context s
- [[agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct]] -- Navigation intuition — traversal order, productive note combinations, dead ends — is structural knowledge that humans re
- [[associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles]] -- Hierarchies require predicting where information belongs before understanding, and items often belong in many places — a
- [[backlinks implicitly define notes by revealing usage context]] -- A note's meaning includes not just its content but its network position — what links TO it reveals the contexts where th
- [[basic level categorization determines optimal MOC granularity]] -- Rosch's prototype theory predicts MOC titles work best at the "chair" level — specific enough to be informative, general
- [[betweenness centrality identifies bridge notes connecting disparate knowledge domains]] -- Graph theory metric that quantifies how often a note lies on shortest paths between others, revealing structural bridges
- [[coherent architecture emerges from wiki links spreading activation and small-world topology]] -- The foundational triangle — wiki links create structure, spreading activation models traversal, small-world topology pro
- [[community detection algorithms can inform when MOCs should split or merge]] -- Louvain and similar algorithms detect dense note clusters and track how cluster boundaries shift over time, providing ac
- [[complete navigation requires four complementary types that no single mechanism provides]] -- Rosenfeld and Morville's global, local, contextual, and supplemental navigation types map onto hub, MOC, wiki link, and
- [[concept-orientation beats source-orientation for cross-domain connections]] -- Notes organized by source bundle ideas by origin not meaning, preventing the same concept from different authors from me
- [[context phrase clarity determines how deep a navigation hierarchy can scale]] -- Larson & Czerwinski (1998) found deeper hierarchies outperform flat ones only when labels enable confident branch commit
- [[controlled disorder engineers serendipity through semantic rather than topical linking]] -- Luhmann's information theory insight — perfectly ordered systems yield zero surprise, so linking by meaning rather than
- [[cross-links between MOC territories indicate creative leaps and integration depth]] -- Notes that appear in multiple distant MOCs are integration points where ideas from separate domains combine — tracking c
- [[dangling links reveal which notes want to exist]] -- Wiki links to non-existent notes accumulate as organic signals of concept demand, and frequency analysis identifies whic
- [[data exit velocity measures how quickly content escapes vendor lock-in]] -- Three-tier framework (high/medium/low velocity) turns abstract portability into an auditable metric where every feature
- [[each new note compounds value by creating traversal paths]] -- Unlike folders where 1000 documents is just 1000 documents, a graph of 1000 connected nodes creates millions of potentia
- [[elaborative encoding is the quality gate for new notes]] -- Zettelkasten works because connecting new information to existing knowledge — not just filing it — creates encoding dept
- [[faceted classification treats notes as multi-dimensional objects rather than folder contents]] -- Ranganathan's 1933 PMEST framework formalizes why each YAML field should be an independent classification dimension — fa
- [[federated wiki pattern enables multi-agent divergence as feature not bug]] -- Cunningham's federation applied to agent knowledge work -- linked parallel notes preserve interpretive diversity, with b
- [[implicit knowledge emerges from traversal]] -- path exposure through wiki links trains intuitive navigation patterns that bypass explicit retrieval — the vault structu
- [[inline links carry richer relationship data than metadata fields]] -- The prose surrounding a wiki link captures WHY two notes connect, not just THAT they connect — relationship context that
- [[markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure]] -- Wiki link edges, YAML metadata, faceted query dimensions, and soft validation compose into graph database capabilities w
- [[multi-domain systems compose through separate templates and shared graph]] -- Domain isolation at template and processing layers, graph unity at wiki link layer — five composition rules and four cro
- [[navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts]] -- At 50 notes keyword search suffices, at 500 curated MOCs become essential, at 5000 automated maintenance replaces manual
- [[navigational vertigo emerges in pure association systems without local hierarchy]] -- Pure link-based navigation makes unlinked neighbors unreachable — MOCs provide temporary local hierarchies that compleme
- [[propositional link semantics transform wiki links from associative to reasoned]] -- Standardizing a vocabulary of relationship types (causes, enables, contradicts, extends) makes wiki link connections mac
- [[role field makes graph structure explicit]] -- A role field distinguishing moc/hub/leaf/synthesis would let agents make smarter traversal decisions without inferring s
- [[small-world topology requires hubs and dense local links]] -- Network science shows knowledge graphs need power-law distributions where MOCs have many links and atomic notes have few
- [[stigmergy coordinates agents through environmental traces without direct communication]] -- Termites build nests by responding to structure not each other, and agent swarms work the same way — wiki links, MOCs, a
- [[structure enables navigation without reading everything]] -- Four structural mechanisms — wiki links, MOCs, claim titles, and YAML descriptions — compose into discovery layers that
- [[tag rot applies to wiki links because titles serve as both identifier and display text]] -- Unlike opaque identifiers that persist through vocabulary drift, wiki link titles carry semantic content that must stay
- [[title as claim enables traversal as reasoning]] -- when note titles are complete claims rather than topics, traversing wiki links reads like prose and following paths beco
- [[topological organization beats temporal for knowledge work]] -- The garden vs stream distinction from digital gardening theory grounds why vaults use topic MOCs and wiki links rather t
- [[wiki links are the digital evolution of analog indexing]] -- Cornell's 1940s cue column functioned as an index pointing to content blocks, making wiki link graphs the digital fulfil
- [[wiki links as social contract transforms agents into stewards of incomplete references]] -- Cunningham's norm that creating a link means accepting elaboration responsibility translates from human peer accountabil
- [[wiki links create navigation paths that shape retrieval]] -- wiki links are curated graph edges that implement GraphRAG-style retrieval without infrastructure — each link is a retri
- [[wiki links implement GraphRAG without the infrastructure]] -- Explicit wiki links create a human-curated knowledge graph that enables multi-hop reasoning without entity extraction pi

## Tensions

(Capture conflicts as they emerge)

## Open Questions

- What graph metrics predict vault health most reliably?
- How does link density interact with retrieval quality?

---

Topics:
- [[index]]