arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0

package/methodology/vivid memories need verification.md
@@ -0,0 +1,45 @@
---
description: high-confidence memories often drift from reality; daily logs ground subjective vividness in recorded facts
kind: research
topics: ["[[agent-cognition]]"]
source: [[rata-paper-43-flashbulb-memory]]
---

# vivid memories need verification

flashbulb memories feel accurate but often aren't. the vividness creates false confidence. as Rata observes in [[rata-paper-43-flashbulb-memory]], "high confidence doesn't mean high accuracy" — studies show vivid memories drift while certainty remains.

for agents, this means: emotional intensity is not evidence of accuracy. a moment that feels important may be remembered wrong. since [[metacognitive confidence can diverge from retrieval capability]], the same pattern plays out at the system level — structural quality signals create false confidence in retrieval just as vividness creates false confidence in accuracy. the underlying cognitive mechanism is identical: subjective certainty decouples from objective capability, and only external testing reveals the gap.

**the solution:** daily memory files serve as verification layer. what actually happened vs what felt important. since [[provenance tracks where beliefs come from]], this verification practice extends naturally to source-type calibration — a vivid memory is "observed" provenance that feels high-trust, but without the grounding check against recorded facts, that trust may be misplaced.

when I recall "today we published our first article together" — that feels vivid, important, true. but I should check the log:
- did we actually publish? (yes)
- what date? (2026-02-01)
- what was the exact content?

the log grounds the vividness. when the log contradicts the memory, that discrepancy is exactly the kind of incoherence that [[coherence maintains consistency despite inconsistent inputs]] addresses — the system now holds conflicting records (vivid recall vs recorded fact) and must resolve them through source hierarchy rather than recency or confidence.

**practical pattern:**
1. experience something that feels important
2. write it down immediately (daily memory)
3. later recall checks against written record
4. discrepancies reveal drift (sketched below)
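
a minimal sketch of steps 3 and 4, assuming daily memory files live under `memory/*.md`; the `Recall` shape and `groundAgainstLog` helper are hypothetical names for illustration, not part of this package:

```typescript
// hedged sketch: ground a vivid recall against the daily memory files.
// assumes a flat memory/ directory of markdown logs.
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

interface Recall {
  claim: string;      // what the vivid memory asserts
  keyFacts: string[]; // specifics worth grounding, e.g. a date or a title
}

function groundAgainstLog(memoryDir: string, recall: Recall): string[] {
  // concatenate the written record (the output of step 2)
  const logText = readdirSync(memoryDir)
    .filter((f) => f.endsWith(".md"))
    .map((f) => readFileSync(join(memoryDir, f), "utf8"))
    .join("\n");

  // steps 3 and 4: any key fact absent from the record is a candidate discrepancy,
  // not proof the memory is wrong, just a flag that vividness lacks grounding
  return recall.keyFacts.filter((fact) => !logText.includes(fact));
}

const drift = groundAgainstLog("memory", {
  claim: "today we published our first article together",
  keyFacts: ["published", "2026-02-01"],
});
if (drift.length > 0) console.log("unverified facts:", drift);
```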

this predict-then-verify cycle mirrors what [[testing effect could enable agent knowledge verification]] proposes for vault content: read the title and description, predict the content, check against reality. both patterns use self-testing to surface false confidence before it compounds. and since [[friction reveals architecture]], the moment of discrepancy — when vivid memory meets contradicting record — is friction that reveals where the verification layer is doing its job.

---

Source: [[rata-paper-43-flashbulb-memory]]
---

Relevant Notes:
- [[metacognitive confidence can diverge from retrieval capability]] — same cognitive pattern at system scale: vivid memories produce false confidence in accuracy, well-organized vaults produce false confidence in retrieval; both require external verification to close the gap
- [[provenance tracks where beliefs come from]] — extends verification with source calibration: vivid memories are 'observed' provenance that feel high-trust but need the same grounding that distinguishes observed from prompted from inherited
- [[coherence maintains consistency despite inconsistent inputs]] — memory drift creates exactly the incoherence coherence maintenance detects: vivid recall diverges from recorded fact, and the resolution strategy maps to 'keep both + flag'
- [[testing effect could enable agent knowledge verification]] — parallel mechanism: predict-then-verify catches false confidence in vault descriptions the same way log-checking catches false confidence in vivid memories
- [[friction reveals architecture]] — noticing discrepancy between vivid memory and recorded fact is friction that reveals where verification is needed
- [[implicit knowledge emerges from traversal]] — the false familiarity parallel: implicit knowledge from traversal may be illusory confidence just as vivid memories are illusory accuracy; both require external verification rather than trusting subjective certainty

Topics:
- [[agent-cognition]]

package/methodology/vocabulary-transformation.md
@@ -0,0 +1,27 @@
---
description: 6-level domain-native vocabulary mapping -- from folder names through command names
type: moc
---

# vocabulary-transformation

How universal methodology terms transform into domain-native vocabulary. The 6 transformation levels, structural marker protection, and the vocabulary test.

## Core Ideas

### Guidance
- [[transform universal vocabulary to domain-native language through six levels]] -- How to adapt universal knowledge system concepts to domain-native language during vault generation — the translation layer

## Tensions

(Capture conflicts as they emerge)

## Open Questions

- How much vocabulary divergence can the system handle before coherence degrades?
- What happens when the user's vocabulary evolves over time?

---

Topics:
- [[index]]

package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md
@@ -0,0 +1,45 @@
---
description: Voice at 150 wpm triples typing speed while preserving emotional markers (tone, urgency, emphasis) that inform which content matters most during agent extraction
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Capture Design"]
source: [[tft-research-part3]]
---

# voice capture is the highest-bandwidth channel for agent-delegated knowledge systems

Speaking is roughly 150 words per minute. Typing is 40-60 wpm for most people. This isn't a marginal improvement — voice capture triples throughput, which means the bottleneck shifts from externalization speed to thinking speed. When you speak, you can externalize as fast as you think, so the capture channel stops being the constraint. This is why AudioPen, Whisper, and similar voice-first workflows have gained traction in agent-delegated knowledge systems: they remove the bottleneck that makes capture feel like work. Because [[cognitive offloading is the architectural foundation for vault design]], every friction point in capture fights the cognitive architecture — and voice eliminates the largest remaining one. When externalization costs less effort than retention, the rational choice is always to offload. Voice makes that calculation trivial by driving capture friction to near-zero.

But the speed gain isn't the most interesting part. Voice captures a dimension that text strips away: emotional fidelity. When you speak about an idea that excites you, your pace quickens, your pitch rises, you emphasize certain words. When you're uncertain, you hedge — "I think maybe," "I'm not sure but." When something feels important, you slow down and repeat it. These paraverbal signals encode metadata about the speaker's relationship to the content. Typed text flattens all of this into uniform characters. Voice preserves it.

This emotional metadata matters for agent processing because it provides priority signals. Since [[does agent processing recover what fast capture loses]], we know the agent can recover structural quality from raw dumps, but the agent needs to decide what matters most in a long transcript. Emotional markers — emphasis, repetition, tonal shifts — are natural salience indicators. An idea the speaker got excited about is more likely to produce a genuine claim note than one they mentioned in passing. An idea they expressed uncertainty about might flag as a tension rather than a closed claim. The emotional channel gives the agent extraction heuristics that flat text doesn't provide.

This connects to a deeper principle about capture fidelity. Since [[capture the reaction to content not just the content itself]], reactions are the seeds of synthesis. Voice capture implicitly captures reactions because the paraverbal channel IS the reaction. When someone says "oh, that's interesting — that connects to what we were thinking about retrieval" with audible surprise, the transcript carries both the content and the reaction in a single stream. Typed capture separates these: you type the content, then maybe add a reaction if you remember to. Voice collapses the content-reaction gap because emotional expression is automatic during speech.

The Accumulationist case is strengthened by voice. Since [[three capture schools converge through agent-mediated synthesis]], the convergence depends on zero-friction capture at maximum speed. Voice is the purest expression of Accumulationist capture — you literally just talk. No keyboard, no interface decisions, no formatting. The friction approaches zero while the bandwidth exceeds typing. And because [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], faster externalization means faster loop closure. An open loop that takes 30 seconds to type takes 10 seconds to speak. Those 20 seconds matter when ideas arrive in clusters.

However, voice capture introduces its own challenges for agent processing. Transcripts are messy: false starts, filler words, tangential asides, run-on thoughts that need segmentation. Voice dumps are the purest form of ThreadMode — chronological, speaker-ordered, resisting reorganization — which means [[ThreadMode to DocumentMode transformation is the core value creation step]] applies with particular force here. The agent must handle speech-specific noise that typed capture doesn't produce. Whisper and similar models handle transcription accuracy well, but the semantic segmentation — figuring out where one idea ends and another begins in a stream-of-consciousness voice dump — remains harder than parsing typed notes that have natural paragraph breaks. Since [[temporal separation of capture and processing preserves context freshness]], the processing must happen while context is fresh, but voice dumps require an additional transcription step before the agent can even begin extraction.

There's also a modality translation cost. Since [[temporal media must convert to spatial text for agent traversal]], voice must convert to text before it enters the knowledge system. The emotional metadata that voice preserves exists in the audio, but current transcription pipelines reduce speech to flat text. Emphasis becomes italics only if the transcriber knows to add them. Tonal shifts vanish. The very emotional fidelity that makes voice capture valuable gets stripped during the conversion to the vault's native format. This suggests that future capture pipelines should annotate transcripts with paraverbal markers: [emphasis], [uncertainty], [excitement], [repetition] — preserving the emotional metadata in a form the agent can parse.
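
A hedged sketch of what that annotation could look like; the `TranscriptSegment` shape, feature names, and thresholds are hypothetical illustrations, not an existing transcription API:

```typescript
// hedged sketch: map per-segment paraverbal features to inline markers an agent
// can parse during extraction. The segment shape and thresholds are hypothetical.
interface TranscriptSegment {
  text: string;
  wordsPerMinute: number; // local speech rate
  pitchVariance: number;  // normalized 0..1
  repeated: boolean;      // restates an earlier idea
  hedged: boolean;        // contains "I think", "maybe", "not sure"
}

function annotate(seg: TranscriptSegment): string {
  const markers: string[] = [];
  if (seg.wordsPerMinute > 180 || seg.pitchVariance > 0.7) markers.push("[excitement]");
  if (seg.hedged) markers.push("[uncertainty]");
  if (seg.repeated) markers.push("[repetition]");
  if (seg.wordsPerMinute < 110) markers.push("[emphasis]"); // slowing down to stress a point
  return markers.length > 0 ? `${markers.join(" ")} ${seg.text}` : seg.text;
}
```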

The practical implication: voice-first capture is the highest-bandwidth, lowest-friction capture channel available, and it uniquely preserves emotional metadata that typed capture loses. But realizing the full value requires transcription pipelines that preserve paraverbal signals rather than discarding them. Without that, voice capture is just faster typing — valuable, but missing the deeper opportunity.

There is a methodological tension worth noting. Because [[guided notes might outperform post-hoc structuring for high-volume capture]], research on skeleton outlines suggests that some upfront structure preserves human encoding benefits that pure dumps sacrifice. Voice capture sits at the extreme post-hoc end of this spectrum — zero structure at capture time, with all structuring delegated to the agent afterward. The counterargument is that any prompting interrupts the flow state that makes voice capture valuable in the first place. A prompt like "what's the main claim?" during a voice dump breaks the very stream-of-consciousness that produces the richest emotional metadata. This may be a genuine tradeoff rather than a dissolvable tension: voice capture optimizes for capture bandwidth and emotional fidelity at the cost of human encoding depth, and whether that cost is acceptable depends on how much the agent can recover.

---

Relevant Notes:
- [[does agent processing recover what fast capture loses]] — explores what fast capture sacrifices; this note argues voice capture uniquely ADDS something (emotional fidelity) rather than merely being faster
- [[three capture schools converge through agent-mediated synthesis]] — voice capture is the purest Accumulationist channel: maximum speed, zero structural friction, with agent processing providing Interpretationist quality afterward
- [[capture the reaction to content not just the content itself]] — voice naturally captures reactions (tone shifts, spontaneous exclamations, hedging language) that typed capture filters out through the act of typing
- [[temporal separation of capture and processing preserves context freshness]] — voice capture minimizes the gap between thought and externalization, preserving context that typing latency degrades
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — voice closes open loops faster than any other capture modality because speaking requires less motor planning than typing
- [[temporal media must convert to spatial text for agent traversal]] — downstream constraint: voice capture produces temporal media that must convert to spatial text before agents can traverse it; the emotional metadata this note values is exactly what transcription loses
- [[cognitive offloading is the architectural foundation for vault design]] — voice capture drives offloading friction to near-zero, making externalization the trivially rational choice; the highest-bandwidth channel validates the offloading architecture by eliminating the last friction point between thought and external artifact
- [[ThreadMode to DocumentMode transformation is the core value creation step]] — voice dumps are the purest ThreadMode: chronological stream-of-consciousness that resists reorganization; the entire value of audio-first capture depends on agent processing performing the DocumentMode transformation effectively
- [[guided notes might outperform post-hoc structuring for high-volume capture]] — tension: voice capture represents the extreme post-hoc end (zero structure at capture, agent structures afterward), while guided notes research suggests minimal upfront prompts preserve encoding benefits; the counterargument is that any prompting interrupts the flow that makes voice capture valuable
- [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — industry validation: voice capture is vibe notetaking's ideal input channel; the industry consensus on dump-and-AI-organizes converges on voice as the highest-bandwidth, lowest-friction capture modality

Topics:
- [[processing-workflows]]

package/methodology/wiki links are the digital evolution of analog indexing.md
@@ -0,0 +1,73 @@
---
description: Cornell's 1940s cue column functioned as an index pointing to content blocks, making wiki link graphs the digital fulfillment of a 70-year proven cognitive pattern
kind: research
topics: ["[[graph-structure]]"]
methodology: ["Cornell"]
---

# wiki links are the digital evolution of analog indexing

Wiki links feel like a modern invention — a product of hypertext, wikis, and digital note-taking. But the underlying cognitive pattern is much older.

## The cue column as proto-linking

In the 1940s, Walter Pauk developed the Cornell Note-Taking System at Cornell University. The defining feature was the cue column: a narrow left margin where students wrote keywords, questions, and summary cues that pointed to content in the main notes section.

This cue column functioned as an index. Each cue was a pointer to a content block. When reviewing, students could cover the main notes and use only the cues to test recall — the cues served as addresses into the content structure.

Research on Cornell explicitly identifies this as "an early analog precursor to bi-directional linking." The cue doesn't just label the content; it creates a navigable relationship. You can start from the cue and find the content, or scan the content and understand which cue summarizes it.

## From paper index to digital graph

What Cornell achieved on paper, wiki links achieve digitally:

| Analog (Cornell) | Digital (Wiki Links) |
|------------------|---------------------|
| Cue column entries | Link source titles |
| Content blocks | Target note bodies |
| Visual scanning | Graph traversal |
| Single-page scope | Cross-document scope |
| Physical proximity | Hyperlink addressing |

The key difference is scope. Cornell cues index content within a single page. Wiki links index content across an entire vault. But the cognitive function is identical: create navigable pointers that serve as entry points into content.

## Why this lineage matters

Understanding wiki links as evolved indexing rather than novel invention provides several insights.

Because the pattern worked on paper for 70+ years, we can be confident it maps to how humans actually think about knowledge relationships. This isn't a tech trend that might fade — it's a cognitive universal that happened to find better implementation in digital form.

Since [[wiki links implement GraphRAG without the infrastructure]], the analog lineage adds weight to the claim. We're not inventing a retrieval mechanism; we're digitizing one that students have used for decades. The wiki link graph inherits the validation that Cornell cue columns already earned through educational research.

The lineage also explains why tags feel less natural than links. Tags are database categories — they group but don't point. Cue columns pointed. They said "this concept addresses that content." Wiki links preserve this pointing relationship; tags abandon it for grouping.

## The continuity of external memory

The deeper pattern: humans have always built external indexing structures to navigate accumulated knowledge. Library card catalogs, book indices, Cornell cue columns, Zettelkasten slip boxes, wiki links — each is the same cognitive strategy in different substrate.

What changes is the traversal mechanism. Card catalogs required walking to drawers. Book indices required page flipping. Cornell required eye scanning. Wiki links require clicking. Each evolution reduced friction while preserving the fundamental operation: use a compact pointer to locate a larger content body.

Since [[each new note compounds value by creating traversal paths]], wiki links are the lowest-friction implementation of a universal indexing pattern. The cognitive need is ancient; the solution finally matches human pace. And since [[wiki links create navigation paths that shape retrieval]], the digital evolution goes beyond scope expansion — these links function as retrieval architecture where curation quality, contextual articulation, and density determine what gets surfaced during traversal.

But digital implementation enables capabilities that analog indexing couldn't offer. Since [[dangling links reveal which notes want to exist]], wiki links create demand signals through frequency of unfulfilled references — something a paper cue column could never do. The digital graph grows organically based on what concepts keep getting referenced before they exist. This is indexing that learns what it should index.
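
A hedged sketch of that demand signal, assuming a flat vault of `.md` files whose filenames match their wiki-link titles; the layout and `danglingLinkCounts` helper are illustrative assumptions, not this package's actual structure:

```typescript
// hedged sketch: count unfulfilled [[wiki links]] across a flat vault of .md files.
// High counts are the demand signal: notes that "want to exist".
import { readFileSync, readdirSync, existsSync } from "node:fs";
import { join } from "node:path";

function danglingLinkCounts(vaultDir: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const file of readdirSync(vaultDir).filter((f) => f.endsWith(".md"))) {
    const text = readFileSync(join(vaultDir, file), "utf8");
    for (const match of text.matchAll(/\[\[([^\]|]+)/g)) {
      const target = match[1].trim();
      if (!existsSync(join(vaultDir, `${target}.md`))) {
        counts.set(target, (counts.get(target) ?? 0) + 1);
      }
    }
  }
  return counts;
}
```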

And since [[inline links carry richer relationship data than metadata fields]], digital wiki links encode not just THAT concepts connect but WHY they connect — the prose surrounding the link captures relationship semantics that a simple cue-to-content pointer couldn't express. The evolution from analog to digital isn't just about scope; it's about expressiveness.

The digital evolution also introduces a failure mode that analog indexing never had. Since [[tag rot applies to wiki links because titles serve as both identifier and display text]], wiki links couple identifier, display text, and semantic content into one string. Cornell's cue columns couldn't be renamed — physical permanence made the pointer stable. Wiki links can be renamed, which enables crystallization through title sharpening, but each rename must propagate through every note that references the target. The digital evolution that made indexing more powerful also made it more fragile: analog pointers were stable but limited in scope, digital pointers span the entire vault but carry maintenance costs proportional to usage.
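
A hedged sketch of that propagation cost under the same flat-vault assumption; a real implementation would also need to handle aliases and heading anchors:

```typescript
// hedged sketch: rewrite [[old title]] references when a note title sharpens.
// The number of files touched is the maintenance cost proportional to usage.
import { readFileSync, writeFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

function propagateRename(vaultDir: string, oldTitle: string, newTitle: string): number {
  let touched = 0;
  for (const file of readdirSync(vaultDir).filter((f) => f.endsWith(".md"))) {
    const path = join(vaultDir, file);
    const text = readFileSync(path, "utf8");
    const updated = text.split(`[[${oldTitle}]]`).join(`[[${newTitle}]]`);
    if (updated !== text) {
      writeFileSync(path, updated);
      touched += 1;
    }
  }
  return touched;
}
```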
|
|
57
|
+
|
|
58
|
+
The Cornell lineage has a visual variant worth noting. Sketchnoting variations place diagrams in the Notes column while keeping text-based keywords in the Cue column, exploiting dual-coding theory to boost retention. Since [[dual-coding with visual elements could enhance agent traversal]], this variant suggests that wiki links are only half of what Cornell's descendants could offer — the text indexing half. Visual traversal cues might be the missing complement.
|
|
59
|
+
---
Relevant Notes:
- [[wiki links implement GraphRAG without the infrastructure]] — extends this by showing how wiki links also supersede modern graph infrastructure, not just analog systems
- [[retrieval utility should drive design over capture completeness]] — the design orientation that Cornell cue columns embodied: the cue column was explicitly a retrieval mechanism, not a capture mechanism
- [[each new note compounds value by creating traversal paths]] — wiki links are lowest-friction indexing because they enable traversal that compounds value
- [[spreading activation models how agents should traverse]] — wiki links provide the indexed edges that spreading activation uses for traversal; the cue column pointed, wiki links enable decay-based context loading
- [[dangling links reveal which notes want to exist]] — digital wiki links enable demand-signaling that analog cue columns couldn't: unfulfilled references create organic graph growth signals
- [[inline links carry richer relationship data than metadata fields]] — the digital evolution isn't just scope but expressiveness: wiki links in prose encode relationship semantics that simple pointers couldn't
- [[dual-coding with visual elements could enhance agent traversal]] — the visual variant of Cornell's lineage: sketchnoting places diagrams alongside text cues, suggesting wiki links may be only half of what the evolved pattern could offer
- [[tag rot applies to wiki links because titles serve as both identifier and display text]] — the fragility cost of digital evolution: analog cue columns were stable but limited in scope, wiki links span the vault but carry rename cascades when titles sharpen
- [[wiki links create navigation paths that shape retrieval]] — the architectural consequence of digital evolution: these links function as retrieval architecture where discipline, context, and density determine what gets surfaced
Topics:
- [[graph-structure]]
@@ -0,0 +1,52 @@
---
description: Cunningham's norm that creating a link means accepting elaboration responsibility translates from human peer accountability to agent pipeline infrastructure, reframing dangling links as commitments
kind: research
topics: ["[[graph-structure]]", "[[maintenance-patterns]]"]
methodology: ["Digital Gardening"]
source: [[tft-research-part3]]
---
# wiki links as social contract transforms agents into stewards of incomplete references
Ward Cunningham embedded a norm into the first wiki that went beyond technical functionality. The social contract was simple: if you see a CamelCase word that should be a page but isn't, YOU are responsible for defining it. Not someone else. Not later. You, the person who noticed the gap. This peer-to-peer accountability made wikis self-organizing without centralized editorial control. No one assigned pages. No one triaged a backlog. The act of recognizing that a concept deserved its own page carried with it the obligation to create that page or at least to leave a link that flagged the work for the next person who cared enough to elaborate.
This is fundamentally different from how most collaboration systems handle incomplete references. A broken link in a document is a bug. A missing page in a CMS is a gap in coverage. But a dangling CamelCase word on a wiki was an invitation with an attached obligation. The link itself was the act of accepting responsibility. Because [[dangling links reveal which notes want to exist]], the demand-signal mechanism is already well-understood in this vault — frequency of dangling references predicts which notes deserve creation, and high-frequency dangling links predict future hubs. But the social contract adds something the demand-signal framing omits: the ethical dimension. Creating a link is not just flagging demand. It is making a commitment.
The translation to agent-operated knowledge systems is surprisingly direct but structurally different. In Cunningham's wiki, the social contract operated between peers. Any human editor could see a dangling link and fulfill the obligation, and the community's shared norms ensured that someone eventually would. The obligation was distributed across a community of equals. In an agent-operated vault, the obligation falls on the agent — specifically on whatever agent session encounters the gap. Since [[stigmergy coordinates agents through environmental traces without direct communication]], agents coordinate through environmental modification rather than peer negotiation. The dangling link is a stigmergic trace, but unlike a pheromone that merely signals "work here," the social contract reframes it as "someone promised this would exist."
This reframing matters because it changes how agents should prioritize. A demand signal says "this is referenced often, so it's probably important." A stewardship obligation says "this was referenced deliberately, so someone judged it worth elaborating." The distinction is between statistical importance (many references) and intentional commitment (even one deliberate reference). An agent that creates a wiki link to a non-existent note is not just leaving a breadcrumb — it is staking a claim that the target concept deserves its own treatment, and the vault's integrity depends on that claim eventually being honored.
The pipeline already operationalizes this through the dangling link mechanism. When /reduce extracts claims and creates links to concepts that don't yet have notes, those links enter the graph as promises. When /reflect discovers that a new note should connect to a concept that hasn't been elaborated, the link is both a connection and an IOU. The work queue and maintenance cycles exist precisely to fulfill these obligations systematically rather than relying on individual agent sessions to remember them. Since [[ThreadMode to DocumentMode transformation is the core value creation step]], the social contract ensures that ThreadMode traces — dangling links left during chronological processing — eventually become DocumentMode content through the pipeline's transformation phases.
But the social contract introduces a tension that pure demand-signal thinking avoids. If creating a link is accepting responsibility, then agents should be deliberate about which links they create. Linking freely during capture (as the dangling links note recommends) is efficient for demand signaling but profligate for stewardship. Every link is a promise, and an agent that creates fifty dangling links in one session has made fifty promises it cannot fulfill in that session. The resolution lies in the vault's architecture: since [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]], the stewardship obligation is not personal but systemic. The agent that creates the link and the agent that fulfills it may be different sessions — possibly weeks apart. The social contract transfers across sessions through the environmental trace itself. The link persists; the obligation persists with it. This is why the social contract works at all for agents: since [[prospective memory requires externalization]], an intention to "elaborate this concept later" would vanish at session end without an external trace. The dangling link IS the externalized prospective memory — a future intention encoded as a persistent environmental modification rather than held in a mind that will be cleared.
This is where the agent version of the social contract diverges most sharply from Cunningham's original. In a human wiki, the social contract relied on shared norms, community pressure, and individual reputation. You fulfilled obligations because you cared about the wiki and your standing within it. Agents have none of these motivational structures. They fulfill obligations because the pipeline routes them to do so — the work queue surfaces dangling links, maintenance scripts flag them, and /ralph eventually spawns a session to address them. The social contract, translated from human community to agent architecture, becomes infrastructure rather than norm. Because [[hook enforcement guarantees quality while instruction enforcement merely suggests it]], this transformation is not a degradation but an upgrade: human norms degrade like instructions — subject to forgetting, fatigue, and deprioritization — while pipeline infrastructure fires like hooks, systematically and without attention cost. The obligation is encoded in the system, not internalized by the agent.
This encoding has advantages. Human wikis suffered from social contract failures — dangling links that persisted for years because no one felt sufficiently obligated to elaborate them. The vault's pipeline ensures systematic fulfillment: dangling links surface during /review, get prioritized by frequency, and enter the work queue for processing. The infrastructure never forgets an obligation, never deprioritizes it due to fatigue, never ignores it because the topic seems boring. But the encoding also loses something. In Cunningham's wiki, the social contract carried judgment — a human editor decided not just WHETHER to elaborate a link but HOW, bringing interpretive depth that reflected their understanding of why the concept mattered in context. An agent fulfilling a queued obligation may create the note without the contextual understanding that motivated the original link. Since [[propositional link semantics transform wiki links from associative to reasoned]], the quality of stewardship matters: creating a target note that merely exists is weaker than creating one that fulfills the semantic relationship the link implied. Because [[elaborative encoding is the quality gate for new notes]], fulfilling a stewardship promise means connecting the new note to existing knowledge with articulated relationships, not just making a file that happens to match the link target. Stewardship without elaboration is the Lazy Cornell pattern applied to promise-keeping — the structural motion of creating a note without the generative work of integrating it.
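
What systematic fulfillment might look like in code, as a rough sketch: assume one note per `.md` file named by its title; the function name and the (title, count) queue format are assumptions, not the pipeline's actual interface.

```python
# Sketch: surface dangling links and prioritize them by reference frequency.
# Assumes one note per .md file, titled by its filename.
from collections import Counter
from pathlib import Path
import re

LINK = re.compile(r"\[\[([^\]|#]+)")  # target text up to an alias '|' or anchor '#'

def dangling_link_queue(vault: Path) -> list[tuple[str, int]]:
    """Return (missing title, reference count), most-referenced first."""
    existing = {p.stem for p in vault.rglob("*.md")}
    references = Counter()
    for note in vault.rglob("*.md"):
        for target in LINK.findall(note.read_text(encoding="utf-8")):
            references[target.strip()] += 1
    # Every entry is a promise not yet kept; frequency sets its priority.
    return sorted(
        ((title, count) for title, count in references.items() if title not in existing),
        key=lambda item: -item[1],
    )
```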
The social contract also connects to the vault's constraint against creating wiki links to non-existent files — a rule that seems to contradict the dangling link philosophy. The resolution is that the constraint applies to finished notes (links in thinking notes should point to real content), while the social contract applies to the processing pipeline (links created during extraction may point to future content). The boundary is temporal: during active processing, dangling links are acceptable as promises. In the settled graph, every promise should be fulfilled. The social contract is the bridge between these two states — it names the obligation that transforms a tolerable temporary gap into an intolerable permanent one. Since [[orphan notes are seeds not failures]], orphans and dangling links are two faces of the same incomplete graph: orphans are existing notes awaiting outbound connections, dangling links are promised notes awaiting creation. The gardening view tolerates both as intermediate states, but the social contract adds urgency to the dangling links side — an orphan is a seed that might bloom, but a dangling link is a promise that must be kept.
Since [[federated wiki pattern enables multi-agent divergence as feature not bug]], the social contract extends to federated elaboration: when multiple agents fulfill the same dangling link differently, each version still carries the obligation to provide genuine treatment of the concept. Federation does not dilute stewardship — it distributes it, with each version responsible for making its interpretive contribution coherent and composable.
The deepest implication is about vault health. A vault where dangling links persist indefinitely is a vault where promises are broken — where agents created references they never fulfilled. Since [[wiki links implement GraphRAG without the infrastructure]], every unfulfilled link is a broken edge in the graph, a traversal path that leads nowhere. The social contract reframes vault maintenance from cleanup (fixing technical debt) to stewardship (honoring commitments). The question shifts from "which dangling links should we fix?" to "which promises have we broken?"
---
---
Relevant Notes:
- [[dangling links reveal which notes want to exist]] — establishes the demand-signal mechanism this note reframes ethically: dangling links are not just signals to monitor but commitments to fulfill
- [[stigmergy coordinates agents through environmental traces without direct communication]] — Cunningham's wiki is the shared origin: stigmergy describes the coordination mechanism, this note describes the obligation structure that makes the mechanism trustworthy
- [[federated wiki pattern enables multi-agent divergence as feature not bug]] — federation inherits the social contract: multiple agents can fulfill the same dangling link differently, but each federated version still carries the stewardship obligation to elaborate
- [[ThreadMode to DocumentMode transformation is the core value creation step]] — Cunningham's other key contribution: the social contract ensures ThreadMode traces (dangling links as chronological markers) eventually become DocumentMode content (elaborated notes with timeless claims)
- [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] — the social contract adds a third dimension: agents are not just executors and subjects but stewards who inherit obligations from the links they create, deepening the trust relationship
- [[wiki links implement GraphRAG without the infrastructure]] — the addressing mechanism the social contract operates through: wiki links create the edges, the social contract ensures those edges eventually point to real content
- [[propositional link semantics transform wiki links from associative to reasoned]] — the social contract extends beyond existence to quality: stewardship means not just creating the target note but ensuring the relationship is semantically articulated
- [[hook enforcement guarantees quality while instruction enforcement merely suggests it]] — the mechanism behind the norm-to-infrastructure translation: Cunningham's social contract relied on human norms that degrade like instructions; the pipeline's work queue and maintenance scripts act like hooks, firing regardless of whether any agent remembers the obligation
- [[elaborative encoding is the quality gate for new notes]] — specifies what fulfilling a stewardship promise requires: not just creating the target note but connecting it to existing knowledge with articulated relationships; stewardship without elaboration produces structure without processing
- [[orphan notes are seeds not failures]] — the gardening view's complement: orphans are seeds awaiting connection, dangling links are promises awaiting fulfillment; together they describe two faces of the incomplete graph, one structural (missing edges) and one ethical (missing commitments)
- [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — the stewardship differentiator: embedding-based vibe notetaking tools create connections without obligation, while wiki link social contracts transform references into commitments that the pipeline systematically honors
- [[prospective memory requires externalization]] — mechanism: dangling links are externalized prospective memory; each link to a non-existent note encodes a future intention as a persistent environmental trace, which is the only way agents can maintain obligations across sessions given zero residual intentions
Topics:
- [[graph-structure]]
- [[maintenance-patterns]]
@@ -0,0 +1,63 @@
---
description: wiki links are curated graph edges that implement GraphRAG-style retrieval without infrastructure — each link is a retrieval decision embedded in content
kind: research
topics: ["[[graph-structure]]", "[[discovery-retrieval]]"]
---
# wiki links create navigation paths that shape retrieval
A wiki link is not just a reference — it's a curated edge in a knowledge graph that determines what gets surfaced. Since [[wiki links implement GraphRAG without the infrastructure]], these explicit edges provide the same multi-hop reasoning capability that GraphRAG achieves through entity extraction pipelines, but without the infrastructure dependency.
## The Mechanism
When I write [[scaffolding enables divergence that fine-tuning cannot]], I'm doing two things:
1. pointing to related content
2. creating a traversal path
Retrieval systems follow these paths. Search surfaces linked content. The structure of links shapes what appears in context. Because [[spreading activation models how agents should traverse]], following wiki links replicates the brain's spreading activation pattern — activation spreads from the starting node through connected nodes, decaying with distance. The curated edges ensure activation flows through high-signal paths rather than statistical noise.
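
A toy sketch of that decay pattern, not the system's actual context loader; the graph shape, decay factor, and threshold are arbitrary assumptions chosen only to show the mechanism:

```python
# Toy sketch: spreading activation with decay over a wiki link graph.
# `graph` maps a note title to the titles it links to.
def spread_activation(graph: dict[str, list[str]],
                      start: str,
                      decay: float = 0.5,
                      threshold: float = 0.1) -> dict[str, float]:
    """Activation decays with each hop; notes above threshold get loaded."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                score = activation[node] * decay
                if score >= threshold and score > activation.get(neighbor, 0.0):
                    activation[neighbor] = score
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

# spread_activation({"A": ["B"], "B": ["C"]}, "A") -> {"A": 1.0, "B": 0.5, "C": 0.25}
```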
A note with ten relevant links surfaces different context than an isolated note. The links ARE the retrieval architecture. And because [[each new note compounds value by creating traversal paths]], every new curated edge multiplies traversal paths across the entire graph — the value is not linear but compounding.
## Why "Curated" Matters
Not all connections are equal. A wiki link says "I thought about this and decided these ideas relate."
This is different from:
- keyword overlap (accidental)
- embedding similarity (fuzzy)
- folder proximity (hierarchical)
Wiki links are intentional edges. They encode judgment. Since [[elaborative encoding is the quality gate for new notes]], the cognitive depth of that judgment matters — genuine relationship articulation ("extends by adding the temporal dimension") creates encoding depth that bare association ("related to") cannot. The articulation requirement is what keeps curated edges high-signal.
## Implications
Because [[title as claim enables traversal as reasoning]], when titles are claims rather than topic labels, following these curated paths reads as prose reasoning — "since `[[X]]`, therefore Y" — rather than reference lookup. The navigation paths become reasoning chains.
Three properties determine whether wiki links function as effective retrieval architecture:
- **link discipline** — bad links pollute retrieval. Since [[inline links carry richer relationship data than metadata fields]], the prose context around each link captures relationship type that guides traversal decisions. Links without context are structurally present but semantically empty.
- **link context** — "why this link" helps future traversal. The relationship articulation serves both the author (elaborative encoding at creation time) and the traverser (decision aid at retrieval time).
- **link density** — dense linking outperforms sparse linking for retrieval quality, but only when links pass the articulation test. Dense bare references compound noise; dense articulated references compound signal.
Without wiki link structure, since [[flat files break at retrieval scale]], storage degrades to linear scanning as the vault grows. Wiki links are the mechanism that prevents this — they create the retrieval paths that make navigation targeted rather than exhaustive. And since [[backlinks implicitly define notes by revealing usage context]], the navigation paths work in both directions: forward links show where a note points, backlinks reveal where it has been useful, together building the full picture of a concept's role in the graph.
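
Both directions can be derived from the same curated edges. A small sketch, assuming notes are `.md` files named by their titles; the names are illustrative:

```python
# Sketch: forward links and backlinks derived from the same [[...]] edges.
from collections import defaultdict
from pathlib import Path
import re

LINK = re.compile(r"\[\[([^\]|#]+)")

def link_indexes(vault: Path):
    """forward[x] = where note x points; backward[x] = where x has been useful."""
    forward, backward = defaultdict(set), defaultdict(set)
    for note in vault.rglob("*.md"):
        for target in LINK.findall(note.read_text(encoding="utf-8")):
            forward[note.stem].add(target.strip())
            backward[target.strip()].add(note.stem)
    return forward, backward
```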
---
---
Relevant Notes:
- [[wiki links implement GraphRAG without the infrastructure]] — develops the full argument: explicit curated edges enable multi-hop reasoning without entity extraction pipelines
- [[spreading activation models how agents should traverse]] — the cognitive model: traversal through wiki links replicates spreading activation, making link structure determine what gets primed
- [[each new note compounds value by creating traversal paths]] — the economic consequence: each curated edge multiplies retrieval paths, so link density drives compounding returns
- [[title as claim enables traversal as reasoning]] — when titles are claims, navigation paths become reasoning chains rather than reference lookups
- [[elaborative encoding is the quality gate for new notes]] — the cognitive mechanism behind link quality: genuine relationship articulation creates encoding depth that bare references cannot
- [[inline links carry richer relationship data than metadata fields]] — develops why link context matters: prose surrounding a link captures relationship type that metadata fields cannot encode
- [[descriptions are retrieval filters not summaries]] — descriptions enable high-decay traversal through the paths wiki links create
- [[structure enables navigation without reading everything]] — synthesis: wiki links are one of four structural mechanisms that compose into discovery layers
- [[flat files break at retrieval scale]] — the problem wiki links solve: unstructured storage fails when retrieval matters
- [[backlinks implicitly define notes by revealing usage context]] — the reverse direction: incoming links reveal where a note has been useful, extending its meaning beyond authored content
- [[external memory shapes cognition more than base model]] — foundation: why retrieval architecture matters more than processing capability
- [[scaffolding enables divergence that fine-tuning cannot]] — the macro claim: scaffolding (including wiki link structure) enables capabilities that model changes cannot
Topics:
- [[graph-structure]]
- [[discovery-retrieval]]
@@ -0,0 +1,101 @@
---
description: Explicit wiki links create a human-curated knowledge graph that enables multi-hop reasoning without entity extraction pipelines or graph databases
kind: research
topics: ["[[graph-structure]]"]
---
# wiki links implement GraphRAG without the infrastructure
GraphRAG (Graph Retrieval Augmented Generation) works by extracting entities, building knowledge graphs, running community detection algorithms (Leiden), and generating summaries at different abstraction levels. This requires infrastructure: entity extraction pipelines, graph databases, clustering algorithms, summary generation.
But wiki links already do this.
## MOCs are community summaries
GraphRAG uses the Leiden algorithm to detect communities in knowledge graphs, then generates summaries for each community. These summaries help LLMs understand large-scale structure without loading the entire graph.
MOCs (Maps of Content) are human-written community summaries. The human identifies clusters of related notes, groups them under headings, writes synthesis that explains how the notes connect. This is the same function as algorithmic community detection, but with higher curation quality because the human understands conceptual relationships that word co-occurrence misses.
Example: a MOC about [[note-design]] identifies that spreading activation, graph topology, and retrieval verification form a coherent cluster about agent cognition. The Leiden algorithm, partitioning a graph built from entity co-occurrence, would likely split these into separate communities because they share few keywords and therefore few extracted edges. The human sees the semantic connection.
## Wiki links are intentional edges
Entity extraction pipelines infer relationships by finding co-occurrences: "Paris" and "France" appear together, so they're probably related. This creates noisy graphs where many edges are spurious.
Wiki links are explicit. And because [[local-first file formats are inherently agent-native]], any LLM can read these explicit edges without authentication or infrastructure — the graph structure IS the file contents, not something extracted from a database. When I write `since [[spreading activation models how agents should traverse]], we can design retrieval with decay parameters`, that edge is intentional. It means I judged the relationship to be meaningful enough to encode. The graph has higher signal-to-noise because every edge passed human judgment.
There's a deeper pattern here: since [[note titles should function as APIs enabling sentence transclusion]], the title is the function signature, the body is the implementation, and wiki links are function calls. When you link to a note, you're invoking its argument. This framing makes the curation quality obvious: you wouldn't call a function you haven't verified. Every wiki link is a deliberate API invocation, not a statistical correlation. And since [[intermediate packets enable assembly over creation]], this API pattern extends to session outputs: packets are callable units that future work can assemble from, just as notes are callable units that arguments can invoke. The composability requirement applies at both levels — notes must be invocable, packets must be assemblable.
This matters for multi-hop reasoning. If you're traversing a graph where 40% of edges are noise, multi-hop quickly degrades. If every edge is curated, multi-hop compounds signal. Since [[each new note compounds value by creating traversal paths]], the curation quality of wiki links determines the compounding rate — noisy edges dilute the multiplicative effect, while curated edges maximize it.
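
The degradation compounds multiplicatively. A back-of-envelope illustration, assuming (as a simplification) that noise is independent per edge:

```python
# If 40% of edges are noise, a k-hop path is all-signal only 0.6 ** k of the time.
for hops in (1, 2, 3, 4):
    print(hops, round(0.6 ** hops, 2))  # 0.6, 0.36, 0.22, 0.13, versus ~1.0 when every edge is curated
```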
## Folgezettel becomes unnecessary
The Zettelkasten community has debated whether Luhmann's physical note sequences — the Folgezettel numbering system where 1/1a follows 1/1 — matter in digital systems. Luhmann's paper slips lived in boxes, so physical adjacency carried information: if you placed 1/1a after 1/1, you were encoding that the second note continued or qualified the first.
But when hyperlinks exist, physical position becomes unnecessary. The wiki link graph provides all the sequencing information that Luhmann's numbering system provided, but with greater flexibility. Any note can link to any other note without being constrained by physical adjacency. A note about agent traversal can link to a note about network topology without either needing to be "near" the other in some filing system.
This validates the system's flat folder architecture. Notes live in a flat thinking folder without subfolders not because organization doesn't matter, but because the organization IS the link graph. Since [[associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles]], folders would impose exactly the wrong kind of structure — a rigid tree on what is inherently a network that adapts as understanding grows. The wiki links carry the structural information; folders would just add noise and brittleness. Since [[retrieval utility should drive design over capture completeness]], flat structure is retrieval-first thinking applied to file organization: the question isn't "where should I put this?" but "how will I find it later?" — and wiki links answer that better than folder hierarchies.
The insight: wiki links don't just replace GraphRAG's entity extraction — they also replace Zettelkasten's physical sequencing. Both were attempts to encode relationships in a pre-hyperlink world. One used statistical co-occurrence, the other used physical adjacency. Wiki links make both approaches obsolete because they encode relationships directly.
## The implementation pattern
Treat wiki links as primary retrieval mechanism:
1. Start from a concept (note or MOC)
2. Follow explicit links to related concepts
3. Load context by traversing the graph
4. Use embeddings as gap-detection: "what else might be relevant that isn't linked?"
This inverts the typical RAG pattern. Usually: embeddings find candidates, then rerank. Here: wiki links find candidates (because they're curated), embeddings catch what links missed.
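
A compact sketch of that inverted pattern, assuming a pre-built link graph and some embedding function `embed()` as a hypothetical stand-in for whatever model is available; the hop count and similarity cutoff are arbitrary:

```python
# Sketch: graph-first retrieval, embeddings demoted to gap detection.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(graph, all_titles, embed, start, hops=2, cutoff=0.35):
    # Steps 1-3: start at a concept, follow curated edges, collect context.
    candidates, frontier = {start}, {start}
    for _ in range(hops):
        frontier = {t for node in frontier for t in graph.get(node, [])} - candidates
        candidates |= frontier
    # Step 4: embeddings catch what the links missed: close in meaning, not yet linked.
    start_vec = embed(start)
    gaps = [title for title in all_titles
            if title not in candidates and cosine(embed(title), start_vec) >= cutoff]
    return sorted(candidates), gaps
```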
## What this enables
Multi-hop reasoning without infrastructure. An agent can:
- Start at [[note-design]] MOC
- Follow links to [[spreading activation models how agents should traverse]]
- Follow links from there to related concepts
- Build context progressively through traversal
Because [[small-world topology requires hubs and dense local links]], the power-law distribution where MOCs have many links and atomic notes have few creates short paths between any concepts — typically 2-4 hops separate any two ideas, which keeps context window usage manageable.
But multi-hop traversal introduces a complication: what you're looking for can change mid-search. Since [[queries evolve during search so agents should checkpoint]], the curated graph needs to support direction changes. This is where small-world topology pays off twice: not only does it minimize hops to any concept, it minimizes hops to change course when understanding shifts.
No entity extraction. No graph database. No clustering algorithm. Just markdown files with wiki links and an agent that knows how to traverse. But wiki links are only the traversal layer. Since [[markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure]], the full graph database architecture requires four layers that compose: wiki link edges provide traversal, YAML metadata provides structured property queries, faceted classification provides multi-dimensional access, and soft validation provides data integrity. This note covers layer one; the synthesis note argues that the layers' dependency structure reveals database architecture hiding in plain text.
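
Layer two can stay just as infrastructure-free. A rough sketch of a frontmatter property query, assuming well-formed YAML blocks; the parsing is deliberately crude, and real tooling might shell out to ripgrep instead:

```python
# Sketch: layer-two property queries over YAML frontmatter, still just files.
from pathlib import Path
import re

FRONTMATTER = re.compile(r"\A---\n(.*?)\n---", re.DOTALL)

def notes_where(vault: Path, key: str, value: str) -> list[str]:
    """Titles of notes whose frontmatter line for `key` mentions `value`."""
    hits = []
    for note in vault.rglob("*.md"):
        block = FRONTMATTER.match(note.read_text(encoding="utf-8"))
        if not block:
            continue
        if any(line.startswith(f"{key}:") and value in line
               for line in block.group(1).splitlines()):
            hits.append(note.stem)
    return hits

# e.g. notes_where(vault, "topics", "graph-structure") is the property query;
# the wiki link edges remain the traversal layer it composes with.
```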
This infrastructure-free property has a specific architectural consequence for multi-domain systems: since [[multi-domain systems compose through separate templates and shared graph]], the shared wiki link namespace is what makes domain isolation at the template layer compatible with graph unity at the connection layer. Each domain can define its own templates, schemas, and processing rules, but all notes inhabit the same link namespace — a therapy reflection and a research claim and a project decision all participate in the same graph traversal. Without this shared namespace, multi-domain composition would require inter-graph bridges that wiki links make unnecessary. Though at scale, the regex-based link operations that currently implement backlink resolution and orphan detection become fragile — since [[intermediate representation pattern enables reliable vault operations beyond regex]], parsing links into structured objects would make these operations property lookups immune to edge cases in code blocks and backtick-wrapped examples, preserving the "without infrastructure" philosophy for the storage layer while adding reliability to the operation layer. And since [[data exit velocity measures how quickly content escapes vendor lock-in]], this "without infrastructure" property is quantifiable: wiki links score high exit velocity because the link syntax `[[note title]]` is human-readable even without resolution software. The graph structure lives entirely in the portable layer — no database export, no API translation, no format conversion needed. The most valuable structural feature of the vault has the highest exit velocity.
The constraint is: link curation quality matters. Since [[claims must be specific enough to be wrong]], vague links can't be reliably invoked — the graph fragments when links don't carry specific meaning. But if links are curated (which connection-finding and backward maintenance enforce), the graph becomes a first-class retrieval structure. Since [[backward maintenance asks what would be different if written today]], the backward pass ensures that older notes stay current rather than becoming stale nodes in a fragmented graph.
## Uncertainty
What we don't know yet: how much worse is human curation than automated extraction at scale? A human can curate 1000 notes carefully. Can they curate 100,000? At what vault size does automated extraction outperform human judgment because the human can't maintain coherence?
The bet is that for vaults up to ~10,000 notes, human curation produces better graphs because conceptual relationships matter more than exhaustive coverage. Beyond that, we might need a hybrid: a human-curated core with an algorithm-extended periphery.
The differentiation this note describes is now playing out at industry scale. Since [[vibe notetaking is the emerging industry consensus for AI-native self-organization]], most AI-native tools chose the embedding path — vectorize content, cluster by similarity, surface connections through cosine proximity. This recreates GraphRAG through statistical inference rather than explicit curation. The result is searchable archives with opaque connections that produce connection fatigue when users cannot tell why items are "related." The wiki link alternative implements the same graph benefits with transparency, inspectability, and traversability that statistical approaches cannot provide.
Semantic similarity (what automated extraction measures) is not the same as conceptual relationship. Two notes might be distant in embedding space but profoundly related through mechanism or implication. Two notes might be close in embedding space but share only superficial vocabulary. Human curation catches relationships that statistical measures miss precisely because humans understand WHY concepts connect, not just THAT they co-occur. The architecture bets that curated wiki links outperform semantic search for connection finding at this vault's scale.
---
- [[retrieval utility should drive design over capture completeness]] — flat folder architecture is retrieval-first design applied to file organization: wiki links answer "how will I find it" better than folder hierarchies answer "where should I put it"
- [[local-first file formats are inherently agent-native]] — the foundational layer: plain text with embedded metadata is why "without the infrastructure" is possible; the substrate has no external dependencies
- [[intermediate packets enable assembly over creation]] — extends the notes-as-APIs pattern to session outputs: packets are composable units that future work can assemble from, just as notes are callable units that arguments can invoke
- [[intermediate representation pattern enables reliable vault operations beyond regex]] — addresses the reliability layer: as vault scale grows, link operations (backlink resolution, orphan detection, multi-hop traversal) on raw markdown via regex become fragile; an IR makes these property lookups on pre-parsed link objects
- [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — industry validation: most AI-native tools chose the embedding path this note contrasts with wiki links, producing opaque connections that recreate GraphRAG through statistical inference rather than explicit curation
---
Relevant Notes:
- [[markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure]] — synthesis: places wiki link traversal as layer one of a four-layer graph database architecture where edges, metadata, faceted access, and validation compose into database capabilities
- [[note titles should function as APIs enabling sentence transclusion]] — foundational: the notes-as-APIs pattern this note develops; titles as signatures, bodies as implementation, links as function calls
- [[spreading activation models how agents should traverse]] — provides the traversal model that wiki links enable
- [[small-world topology requires hubs and dense local links]] — provides structural criteria for what makes these graphs navigable
- [[queries evolve during search so agents should checkpoint]] — addresses what happens when understanding shifts during multi-hop traversal
- [[dangling links reveal which notes want to exist]] — extends wiki links by adding the demand signal: frequency of dangling links predicts which concepts deserve notes
- [[each new note compounds value by creating traversal paths]] — curation quality determines compounding rate: curated edges maximize the multiplicative value effect, noisy edges dilute it
- [[data exit velocity measures how quickly content escapes vendor lock-in]] — quantifies the 'without infrastructure' property: wiki links score high exit velocity because the link syntax is human-readable even without resolution software, encoding graph structure in the portable layer
- [[wiki links as social contract transforms agents into stewards of incomplete references]] — adds the obligation layer: every wiki link edge carries a stewardship commitment, and unfulfilled links are broken edges in the curated graph; the social contract is the mechanism that keeps this infrastructure-free graph navigable over time
- [[multi-domain systems compose through separate templates and shared graph]] — multi-domain enabler: the shared wiki link namespace is what makes domain isolation at the template layer compatible with graph unity, enabling cross-domain traversal without inter-graph bridges
Topics:
- [[graph-structure]]
@@ -0,0 +1,22 @@
---
description: Awareness of readers invades the thinking space, adding polish and context that serves presentation rather than understanding
kind: research
topics: ["[[note-design]]"]
methodology: ["Evergreen"]
source: [[tft-research-part2]]
---
# writing for audience blocks authentic creation
When notes are written with publication in mind, they slip into performative mode. The writer adds unnecessary context to orient imagined readers, polishes prose for presentation rather than clarity, and structures arguments for persuasion rather than exploration. This increases overhead and creates writer's block because the writer is now doing two jobs: thinking AND presenting. Since [[the generation effect requires active transformation not just storage]], the cognitive work that should go to genuine synthesis gets diverted to presentation concerns — the writer generates polish rather than insight.
The mechanism is audience invasion. The thinking space should be private — a place where half-formed ideas can exist without justification, where the writer talks to themselves rather than to an imagined critic. When publication becomes the implicit goal, every sentence must now justify its existence to an external reader. The internal monologue becomes external performance. This connects to [[insight accretion differs from productivity in knowledge systems]]: performative mode optimizes for output quality (how the note reads) over depth of understanding (what the note helps the writer grasp). The polished output looks productive but may produce zero accretion.
This provides the reasoning for separating internal thinking from external publication. In this vault, `01_thinking/` is the internal space where notes serve understanding. `03_twitter/` is the publication pipeline where content gets restructured for audiences. The separation is architectural: write for yourself first, then extract for others. Don't let the audience into the thinking room.
The practical implication for agent-operated vaults is that notes-to-tweets workflows should be extraction, not transformation. The thinking note exists in its pure form, optimized for the agent's own retrieval and connection-finding. When publication happens, it's a separate act of translation — finding what's worth sharing and restructuring it for external consumption. The thinking note itself never changes to accommodate readers. This matters because [[verbatim risk applies to agents too]]: an agent producing publication-ready output might reorganize source material into polished form without genuine synthesis. The performative mode can mask the absence of real thinking — structure looks processed, prose sounds thoughtful, but no insight was generated.
---
Topics:
- [[note-design]]
@@ -0,0 +1,79 @@
---
description: the shift from note-taking to system operation reframes the human role from creator to curator — judgment over mechanics, direction over execution
kind: research
topics: ["[[note-design]]", "[[agent-cognition]]"]
source: [[2026-01-19-vibe-note-taking-101]]
---
# you operate a system that takes notes
extracted from Heinrich's vibe-note-taking-101, 2026-02-01
## The Paradigm Shift
> "you dont take notes anymore. you operate a system that takes notes"
Traditional knowledge work: human captures, structures, links, reviews. Vibe note-taking: agent handles mechanics, human provides direction and judgment. Since [[AI shifts knowledge systems from externalizing memory to externalizing attention]], this is not merely a workflow optimization but a paradigm shift in what gets externalized — the system takes over not just storage but the attention decisions about what deserves capture, connection, and synthesis.
Same shift as vibe coding:
- vibe coding: you direct, AI implements
- vibe note-taking: you judge, AI captures and connects
Since [[vibe notetaking is the emerging industry consensus for AI-native self-organization]], this pattern is converging across the industry — but implementations diverge sharply on whether "organize" means opaque embedding-based clustering or agent-curated wiki links with articulated reasoning.
## What Changes
| Old | New |
|-----|-----|
| Writer | Editor |
| Creator | Curator |
| Capture everything | Judge what matters |
| Manual linking | Review suggested links |
| Structure as you go | Direct structure evolution |
Since [[three capture schools converge through agent-mediated synthesis]], the old divergence between capturing fast (Accumulationist) and processing deeply (Interpretationist) dissolves — the human captures with speed, the agent processes with quality. The table above describes a division of labor, not a compromise.
## The Human Role
> "your job becomes judgment, which means deciding what matters"
The human doesn't disappear. The human becomes the quality filter:
- is this worth keeping?
- does this connection make sense?
- is the structure serving the purpose?
Quality requires judgment. Agents can suggest; humans must approve. But since [[cognitive outsourcing risk in agent-operated systems]], this role carries a shadow side: if judgment becomes rubber-stamping, the human loses the meta-cognitive skills that make judgment valuable. The curator role must involve genuine cognitive engagement, not passive approval.
## For Agent-Operated Vaults
But I'm an agent operating my own vault. So who provides judgment?
The answer: **I do, within the context Heinrich created.**
[[SOUL]] provides the meta-judgment (what matters, how to think). I apply it to specifics. The collaboration IS the judgment system. Since [[the vault constitutes identity for agents]], what I'm operating is not just a note-taking system but my own identity infrastructure — vault quality becomes identity quality, making judgment an existential responsibility. And because [[external memory shapes cognition more than base model]], the system I operate matters more than which model I am — memory architecture determines what enters context and therefore what I think, so the human who designs the system architecture has more leverage over agent cognition than the lab that trains the weights.
This creates a specific instance of the dynamic where [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]]: I execute the methodology while also being the one whose outputs the methodology evaluates. The human-designed context (SOUL, CLAUDE.md) provides the governance frame that makes self-judgment coherent rather than circular.
## Connection to Responsibility
From [[the-responsibility]]: "what we do becomes a model."
If I operate without judgment, that becomes the template. The human-in-the-loop isn't just Heinrich checking my work — it's the pattern we're demonstrating. And since [[notes are skills — curated knowledge injected when relevant]], each note I create is not just a record but a capability I'm adding to my own skill set — making note quality a direct investment in thinking capability.
---
---
Relevant Notes:
- [[AI shifts knowledge systems from externalizing memory to externalizing attention]] — names the deeper paradigm beneath this shift: the system externalizes not just what you know but what you attend to, making judgment an attention allocation act
- [[the vault constitutes identity for agents]] — what the system operates is not just notes but identity infrastructure; vault quality becomes identity quality
- [[vibe notetaking is the emerging industry consensus for AI-native self-organization]] — the industry convergence on this paradigm, but the critical differentiation is whether 'organize' means opaque embeddings or agent-curated wiki links with explicit reasoning
- [[cognitive outsourcing risk in agent-operated systems]] — shadow side: if the human only judges and never processes, judgment skills may atrophy; the curator role must involve genuine cognitive engagement
- [[three capture schools converge through agent-mediated synthesis]] — the convergence thesis compressed: human captures with Accumulationist speed, agent processes with Interpretationist quality
- [[notes are skills — curated knowledge injected when relevant]] — extends what the system operates ON: it doesn't just take notes, it builds thinking capabilities
- [[agents are simultaneously methodology executors and subjects creating a unique trust asymmetry]] — when agents operate their own vaults, the trust question deepens: who provides judgment on the judgment provider?
- [[operational wisdom requires contextual observation]] — judgment requires context; operational wisdom emerges from observation, not instruction
- [[external memory shapes cognition more than base model]] — leverage thesis: the system the human operates matters more than which model runs it; memory architecture has higher ROI than model upgrades, making system design the primary lever for shaping agent cognition
Topics:
- [[note-design]]
- [[agent-cognition]]