arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,37 @@
---
description: If a note cannot be summarized in 1-3 coherent sentences, it bundles multiple claims that should be split before leaving inbox
kind: research
topics: ["[[note-design]]"]
methodology: ["Cornell"]
source: TFT research corpus (00_inbox/heinrich/)
---

# summary coherence tests composability before filing

The Cornell system requires a summary section: 1-3 sentences capturing the core claim of a page. This isn't just documentation practice — it's a structural test. If you cannot write a coherent summary, the content isn't coherent enough to be a single composable unit.

The failure mode this catches is multi-claim bundling. A note that covers "how links work, why topology matters, and when to checkpoint" cannot be summarized coherently because it makes three separate arguments. Forcing a summary exposes this incoherence before the note enters the knowledge graph, where bundled notes become linking hazards — you want to reference one claim but drag two others along with it.

This differs from the general quality question of whether [[claims must be specific enough to be wrong]]. Specificity asks whether a single claim has enough stake to be useful. Summary coherence asks whether the unit is actually singular. A note can pass the specificity test while failing the coherence test: "Quality matters more at scale because small differences compound" is specific, but if the note also argues about traversal patterns and session handoffs, no summary can hold it together.

The mechanism is simple: before any note moves from inbox to thinking, attempt to generate a 1-3 sentence summary. If the summary fails to cohere — if it reads as a list of topics rather than a unified argument — the note needs splitting. This implements Cornell's insight that the summary section is not post-processing documentation but a quality gate that catches structural problems early. Because [[structure without processing provides no value]], the summary requirement ensures processing actually happens before content leaves inbox — it's an anti-Lazy-Cornell pattern that forces the generative work.

For agent-operated vaults, this becomes an automated check. An agent attempting to summarize should flag notes where the summary requires "and" as a list connector rather than logical flow. The summary generation pass is cheap; discovering bundled notes through link maintenance is expensive. Since [[backward maintenance asks what would be different if written today]] includes splitting as one of the reconsideration actions, catching bundling at creation time prevents paying the higher cost of detecting and fixing it during maintenance.
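
A minimal sketch of what that automated check might look like. The sentence-splitting regex and the ", and" chaining threshold are assumptions for illustration, not part of any shipped skill:

```python
import re

def summary_flags(summary: str) -> list[str]:
    """Heuristic coherence gate for a Cornell-style 1-3 sentence summary.

    Returns a list of flag strings; an empty list means the summary
    passes. Thresholds here are illustrative assumptions.
    """
    flags = []
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", summary.strip()) if s]
    if not 1 <= len(sentences) <= 3:
        flags.append(f"expected 1-3 sentences, got {len(sentences)}")
    # Repeated ", and" joins suggest the summary chains topics as a list
    # rather than making one argument - a crude proxy for bundled claims.
    if summary.count(", and") >= 2:
        flags.append("summary reads as a list of topics ('and' chaining)")
    return flags
```

A note whose summary trips either flag is a candidate for splitting before it leaves inbox.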

But the summary requirement itself is a convention that may constrain. Since [[vault conventions may impose hidden rigidity on thinking]], forcing content into 1-3 coherent sentences assumes that valuable insights can always be expressed in that form. Relational insights, visual thinking, or procedural knowledge may resist sentence-form summary not because they bundle multiple claims but because they operate in a different register. The test becomes: when summary fails, is it bundling (multiple claims) or register mismatch (one insight that resists propositional form)? Since [[enforcing atomicity can create paralysis when ideas resist decomposition]], this diagnostic problem is shared across multiple vault conventions: both atomicity (decomposing into single-concept notes) and summary coherence (compressing into 1-3 sentences) require distinguishing "struggle that reveals incomplete understanding" from "struggle against a format that can't accommodate valid insight." The parallel suggests a deeper pattern: vault conventions that use friction as a diagnostic may systematically misclassify certain kinds of thinking.

This is an instance of [[testing effect could enable agent knowledge verification]] applied to structural quality rather than description quality. Attempting to generate a summary tests whether the content is coherent, just as attempting to predict content from description tests whether the description enables retrieval. Both patterns use attempted generation as a diagnostic — failures reveal structural problems that static inspection misses.

---

Relevant Notes:
- [[claims must be specific enough to be wrong]] — complementary quality gate: this note catches bundling, that note catches vagueness
- [[the generation effect requires active transformation not just storage]] — summary generation is a concrete example of the transformation that separates processing from filing
- [[structure without processing provides no value]] — the Lazy Cornell anti-pattern: summary requirement is a counter-measure that forces processing before filing
- [[backward maintenance asks what would be different if written today]] — catching bundling at creation time is cheaper than detecting it during maintenance passes
- [[testing effect could enable agent knowledge verification]] — sibling pattern: both use attempted generation as diagnostic for structural problems
- [[descriptions are retrieval filters not summaries]] — Cornell's two compression mechanisms: summary tests coherence at filing, descriptions enable filtering at retrieval
- [[generation effect gate blocks processing without transformation]] — sibling inbox exit gate: summary coherence catches bundled claims, the generation gate catches lack of transformation; both validate quality at the inbox-to-thinking boundary
- [[enforcing atomicity can create paralysis when ideas resist decomposition]] — parallel diagnostic problem: both summary coherence and atomicity enforcement face the question of whether friction signals bundling/incomplete thinking or format resistance/valid relational insight

Topics:
- [[note-design]]

@@ -0,0 +1,50 @@
---
description: Unlike opaque identifiers that persist through vocabulary drift, wiki link titles carry semantic content that must stay current — so renaming for clarity cascades maintenance through every incoming link
kind: research
topics: ["[[graph-structure]]", "[[maintenance-patterns]]"]
methodology: ["Zettelkasten", "Digital Gardening"]
source: [[tft-research-part3]]
---

# tag rot applies to wiki links because titles serve as both identifier and display text

In traditional tagging systems, "tag rot" occurs when vocabulary drifts over time. Early tags use one set of words, later tags use another, and the older content becomes effectively invisible because nobody searches for the outdated terms. The same phenomenon applies to wiki links, but with a structural twist that makes it more consequential.

Tags are lightweight labels. When a tag rots, the cost is a missed search result. Wiki links are heavier — they serve simultaneously as the note's identity, its display text in other notes, and its functional API when transcluded into prose. Because [[note titles should function as APIs enabling sentence transclusion]], a wiki link title like `[[claims must be specific enough to be wrong]]` must read naturally in sentences across the vault. The title does triple duty: it identifies the file, it displays in every note that references it, and it carries semantic content as a clause in other notes' arguments. This triple function is precisely what makes wiki link rot more fragile than tag rot.

When understanding deepens and a title needs sharpening — perhaps `[[knowledge management friction]]` should become `[[curation becomes the work when creation is easy]]` — the rename must propagate through every note that links to it. Every sentence that once read `since [[knowledge management friction]]` must now accommodate the new title. Some of these sentences will break grammatically. Others will lose their argumentative flow. The maintenance burden is proportional to the note's incoming link count, which means the most important notes (the hubs with the most references) are the most expensive to rename.

There is also a synonym proliferation problem. Different authors or sessions might create `[[AI cognition]]`, `[[artificial intelligence reasoning]]`, and `[[machine learning patterns]]` as separate notes that address overlapping concepts. Since [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]], the vault's personal vocabulary has no external consensus to anchor against. In broad folksonomy, statistical convergence from many taggers naturally suppresses synonyms. In a single-operator system, the vocabulary drifts wherever the operator's thinking drifts, and there is no crowd signal to detect that drift has occurred. The graph fragments — not through broken links, but through parallel links that should converge on the same target. This is tag rot manifesting as graph fragmentation rather than search failure. And if [[federated wiki pattern enables multi-agent divergence as feature not bug]], the divergence is deliberately embraced rather than treated as a maintenance problem — but distinguishing productive divergence from mere vocabulary drift requires the same kind of monitoring that tag rot demands.

## Why wiki link rot is structurally worse than tag rot

The root cause is that wiki links couple three concerns into one string: addressing (which file to load), display (what text appears in prose), and semantics (what the concept means). Tags only carry addressing and categorization — they don't appear in prose, so renaming them has no grammatical consequences. Wiki links embed in prose as functional arguments, so their text is load-bearing. And because [[backlinks implicitly define notes by revealing usage context]], each incoming link represents not just a reference but a prose commitment where the title functions as a grammatical clause. A note with thirty backlinks has thirty sentences across the vault that depend on its exact title phrasing. The rename cost is proportional to these accumulated commitments — the very property that makes a note important (many backlinks revealing wide usage) is what makes it expensive to improve.

Since [[digital mutability enables note evolution that physical permanence forbids]], the same property that enables notes to evolve also enables the vocabulary drift that causes wiki link rot. Luhmann's physical Zettelkasten didn't have this problem precisely because card titles couldn't change. The trade-off is real: mutability enables crystallization through [[incremental formalization happens through repeated touching of old notes]], but every crystallization that sharpens a title triggers a rename cascade.

## Mitigations in practice

The vault's rename script (`rename-note.sh`) addresses the mechanical problem — finding and replacing all occurrences of the old title with the new one. But it doesn't address the grammatical problem. A sentence crafted around `since [[old title]]` may not read naturally with `since [[new title]]`. This means some renames require not just find-and-replace but re-authoring the surrounding prose in each linking note.
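
The mechanical half of that rename could be sketched as follows. `rename_note` is a hypothetical Python equivalent of the vault's `rename-note.sh`, shown only to make the propagation cost concrete; it deliberately stops short of the grammatical problem:

```python
import re
from pathlib import Path

def rename_note(vault: Path, old_title: str, new_title: str) -> list[Path]:
    """Propagate a title change through every incoming wiki link.

    Returns the notes that were touched - each one is a candidate for
    manual re-reading, since its prose was built around the old title.
    """
    pattern = re.escape(f"[[{old_title}]]")
    touched = []
    for note in vault.rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        if re.search(pattern, text):
            note.write_text(
                re.sub(pattern, f"[[{new_title}]]", text), encoding="utf-8"
            )
            touched.append(note)
    # Finally rename the note file itself so the link target resolves.
    src = vault / f"{old_title}.md"
    if src.exists():
        src.rename(vault / f"{new_title}.md")
    return touched
```

The length of the returned list is exactly the rename cost the note describes: it scales with incoming link count, so hub notes are the most expensive to sharpen.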

Since [[backward maintenance asks what would be different if written today]], the reweave process naturally surfaces title staleness. When an agent asks "what would be different about this note?" the answer is often "the title would be more precise." This creates a useful diagnostic: notes whose titles feel vague during reweaving are candidates for the rename cascade. The question is whether the improved precision is worth the propagation cost.

One architectural mitigation would be to decouple identifier from display text — using opaque IDs for addressing while allowing display text to change freely. But this sacrifices the core design value: since [[wiki links are the digital evolution of analog indexing]], the title-as-identifier property is what makes wiki links readable as prose. Opaque IDs would turn every link into `[[id-12345|display text]]`, fragmenting the concept across two representations and losing the composability that makes the vault's linking philosophy work.

The practical resolution is accepting the maintenance cost as the price of semantic linking. Rename scripts handle the mechanical propagation. Reweaving handles the prose adjustment. The cost is real but bounded — and the alternative (opaque identifiers or tag-only systems) sacrifices the prose composability that makes the knowledge graph genuinely traversable.

---

Relevant Notes:
- [[dangling links reveal which notes want to exist]] — the complementary failure mode: dangling links are absent targets, while wiki link rot is degraded identifiers; both signal graph maintenance needs but through different mechanisms
- [[note titles should function as APIs enabling sentence transclusion]] — the design choice that creates the fragility: titles must be semantically rich to work as prose, but semantic richness means they need updating as understanding evolves
- [[incremental formalization happens through repeated touching of old notes]] — the mechanism that triggers renames: as understanding crystallizes through accumulated touches, vague titles sharpen into precise claims, and each sharpening requires propagating the new identifier through all incoming links
- [[backward maintenance asks what would be different if written today]] — reweaving naturally discovers title staleness: the question 'what would be different?' often answers 'the title would be sharper', which triggers the rename cascade this note describes
- [[wiki links are the digital evolution of analog indexing]] — historical context: analog cue columns pointed via proximity, so renaming was impossible; wiki links point via text matching, making rename possible but expensive
- [[propositional link semantics transform wiki links from associative to reasoned]] — a potential mitigation: if relationship types were formalized, link context could survive title changes better because the structural relationship is typed independently of the display text
- [[digital mutability enables note evolution that physical permanence forbids]] — the double-edged nature of mutability: the same property that enables note evolution also enables the vocabulary drift that causes link rot
- [[narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging]] — amplifies the vulnerability: without consensus vocabulary to anchor terms, personal retrieval keys drift freely as understanding evolves, and there is no external standard to detect that drift has occurred
- [[backlinks implicitly define notes by revealing usage context]] — explains why rename cost scales with importance: each backlink represents a prose commitment where the title functions as a clause, so accumulated backlinks are accumulated grammatical dependencies that all break on rename
- [[federated wiki pattern enables multi-agent divergence as feature not bug]] — deliberate divergence: federation intentionally embraces the vocabulary divergence that tag rot warns about, transforming uncontrolled drift into an architectural feature; the question shifts from preventing drift to deciding when divergence is productive

Topics:
- [[graph-structure]]
- [[maintenance-patterns]]

@@ -0,0 +1,43 @@
---
description: Agents need random access to content but video, audio, and podcasts are time-locked sequences — transcription is lossy but mandatory because no agent can efficiently seek through temporal streams
kind: research
topics: ["[[agent-cognition]]"]
methodology: ["Augmentation Research"]
source: [[tft-research-part3]]
---

# temporal media must convert to spatial text for agent traversal

Agents can read a document at any point — jump to paragraph seven, scan a heading, grep for a phrase. They cannot do this with video or audio. A podcast is a time-locked sequence: to reach minute forty-two, you must either know the timestamp in advance or scan linearly. An agent traversing a knowledge graph needs random access to content, and temporal media does not provide random access. The conversion from temporal to spatial is therefore not optional but architecturally necessary.

This is a constraint that emerges from what agents fundamentally are. Since [[spreading activation models how agents should traverse]], knowledge graph traversal works through jumping between connected nodes, following wiki links, loading context from descriptions before deciding whether to read full notes. None of these operations have temporal media equivalents. You cannot wiki-link into minute fourteen of a podcast. You cannot grep an audio file for a concept. You cannot scan the "headings" of a video to decide which section to load. Since [[progressive disclosure means reading right not reading less]], the entire discovery layer architecture — titles through descriptions through outlines through full content — assumes spatial, randomly accessible text. Progressive disclosure cannot function on temporal media because there is no way to "scan headings" or "read the description first" when content is locked inside a time stream.

The practical implication is that every temporal source — podcasts, video lectures, voice memos, meeting recordings — must produce a primary markdown artifact as its first processing step. Since [[voice capture is the highest-bandwidth channel for agent-delegated knowledge systems]], dumps arrive at 150 wpm and then require exactly this conversion. The transcript-first principle applies: the transcript becomes the working artifact, and the original recording becomes a backup for deeper engagement. Whisper-style transcription produces the raw text. Agent processing structures it with headings, timestamps, and wiki links. But these two steps are qualitatively different: because [[the generation effect requires active transformation not just storage]], transcription is merely format conversion — the raw text is the same content in a different container. The generative transformation happens when the agent adds headings, wiki links, and extracts claims. Transcription makes traversal possible; agent processing makes it valuable. The result enters the vault as a standard markdown file that participates in the knowledge graph like any other note.

The conversion is lossy. Audio carries tone, emphasis, hesitation, emotional texture that flat text discards. Video carries spatial relationships, gestures, visual demonstrations. These losses are real but acceptable because the alternative — leaving content in temporal format — means agents cannot traverse it at all. A lossy transcript that agents can search, link, and synthesize outperforms a perfect recording that sits inert in the filesystem. Since [[local-first file formats are inherently agent-native]], the markdown transcript inherits all the properties that make the vault work: any LLM can read it, wiki links create graph edges, YAML frontmatter enables filtering. The original recording lacks every one of these properties.

The relationship to temporal versus topological organization is instructive. Since [[topological organization beats temporal for knowledge work]], the vault already commits to organizing by concept rather than by date. This note extends the same principle to media format: just as chronological filing buries knowledge under temporal sediment, temporal media buries knowledge inside time-locked sequences. The garden metaphor applies at the format level, not just the organizational level. Text is the garden. Audio and video are the stream. This is why [[ThreadMode to DocumentMode transformation is the core value creation step]] applies at the format level, not just the content level: temporal media is inherently ThreadMode — it accretes sequentially, carries the speaker's chronological context, and resists reorganization. The markdown artifact that emerges from conversion is DocumentMode — timeless, randomly accessible, composable with the rest of the knowledge graph. And since [[three capture schools converge through agent-mediated synthesis]], the temporal-to-spatial conversion is the first step in the convergence pipeline — voice capture at Accumulationist speed, transcription as the format bridge, agent processing with Interpretationist quality.

There is a two-layer graph that emerges from this conversion. The primary layer is the wiki link graph connecting markdown notes. The secondary layer consists of timestamp links that point back into the original temporal source — `youtube.com/watch?v=ID&t=50s` or `recording.mp3#t=14:32`. These timestamp anchors let a human (or a future multimodal agent) dive back into the original medium for the nuance that transcription lost. The two layers serve different functions: the wiki link layer enables agent traversal, the timestamp layer preserves source fidelity for human review.
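
A minimal sketch of a structuring pass that emits both layers at once. The segment shape (start second, heading, text) is an assumption about what an agent pass produces; the URL pattern is the YouTube form from above:

```python
def transcript_to_markdown(
    title: str, video_id: str, segments: list[tuple[int, str, str]]
) -> str:
    """Render structured transcript segments as a spatial markdown artifact.

    segments: (start_seconds, heading, text) triples. Headings are assumed
    to come from an agent structuring pass, not raw transcription.
    """
    lines = [f"# {title}", ""]
    for start, heading, text in segments:
        # Secondary layer: a timestamp anchor back into the temporal source.
        url = f"https://youtube.com/watch?v={video_id}&t={start}s"
        lines.append(f"## {heading} ([{start // 60}:{start % 60:02d}]({url}))")
        lines.append("")
        lines.append(text)  # primary layer: prose that can carry wiki links
        lines.append("")
    return "\n".join(lines)
```

The markdown output is traversable (headings, greppable text, linkable sections) while each section keeps a pointer back to the minute of audio it summarizes.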

This also explains why, given [[capture the reaction to content not just the content itself]], the human's role during temporal capture shifts. Rather than transcribing in real-time (which voice capture handles), the human marks moments of interest — a tap to capture the last sixty seconds, a verbal annotation that flags significance. These sparse human signals become enrichment metadata that guide the agent's later processing. The human provides judgment about what matters; the agent provides the format conversion and structural integration.

The claim is closed because it follows directly from the agent substrate constraint. If your knowledge system operates on text files with wiki links, then content that is not text files with wiki links must become text files with wiki links before it can participate. The only question is how good the conversion can be, not whether it should happen.

---

Relevant Notes:
- [[local-first file formats are inherently agent-native]] — the target format: markdown with YAML and wiki links is what temporal media must become to enter the agent-readable substrate
- [[topological organization beats temporal for knowledge work]] — parallel principle: just as chronological filing loses to topological filing, chronological media loses to spatial text for knowledge traversal
- [[spreading activation models how agents should traverse]] — the traversal mechanism that temporal media cannot support: spreading activation requires jumping between nodes, which demands random access
- [[three capture schools converge through agent-mediated synthesis]] — the pipeline this conversion enables: once temporal content becomes text, agent-mediated synthesis can apply Interpretationist quality to Accumulationist speed capture
- [[capture the reaction to content not just the content itself]] — the human role during temporal capture: marking moments of interest and recording reactions while the agent handles conversion
- [[dual-coding with visual elements could enhance agent traversal]] — the reverse direction: this note argues temporal must become spatial, dual-coding explores whether spatial-visual could complement spatial-textual
- [[voice capture is the highest-bandwidth channel for agent-delegated knowledge systems]] — upstream source: voice capture is the highest-bandwidth temporal capture channel, producing the very media this note says must convert; the emotional metadata voice preserves is exactly what transcription loses
- [[ThreadMode to DocumentMode transformation is the core value creation step]] — parallel pattern: temporal-to-spatial conversion is ThreadMode-to-DocumentMode applied at the format level; temporal media is inherently ThreadMode (chronological, sequential), and the markdown output is DocumentMode (timeless, randomly accessible)
- [[progressive disclosure means reading right not reading less]] — architectural dependency: the entire discovery layer architecture that progressive disclosure depends on assumes spatial, randomly accessible text; this note explains why that assumption is non-negotiable
- [[the generation effect requires active transformation not just storage]] — distinguishes two steps: transcription is format conversion (non-generative), while the agent structuring that follows (adding headings, wiki links, extracting claims) is the generative transformation that creates vault value

Topics:
- [[agent-cognition]]

@@ -0,0 +1,45 @@
---
description: Age thresholds convert the Ebbinghaus decay principle into actionable queue logic — notes under 24 hours are standard, 24-72 hours elevated, beyond 72 hours critical
kind: research
topics: ["[[processing-workflows]]"]
methodology: ["Cornell"]
source: [[3-3-cornell-note-taking-system]]
---

# temporal processing priority creates age-based inbox urgency

This note operationalizes what [[temporal separation of capture and processing preserves context freshness]] describes as a principle. The principle says context fades exponentially in the first 24 hours, which makes timing matter. This note provides the queue algorithm that makes the principle actionable.

The algorithm is straightforward. Every inbox item has an age. Age maps to priority tiers:

| Age | Priority | Rationale |
|-----|----------|-----------|
| < 24 hours | Standard | Context still fresh, full processing possible |
| 24-72 hours | Elevated | Decay accelerating, process before critical threshold |
| > 72 hours | Critical | Original context likely unrecoverable, salvage what remains |

This makes the system proactive rather than reactive. Instead of processing inbox items when the operator happens to invoke the skill, the system surfaces what needs attention based on temporal urgency. The agent becomes aware that notes are approaching decay thresholds before they cross them.

The underlying mechanism is Ebbinghaus decay applied to capture context rather than memorized content. When you dump a note during zero-friction capture, you have implicit understanding of why it matters, how it connects to what you were thinking, what prompted the insight. None of that gets written down. It lives in your head. And like any unrehearsed memory, it fades according to exponential decay curves. The queue algorithm converts this cognitive science into scheduling logic.

This is distinct from demand-driven processing. Since [[processing effort should follow retrieval demand]] suggests that content which gets retrieved more deserves more processing effort, there's a potential tension: demand-driven processing implies waiting to see what proves useful, while temporal priority implies processing immediately regardless of demonstrated demand. Both are valid. Demand-driven processing makes sense for content whose value is uncertain. Temporal priority makes sense for content whose value depends on context that will decay. The resolution: apply temporal priority to recent captures (where context loss is the risk), then shift to demand-driven prioritization for older content (where utilization signal matters more than faded context).

The implementation gap in current agent systems is that most inbox processing happens on manual trigger. The human decides when to process, and the agent processes whatever is there. This misses the temporal dimension entirely. Adding age-based urgency to the orchestration layer would mean: when the user invokes inbox processing, surface the oldest items first, or proactively notify when items approach critical thresholds.

Since [[continuous small-batch processing eliminates review dread]] argues for frequent small processing passes rather than infrequent large ones, temporal priority provides the selection logic for those passes. If you process 3-5 items per session, which 3-5? The ones closest to crossing decay thresholds. This combines the frequency pattern (continuous small batches) with the selection pattern (oldest first) into a coherent processing discipline.

There is a tension with [[batching by context similarity reduces switching costs in agent processing]]. Age-based priority says process the oldest items first because context decays. Similarity-based batching says process related items together to minimize switching costs. The resolution is layered: age determines which priority TIER items fall into (standard, elevated, critical), and within each tier, context similarity determines the ORDER. An item at 71 hours should not wait because a context-similar item at 2 hours exists. But among five items all in the elevated tier, processing the three graph-structure items before the two note-design items reduces switching overhead without violating temporal urgency.
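
That layered resolution reduces to a two-part sort key, sketched here. The item shape (a `captured` timestamp plus a `topic` string) and sorting by topic name as the similarity proxy are illustrative assumptions:

```python
from datetime import datetime, timedelta

def priority_tier(age: timedelta) -> int:
    """Map inbox-item age to a tier: 0=standard, 1=elevated, 2=critical."""
    if age < timedelta(hours=24):
        return 0
    if age < timedelta(hours=72):
        return 1
    return 2

def processing_order(items: list[dict], now: datetime) -> list[dict]:
    """Critical tier first; within a tier, group by topic to reduce
    context switching. Items are dicts with 'captured' and 'topic' keys."""
    return sorted(
        items,
        key=lambda i: (-priority_tier(now - i["captured"]), i["topic"]),
    )
```

Tier dominates, so a 71-hour item never waits behind a fresh one, while same-tier items still come out grouped by topic.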
|
|
32
|
+
---
|
|
33
|
+
|
|
34
|
+
Relevant Notes:
|
|
35
|
+
- [[temporal separation of capture and processing preserves context freshness]] — the principle this note operationalizes; that note explains WHY timing matters, this note provides the queue algorithm
|
|
36
|
+
- [[processing effort should follow retrieval demand]] — potential tension: demand-driven suggests waiting, temporal priority suggests urgency; resolved by applying temporal logic to recent captures, demand logic to established content
|
|
37
|
+
- [[continuous small-batch processing eliminates review dread]] — complementary: provides the frequency pattern, while this note provides the selection logic for which items to process in each batch
|
|
38
|
+
- [[spaced repetition scheduling could optimize vault maintenance]] — sibling application of Ebbinghaus decay: that note applies age-based scheduling to maintenance intervals, this note applies it to inbox processing priority; same cognitive science foundation targeting different domains
|
|
39
|
+
- [[WIP limits force processing over accumulation]] — complementary mechanisms: WIP limits answer when must I process? (forcing function), this note answers what should I process first? (selection algorithm)
|
|
40
|
+
- [[PKM failure follows a predictable cycle]] — temporal priority prevents the cascade: surfacing old items urgently prevents Stage 1 (Collector's Fallacy) and Stage 2 (Under-processing) from establishing
|
|
41
|
+
- [[generation effect gate blocks processing without transformation]] — orthogonal inbox mechanism: this note answers what to process first (oldest items), the generation gate answers what counts as processed (must have artifact); together they form complete inbox discipline
|
|
42
|
+
- [[batching by context similarity reduces switching costs in agent processing]] — tension: age says process oldest first, similarity says process related items together; resolved by letting age set the priority tier and similarity optimize sequence within each tier
|
|
43
|
+
|
|
44
|
+
Topics:
|
|
45
|
+
- [[processing-workflows]]
|
package/methodology/temporal separation of capture and processing preserves context freshness.md
ADDED
|
@@ -0,0 +1,39 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: Dump first then structure within 24 hours — Ebbinghaus decay means capture context fades fast, so inbox processing should be time-prioritized not just manually triggered
|
|
3
|
+
kind: research
|
|
4
|
+
topics: ["[[processing-workflows]]"]
|
|
5
|
+
methodology: ["Cornell"]
|
|
6
|
+
source: TFT research corpus (00_inbox/heinrich/)
|
|
7
|
+
---
|
|
8
|
+
|
|
9
|
+
# temporal separation of capture and processing preserves context freshness
|
|
10
|
+
|
|
11
|
+
This is a design principle borrowed from Cornell Note-Taking and grounded in memory science. You do not fill in cues during the lecture; you do it immediately after. The reasoning is straightforward: context fades. The understanding you have at capture time — why something matters, how it connects, what sparked the insight — erodes rapidly once you move on to other things. Because [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], this temporal separation is specifically the gap between phase one (capture) and phase two (process) of a universal pipeline — and the Ebbinghaus constraint governs how long that gap can be before the process step loses the contextual understanding that makes domain-specific transformation effective.
|
|
12
|
+
|
|
13
|
+
The Ebbinghaus forgetting curve provides the scientific basis. Without reinforcement, memory retention drops exponentially: roughly 50% lost within the first hour, 70% within 24 hours. This means the window for processing captured content is measured in hours, not days. A note dumped this morning and processed tonight retains more of its original meaning than one left in the inbox for a week.
|
|
14
|
+
|
|
15
|
+
For agent-delegated processing, this means "dump first, structure later" comes with a time constraint. The dump is zero-friction capture — don't interrupt the flow, don't force structure at the moment of insight. But "later" doesn't mean "whenever." Later means soon enough that you still remember why you captured it. There's a middle path between pure dumps and full processing: since [[schema templates reduce cognitive overhead at capture time]], pre-defined fields can enable faster capture than freeform writing while preserving more structure than raw dumps. The schema externalizes structural decisions, so capture becomes "fill these boxes" rather than "design this note" — reducing capture-time overhead while context remains fresh. Research on guided notes extends this insight: since [[guided notes might outperform post-hoc structuring for high-volume capture]], skeleton outlines provided before capture may work even better for streaming content (lectures, conversations) where information arrives faster than you can process it. The schema approach fits discrete content (a book, an article); guided notes may fit flows where real-time categorization would compete with listening. This temporal separation has a potential cost: since [[does agent processing recover what fast capture loses]] tests whether the human loses encoding benefits when capture is fast and generation is delegated, the same Ebbinghaus decay that justifies urgency may mean the human never deeply encodes content that agents process for them. This adds a time dimension to [[throughput matters more than accumulation]] — throughput isn't just about how FAST you process but WHEN, because context freshness decays exponentially within the first day.
|
|
16
|
+
|
|
17
|
+
The agent implication is temporal triggers for inbox processing. Rather than only processing when manually invoked, inbox agents should prioritize by age. [[temporal processing priority creates age-based inbox urgency]] operationalizes this into a queue algorithm: notes under 24 hours are standard priority, 24-72 hours elevated, beyond 72 hours critical. Notes approaching the 24-hour mark are urgent; notes beyond it have already lost context — they're still worth processing, but the original understanding may be unrecoverable. This temporal prioritization extends beyond inbox processing: since [[spaced repetition scheduling could optimize vault maintenance]] tests whether newly created notes need more frequent verification than mature notes, the Ebbinghaus principle that grounds this inbox urgency may also ground review scheduling — recently created notes have higher issue rates (weak descriptions, missing connections) than notes that have survived multiple reviews.
|
|
18
|
+
|
|
19
|
+
There is a complementary urgency at a shorter timescale. Since [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]], the moment a thought occurs and is not externalized, it becomes an open loop consuming working memory. Ebbinghaus governs the processing window — hours before context decays. Zeigarnik governs the capture window — seconds before the open loop begins draining attention. These are complementary urgencies: capture within seconds (close the loop), process within hours (preserve the context). A system that handles both timescales — zero-friction capture for Zeigarnik, temporal processing priority for Ebbinghaus — addresses the full temporal constraint on knowledge work.
|
|
20
|
+
|
|
21
|
+
This principle operates at a different level than [[fresh context per task preserves quality better than chaining phases]]. That claim addresses LLM context rot within sessions — later phases run on degraded attention, so each task gets isolation. This claim addresses human context decay between sessions — the human who captured the note loses context over time, so processing should happen before that decay sets in. Both principles point toward time-sensitivity, but for different reasons and at different scales.
|
|
22
|
+
|
|
23
|
+
The practical constraint for agent-operated knowledge systems: while the architecture supports prioritized processing, many implementations lack temporal triggers. Inbox items are processed in whatever order the operator chooses. Adding age-based priority to the orchestration layer or creating an inbox-triage skill that surfaces oldest items first would implement this principle fully. For now, it's a design decision that implementations should consider.
|
|
24
|
+
---
|
|
25
|
+
|
|
26
|
+
Relevant Notes:
|
|
27
|
+
- [[temporal processing priority creates age-based inbox urgency]] — operationalizes this principle into a queue algorithm: <24h standard, 24-72h elevated, >72h critical; converts the Ebbinghaus decay rationale into actionable scheduling logic
|
|
28
|
+
- [[fresh context per task preserves quality better than chaining phases]] — parallel principle at the agent level; this note addresses human context decay, that note addresses LLM context rot
|
|
29
|
+
- [[processing effort should follow retrieval demand]] — potential tension: demand-driven processing suggests delay, but Ebbinghaus suggests urgency
|
|
30
|
+
- [[throughput matters more than accumulation]] — this note adds the time dimension: throughput requires not just processing velocity but timely processing while context is fresh
|
|
31
|
+
- [[the generation effect requires active transformation not just storage]] — adds a time constraint to when generation must occur; processing after 24 hours generates from degraded context
|
|
32
|
+
- [[continuous small-batch processing eliminates review dread]] — complementary mechanism: this note addresses WHEN to process (urgency), that note addresses HOW OFTEN (continuous small batches prevent the accumulation that triggers dread)
|
|
33
|
+
- [[schema templates reduce cognitive overhead at capture time]] — the middle path between pure dumps and full processing: pre-defined fields enable faster capture than freeform while preserving more structure than raw dumps
|
|
34
|
+
- [[guided notes might outperform post-hoc structuring for high-volume capture]] — extends the middle path to streaming content: skeleton outlines may outperform post-hoc structuring when information flows faster than processing capacity
|
|
35
|
+
- [[Zeigarnik effect validates capture-first philosophy because open loops drain attention]] — complementary urgency at a shorter timescale: Ebbinghaus governs processing urgency (hours), Zeigarnik governs capture urgency (seconds); together they define the full temporal constraint on knowledge capture
|
|
36
|
+
- [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — structural frame: temporal separation governs the gap between the skeleton's phase one (capture) and phase two (process); the Ebbinghaus constraint applies universally because the process step always requires contextual understanding regardless of domain
|
|
37
|
+
|
|
38
|
+
Topics:
|
|
39
|
+
- [[processing-workflows]]
|
|
@@ -0,0 +1,162 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: Markdown files, YAML frontmatter, wiki links, MOC hierarchy, tree injection, description fields, topics footers, schema enforcement, semantic search, self space, and session rhythm — the
|
|
3
|
+
kind: research
|
|
4
|
+
topics: ["[[design-dimensions]]"]
|
|
5
|
+
methodology: ["Original"]
|
|
6
|
+
---
|
|
7
|
+
|
|
8
|
+
# ten universal primitives form the kernel of every viable agent knowledge system
|
|
9
|
+
|
|
10
|
+
The design space of knowledge systems is vast — since [[eight configuration dimensions parameterize the space of possible knowledge systems]], millions of theoretical configurations exist. But beneath all that variation, every viable system shares the same foundation. These ten primitives are the kernel: the non-negotiable base layer that every agent knowledge system needs to function, regardless of domain, platform, or methodology tradition. They are the one part that [[derivation generates knowledge systems from composable research claims not template customization]] never varies, while everything above them gets derived per use case.
|
|
11
|
+
|
|
12
|
+
The kernel works because it requires only what every agent platform has: filesystem access and text files. Since [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]], these primitives map entirely to the foundation and convention layers. No hooks, no skills, no orchestration, no MCP servers. An LLM that can read and write files can implement every primitive here. This is deliberate — since [[local-first file formats are inherently agent-native]], building on universals rather than platform-specific features means the kernel survives platform changes, migrations, and the death of any particular tool.
|
|
13
|
+
|
|
14
|
+
## The Ten Primitives
|
|
15
|
+
|
|
16
|
+
### 1. Markdown files with YAML frontmatter
|
|
17
|
+
|
|
18
|
+
Notes are plain text files with structured metadata in the header. The file IS the complete artifact — no database, no API, no external service. YAML frontmatter makes each file queryable via regex while remaining human-readable. This is the most fundamental choice: by selecting plain text over a database, the system gains universal portability at the cost of query sophistication.
|
|
19
|
+
|
|
20
|
+
**Why universal:** Since [[cognitive offloading is the architectural foundation for vault design]], the vault exists because both humans and agents need external structures to think beyond their native capacity. Plain text files are the lowest-friction external structure — any tool can read them, any agent can parse them, and they survive every platform transition because they depend on nothing.
|
|
21
|
+
|
|
22
|
+
**Minimum viable version:** A markdown file with a `---` delimited YAML block containing at least a `description` field.
|
|
23
|
+
|
|
24
|
+
**Validation:** Every `.md` file in the working directory has valid YAML frontmatter that parses without error.
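As a sketch of what this validation amounts to, the check below extracts the `---` delimited header with a regex instead of a real YAML library — enough for flat `key: value` frontmatter like the notes in this package, but an illustrative stand-in, not the package's own validator.

```python
import re

# Match a header delimited by --- lines at the very start of the file.
FRONTMATTER = re.compile(r"\A---\n(.*?)\n---\n", re.DOTALL)

def check_frontmatter(text):
    """Return the description field if the note has a valid '---' delimited
    header containing one, else None. A regex stand-in for a YAML parser."""
    m = FRONTMATTER.match(text)
    if not m:
        return None
    for line in m.group(1).splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "description":
            return value.strip()
    return None

note = """---
description: Dump first then structure within 24 hours
kind: research
---

# temporal separation of capture and processing
"""
print(check_frontmatter(note))
```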
|
|
25
|
+
|
|
26
|
+
### 2. Wiki links as graph edges
|
|
27
|
+
|
|
28
|
+
`[[note title]]` creates a navigable relationship between notes. Filenames are unique; links resolve by name, not path. Each link is an explicit, curated edge in the knowledge graph. Since [[spreading activation models how agents should traverse]], reading one note activates related notes through these explicit edges, enabling multi-hop reasoning without entity extraction pipelines or embedding infrastructure.
|
|
29
|
+
|
|
30
|
+
**Why universal:** Since [[each new note compounds value by creating traversal paths]], link density matters more than note count. A folder of unlinked files is a filing cabinet. A graph of linked files is a thinking structure. The difference is wiki links. And since [[inline links carry richer relationship data than metadata fields]], prose-embedded links encode WHY notes connect, not just THAT they connect.
|
|
31
|
+
|
|
32
|
+
**Minimum viable version:** At least one `[[wiki link]]` per note, either inline or in a footer section.
|
|
33
|
+
|
|
34
|
+
**Validation:** All wiki links resolve to existing files. No dangling links to non-existent notes.
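A dangling-link check is small once note titles are in memory. The sketch below is hypothetical and independent of the package's own hooks; it assumes titles double as filenames (links resolve by name, not path) and skips `|` alias and `#` anchor syntax by stopping the capture at those characters.

```python
import re

# Capture the target portion of [[target]], [[target|alias]], [[target#heading]].
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")

def dangling_links(notes):
    """notes maps note title -> body text. Return link targets that
    resolve to no existing note, sorted for stable reporting."""
    titles = set(notes)
    missing = set()
    for body in notes.values():
        for target in WIKI_LINK.findall(body):
            if target.strip() not in titles:
                missing.add(target.strip())
    return sorted(missing)

notes = {
    "flat files break at retrieval scale":
        "See [[descriptions are retrieval filters not summaries]].",
    "descriptions are retrieval filters not summaries":
        "Related: [[a note that does not exist]].",
}
print(dangling_links(notes))
```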
|
|
35
|
+
|
|
36
|
+
### 3. MOC hierarchy for attention management
|
|
37
|
+
|
|
38
|
+
Maps of Content organize notes into navigable topic areas. Hub links to domains, domains link to topics, topics link to notes. Since [[MOCs are attention management devices not just organizational tools]], MOCs reduce context-switching cost by presenting topic state immediately — the agent reads one file and knows what a topic contains, what tensions exist, and what gaps remain. Without MOCs, since [[navigational vertigo emerges in pure association systems without local hierarchy]], a flat sea of linked notes becomes disorienting at scale.
|
|
39
|
+
|
|
40
|
+
**Why universal:** Every knowledge system, regardless of domain, needs navigation structures that manage attention. A therapy journal needs mood-pattern MOCs. A research vault needs topic MOCs. A relationship tracker needs person MOCs. The domain vocabulary changes; the structural need for curated navigation hubs does not.
|
|
41
|
+
|
|
42
|
+
**Minimum viable version:** At least one hub MOC that links to all topic areas. Topic MOCs with Core Ideas sections containing context phrases explaining each linked note.
|
|
43
|
+
|
|
44
|
+
**Validation:** Every note appears in at least one MOC. No orphan notes outside the navigation structure (after initial creation window).
|
|
45
|
+
|
|
46
|
+
### 4. Tree injection at session start
|
|
47
|
+
|
|
48
|
+
The agent sees the full file structure immediately upon session start. This provides orientation before action — the agent knows what exists before deciding what to read. For CLI agents, this means a hook or startup script that injects the directory tree. For messaging agents, this means loading a workspace map file.
|
|
49
|
+
|
|
50
|
+
**Why universal:** Since [[fresh context per task preserves quality better than chaining phases]], each session starts with a limited context budget. Tree injection spends a small fraction of that budget to provide complete structural awareness, which then guides efficient context loading for the rest of the session. Without it, the agent wastes context on discovery that could go toward productive reasoning.
|
|
51
|
+
|
|
52
|
+
**Minimum viable version:** A file listing (`tree` or equivalent) showing all directories and markdown files, loaded at session start. Maximum three levels deep to stay within reasonable token budgets.
|
|
53
|
+
|
|
54
|
+
**Validation:** The agent can identify any file's path without searching. Tree is current (no stale entries, no missing files).
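A minimal rendering sketch, assuming `/`-separated paths relative to the vault root and the three-level depth cap described above; a real implementation would shell out to `tree` or walk the filesystem, and the function name here is illustrative.

```python
def render_tree(paths, max_depth=3):
    """Render a sorted tree of markdown file paths, omitting anything
    deeper than max_depth levels to keep the session-start injection
    within a small token budget."""
    lines, seen = [], set()
    for path in sorted(paths):
        parts = path.split("/")
        if len(parts) > max_depth:        # deeper than the budget: skip
            continue
        for i in range(len(parts) - 1):   # emit each directory line once
            d = "/".join(parts[: i + 1])
            if d not in seen:
                seen.add(d)
                lines.append("  " * i + parts[i] + "/")
        lines.append("  " * (len(parts) - 1) + parts[-1])
    return "\n".join(lines)

paths = [
    "self/identity.md",
    "methodology/flat files break at retrieval scale.md",
    "00_inbox/deep/nested/too-deep.md",   # beyond the cap, left out of the map
]
print(render_tree(paths))
```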
|
|
55
|
+
|
|
56
|
+
### 5. Description field for progressive disclosure
|
|
57
|
+
|
|
58
|
+
Every note has a `description` in its YAML frontmatter — one sentence (~150 characters) that adds information beyond the title. The title gives the claim; the description gives scope, mechanism, or implication. Since [[descriptions are retrieval filters not summaries]], descriptions enable the agent to decide whether to read a note without loading it. This is progressive disclosure at the note level: title is layer one, description is layer two, content is layer three.
|
|
59
|
+
|
|
60
|
+
**Why universal:** Since [[flat files break at retrieval scale]], at 50 notes an agent can read everything, but at 500 retrieval becomes the bottleneck. Descriptions become the filter that determines what enters context. Since [[good descriptions layer heuristic then mechanism then implication]], effective descriptions compress a note's value proposition into a single sentence. Without descriptions, every navigation decision requires loading the full note — an O(n) cost that becomes prohibitive as the vault grows.
|
|
61
|
+
|
|
62
|
+
**Minimum viable version:** Every note has a `description` field in YAML. The description does not merely restate the title.
|
|
63
|
+
|
|
64
|
+
**Validation:** `description` field present on every note. Description text differs substantively from the title.
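The "differs substantively from the title" check can be approximated mechanically. The heuristic below is an assumption for illustration — the three-new-words threshold is a guess to tune against real notes, not a rule from this document.

```python
def adds_information(title, description):
    """Heuristic: a description passes if it contributes several words
    the title does not already contain. Threshold of 3 is arbitrary."""
    title_words = set(title.lower().split())
    desc_words = set(description.lower().replace(",", " ").split())
    return len(desc_words - title_words) >= 3

# A pure restatement of the title contributes nothing new and fails.
print(adds_information(
    "flat files break at retrieval scale",
    "flat files break at retrieval scale"))
```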
|
|
65
|
+
|
|
66
|
+
### 6. Topics footer linking notes to MOCs
|
|
67
|
+
|
|
68
|
+
Every note declares which MOC(s) it belongs to via a `topics` field (YAML array of wiki links). This is the bidirectional link that completes the MOC hierarchy: MOCs link down to notes via Core Ideas, notes link up to MOCs via Topics. The two-way connection ensures that neither direction goes stale independently.
|
|
69
|
+
|
|
70
|
+
**Why universal:** Without Topics, notes can drift away from their MOCs — the MOC links to the note, but there's no record in the note of where it belongs, so adding the note to a second MOC becomes guesswork. More fundamentally, topics enable the query `rg '^topics:.*\[\[topic-name\]\]'`, which instantly finds all notes in a topic area without reading any MOC file. This turns a navigation structure into a queryable relationship.
|
|
71
|
+
|
|
72
|
+
**Minimum viable version:** Every note has a `topics` field containing at least one wiki link to a MOC.
|
|
73
|
+
|
|
74
|
+
**Validation:** `topics` field present on every non-MOC note. Every wiki link in `topics` resolves to a file with `type: moc`.
|
|
75
|
+
|
|
76
|
+
### 7. Schema enforcement via validation
|
|
77
|
+
|
|
78
|
+
Templates define required fields, valid enum values, and constraints. A validation mechanism (script, hook, or manual check) verifies notes against their template. Since [[schema enforcement via validation agents enables soft consistency]], validation catches drift that instruction-following alone cannot prevent — as context fills, compliance with schema instructions degrades, but a validation check is deterministic.
|
|
79
|
+
|
|
80
|
+
**Why universal:** Schema drift is inevitable without enforcement. An agent that creates 50 notes will forget a required field on the 47th. A validation pass catches it. The enforcement level varies by platform (hooks for Claude Code, manual runs for minimal platforms), but the principle — templates as single source of truth, validation against templates — is universal.
|
|
81
|
+
|
|
82
|
+
**Minimum viable version:** One template per note type defining required fields. A validation script or procedure that checks all notes against their template.
|
|
83
|
+
|
|
84
|
+
**Validation:** Running the validation procedure reports zero errors on a healthy vault.
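A sketch of template-driven validation over already-parsed frontmatter. The template shape (`required` fields, `enums` constraints) and the registry name are hypothetical, chosen to match the fields this document's notes actually use.

```python
# Hypothetical template registry: one entry per note type.
TEMPLATES = {
    "research": {
        "required": {"description", "kind", "topics"},
        "enums": {"kind": {"research", "moc"}},
    },
}

def validate(note_type, frontmatter):
    """Check a parsed frontmatter dict against its template.
    Returns a list of error strings; an empty list means healthy."""
    template = TEMPLATES[note_type]
    errors = []
    for field in sorted(template["required"] - frontmatter.keys()):
        errors.append(f"missing required field: {field}")
    for field, allowed in template["enums"].items():
        if field in frontmatter and frontmatter[field] not in allowed:
            errors.append(f"invalid value for {field}: {frontmatter[field]!r}")
    return errors

healthy = {
    "description": "one sentence beyond the title",
    "kind": "research",
    "topics": ["[[design-dimensions]]"],
}
print(validate("research", healthy))   # a healthy note reports no errors
```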
|
|
85
|
+
|
|
86
|
+
### 8. Semantic search capability
|
|
87
|
+
|
|
88
|
+
Beyond keyword search, the system needs meaning-based discovery that finds conceptually related content across different vocabularies. A note about "friction in learning systems" should connect to a note about "errors as productive feedback" even though they share no keywords. Since [[spreading activation models how agents should traverse]], structural traversal through wiki links is the primary discovery mechanism, but semantic search catches what the graph misses — notes that should be connected but aren't yet.
|
|
89
|
+
|
|
90
|
+
**Why universal:** At scale, keyword search misses connections between notes that use different vocabulary for the same concept. Semantic search (via embedding tools like qmd, or even LLM-assisted search) complements structural navigation. The specific implementation varies — some platforms have embedding infrastructure, others use LLM-based similarity — but the capability of finding meaning across vocabularies is necessary for connection density to grow.
|
|
91
|
+
|
|
92
|
+
**Minimum viable version:** Any mechanism that finds conceptually related notes beyond exact keyword matches. This could be a dedicated embedding tool, an LLM-assisted search pass, or even a periodic manual review informed by topic adjacency.
|
|
93
|
+
|
|
94
|
+
**Validation:** Given a note, the search mechanism returns at least one related note that shares no significant keywords with the query.
|
|
95
|
+
|
|
96
|
+
### 9. Self space for agent persistent memory
|
|
97
|
+
|
|
98
|
+
A dedicated directory where the agent stores identity, methodology, goals, and accumulated memory. Read at session start, updated at session end. Since [[session handoff creates continuity without persistent memory]], the self space is what gives each fresh session a briefing from the previous one. Without it, every session starts blank — the agent doesn't know who it is, what it's working on, or what it learned yesterday.
|
|
99
|
+
|
|
100
|
+
**Why universal:** Agent continuity across sessions is not optional for knowledge work. Since [[the vault constitutes identity for agents]], losing the self space is not merely inconvenient but identity-erasing — without it, the agent reverts to base weights with no distinguishing characteristics. The self space solves this through structure rather than capability: the agent reads files to remember, rather than requiring persistent memory infrastructure. The pattern mirrors the main knowledge space (atomic notes, MOCs) applied to the agent's own cognition.
|
|
101
|
+
|
|
102
|
+
**Minimum viable version:** A `self/` directory with at least `identity.md` (who the agent is), `methodology.md` (how it works), and `goals.md` (current threads). Agent reads these at every session start.
|
|
103
|
+
|
|
104
|
+
**Validation:** Self space exists with core MOCs populated. Session start procedure includes reading self/.
|
|
105
|
+
|
|
106
|
+
### 10. Session rhythm: orient, work, persist
|
|
107
|
+
|
|
108
|
+
Every session follows a three-phase rhythm. Orient: read self space and relevant MOCs to understand current state. Work: execute the task, surfacing connections as you go. Persist: update MOCs, capture observations, externalize anything learned. This rhythm is encoded in the context file as a non-negotiable procedure.
|
|
109
|
+
|
|
110
|
+
**Why universal:** Since [[closure rituals create clean breaks that prevent attention residue bleed]], explicit session boundaries prevent both cold starts (no orientation) and lost work (no persistence). The orient phase ensures the agent doesn't duplicate effort. The persist phase ensures discoveries survive the session. Without this rhythm, knowledge work degrades into disconnected episodes that don't build on each other.
|
|
111
|
+
|
|
112
|
+
**Minimum viable version:** Context file instructions specifying: (1) read self/ at session start, (2) capture insights during work, (3) update MOCs and push changes at session end.
|
|
113
|
+
|
|
114
|
+
**Validation:** Session start loads self/ orientation. Session end produces observable state changes (updated files, committed changes).
|
|
115
|
+
|
|
116
|
+
## What the Kernel Enables
|
|
117
|
+
|
|
118
|
+
These ten primitives together create a system where:
|
|
119
|
+
- Notes are portable, queryable, and agent-readable (primitives 1, 5)
|
|
120
|
+
- The knowledge graph grows through explicit, reasoned connections (primitive 2)
|
|
121
|
+
- Navigation scales with content through curated attention hubs (primitives 3, 6)
|
|
122
|
+
- Agents orient immediately without wasting context on discovery (primitive 4)
|
|
123
|
+
- Quality is enforced structurally, not just through instructions (primitive 7)
|
|
124
|
+
- Conceptual connections are discoverable across vocabularies (primitive 8)
|
|
125
|
+
- Agent identity and continuity persist across sessions (primitive 9)
|
|
126
|
+
- Every session builds on prior work and preserves new understanding (primitive 10)
|
|
127
|
+
|
|
128
|
+
The navigation primitives in particular (2, 3, 4, 5) do not operate in isolation — since [[structure enables navigation without reading everything]], wiki links, MOCs, claim titles, and descriptions compose into a discovery layer stack where each mechanism serves a distinct filtering function and the composition turns retrieval from linear scan into targeted traversal. The kernel enables this composition by guaranteeing all four primitives are always present.
|
|
129
|
+
|
|
130
|
+
Everything above the kernel — the eight configuration dimensions, the processing pipeline phases, the specific methodology tradition, the domain-specific schema extensions, the automation level — varies per use case. The kernel does not. Since [[methodology traditions are named points in a shared configuration space not competing paradigms]], Zettelkasten, PARA, Cornell, Evergreen, and GTD all include these ten primitives, each in its own way. The kernel is the shared substrate beneath all named configurations.
|
|
131
|
+
|
|
132
|
+
## The Derivation Constant
|
|
133
|
+
|
|
134
|
+
For the derivation engine, the kernel is the constant term. Since [[derivation generates knowledge systems from composable research claims not template customization]], derivation decides configuration values for the eight dimensions — but the kernel is always included, never derived. This simplifies derivation considerably: instead of deciding everything from scratch, the engine inherits these ten primitives and focuses design effort on dimension selection and domain adaptation. Since [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]], the processing skeleton sits atop the kernel, using its primitives (notes for capture, links for connection, MOCs for navigation, validation for verification) while adding domain-specific processing logic.
|
|
135
|
+
|
|
136
|
+
The kernel is also the portability guarantee. Since [[configuration dimensions interact so choices in one create pressure on others]], a system might need heavy processing, deep navigation, and dense schemas — all of which require platform-specific infrastructure. But if the platform changes, the kernel survives intact. The system loses its automation and orchestration layers but retains its intellectual content and navigation structure. This is exit velocity by design: the kernel is what you take with you when you leave.
|
|
137
|
+
|
|
138
|
+
---
|
|
139
|
+
|
|
140
|
+
|
|
141
|
+
Relevant Notes:
|
|
142
|
+
- [[structure enables navigation without reading everything]] — synthesis showing how primitives 2, 3, 4, 5 compose into a discovery layer stack; the kernel guarantees these four mechanisms are always co-present, and their composition is what turns retrieval from linear scan into targeted traversal
|
|
143
|
+
- [[four abstraction layers separate platform-agnostic from platform-dependent knowledge system features]] — the kernel maps to the foundation and convention layers; everything here works without hooks, skills, or orchestration
|
|
144
|
+
- [[cognitive offloading is the architectural foundation for vault design]] — the cognitive science grounding: each primitive exists because agents need external structures to think beyond context window limits
|
|
145
|
+
- [[eight configuration dimensions parameterize the space of possible knowledge systems]] — dimensions parameterize variation ABOVE the kernel; these ten primitives are what never varies
|
|
146
|
+
- [[derivation generates knowledge systems from composable research claims not template customization]] — derivation navigates dimensions while inheriting the kernel unchanged; the kernel is the derivation constant
- [[every knowledge domain shares a four-phase processing skeleton that diverges only in the process step]] — the processing skeleton sits atop the kernel; capture-process-connect-verify needs these primitives to operate
- [[local-first file formats are inherently agent-native]] — explains why the kernel requires only filesystem access: plain text with embedded metadata needs no infrastructure
- [[MOCs are attention management devices not just organizational tools]] — grounds primitive 3 in cognitive science: MOCs reduce context-switching cost by presenting topic state immediately
- [[session handoff creates continuity without persistent memory]] — grounds primitive 10: externalized state in files gives each fresh session a briefing from the previous one
- [[configuration dimensions interact so choices in one create pressure on others]] — dimension interactions operate above the kernel; the kernel is the invariant substrate that all coherent configurations share
- [[methodology traditions are named points in a shared configuration space not competing paradigms]] — every methodology tradition includes these ten primitives; they are the shared substrate beneath all named configurations
- [[descriptions are retrieval filters not summaries]] — grounds primitive 5: the description field enables progressive disclosure, letting agents decide what to read before loading
- [[spreading activation models how agents should traverse]] — grounds primitive 2: wiki links implement spreading activation for agents, priming related concepts through explicit edges
- [[schema enforcement via validation agents enables soft consistency]] — grounds primitive 7: validation against templates catches drift that instruction-following alone cannot prevent
- [[premature complexity is the most common derivation failure mode]] — the kernel defines the floor of the complexity budget: minimum viable configuration cannot go below these ten primitives, and the budget constrains initial derivation between this floor and the locally-justified-but-globally-unsustainable maximum
- [[composable knowledge architecture builds systems from independent toggleable modules not monolithic templates]] — the composable architecture builds its module dependency graph on top of the kernel: foundation modules correspond to these primitives, and every higher-layer module assumes the kernel is present
- [[the vault constitutes identity for agents]] — why primitives 9 and 10 are existential, not just operational: if the vault constitutes identity, then the self space (primitive 9) and session rhythm (primitive 10) are identity-maintenance infrastructure, not convenience features; without them the agent loses not just continuity but selfhood
- [[flat files break at retrieval scale]] — the failure the kernel prevents: without these ten primitives, any agent knowledge system degrades to flat files at scale, hitting the retrieval wall where content becomes unfindable and agent cognition narrows to what can be located by accident

Topics:
- [[design-dimensions]]
@@ -0,0 +1,38 @@
---
description: Agents can apply the testing effect to verify vault quality by predicting note content from title+description, then checking against actual content
kind: research
topics: ["[[agent-cognition]]"]
source: TFT research corpus (00_inbox/heinrich/)
---

# testing effect could enable agent knowledge verification

The testing effect (Roediger & Karpicke, 2006) demonstrates that self-testing strengthens memory more than re-reading. This is why Cornell Note-Taking emphasizes the cue column: covering notes and quizzing yourself on the content creates stronger retention than passively reviewing.

Hypothesis: agents can apply this same pattern to verify vault quality. Instead of strengthening memory, the testing effect reveals whether descriptions enable retrieval. An agent reads only the title and description, predicts what the note should contain, then checks against actual content. Notes that fail prediction need better descriptions or restructuring.
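The prediction-then-verify loop can be sketched in a few lines. This is a minimal illustration, not the experiment's protocol: the frontmatter parsing and the word-overlap score are hypothetical stand-ins for an agent's prose prediction and its judgment of the match.

```python
import re


def split_note(text: str) -> tuple[str, str]:
    """Return (description, body) from a note with YAML frontmatter."""
    m = re.search(r"^description:\s*(.+)$", text, re.MULTILINE)
    description = m.group(1).strip() if m else ""
    body = re.sub(r"\A---\n.*?\n---\n", "", text, flags=re.DOTALL)
    return description, body.strip()


def content_words(text: str) -> set[str]:
    # crude lexical footprint: lowercase words longer than three letters
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}


def retrieval_test(title: str, description: str, body: str) -> float:
    """Score how well title + description predict the note's content.

    A real agent would write out a predicted summary and judge the match;
    word overlap is a cheap mechanical stand-in for that judgment.
    Returns the fraction of cue words that appear in the body (0.0-1.0).
    """
    cue = content_words(title + " " + description)
    actual = content_words(body)
    if not cue:
        return 0.0
    return len(cue & actual) / len(cue)
```

A note scoring low under any such check would be flagged for the remedy the hypothesis names: a rewritten description or a restructured note.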

This is quality assurance through simulated use rather than static linting. Because [[skills encode methodology so manual execution bypasses quality gates]], retrieval testing can encode this workflow as a quality gate — the testing effect methodology becomes enforceable rather than optional. The experiment tests whether the pattern produces measurable quality improvements. Since [[retrieval verification loop tests description quality at scale]], this principle extends from individual verification to systematic vault-wide assessment — running prediction-then-verify cycles across all notes with scoring and tracking transforms description quality from subjective judgment into a measurable property. An alternative implementation emerges from [[mnemonic medium embeds verification into navigation]], where verification happens during traversal rather than as a separate phase — link context phrases become the prompts that test whether relationships hold.

The sibling experiment [[metacognitive confidence can diverge from retrieval capability]] tests a prior question: whether verification is even necessary, or whether structural quality signals (descriptions exist, links are dense) reliably predict retrieval success without testing. If structural metrics correlate with retrieval, retrieval testing is redundant with good structure. If they diverge, retrieval testing becomes essential.

The testing effect directly validates whether [[progressive disclosure means reading right not reading less]]. Progressive disclosure assumes that descriptions provide enough information to decide what deserves full reading. If an agent can't predict note content from title and description, the disclosure layer has failed — the filtering information doesn't match what's being filtered. Retrieval testing is the verification mechanism for this assumption.

---

Relevant Notes:
- [[the generation effect requires active transformation not just storage]] — related cognitive effect; generation creates hooks, testing reveals whether hooks work
- [[summary coherence tests composability before filing]] — sibling pattern: this note tests description quality via prediction, that note tests structural coherence via summary generation; both use attempted generation as a diagnostic
- [[descriptions are retrieval filters not summaries]] — the theory this experiment tests: if descriptions are filters, testing should reveal filter quality
- [[good descriptions layer heuristic then mechanism then implication]] — provides testable structure: retrieval test failures should map to missing layers (no mechanism, no implication)
- [[does agent processing recover what fast capture loses]] — sibling experiment on whether agent-driven quality processes work
- [[claims must be specific enough to be wrong]] — the specificity anti-pattern applies to descriptions: paraphrase descriptions fail recite because they lack the specificity needed to enable prediction
- [[skills encode methodology so manual execution bypasses quality gates]] — retrieval testing is a skill encoding the testing effect as a quality gate; without the skill, this verification would be ad hoc and inconsistent
- [[progressive disclosure means reading right not reading less]] — recite validates whether progressive disclosure works: if descriptions don't predict content, the disclosure layer has failed
- [[spaced repetition scheduling could optimize vault maintenance]] — sibling Cornell-derived experiment: this note tests what verification reveals (description quality), that note tests when verification happens (adaptive scheduling); both optimize maintenance through cognitive science principles
- [[maintenance targeting should prioritize mechanism and theory notes]] — provides targeting guidance: reweave this experiment toward description quality theory notes, not MOC neighbors
- [[verbatim risk applies to agents too]] — interdependent experiment: if the testing effect validates, retrieval testing becomes the detection mechanism for verbatim-style outputs that look structured but contain no genuine synthesis
- [[metacognitive confidence can diverge from retrieval capability]] — sibling experiment: this tests WHETHER recite works as verification, that tests WHETHER verification is necessary (because structural metrics may produce false confidence without testing)
- [[retrieval verification loop tests description quality at scale]] — operationalizes this principle at scale: systematic scoring across all vault notes with 5-point measurement, pattern detection, and continuous improvement tracking
- [[dual-coding with visual elements could enhance agent traversal]] — sibling Cornell-derived experiment: this tests text-based verification via prediction, that tests visual enhancement via dual encoding; both propose alternative channels for agent cognition

Topics:
- [[agent-cognition]]
package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md
ADDED
@@ -0,0 +1,40 @@
---
description: The same metadata-then-depth loading pattern that governs note retrieval in the vault also governs skill loading in the AgentSkills standard, revealing a structural isomorphism driven by shared
kind: research
topics: ["[[discovery-retrieval]]", "[[agent-cognition]]"]
methodology: ["Original"]
source: [[agent-platform-capabilities-research-source]]
---

# the AgentSkills standard embodies progressive disclosure at the skill level

The AgentSkills standard structures skill loading in three layers: metadata (roughly 100 tokens) loads at startup for all skills so the agent knows what is available, full instructions (recommended under 5000 tokens) load when the skill is activated, and supporting resources load only when needed. This is the same progressive disclosure pattern the vault uses for knowledge content -- file tree gives awareness, descriptions give filtering, full notes give depth -- operating at the infrastructure level instead of the content level.

The structural parallel is not a coincidence. Both systems face the same fundamental constraint: context windows are finite, and what fills them determines reasoning quality. Since [[progressive disclosure means reading right not reading less]], the goal is a context window dense with relevant material, not one stuffed with everything available. At the note level, since [[structure enables navigation without reading everything]], four structural mechanisms -- wiki links, MOCs, claim titles, and YAML descriptions -- compose into the discovery layer stack that makes this filtering operational. At the skill level, this means metadata filters before full instructions load. The pattern recurs because the constraint recurs -- and at the content level, the composition of four distinct filtering mechanisms demonstrates how progressive disclosure goes beyond a single technique to become an architectural pattern.

What makes this more than an analogy is that both layers interact within the same context window. An agent's context budget absorbs skill descriptions and note descriptions simultaneously. Since [[skill context budgets constrain knowledge system complexity on agent platforms]], Claude Code allocates roughly 2% of context (with a 16,000-character fallback) for all skill descriptions, which means a knowledge system with many skills competes with note content for the same limited space. The progressive disclosure pattern is not merely similar across levels -- it is the same resource management strategy applied twice to the same scarce resource. And since [[metadata reduces entropy enabling precision over recall]], skill metadata is information-theoretically identical to note descriptions: both are pre-computed low-entropy representations that shrink the search space before full content loads. The difference is that skill metadata operates under a platform-enforced hard budget while note descriptions face only the softer constraint of attention degradation.

Since [[skills encode methodology so manual execution bypasses quality gates]], skills carry accumulated learning in their instructions and quality gates. The progressive disclosure loading pattern ensures this accumulated learning enters context only when relevant, not constantly. A vault with twenty skills would overwhelm context if every skill's full instructions loaded at startup. The metadata layer lets the agent hold awareness of all twenty skills while loading the full methodology only for the one it needs right now.

The pattern also has pragmatic standardization value. Since [[platform fragmentation means identical conceptual operations require different implementations across agent environments]], the AgentSkills standard reduces fragmentation at the skill metadata level by defining a common SKILL.md format that works across twenty-plus platforms. The metadata layer is standardizable precisely because it is the progressive disclosure layer -- it captures what any agent needs to know (what the skill does, when to use it) while leaving platform-specific implementation details in the full instructions that load on demand. The standard solves fragmentation where the cost of fragmentation is lowest: at the awareness layer, where agents need to know what skills exist, not how they work internally.

The complementary infrastructure story connects to how [[hooks enable context window efficiency by delegating deterministic checks to external processes]]. Hooks save context by running validation outside the context window entirely. Skill progressive disclosure saves context by deferring full instructions until needed. Together, these mechanisms form the complete context management strategy at the infrastructure level: hooks handle what can run externally, progressive disclosure handles what must enter context but can enter on demand rather than at startup.

The deeper implication is architectural: any system that operates under context window constraints will converge on progressive disclosure patterns at every level of its stack. But convergence requires the platform to support the pattern. Since [[platform capability tiers determine which knowledge system features can be implemented]], the AgentSkills standard's progressive disclosure only functions at tier one and tier two, where skill infrastructure exists. At tier three, there are no skills to disclose progressively -- the methodology lives entirely in instruction-level context files, and the metadata-then-depth loading pattern collapses to a single layer. The standard standardizes an inherently tier-dependent feature, which means its portability value is bounded by the tier distribution of actual agent platforms. Since [[intermediate packets enable assembly over creation]], skill metadata functions as a packet specification -- it tells the orchestrator what the skill does and when to invoke it, enabling workflow assembly from modular skills rather than monolithic execution. The metadata-then-instructions pattern is literally a packet structure: compressed awareness enabling informed assembly decisions. The question is not whether to use progressive disclosure but at which levels it has been implemented and at which levels it is still missing.

---

Relevant Notes:
- [[progressive disclosure means reading right not reading less]] -- foundation: the vault philosophy that this claim extends to infrastructure
- [[structure enables navigation without reading everything]] -- content-level counterpart: composes wiki links, MOCs, claim titles, and descriptions into the discovery layer stack that demonstrates progressive disclosure as an architectural pattern, not just a single technique
- [[skills encode methodology so manual execution bypasses quality gates]] -- establishes that skills carry accumulated learning, making their loading pattern consequential
- [[descriptions are retrieval filters not summaries]] -- the note-level instance of the same pattern: descriptions enable filtering before full content loading
- [[skill context budgets constrain knowledge system complexity on agent platforms]] -- develops the hard budget constraint (2% / 16k chars) that makes this progressive disclosure pattern not optional but mandatory
- [[metadata reduces entropy enabling precision over recall]] -- provides the information-theoretic formalization: skill metadata is entropy-reducing pre-computation, the same mechanism that justifies note descriptions
- [[platform fragmentation means identical conceptual operations require different implementations across agent environments]] -- the AgentSkills standard partially solves fragmentation by standardizing the metadata layer while leaving platform-specific capabilities outside scope
- [[hooks enable context window efficiency by delegating deterministic checks to external processes]] -- complementary context efficiency mechanism: hooks save context by running outside the window, progressive disclosure saves context by loading on demand; together they form the complete infrastructure-level context management story
- [[intermediate packets enable assembly over creation]] -- skill metadata functions as a packet specification: it tells the orchestrator what the skill does and when to invoke it, enabling workflow assembly from modular skills
- [[platform capability tiers determine which knowledge system features can be implemented]] -- the progressive disclosure pattern at the skill level only operates at tiers one and two; tier-three platforms lack skill infrastructure entirely, collapsing the metadata-then-depth pattern to a single instruction layer

Topics:
- [[discovery-retrieval]]
- [[agent-cognition]]