arscontexta 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +11 -0
- package/.claude-plugin/plugin.json +22 -0
- package/README.md +683 -0
- package/agents/knowledge-guide.md +49 -0
- package/bin/cli.mjs +66 -0
- package/generators/agents-md.md +240 -0
- package/generators/claude-md.md +379 -0
- package/generators/features/atomic-notes.md +124 -0
- package/generators/features/ethical-guardrails.md +58 -0
- package/generators/features/graph-analysis.md +188 -0
- package/generators/features/helper-functions.md +92 -0
- package/generators/features/maintenance.md +164 -0
- package/generators/features/methodology-knowledge.md +70 -0
- package/generators/features/mocs.md +144 -0
- package/generators/features/multi-domain.md +61 -0
- package/generators/features/personality.md +71 -0
- package/generators/features/processing-pipeline.md +428 -0
- package/generators/features/schema.md +149 -0
- package/generators/features/self-evolution.md +229 -0
- package/generators/features/self-space.md +78 -0
- package/generators/features/semantic-search.md +99 -0
- package/generators/features/session-rhythm.md +85 -0
- package/generators/features/templates.md +85 -0
- package/generators/features/wiki-links.md +88 -0
- package/generators/soul-md.md +121 -0
- package/hooks/hooks.json +45 -0
- package/hooks/scripts/auto-commit.sh +44 -0
- package/hooks/scripts/session-capture.sh +35 -0
- package/hooks/scripts/session-orient.sh +86 -0
- package/hooks/scripts/write-validate.sh +42 -0
- package/methodology/AI shifts knowledge systems from externalizing memory to externalizing attention.md +59 -0
- package/methodology/BM25 retrieval fails on full-length descriptions because query term dilution reduces match scores.md +39 -0
- package/methodology/IBIS framework maps claim-based architecture to structured argumentation.md +58 -0
- package/methodology/LLM attention degrades as context fills.md +49 -0
- package/methodology/MOC construction forces synthesis that automated generation from metadata cannot replicate.md +49 -0
- package/methodology/MOC maintenance investment compounds because orientation savings multiply across every future session.md +41 -0
- package/methodology/MOCs are attention management devices not just organizational tools.md +51 -0
- package/methodology/PKM failure follows a predictable cycle.md +50 -0
- package/methodology/ThreadMode to DocumentMode transformation is the core value creation step.md +52 -0
- package/methodology/WIP limits force processing over accumulation.md +53 -0
- package/methodology/Zeigarnik effect validates capture-first philosophy because open loops drain attention.md +42 -0
- package/methodology/academic research uses structured extraction with cross-source synthesis.md +566 -0
- package/methodology/adapt the four-phase processing pipeline to domain-specific throughput needs.md +197 -0
- package/methodology/agent notes externalize navigation intuition that search cannot discover and traversal cannot reconstruct.md +48 -0
- package/methodology/agent self-memory should be architecturally separate from user knowledge systems.md +48 -0
- package/methodology/agent session boundaries create natural automation checkpoints that human-operated systems lack.md +56 -0
- package/methodology/agent-cognition.md +107 -0
- package/methodology/agents are simultaneously methodology executors and subjects creating a unique trust asymmetry.md +66 -0
- package/methodology/aspect-oriented programming solved the same cross-cutting concern problem that hooks solve.md +39 -0
- package/methodology/associative ontologies beat hierarchical taxonomies because heterarchy adapts while hierarchy brittles.md +53 -0
- package/methodology/attention residue may have a minimum granularity that cannot be subdivided.md +46 -0
- package/methodology/auto-commit hooks eliminate prospective memory failures by converting remember-to-act into guaranteed execution.md +47 -0
- package/methodology/automated detection is always safe because it only reads state while automated remediation risks content corruption.md +42 -0
- package/methodology/automation should be retired when its false positive rate exceeds its true positive rate or it catches zero issues.md +56 -0
- package/methodology/backlinks implicitly define notes by revealing usage context.md +35 -0
- package/methodology/backward maintenance asks what would be different if written today.md +62 -0
- package/methodology/balance onboarding enforcement and questions to prevent premature complexity.md +229 -0
- package/methodology/basic level categorization determines optimal MOC granularity.md +51 -0
- package/methodology/batching by context similarity reduces switching costs in agent processing.md +43 -0
- package/methodology/behavioral anti-patterns matter more than tool selection.md +42 -0
- package/methodology/betweenness centrality identifies bridge notes connecting disparate knowledge domains.md +57 -0
- package/methodology/blueprints that teach construction outperform downloads that provide pre-built code for platform-dependent modules.md +42 -0
- package/methodology/bootstrapping principle enables self-improving systems.md +62 -0
- package/methodology/build automatic memory through cognitive offloading and session handoffs.md +285 -0
- package/methodology/capture the reaction to content not just the content itself.md +41 -0
- package/methodology/claims must be specific enough to be wrong.md +36 -0
- package/methodology/closure rituals create clean breaks that prevent attention residue bleed.md +44 -0
- package/methodology/cognitive offloading is the architectural foundation for vault design.md +46 -0
- package/methodology/cognitive outsourcing risk in agent-operated systems.md +55 -0
- package/methodology/coherence maintains consistency despite inconsistent inputs.md +96 -0
- package/methodology/coherent architecture emerges from wiki links spreading activation and small-world topology.md +48 -0
- package/methodology/community detection algorithms can inform when MOCs should split or merge.md +52 -0
- package/methodology/complete navigation requires four complementary types that no single mechanism provides.md +43 -0
- package/methodology/complex systems evolve from simple working systems.md +59 -0
- package/methodology/composable knowledge architecture builds systems from independent toggleable modules not monolithic templates.md +61 -0
- package/methodology/compose multi-domain systems through separate templates and shared graph.md +372 -0
- package/methodology/concept-orientation beats source-orientation for cross-domain connections.md +51 -0
- package/methodology/confidence thresholds gate automated action between the mechanical and judgment zones.md +50 -0
- package/methodology/configuration dimensions interact so choices in one create pressure on others.md +58 -0
- package/methodology/configuration paralysis emerges when derivation surfaces too many decisions.md +44 -0
- package/methodology/context files function as agent operating systems through self-referential self-extension.md +46 -0
- package/methodology/context phrase clarity determines how deep a navigation hierarchy can scale.md +46 -0
- package/methodology/continuous small-batch processing eliminates review dread.md +48 -0
- package/methodology/controlled disorder engineers serendipity through semantic rather than topical linking.md +51 -0
- package/methodology/creative writing uses worldbuilding consistency with character tracking.md +672 -0
- package/methodology/cross-links between MOC territories indicate creative leaps and integration depth.md +43 -0
- package/methodology/dangling links reveal which notes want to exist.md +62 -0
- package/methodology/data exit velocity measures how quickly content escapes vendor lock-in.md +74 -0
- package/methodology/decontextualization risk means atomicity may strip meaning that cannot be recovered.md +48 -0
- package/methodology/dense interlinked research claims enable derivation while sparse references only enable templating.md +47 -0
- package/methodology/dependency resolution through topological sort makes module composition transparent and verifiable.md +56 -0
- package/methodology/derivation generates knowledge systems from composable research claims not template customization.md +63 -0
- package/methodology/derivation-engine.md +27 -0
- package/methodology/derived systems follow a seed-evolve-reseed lifecycle.md +56 -0
- package/methodology/description quality for humans diverges from description quality for keyword search.md +73 -0
- package/methodology/descriptions are retrieval filters not summaries.md +112 -0
- package/methodology/design MOCs as attention management devices with lifecycle governance.md +318 -0
- package/methodology/design-dimensions.md +66 -0
- package/methodology/digital mutability enables note evolution that physical permanence forbids.md +54 -0
- package/methodology/discovery-retrieval.md +48 -0
- package/methodology/distinctiveness scoring treats description quality as measurable.md +69 -0
- package/methodology/does agent processing recover what fast capture loses.md +43 -0
- package/methodology/domain-compositions.md +37 -0
- package/methodology/dual-coding with visual elements could enhance agent traversal.md +55 -0
- package/methodology/each module must be describable in one sentence under 200 characters or it does too many things.md +45 -0
- package/methodology/each new note compounds value by creating traversal paths.md +55 -0
- package/methodology/eight configuration dimensions parameterize the space of possible knowledge systems.md +56 -0
- package/methodology/elaborative encoding is the quality gate for new notes.md +55 -0
- package/methodology/enforce schema with graduated strictness across capture processing and query zones.md +221 -0
- package/methodology/enforcing atomicity can create paralysis when ideas resist decomposition.md +43 -0
- package/methodology/engineering uses technical decision tracking with architectural memory.md +766 -0
- package/methodology/every knowledge domain shares a four-phase processing skeleton that diverges only in the process step.md +53 -0
- package/methodology/evolution observations provide actionable signals for system adaptation.md +67 -0
- package/methodology/external memory shapes cognition more than base model.md +60 -0
- package/methodology/faceted classification treats notes as multi-dimensional objects rather than folder contents.md +65 -0
- package/methodology/failure-modes.md +27 -0
- package/methodology/false universalism applies same processing logic regardless of domain.md +49 -0
- package/methodology/federated wiki pattern enables multi-agent divergence as feature not bug.md +59 -0
- package/methodology/flat files break at retrieval scale.md +75 -0
- package/methodology/forced engagement produces weak connections.md +48 -0
- package/methodology/four abstraction layers separate platform-agnostic from platform-dependent knowledge system features.md +47 -0
- package/methodology/fresh context per task preserves quality better than chaining phases.md +44 -0
- package/methodology/friction reveals architecture.md +63 -0
- package/methodology/friction-driven module adoption prevents configuration debt by adding complexity only at pain points.md +48 -0
- package/methodology/gardening cycle implements tend prune fertilize operations.md +41 -0
- package/methodology/generation effect gate blocks processing without transformation.md +40 -0
- package/methodology/goal-driven memory orchestration enables autonomous domain learning through directed compute allocation.md +41 -0
- package/methodology/good descriptions layer heuristic then mechanism then implication.md +57 -0
- package/methodology/graph-structure.md +65 -0
- package/methodology/guided notes might outperform post-hoc structuring for high-volume capture.md +37 -0
- package/methodology/health wellness uses symptom-trigger correlation with multi-dimensional tracking.md +819 -0
- package/methodology/hook composition creates emergent methodology from independent single-concern components.md +47 -0
- package/methodology/hook enforcement guarantees quality while instruction enforcement merely suggests it.md +51 -0
- package/methodology/hook-driven learning loops create self-improving methodology through observation accumulation.md +62 -0
- package/methodology/hooks are the agent habit system that replaces the missing basal ganglia.md +40 -0
- package/methodology/hooks cannot replace genuine cognitive engagement yet more automation is always tempting.md +87 -0
- package/methodology/hooks enable context window efficiency by delegating deterministic checks to external processes.md +47 -0
- package/methodology/idempotent maintenance operations are safe to automate because running them twice produces the same result as running them once.md +44 -0
- package/methodology/implement condition-based maintenance triggers for derived systems.md +255 -0
- package/methodology/implicit dependencies create distributed monoliths that fail silently across configurations.md +58 -0
- package/methodology/implicit knowledge emerges from traversal.md +55 -0
- package/methodology/incremental formalization happens through repeated touching of old notes.md +60 -0
- package/methodology/incremental reading enables cross-source connection finding.md +39 -0
- package/methodology/index.md +32 -0
- package/methodology/inline links carry richer relationship data than metadata fields.md +91 -0
- package/methodology/insight accretion differs from productivity in knowledge systems.md +41 -0
- package/methodology/intermediate packets enable assembly over creation.md +52 -0
- package/methodology/intermediate representation pattern enables reliable vault operations beyond regex.md +62 -0
- package/methodology/justification chains enable forward backward and evolution reasoning about configuration decisions.md +46 -0
- package/methodology/knowledge system architecture is parameterized by platform capabilities not fixed by methodology.md +51 -0
- package/methodology/knowledge systems become communication partners through complexity and memory humans cannot sustain.md +47 -0
- package/methodology/knowledge systems share universal operations and structural components across all methodology traditions.md +46 -0
- package/methodology/legal case management uses precedent chains with regulatory change propagation.md +892 -0
- package/methodology/live index via periodic regeneration keeps discovery current.md +58 -0
- package/methodology/local-first file formats are inherently agent-native.md +69 -0
- package/methodology/logic column pattern separates reasoning from procedure.md +35 -0
- package/methodology/maintenance operations are more universal than creative pipelines because structural health is domain-invariant.md +47 -0
- package/methodology/maintenance scheduling frequency should match consequence speed not detection capability.md +50 -0
- package/methodology/maintenance targeting should prioritize mechanism and theory notes.md +26 -0
- package/methodology/maintenance-patterns.md +72 -0
- package/methodology/markdown plus YAML plus ripgrep implements a queryable graph database without infrastructure.md +55 -0
- package/methodology/maturity field enables agent context prioritization.md +33 -0
- package/methodology/memory-architecture.md +27 -0
- package/methodology/metacognitive confidence can diverge from retrieval capability.md +42 -0
- package/methodology/metadata reduces entropy enabling precision over recall.md +91 -0
- package/methodology/methodology development should follow the trajectory from documentation to skill to hook as understanding hardens.md +80 -0
- package/methodology/methodology traditions are named points in a shared configuration space not competing paradigms.md +64 -0
- package/methodology/mnemonic medium embeds verification into navigation.md +46 -0
- package/methodology/module communication through shared YAML fields creates loose coupling without direct dependencies.md +44 -0
- package/methodology/module deactivation must account for structural artifacts that survive the toggle.md +49 -0
- package/methodology/multi-domain systems compose through separate templates and shared graph.md +61 -0
- package/methodology/multi-domain-composition.md +27 -0
- package/methodology/narrow folksonomy optimizes for single-operator retrieval unlike broad consensus tagging.md +53 -0
- package/methodology/navigation infrastructure passes through distinct scaling regimes that require qualitative strategy shifts.md +48 -0
- package/methodology/navigational vertigo emerges in pure association systems without local hierarchy.md +54 -0
- package/methodology/note titles should function as APIs enabling sentence transclusion.md +51 -0
- package/methodology/note-design.md +57 -0
- package/methodology/notes are skills — curated knowledge injected when relevant.md +62 -0
- package/methodology/notes function as cognitive anchors that stabilize attention during complex tasks.md +41 -0
- package/methodology/novel domains derive by mapping knowledge type to closest reference domain then adapting.md +50 -0
- package/methodology/nudge theory explains graduated hook enforcement as choice architecture for agents.md +59 -0
- package/methodology/observation and tension logs function as dead-letter queues for failed automation.md +51 -0
- package/methodology/operational memory and knowledge memory serve different functions in agent architecture.md +48 -0
- package/methodology/operational wisdom requires contextual observation.md +52 -0
- package/methodology/orchestrated vault creation transforms arscontexta from tool to autonomous knowledge factory.md +40 -0
- package/methodology/organic emergence versus active curation creates a fundamental vault governance tension.md +68 -0
- package/methodology/orphan notes are seeds not failures.md +38 -0
- package/methodology/over-automation corrupts quality when hooks encode judgment rather than verification.md +62 -0
- package/methodology/people relationships uses Dunbar-layered graphs with interaction tracking.md +659 -0
- package/methodology/personal assistant uses life area management with review automation.md +610 -0
- package/methodology/platform adapter translation is semantic not mechanical because hook event meanings differ.md +40 -0
- package/methodology/platform capability tiers determine which knowledge system features can be implemented.md +48 -0
- package/methodology/platform fragmentation means identical conceptual operations require different implementations across agent environments.md +44 -0
- package/methodology/premature complexity is the most common derivation failure mode.md +45 -0
- package/methodology/prevent domain-specific failure modes through the vulnerability matrix.md +336 -0
- package/methodology/processing effort should follow retrieval demand.md +57 -0
- package/methodology/processing-workflows.md +75 -0
- package/methodology/product management uses feedback pipelines with experiment tracking.md +789 -0
- package/methodology/productivity porn risk in meta-system building.md +30 -0
- package/methodology/programmable notes could enable property-triggered workflows.md +64 -0
- package/methodology/progressive disclosure means reading right not reading less.md +69 -0
- package/methodology/progressive schema validates only what active modules require not the full system schema.md +49 -0
- package/methodology/project management uses decision tracking with stakeholder context.md +776 -0
- package/methodology/propositional link semantics transform wiki links from associative to reasoned.md +87 -0
- package/methodology/prospective memory requires externalization.md +53 -0
- package/methodology/provenance tracks where beliefs come from.md +62 -0
- package/methodology/queries evolve during search so agents should checkpoint.md +35 -0
- package/methodology/question-answer metadata enables inverted search patterns.md +39 -0
- package/methodology/random note resurfacing prevents write-only memory.md +33 -0
- package/methodology/reconciliation loops that compare desired state to actual state enable drift correction without continuous monitoring.md +59 -0
- package/methodology/reflection synthesizes existing notes into new insight.md +100 -0
- package/methodology/retrieval utility should drive design over capture completeness.md +69 -0
- package/methodology/retrieval verification loop tests description quality at scale.md +81 -0
- package/methodology/role field makes graph structure explicit.md +94 -0
- package/methodology/scaffolding enables divergence that fine-tuning cannot.md +67 -0
- package/methodology/schema enforcement via validation agents enables soft consistency.md +60 -0
- package/methodology/schema evolution follows observe-then-formalize not design-then-enforce.md +65 -0
- package/methodology/schema field names are the only domain specific element in the universal note pattern.md +46 -0
- package/methodology/schema fields should use domain-native vocabulary not abstract terminology.md +47 -0
- package/methodology/schema templates reduce cognitive overhead at capture time.md +55 -0
- package/methodology/schema validation hooks externalize inhibitory control that degrades under cognitive load.md +48 -0
- package/methodology/schema-enforcement.md +27 -0
- package/methodology/self-extension requires context files to contain platform operations knowledge not just methodology.md +47 -0
- package/methodology/sense-making vs storage does compression lose essential nuance.md +73 -0
- package/methodology/session boundary hooks implement cognitive bookends for orientation and reflection.md +60 -0
- package/methodology/session handoff creates continuity without persistent memory.md +43 -0
- package/methodology/session outputs are packets for future selves.md +43 -0
- package/methodology/session transcript mining enables experiential validation that structural tests cannot provide.md +38 -0
- package/methodology/skill context budgets constrain knowledge system complexity on agent platforms.md +52 -0
- package/methodology/skills encode methodology so manual execution bypasses quality gates.md +50 -0
- package/methodology/small-world topology requires hubs and dense local links.md +99 -0
- package/methodology/source attribution enables tracing claims to foundations.md +38 -0
- package/methodology/spaced repetition scheduling could optimize vault maintenance.md +44 -0
- package/methodology/spreading activation models how agents should traverse.md +79 -0
- package/methodology/stale navigation actively misleads because agents trust curated maps completely.md +43 -0
- package/methodology/stigmergy coordinates agents through environmental traces without direct communication.md +62 -0
- package/methodology/storage versus thinking distinction determines which tool patterns apply.md +56 -0
- package/methodology/structure enables navigation without reading everything.md +52 -0
- package/methodology/structure without processing provides no value.md +56 -0
- package/methodology/student learning uses prerequisite graphs with spaced retrieval.md +770 -0
- package/methodology/summary coherence tests composability before filing.md +37 -0
- package/methodology/tag rot applies to wiki links because titles serve as both identifier and display text.md +50 -0
- package/methodology/temporal media must convert to spatial text for agent traversal.md +43 -0
- package/methodology/temporal processing priority creates age-based inbox urgency.md +45 -0
- package/methodology/temporal separation of capture and processing preserves context freshness.md +39 -0
- package/methodology/ten universal primitives form the kernel of every viable agent knowledge system.md +162 -0
- package/methodology/testing effect could enable agent knowledge verification.md +38 -0
- package/methodology/the AgentSkills standard embodies progressive disclosure at the skill level.md +40 -0
- package/methodology/the derivation engine improves recursively as deployed systems generate observations.md +49 -0
- package/methodology/the determinism boundary separates hook methodology from skill methodology.md +46 -0
- package/methodology/the fix-versus-report decision depends on determinism reversibility and accumulated trust.md +45 -0
- package/methodology/the generation effect requires active transformation not just storage.md +57 -0
- package/methodology/the no wrong patches guarantee ensures any valid module combination produces a valid system.md +58 -0
- package/methodology/the system is the argument.md +46 -0
- package/methodology/the vault constitutes identity for agents.md +86 -0
- package/methodology/the vault methodology transfers because it encodes cognitive science not domain specifics.md +47 -0
- package/methodology/therapy journal uses warm personality with pattern detection for emotional processing.md +584 -0
- package/methodology/three capture schools converge through agent-mediated synthesis.md +55 -0
- package/methodology/three concurrent maintenance loops operate at different timescales to catch different classes of problems.md +56 -0
- package/methodology/throughput matters more than accumulation.md +58 -0
- package/methodology/title as claim enables traversal as reasoning.md +50 -0
- package/methodology/topological organization beats temporal for knowledge work.md +52 -0
- package/methodology/trading uses conviction tracking with thesis-outcome correlation.md +699 -0
- package/methodology/trails transform ephemeral navigation into persistent artifacts.md +39 -0
- package/methodology/transform universal vocabulary to domain-native language through six levels.md +259 -0
- package/methodology/type field enables structured queries without folder hierarchies.md +53 -0
- package/methodology/use-case presets dissolve the tension between composability and simplicity.md +44 -0
- package/methodology/vault conventions may impose hidden rigidity on thinking.md +44 -0
- package/methodology/verbatim risk applies to agents too.md +31 -0
- package/methodology/vibe notetaking is the emerging industry consensus for AI-native self-organization.md +56 -0
- package/methodology/vivid memories need verification.md +45 -0
- package/methodology/vocabulary-transformation.md +27 -0
- package/methodology/voice capture is the highest-bandwidth channel for agent-delegated knowledge systems.md +45 -0
- package/methodology/wiki links are the digital evolution of analog indexing.md +73 -0
- package/methodology/wiki links as social contract transforms agents into stewards of incomplete references.md +52 -0
- package/methodology/wiki links create navigation paths that shape retrieval.md +63 -0
- package/methodology/wiki links implement GraphRAG without the infrastructure.md +101 -0
- package/methodology/writing for audience blocks authentic creation.md +22 -0
- package/methodology/you operate a system that takes notes.md +79 -0
- package/openclaw/SKILL.md +110 -0
- package/package.json +45 -0
- package/platforms/README.md +51 -0
- package/platforms/claude-code/generator.md +61 -0
- package/platforms/claude-code/hooks/README.md +186 -0
- package/platforms/claude-code/hooks/auto-commit.sh.template +38 -0
- package/platforms/claude-code/hooks/session-capture.sh.template +72 -0
- package/platforms/claude-code/hooks/session-orient.sh.template +189 -0
- package/platforms/claude-code/hooks/write-validate.sh.template +106 -0
- package/platforms/openclaw/generator.md +82 -0
- package/platforms/openclaw/hooks/README.md +89 -0
- package/platforms/openclaw/hooks/bootstrap.ts.template +224 -0
- package/platforms/openclaw/hooks/command-new.ts.template +165 -0
- package/platforms/openclaw/hooks/heartbeat.ts.template +214 -0
- package/platforms/shared/features/README.md +70 -0
- package/platforms/shared/skill-blocks/graph.md +145 -0
- package/platforms/shared/skill-blocks/learn.md +119 -0
- package/platforms/shared/skill-blocks/next.md +131 -0
- package/platforms/shared/skill-blocks/pipeline.md +326 -0
- package/platforms/shared/skill-blocks/ralph.md +616 -0
- package/platforms/shared/skill-blocks/reduce.md +1142 -0
- package/platforms/shared/skill-blocks/refactor.md +129 -0
- package/platforms/shared/skill-blocks/reflect.md +780 -0
- package/platforms/shared/skill-blocks/remember.md +524 -0
- package/platforms/shared/skill-blocks/rethink.md +574 -0
- package/platforms/shared/skill-blocks/reweave.md +680 -0
- package/platforms/shared/skill-blocks/seed.md +320 -0
- package/platforms/shared/skill-blocks/stats.md +145 -0
- package/platforms/shared/skill-blocks/tasks.md +171 -0
- package/platforms/shared/skill-blocks/validate.md +323 -0
- package/platforms/shared/skill-blocks/verify.md +562 -0
- package/platforms/shared/templates/README.md +35 -0
- package/presets/experimental/categories.yaml +1 -0
- package/presets/experimental/preset.yaml +38 -0
- package/presets/experimental/starter/README.md +7 -0
- package/presets/experimental/vocabulary.yaml +7 -0
- package/presets/personal/categories.yaml +7 -0
- package/presets/personal/preset.yaml +41 -0
- package/presets/personal/starter/goals.md +21 -0
- package/presets/personal/starter/index.md +17 -0
- package/presets/personal/starter/life-areas.md +21 -0
- package/presets/personal/starter/people.md +21 -0
- package/presets/personal/vocabulary.yaml +32 -0
- package/presets/research/categories.yaml +8 -0
- package/presets/research/preset.yaml +41 -0
- package/presets/research/starter/index.md +17 -0
- package/presets/research/starter/methods.md +21 -0
- package/presets/research/starter/open-questions.md +21 -0
- package/presets/research/vocabulary.yaml +33 -0
- package/reference/AUDIT-REPORT.md +238 -0
- package/reference/claim-map.md +172 -0
- package/reference/components.md +327 -0
- package/reference/conversation-patterns.md +542 -0
- package/reference/derivation-validation.md +649 -0
- package/reference/dimension-claim-map.md +134 -0
- package/reference/evolution-lifecycle.md +297 -0
- package/reference/failure-modes.md +235 -0
- package/reference/interaction-constraints.md +204 -0
- package/reference/kernel.yaml +242 -0
- package/reference/methodology.md +283 -0
- package/reference/open-questions.md +279 -0
- package/reference/personality-layer.md +302 -0
- package/reference/self-space.md +299 -0
- package/reference/semantic-vs-keyword.md +288 -0
- package/reference/session-lifecycle.md +298 -0
- package/reference/templates/base-note.md +16 -0
- package/reference/templates/companion-note.md +70 -0
- package/reference/templates/creative-note.md +16 -0
- package/reference/templates/learning-note.md +16 -0
- package/reference/templates/life-note.md +16 -0
- package/reference/templates/moc.md +26 -0
- package/reference/templates/relationship-note.md +17 -0
- package/reference/templates/research-note.md +19 -0
- package/reference/templates/session-log.md +24 -0
- package/reference/templates/therapy-note.md +16 -0
- package/reference/test-fixtures/edge-case-constraints.md +148 -0
- package/reference/test-fixtures/multi-domain.md +164 -0
- package/reference/test-fixtures/novel-domain-gaming.md +138 -0
- package/reference/test-fixtures/research-minimal.md +102 -0
- package/reference/test-fixtures/therapy-full.md +155 -0
- package/reference/testing-milestones.md +1087 -0
- package/reference/three-spaces.md +363 -0
- package/reference/tradition-presets.md +203 -0
- package/reference/use-case-presets.md +341 -0
- package/reference/validate-kernel.sh +432 -0
- package/reference/vocabulary-transforms.md +85 -0
- package/scripts/sync-thinking.sh +147 -0
- package/skill-sources/graph/SKILL.md +567 -0
- package/skill-sources/graph/skill.json +17 -0
- package/skill-sources/learn/SKILL.md +254 -0
- package/skill-sources/learn/skill.json +17 -0
- package/skill-sources/next/SKILL.md +407 -0
- package/skill-sources/next/skill.json +17 -0
- package/skill-sources/pipeline/SKILL.md +314 -0
- package/skill-sources/pipeline/skill.json +17 -0
- package/skill-sources/ralph/SKILL.md +604 -0
- package/skill-sources/ralph/skill.json +17 -0
- package/skill-sources/reduce/SKILL.md +1113 -0
- package/skill-sources/reduce/skill.json +17 -0
- package/skill-sources/refactor/SKILL.md +448 -0
- package/skill-sources/refactor/skill.json +17 -0
- package/skill-sources/reflect/SKILL.md +747 -0
- package/skill-sources/reflect/skill.json +17 -0
- package/skill-sources/remember/SKILL.md +534 -0
- package/skill-sources/remember/skill.json +17 -0
- package/skill-sources/rethink/SKILL.md +658 -0
- package/skill-sources/rethink/skill.json +17 -0
- package/skill-sources/reweave/SKILL.md +657 -0
- package/skill-sources/reweave/skill.json +17 -0
- package/skill-sources/seed/SKILL.md +303 -0
- package/skill-sources/seed/skill.json +17 -0
- package/skill-sources/stats/SKILL.md +371 -0
- package/skill-sources/stats/skill.json +17 -0
- package/skill-sources/tasks/SKILL.md +402 -0
- package/skill-sources/tasks/skill.json +17 -0
- package/skill-sources/validate/SKILL.md +310 -0
- package/skill-sources/validate/skill.json +17 -0
- package/skill-sources/verify/SKILL.md +532 -0
- package/skill-sources/verify/skill.json +17 -0
- package/skills/add-domain/SKILL.md +441 -0
- package/skills/add-domain/skill.json +17 -0
- package/skills/architect/SKILL.md +568 -0
- package/skills/architect/skill.json +17 -0
- package/skills/ask/SKILL.md +388 -0
- package/skills/ask/skill.json +17 -0
- package/skills/health/SKILL.md +760 -0
- package/skills/health/skill.json +17 -0
- package/skills/help/SKILL.md +348 -0
- package/skills/help/skill.json +17 -0
- package/skills/recommend/SKILL.md +553 -0
- package/skills/recommend/skill.json +17 -0
- package/skills/reseed/SKILL.md +385 -0
- package/skills/reseed/skill.json +17 -0
- package/skills/setup/SKILL.md +1688 -0
- package/skills/setup/skill.json +17 -0
- package/skills/tutorial/SKILL.md +496 -0
- package/skills/tutorial/skill.json +17 -0
- package/skills/upgrade/SKILL.md +395 -0
- package/skills/upgrade/skill.json +17 -0
@@ -0,0 +1,303 @@

---
name: seed
description: Add a source file to the processing queue. Checks for duplicates, creates archive folder, moves source from inbox, creates extract task, and updates queue. Triggers on "/seed", "/seed [file]", "queue this for processing".
version: "1.0"
generated_from: "arscontexta-v1.6"
user-invocable: true
context: fork
model: opus
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
argument-hint: "[file] — path to source file to seed for processing"
---

## EXECUTE NOW

**Target: $ARGUMENTS**

The target MUST be a file path. If no target is provided, list {DOMAIN:inbox}/ contents and ask which to seed.

### Step 0: Read Vocabulary

Read `ops/derivation-manifest.md` (or fall back to `ops/derivation.md`) for the domain vocabulary mapping. All output must use domain-native terms. If neither file exists, use universal terms.

**START NOW.** Seed the source file into the processing queue.

---

## Step 1: Validate Source

Confirm the target file exists. If it does not, check common locations:
- `{DOMAIN:inbox}/{filename}`
- Subdirectories of {DOMAIN:inbox}/

If the file cannot be found, report the error and stop:
```
ERROR: Source file not found: {path}
Checked: {locations checked}
```

Read the file to understand (a quick probe sketch follows this list):
- **Content type**: what kind of material is this? (research article, documentation, transcript, etc.)
- **Size**: line count (affects chunking decisions in /reduce)
- **Format**: markdown, plain text, structured data

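A minimal probe along these lines can gather the size and a rough format guess before deciding how to handle the file; the variable names and the extension-based format mapping are assumptions for illustration, not part of the published skill:

```bash
# Rough source probe — a sketch, not part of the shipped skill.
# $FILE is the validated target path from Step 1.
LINE_COUNT=$(wc -l < "$FILE" | tr -d ' ')
case "$FILE" in
  *.md)                      FORMAT="markdown" ;;
  *.json|*.yaml|*.yml|*.csv) FORMAT="structured data" ;;
  *)                         FORMAT="plain text" ;;
esac
echo "Size: ${LINE_COUNT} lines, format: ${FORMAT}"
```
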
## Step 2: Duplicate Detection

Check if this source has already been processed. Two levels of detection:

### 2a. Filename Match

Search the queue file and archive folders for matching source names:

```bash
SOURCE_NAME=$(basename "$FILE" .md | tr ' ' '-' | tr '[:upper:]' '[:lower:]')

# Check queue for existing entry
# Search in ops/queue.yaml, ops/queue/queue.yaml, or ops/queue/queue.json
grep -l "$SOURCE_NAME" ops/queue*.yaml ops/queue/*.yaml ops/queue/*.json 2>/dev/null

# Check archive folders
ls -d ops/queue/archive/*-${SOURCE_NAME}* 2>/dev/null
```

### 2b. Content Similarity (if semantic search available)

If semantic search is available (qmd MCP tools or CLI), check for content overlap:

```
mcp__qmd__search query="claims from {source filename}" limit=5
```

Or via keyword search in the {DOMAIN:notes}/ directory:
```bash
grep -rl "{key terms from source title}" {DOMAIN:notes}/ 2>/dev/null | head -5
```

### 2c. Report Duplicates

If either check finds a match:
- Show what was found (filename match or content overlap)
- Ask: "This source may have been processed before. Proceed anyway? (y/n)"
- If the user declines, stop cleanly
- If the user confirms (or no duplicate found), continue

## Step 3: Create Archive Structure

Create the archive folder. The date-prefixed folder name ensures uniqueness.

```bash
DATE=$(date -u +"%Y-%m-%d")
SOURCE_BASENAME=$(basename "$FILE" .md | tr ' ' '-' | tr '[:upper:]' '[:lower:]')
ARCHIVE_DIR="ops/queue/archive/${DATE}-${SOURCE_BASENAME}"
mkdir -p "$ARCHIVE_DIR"
```

The archive folder serves two purposes:
1. Permanent home for the source file (moved from {DOMAIN:inbox})
2. Destination for task files after batch completion (/archive-batch moves them here)

## Step 4: Move Source to Archive

Move the source file from its current location to the archive folder. This is the **claiming step** — once moved, the source is owned by this processing batch.

**{DOMAIN:inbox} sources get moved:**
```bash
if [[ "$FILE" == *"{DOMAIN:inbox}"* ]] || [[ "$FILE" == *"inbox"* ]]; then
  mv "$FILE" "$ARCHIVE_DIR/"
  FINAL_SOURCE="$ARCHIVE_DIR/$(basename "$FILE")"
fi
```

**Sources outside {DOMAIN:inbox} stay in place:**
```bash
# Living docs (like configuration files) stay where they are
# Archive folder is still created for task files
FINAL_SOURCE="$FILE"
```

Use `$FINAL_SOURCE` in the task file — this is the path all downstream phases reference.

**Why move immediately:** All references (task files, {DOMAIN:note_plural}' Source footers) use the final archived path from the start. No path updates needed later. If it is in {DOMAIN:inbox}, it is unclaimed. Claimed sources live in archive.

## Step 5: Determine Claim Numbering

Find the highest existing claim number across the queue and archive to ensure globally unique claim IDs.

```bash
# Check queue for highest claim number in file references
QUEUE_MAX=$(grep -oE '[0-9]{3}\.md' ops/queue*.yaml ops/queue/*.yaml 2>/dev/null | \
  grep -oE '[0-9]{3}' | sort -n | tail -1)
QUEUE_MAX=${QUEUE_MAX:-0}

# Check archive for highest claim number
ARCHIVE_MAX=$(find ops/queue/archive -name "*-[0-9][0-9][0-9].md" 2>/dev/null | \
  grep -v summary | sed 's/.*-\([0-9][0-9][0-9]\)\.md/\1/' | sort -n | tail -1)
ARCHIVE_MAX=${ARCHIVE_MAX:-0}

# Next claim starts after the highest; 10# forces base-10 so zero-padded
# values like 009 are not misread as octal
NEXT_CLAIM_START=$(( 10#$QUEUE_MAX > 10#$ARCHIVE_MAX ? 10#$QUEUE_MAX + 1 : 10#$ARCHIVE_MAX + 1 ))
```

Claim numbers are globally unique and never reused across batches. This ensures every claim file name (`{source}-{NNN}.md`) is unique vault-wide.

## Step 6: Create Extract Task File

Write the task file to `ops/queue/${SOURCE_BASENAME}.md`:

```markdown
---
id: {SOURCE_BASENAME}
type: extract
source: {FINAL_SOURCE}
original_path: {original file path before move}
archive_folder: {ARCHIVE_DIR}
created: {UTC timestamp}
next_claim_start: {NEXT_CLAIM_START}
---

# Extract {DOMAIN:note_plural} from {source filename}

## Source
Original: {original file path}
Archived: {FINAL_SOURCE}
Size: {line count} lines
Content type: {detected type}

## Scope
{scope guidance if provided via --scope, otherwise: "Full document"}

## Acceptance Criteria
- Extract claims, implementation ideas, tensions, and testable hypotheses
- Duplicate check against {DOMAIN:notes}/ during extraction
- Near-duplicates create enrichment tasks (do not skip)
- Each output type gets appropriate handling

## Execution Notes
(filled by /reduce)

## Outputs
(filled by /reduce)
```

## Step 7: Update Queue

Add the extract task entry to the queue file.

**For YAML queues (ops/queue.yaml):**
```yaml
- id: {SOURCE_BASENAME}
  type: extract
  status: pending
  source: "{FINAL_SOURCE}"
  file: "{SOURCE_BASENAME}.md"
  created: "{UTC timestamp}"
  next_claim_start: {NEXT_CLAIM_START}
```

**For JSON queues (ops/queue/queue.json):**
```json
{
  "id": "{SOURCE_BASENAME}",
  "type": "extract",
  "status": "pending",
  "source": "{FINAL_SOURCE}",
  "file": "{SOURCE_BASENAME}.md",
  "created": "{UTC timestamp}",
  "next_claim_start": {NEXT_CLAIM_START}
}
```

**If no queue file exists:** Create one with the appropriate schema header (phase_order definitions) and this first task entry.

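For orientation only, a sketch of what that freshly created queue file might look like; the phase_order header is an assumption standing in for the schema header this step refers to, and the task entry simply mirrors the format shown above:

```yaml
# ops/queue/queue.yaml — illustrative sketch of a newly created queue file.
# The phase_order comment below is a placeholder for the real schema header;
# defer to the schema used elsewhere in the vault.
# phase_order: extract -> (later phases as defined by the pipeline)
- id: article
  type: extract
  status: pending
  source: "ops/queue/archive/2026-01-30-article/article.md"
  file: "article.md"
  created: "2026-01-30T12:00:00Z"
  next_claim_start: 1
```
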
## Step 8: Report

```
--=={ seed }==--

Seeded: {SOURCE_BASENAME}
Source: {original path} -> {FINAL_SOURCE}
Archive folder: {ARCHIVE_DIR}
Size: {line count} lines
Content type: {detected type}

Task file: ops/queue/{SOURCE_BASENAME}.md
Claims will start at: {NEXT_CLAIM_START}
Claim files will be: {SOURCE_BASENAME}-{NNN}.md (unique across vault)
Queue: updated with extract task

Next steps:
  /ralph 1 --batch {SOURCE_BASENAME}   (extract claims)
  /pipeline will handle this automatically
```

---

## Why This Skill Exists

Manual queue management is error-prone. This skill:
- Ensures consistent task file format across batches
- Handles claim numbering automatically (globally unique)
- Checks for duplicates before creating unnecessary work
- Moves sources to their permanent archive location immediately
- Provides clear next steps for the user

## Naming Convention

Task files use the source basename for human readability:
- Task file: `{source-basename}.md`
- Claim files: `{source-basename}-{NNN}.md`
- Summary: `{source-basename}-summary.md`
- Archive folder: `{date}-{source-basename}/`

Claim numbers (NNN) are globally unique across all batches, ensuring every filename is unique vault-wide. This is required because wiki links resolve by filename, not path.

## Source Handling Patterns

**{DOMAIN:inbox} source (most common):**
```
{DOMAIN:inbox}/research/article.md
  | /seed
  v
ops/queue/archive/2026-01-30-article/article.md   <- source moved here
ops/queue/article.md                              <- task file created
```

**Living doc (outside {DOMAIN:inbox}):**
```
CLAUDE.md -> stays as CLAUDE.md (no move)
ops/queue/archive/2026-01-30-claude-md/   <- folder still created
ops/queue/claude-md.md                    <- task file created
```

When /archive-batch runs later, it moves task files into the existing archive folder and generates a summary.

---

## Edge Cases

**Source outside {DOMAIN:inbox}:** Works — source stays in place, archive folder is created for task files only.

**No queue file:** Create `ops/queue/queue.yaml` (or `.json`) with schema header and this first entry.

**Large source (2500+ lines):** Note in output: "Large source ({N} lines) -- /reduce will chunk automatically."

**Source is a URL or non-file:** Report error: "/seed requires a file path."

**No ops/derivation-manifest.md:** Use universal vocabulary for all output.

---

## Critical Constraints

**never:**
- Skip duplicate detection (prevents wasted processing)
- Move a source that is not in {DOMAIN:inbox} (living docs stay in place)
- Reuse claim numbers from previous batches (globally unique is required)
- Create a task file without updating the queue (both must happen together)

**always:**
- Ask before proceeding when duplicates are detected
- Create the archive folder even for living docs (task files need it)
- Use the archived path (not original) in the task file for {DOMAIN:inbox} sources
- Report next steps clearly so the user knows what to do next
- Compute next_claim_start from both queue AND archive (not just one)

@@ -0,0 +1,17 @@

{
  "name": "knowledge-seed",
  "description": "Add a source file to the processing queue. Checks for duplicates, creates archive folder, moves source from inbox, creates extract task, and updates queue.",
  "version": "0.4.0",
  "author": "arscontexta",
  "tags": ["knowledge-management", "pipeline", "processing"],
  "entry": "SKILL.md",
  "platform_hints": {
    "claude-code": {
      "context": "fork",
      "model": "opus"
    },
    "openclaw": {
      "type": "external"
    }
  }
}

@@ -0,0 +1,371 @@

---
name: stats
description: Show vault statistics and knowledge graph metrics. Provides a shareable snapshot of vault health, growth, and progress. Triggers on "/stats", "vault stats", "show metrics", "how big is my vault".
version: "1.0"
generated_from: "arscontexta-v1.6"
user-invocable: true
context: fork
model: opus
allowed-tools: Read, Grep, Glob, Bash
argument-hint: "[--share] — optional flag for compact shareable output"
---

## Runtime Configuration (Step 0 — before any processing)

Read these files to configure domain-specific behavior (a sketch of the vocabulary block follows this list):

1. **`ops/derivation-manifest.md`** — vocabulary mapping
   - Use `vocabulary.notes` for the notes folder name
   - Use `vocabulary.note` / `vocabulary.note_plural` for note type references
   - Use `vocabulary.topic_map` / `vocabulary.topic_map_plural` for MOC references
   - Use `vocabulary.inbox` for the inbox folder name
   - Use `vocabulary.notes_collection` for the semantic search collection name

2. **`ops/config.yaml`** — processing depth, automation settings

If no derivation file exists, use universal terms (notes, MOCs, etc.).

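A sketch of the vocabulary block this step reads — the key names follow the references above, while the example values and the exact placement inside `ops/derivation-manifest.md` are assumptions:

```yaml
# Assumed shape of the vocabulary mapping read in Step 0 (values are examples)
vocabulary:
  notes: notes
  note: note
  note_plural: notes
  topic_map: MOC
  topic_map_plural: MOCs
  inbox: inbox
  notes_collection: notes
```
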
---

## EXECUTE NOW

**Target: $ARGUMENTS**

Parse immediately (a parsing sketch follows this list):
- If target contains --share: output compact shareable format after full stats
- If target is empty: output full stats display
- If target names a specific category (e.g., "health", "growth", "pipeline"): show only that category

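A minimal bash sketch of that argument handling; the `MODE` and `CATEGORY` variable names are assumptions used only for illustration:

```bash
# Sketch of the /stats argument parsing described above
ARGS="$ARGUMENTS"
if [[ "$ARGS" == *"--share"* ]]; then
  MODE="share"        # full stats, then compact shareable block
elif [[ -z "$ARGS" ]]; then
  MODE="full"         # default: full stats display
else
  MODE="category"     # e.g. "health", "growth", "pipeline"
  CATEGORY="$ARGS"
fi
```
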
**START NOW.** Collect metrics and present them.

---

## Philosophy

**Make the invisible visible.**

The knowledge graph grows silently. Without metrics, the user cannot tell whether their system is healthy, growing, stagnating, or fragmenting. /stats provides a snapshot that makes growth tangible — numbers that show progress, health indicators that catch problems, and trends that reveal trajectory.

The output should make the user feel informed, not overwhelmed. Metrics are evidence, not judgment. "12 orphans" is a fact. What to DO about it belongs to /graph or /{vocabulary.cmd_reflect}.

---

## Step 1: Collect Metrics

Gather all metrics. Run these checks in parallel where possible to minimize latency.

### 1a. Knowledge Graph Metrics

```bash
NOTES_DIR="{vocabulary.notes}"

# Note count (excluding MOCs)
TOTAL_FILES=$(ls -1 "$NOTES_DIR"/*.md 2>/dev/null | wc -l | tr -d ' ')
MOC_COUNT=$(grep -rl '^type: moc' "$NOTES_DIR"/*.md 2>/dev/null | wc -l | tr -d ' ')
NOTE_COUNT=$((TOTAL_FILES - MOC_COUNT))

# Connection count (all wiki links across notes/)
LINK_COUNT=$(grep -ohP '\[\[[^\]]+\]\]' "$NOTES_DIR"/*.md 2>/dev/null | wc -l | tr -d ' ')

# Average connections per note
if [[ "$NOTE_COUNT" -gt 0 ]]; then
  AVG_LINKS=$(echo "scale=1; $LINK_COUNT / $NOTE_COUNT" | bc)
else
  AVG_LINKS="0"
fi

# Topic count (unique values in topics: fields)
TOPIC_COUNT=$(grep -ohP '^\s*-\s*"\[\[([^\]]+)\]\]"' "$NOTES_DIR"/*.md 2>/dev/null | sort -u | wc -l | tr -d ' ')

# Link density
if [[ "$NOTE_COUNT" -gt 1 ]]; then
  POSSIBLE=$((NOTE_COUNT * (NOTE_COUNT - 1)))
  DENSITY=$(echo "scale=4; $LINK_COUNT / $POSSIBLE" | bc)
else
  DENSITY="N/A"
fi
```

### 1b. Health Metrics

```bash
# Orphan count (notes with zero incoming links)
ORPHAN_COUNT=0
for f in "$NOTES_DIR"/*.md; do
  NAME=$(basename "$f" .md)
  grep -q '^type: moc' "$f" 2>/dev/null && continue
  INCOMING=$(grep -rl "\[\[$NAME\]\]" "$NOTES_DIR"/ 2>/dev/null | grep -v "$f" | wc -l | tr -d ' ')
  [[ "$INCOMING" -eq 0 ]] && ORPHAN_COUNT=$((ORPHAN_COUNT + 1))
done

# Dangling link count
DANGLING_COUNT=$(grep -ohP '\[\[([^\]]+)\]\]' "$NOTES_DIR"/*.md 2>/dev/null | sort -u | while read -r link; do
  NAME=$(echo "$link" | sed 's/\[\[//;s/\]\]//')
  [[ ! -f "$NOTES_DIR/$NAME.md" ]] && echo "$NAME"
done | wc -l | tr -d ' ')

# Schema compliance (% of notes with required fields: description, topics)
MISSING_DESC=$(grep -rL '^description:' "$NOTES_DIR"/*.md 2>/dev/null | wc -l | tr -d ' ')
MISSING_TOPICS=$(grep -rL '^topics:' "$NOTES_DIR"/*.md 2>/dev/null | wc -l | tr -d ' ')
SCHEMA_ISSUES=$((MISSING_DESC + MISSING_TOPICS))
if [[ "$TOTAL_FILES" -gt 0 ]]; then
  # Notes with the required description field (proxy for overall compliance)
  COMPLIANT=$((TOTAL_FILES - MISSING_DESC))
  COMPLIANCE=$(echo "scale=0; $COMPLIANT * 100 / $TOTAL_FILES" | bc)
else
  COMPLIANCE="N/A"
fi

# MOC coverage
COVERED=0
for f in "$NOTES_DIR"/*.md; do
  NAME=$(basename "$f" .md)
  grep -q '^type: moc' "$f" 2>/dev/null && continue
  if grep -rl '^type: moc' "$NOTES_DIR"/*.md 2>/dev/null | xargs grep -l "\[\[$NAME\]\]" >/dev/null 2>&1; then
    COVERED=$((COVERED + 1))
  fi
done
if [[ "$NOTE_COUNT" -gt 0 ]]; then
  COVERAGE=$(echo "scale=0; $COVERED * 100 / $NOTE_COUNT" | bc)
else
  COVERAGE="N/A"
fi
```

|
|
135
|
+
### 1c. Pipeline Metrics
|
|
136
|
+
|
|
137
|
+
```bash
|
|
138
|
+
# Inbox items
|
|
139
|
+
INBOX_COUNT=$(find {vocabulary.inbox}/ -name "*.md" 2>/dev/null | wc -l | tr -d ' ')
|
|
140
|
+
|
|
141
|
+
# Queue pending (check both YAML and JSON formats)
|
|
142
|
+
QUEUE_FILE=""
|
|
143
|
+
if [[ -f "ops/queue/queue.yaml" ]]; then
|
|
144
|
+
QUEUE_FILE="ops/queue/queue.yaml"
|
|
145
|
+
QUEUE_PENDING=$(grep -c 'status: pending' "$QUEUE_FILE" 2>/dev/null || echo 0)
|
|
146
|
+
QUEUE_DONE=$(grep -c 'status: done' "$QUEUE_FILE" 2>/dev/null || echo 0)
|
|
147
|
+
elif [[ -f "ops/queue/queue.json" ]]; then
|
|
148
|
+
QUEUE_FILE="ops/queue/queue.json"
|
|
149
|
+
QUEUE_PENDING=$(grep -c '"status": "pending"' "$QUEUE_FILE" 2>/dev/null || echo 0)
|
|
150
|
+
QUEUE_DONE=$(grep -c '"status": "done"' "$QUEUE_FILE" 2>/dev/null || echo 0)
|
|
151
|
+
else
|
|
152
|
+
QUEUE_PENDING=0
|
|
153
|
+
QUEUE_DONE=0
|
|
154
|
+
fi
|
|
155
|
+
|
|
156
|
+
# Processed ratio (notes vs inbox)
|
|
157
|
+
TOTAL_CONTENT=$((NOTE_COUNT + INBOX_COUNT))
|
|
158
|
+
if [[ "$TOTAL_CONTENT" -gt 0 ]]; then
|
|
159
|
+
PROCESSED_PCT=$(echo "scale=0; $NOTE_COUNT * 100 / $TOTAL_CONTENT" | bc)
|
|
160
|
+
else
|
|
161
|
+
PROCESSED_PCT="N/A"
|
|
162
|
+
fi
|
|
163
|
+
```
|
|
164
|
+
|
|
165
|
+
### 1d. Growth Metrics
|
|
166
|
+
|
|
167
|
+
```bash
|
|
168
|
+
# This week's growth (notes with created: date within last 7 days)
|
|
169
|
+
WEEK_AGO=$(date -v-7d +%Y-%m-%d 2>/dev/null || date -d '7 days ago' +%Y-%m-%d 2>/dev/null)
|
|
170
|
+
if [[ -n "$WEEK_AGO" ]]; then
|
|
171
|
+
THIS_WEEK_NOTES=$(grep -rl "^created: " "$NOTES_DIR"/*.md 2>/dev/null | while read -r f; do
|
|
172
|
+
CREATED=$(grep '^created:' "$f" | head -1 | awk '{print $2}')
|
|
173
|
+
[[ "$CREATED" > "$WEEK_AGO" || "$CREATED" == "$WEEK_AGO" ]] && echo "$f"
|
|
174
|
+
done | wc -l | tr -d ' ')
|
|
175
|
+
else
|
|
176
|
+
THIS_WEEK_NOTES="?"
|
|
177
|
+
fi
|
|
178
|
+
|
|
179
|
+
# This week's connections (approximate — count links in recently created notes)
|
|
180
|
+
if [[ "$THIS_WEEK_NOTES" -gt 0 && -n "$WEEK_AGO" ]]; then
|
|
181
|
+
THIS_WEEK_LINKS=$(grep -rl "^created: " "$NOTES_DIR"/*.md 2>/dev/null | while read -r f; do
|
|
182
|
+
CREATED=$(grep '^created:' "$f" | head -1 | awk '{print $2}')
|
|
183
|
+
[[ "$CREATED" > "$WEEK_AGO" || "$CREATED" == "$WEEK_AGO" ]] && grep -oP '\[\[[^\]]+\]\]' "$f" 2>/dev/null
|
|
184
|
+
done | wc -l | tr -d ' ')
|
|
185
|
+
else
|
|
186
|
+
THIS_WEEK_LINKS="?"
|
|
187
|
+
fi
|
|
188
|
+
```
|
|
189
|
+
|
|
190
|
+
### 1e. System Metrics

```bash
# Self space
if [[ -d "self/" ]]; then
  SELF_FILES=$(find self/ -name "*.md" 2>/dev/null | wc -l | tr -d ' ')
  SELF_STATUS="enabled ($SELF_FILES files)"
else
  SELF_STATUS="disabled"
fi

# Methodology notes
METHODOLOGY_COUNT=$(ls -1 ops/methodology/*.md 2>/dev/null | wc -l | tr -d ' ')

# Observations pending
OBS_PENDING=$(grep -rl '^status: pending' ops/observations/ 2>/dev/null | wc -l | tr -d ' ')

# Tensions pending (-E alternation is portable; BRE '\|' is a GNU extension)
TENSION_PENDING=$(grep -rEl '^status: (open|pending)' ops/tensions/ 2>/dev/null | wc -l | tr -d ' ')

# Sessions captured
SESSION_COUNT=$(ls -1 ops/sessions/*.md 2>/dev/null | wc -l | tr -d ' ')
```

Adapt all directory names to domain vocabulary. Skip checks for directories that do not exist and report "N/A" instead of errors.
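
For a directory that may be absent, a minimal guard looks like the following sketch (it reuses the `ops/observations/` check from above purely as an example; substitute the adapted directory name):

```bash
# Only compute the metric when the directory exists;
# otherwise fall back to "N/A" so the report never shows an error.
if [[ -d "ops/observations/" ]]; then
  OBS_PENDING=$(grep -rl '^status: pending' ops/observations/ 2>/dev/null | wc -l | tr -d ' ')
else
  OBS_PENDING="N/A"
fi
```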

---

## Step 2: Format Output

### Full Output (default)

Generate a progress bar for the Processed metric:

```
Progress bar calculation:
filled = PROCESSED_PCT / 5 (number of = characters out of 20)
empty = 20 - filled
bar = [===...   ] PCT%
```
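
A minimal bash sketch of that calculation, reusing `PROCESSED_PCT` from section 1c (the 20-character width and the blank-bar fallback are just the conventions shown above):

```bash
# Render a 20-character progress bar from PROCESSED_PCT (integer 0-100).
if [[ "$PROCESSED_PCT" =~ ^[0-9]+$ ]]; then
  FILLED=$((PROCESSED_PCT / 5))   # number of '=' characters out of 20
  BAR="["
  for ((i = 0; i < 20; i++)); do
    if ((i < FILLED)); then BAR+="="; else BAR+=" "; fi
  done
  BAR+="] ${PROCESSED_PCT}%"
else
  BAR="[                    ] N/A"   # PROCESSED_PCT was "N/A"
fi
```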

```
--=={ stats }==--

Knowledge Graph
===============
{vocabulary.note_plural}: [NOTE_COUNT]
Connections: [LINK_COUNT] (avg [AVG_LINKS] per {vocabulary.note})
{vocabulary.topic_map_plural}: [MOC_COUNT] (covering [COVERAGE]% of {vocabulary.note_plural})
Topics: [TOPIC_COUNT]

Health
======
Orphans: [ORPHAN_COUNT]
Dangling: [DANGLING_COUNT]
Schema: [COMPLIANCE]% compliant

Pipeline
========
Processed: [==============      ] [PROCESSED_PCT]%
Inbox: [INBOX_COUNT] items
Queue: [QUEUE_PENDING] pending tasks

Growth
======
This week: +[THIS_WEEK_NOTES] {vocabulary.note_plural}, +[THIS_WEEK_LINKS] connections
Graph density: [DENSITY]

System
======
Self space: [SELF_STATUS]
Methodology: [METHODOLOGY_COUNT] learned patterns
Observations: [OBS_PENDING] pending
Tensions: [TENSION_PENDING] open
Sessions: [SESSION_COUNT] captured

Generated by Ars Contexta v1.6
```

### Interpretation Notes

After the stats block, add brief interpretation notes for any notable findings:

| Condition | Note |
|-----------|------|
| ORPHAN_COUNT > 0 | "[N] orphan {vocabulary.note_plural}; run `/graph health` for details" |
| DANGLING_COUNT > 0 | "[N] dangling links; run `/graph health` to identify broken links" |
| COMPLIANCE < 90 | "Schema compliance below 90%; some {vocabulary.note_plural} are missing required fields" |
| OBS_PENDING >= 10 | "[N] pending observations; consider running /{vocabulary.rethink}" |
| TENSION_PENDING >= 5 | "[N] open tensions; consider running /{vocabulary.rethink}" |
| DENSITY < 0.02 | "Graph density is low; connections are thin. Run /{vocabulary.cmd_reflect} to strengthen the network" |
| PROCESSED_PCT < 50 | "More content in inbox than in {vocabulary.notes}/; consider processing the backlog" |
| THIS_WEEK_NOTES == 0 | "No new {vocabulary.note_plural} this week" |

Only show interpretation notes when conditions are notable. A healthy vault gets just the stats, no warnings.
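
The gating can be done by inspection, but here is a rough bash sketch of the same logic, showing only the first two conditions; `NOTES` is a scratch variable introduced for the sketch, not one of the metrics above:

```bash
# Collect interpretation notes only for conditions that actually trigger.
NOTES=""
if [[ "$ORPHAN_COUNT" =~ ^[0-9]+$ ]] && ((ORPHAN_COUNT > 0)); then
  NOTES+="${ORPHAN_COUNT} orphan notes; run /graph health for details"$'\n'
fi
if [[ "$COMPLIANCE" =~ ^[0-9]+$ ]] && ((COMPLIANCE < 90)); then
  NOTES+="Schema compliance below 90%; some notes are missing required fields"$'\n'
fi
# ...remaining conditions follow the same pattern...
[[ -n "$NOTES" ]] && printf '%s' "$NOTES"   # a healthy vault prints nothing
```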

---

## Step 3: Shareable Format (--share flag)

If invoked with `--share`, output a compact markdown block suitable for sharing on social media or in documentation:

```markdown
## My Knowledge Graph

- **[NOTE_COUNT]** {vocabulary.note_plural} with **[LINK_COUNT]** connections (avg [AVG_LINKS] per {vocabulary.note})
- **[MOC_COUNT]** {vocabulary.topic_map_plural} covering [COVERAGE]% of {vocabulary.note_plural}
- Schema compliance: [COMPLIANCE]%
- This week: +[THIS_WEEK_NOTES] {vocabulary.note_plural}, +[THIS_WEEK_LINKS] connections
- Graph density: [DENSITY]

*Built with [Ars Contexta](https://github.com/arscontexta) v1.6*
```

The shareable format:

- Omits health warnings (positive framing for sharing)
- Omits pipeline state (internal detail)
- Omits system metrics (internal detail)
- Includes only growth-positive metrics
- Always includes the Ars Contexta attribution line
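
If scripting the assembly helps, a heredoc sketch built from the Step 1 variables could look like this (vocabulary terms are hard-coded as "notes"/"MOCs" here for illustration only):

```bash
# Emit the shareable block from already-computed variables.
cat <<EOF
## My Knowledge Graph

- **${NOTE_COUNT}** notes with **${LINK_COUNT}** connections (avg ${AVG_LINKS} per note)
- **${MOC_COUNT}** MOCs covering ${COVERAGE}% of notes
- Schema compliance: ${COMPLIANCE}%
- This week: +${THIS_WEEK_NOTES} notes, +${THIS_WEEK_LINKS} connections
- Graph density: ${DENSITY}

*Built with [Ars Contexta](https://github.com/arscontexta) v1.6*
EOF
```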

---

## Step 4: Trend Analysis (when history exists)

If previous /stats runs are logged in `ops/stats-history.yaml` (or similar), compare current metrics against the last snapshot:

```
Trend (vs last check):
{vocabulary.note_plural}: [N] (+[delta] since [date])
Connections: [N] (+[delta])
Density: [N] ([up/down/stable])
Orphans: [N] ([improved/worsened/stable])
```

If no history exists, skip trend analysis. Do NOT create the history file; that is /health's responsibility.
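
A minimal comparison sketch, assuming the latest snapshot in `ops/stats-history.yaml` exposes flat `notes:` and `connections:` fields; the real schema belongs to /health, so adapt the field names to whatever it actually writes:

```bash
# Pull the most recent snapshot values and compute deltas against Step 1 counts.
HISTORY="ops/stats-history.yaml"
if [[ -f "$HISTORY" ]]; then
  PREV_NOTES=$(grep -E '^[[:space:]]*notes:' "$HISTORY" | tail -1 | awk '{print $2}')
  PREV_LINKS=$(grep -E '^[[:space:]]*connections:' "$HISTORY" | tail -1 | awk '{print $2}')
  [[ "$PREV_NOTES" =~ ^[0-9]+$ ]] && NOTE_DELTA=$((NOTE_COUNT - PREV_NOTES))
  [[ "$PREV_LINKS" =~ ^[0-9]+$ ]] && LINK_DELTA=$((LINK_COUNT - PREV_LINKS))
fi
```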

---

## Edge Cases

### Empty Vault (0 notes)

Show zeros gracefully:
```
--=={ stats }==--

Your knowledge graph is new. Start capturing to see it grow.

Knowledge Graph
===============
{vocabulary.note_plural}: 0
Connections: 0
{vocabulary.topic_map_plural}: 0
Topics: 0

Generated by Ars Contexta v1.6
```

Do not show health, pipeline, growth, or system sections for an empty vault; they would all be zeros or N/A.

### No Queue System

Skip the Pipeline section entirely. Do not show an error.

### No Self Space

Show "disabled" for the self space line. Do not show an error.

### No ops/derivation-manifest.md

Use universal vocabulary (notes, MOCs, etc.). All metrics work identically.

### Very Large Vault (500+ notes)

The orphan and MOC coverage checks may be slow for large vaults. If {vocabulary.notes}/ has >200 files:

1. Run orphan detection with a simpler heuristic (check only for presence in any MOC, not a full backlink scan; see the sketch after this list)
2. Note: "Metrics approximate for large vault. Run /graph health for precise analysis."

### Platform-Specific Date Commands

macOS uses `date -v-7d`, Linux uses `date -d '7 days ago'`. The script tries both. If neither works, report "?" for growth metrics instead of failing.
|