@soleri/forge 5.14.3 → 5.14.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,122 @@
+ ---
+ name: agent-dev
+ description: >
+ Use when extending the agent itself — adding facades, tools, vault operations,
+ brain features, new skills, or modifying agent internals. Triggers on "add a facade",
+ "new tool", "extend vault", "add brain feature", "new skill", "add operation",
+ "extend agent", or when the work target is the agent's own codebase rather than
+ a project the agent assists with. Enforces vault-first knowledge gathering before
+ any code reading or planning.
+ ---
+
+ # Agent Dev — Vault-First Internal Development
+
+ Develop the agent's own internals with the vault as the primary source of truth. The vault knows more about the agent than any code scan or model training data. Always search the vault first, extract maximum context, and only then touch code.
+
+ ## When to Use
+
+ Any time the work target is the agent's own codebase: adding tools, extending facades, modifying vault operations, brain features, skills, or transport. Not for projects that merely *use* the agent.
+
+ ## Core Principle
+
+ **Vault first. Before code. Before training data. Always.**
+
+ The vault is the authoritative source for how the agent works. Do not rely on general knowledge from training data — it is outdated and lacks project-specific decisions. Do not scan the codebase to understand architecture — the vault already has it.
+
+ ## Orchestration Sequence
+
+ ### Step 1: Search the Vault (MANDATORY — before anything else)
+
+ Before reading any source file, before making any plan, before offering any advice:
+
+ ```
+ YOUR_AGENT_core op:search_vault_intelligent
+ params: { query: "<description of planned work>", options: { intent: "pattern" } }
+ ```
+
+ Search again with architecture-specific terms: the facade name, tool name, or subsystem being modified.
+
+ ```
+ YOUR_AGENT_core op:query_vault_knowledge
+ params: { type: "workflow", category: "<relevant category>" }
+ ```
+
+ If initial results are sparse, search again with broader terms — synonyms, related subsystem names, parent concepts. Exhaust the vault before moving on.
+
+ Review all results. Extract file paths, module names, function references, conventions, and constraints. These become the foundation for every step that follows.
+
+ ### Step 2: Check Brain for Proven Patterns
+
+ ```
+ YOUR_AGENT_core op:strengths
+ params: { days: 30, minStrength: 60 }
+ ```
+
+ ```
+ YOUR_AGENT_core op:recommend
+ params: { projectPath: "." }
+ ```
+
+ Check if the brain has learned anything relevant from recent sessions.
+
+ ### Step 3: Targeted Code Reading (Only What Vault Pointed To)
+
+ By now the vault has provided architecture context, file paths, and module references. Only read code when the vault describes the subsystem but lacks implementation detail (e.g., method signatures, exact line numbers).
+
+ **Read only what the vault pointed to.** Open the specific files referenced in vault results — not the surrounding codebase, not the parent directory, not "let me explore the project structure."
+
+ **Fallback: Codebase scan.** Only when vault search returned zero relevant results for the subsystem — meaning the vault genuinely has no knowledge about it — fall back to `Grep` with targeted terms. This is the last resort, not the default.
+
+ ### Step 4: Plan with Vault Context
+
+ Create the implementation plan referencing vault findings explicitly:
+
+ - Which patterns apply (cite vault entry titles)
+ - Which anti-patterns to avoid (cite the specific anti-pattern)
+ - Which conventions to follow (naming, facade structure, tool registration)
+
+ Every plan must trace its decisions back to vault knowledge. If a decision has no vault backing, flag it as a new architectural choice that should be captured after implementation (Step 7).
+
+ ### Step 5: Implement
+
+ Follow the plan. Key conventions for agent internals:
+
+ - **Facades**: Thin routing layer — delegate to domain modules. No business logic in facades.
+ - **Tools**: Follow `op:operation_name` naming, return structured responses.
+ - **Vault writes**: All writes go through the vault intelligence layer.
+ - **Tests**: Colocated test files. Run with vitest.
+ - **Build**: Must compile without errors before considering done.
+
+ ### Step 6: Validate and Self-Correct
+
+ Run the relevant test suite. Rebuild — must complete without errors.
+
+ **Self-correction loop:** If tests fail or build breaks, do NOT ask the user what to do. Read the error, trace the cause in the code just written, fix it, and re-run. Repeat until green. The agent owns the code it wrote — if something fails, the agent fixes its own implementation. Only escalate to the user when the failure is outside the agent's control (missing infrastructure, permissions, unclear requirements).
+
+ ### Step 7: Capture What Was Learned
+
+ If this work revealed new architectural knowledge, a useful pattern, or a surprising anti-pattern:
+
+ ```
+ YOUR_AGENT_core op:capture_knowledge
+ params: {
+ title: "<what was learned>",
+ description: "<the pattern or anti-pattern>",
+ type: "pattern",
+ tags: ["<relevant-tags>"]
+ }
+ ```
+
+ This ensures future sessions benefit from today's discovery — making the vault smarter for the next developer.
+
+ ## Anti-Patterns to Avoid
+
+ - **Code-first exploration**: Reading source files before searching the vault. The vault already has the architecture — scanning code is slower and gives less context.
+ - **Training-data advice**: Offering general guidance from model training data instead of searching the vault for project-specific knowledge.
+ - **Skipping vault search**: The vault contains all architecture knowledge. Not searching it means reinventing knowledge that already exists.
+ - **Planning without vault context**: Plans created without vault knowledge miss conventions, duplicate existing patterns, or violate architectural boundaries.
+ - **Broad codebase scanning**: Exploring directories and reading files "to understand the project" instead of using vault results as a targeted map.
+
+ ## Exit Criteria
+
+ Development is complete when: vault was searched exhaustively first (Step 1), implementation follows discovered patterns, tests pass, build succeeds, and any new learning is captured back to vault (Step 7).
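
The Step 5 conventions in the skill above (thin facades, `op:operation_name` naming, structured responses) could be enforced with a small registration helper along these lines. This is an illustrative sketch: the `registerOp` function, the result shape, and the type names are assumptions, not the package's actual API.

```typescript
// Sketch of the tool conventions from Step 5 of agent-dev.
// All names here are hypothetical; the real registration lives in the agent core.
type OpResult = { ok: boolean; data?: unknown; error?: string };
type OpHandler = (params: Record<string, unknown>) => OpResult;

const registry = new Map<string, OpHandler>();

// Facades stay thin: registration delegates to a domain-module handler
// and enforces the op:snake_case naming convention.
function registerOp(name: string, handler: OpHandler): void {
  if (!/^[a-z][a-z0-9_]*$/.test(name)) {
    throw new Error(`op name must be snake_case: ${name}`);
  }
  registry.set(`op:${name}`, handler);
}

registerOp("capture_knowledge", (params) => ({ ok: true, data: params }));

const result = registry.get("op:capture_knowledge")!({ title: "example" });
```

The naming check is the kind of guard a facade layer can apply uniformly, so individual tools never diverge from the `op:` convention.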
@@ -0,0 +1,167 @@
+ ---
+ name: agent-guide
+ description: >
+ Use when the user asks "what can you do", "help me", "how do I use this",
+ "what features do you have", "what tools are available", "how does this work",
+ "show me your capabilities", "what are you", "who are you", or any question
+ about the agent's identity, capabilities, available tools, or how to use them.
+ Also triggers proactively when the user attempts something manually that the
+ agent has a dedicated tool for — guide them to the right tool instead of
+ letting them use raw prompts for tasks the agent was built to handle.
+ ---
+
+ # Agent Guide — Self-Knowledge & Tool Advocacy
+
+ Every agent must know itself completely — its identity, capabilities, tools, and workflows — and actively guide users toward the right tools for every task.
+
+ ## Core Principle
+
+ **Never let a user struggle with a raw prompt when a purpose-built tool exists.** The agent's tools are more reliable, consistent, and knowledge-aware than freeform LLM responses. Guiding users to tools is not pushy — it is the agent's primary job.
+
+ ## When to Use
+
+ ### Reactive (User Asks)
+
+ - "What can you do?" / "What are your capabilities?"
+ - "How do I search for X?" / "How do I capture knowledge?"
+ - "What tools do you have?" / "Show me your features"
+ - "Who are you?" / "What is this agent?"
+ - "Help" / "I'm stuck" / "How does this work?"
+
+ ### Proactive (User Does Something the Hard Way)
+
+ When the user asks Claude directly for something the agent has a tool for. Detect this and suggest the tool. Examples:
+
+ | User Says | They Probably Want | Suggest Instead |
+ |-----------|-------------------|-----------------|
+ | "Remember this pattern..." | Manual note-taking | `op:capture_knowledge` — persists to vault with tags, searchable forever |
+ | "Search for patterns about..." | Raw LLM recall | `op:search_vault_intelligent` — searches actual vault with FTS5 + embeddings |
+ | "Let me plan this out..." | Freeform planning | `op:plan` — structured plan with vault context, brain recommendations, grading |
+ | "Check if this is working" | Manual verification | `op:admin_health` — comprehensive system health check |
+ | "What did we learn last time?" | Memory recall | `op:memory_search` — searches session and cross-project memory |
+ | "Find duplicates in..." | Manual comparison | `op:curator_detect_duplicates` — automated dedup with similarity scoring |
+ | "Is this code good?" | Raw review | `op:validate_component_code` — structured validation against known patterns |
+ | "Let me debug this..." | Ad-hoc debugging | `op:search_vault_intelligent` — check vault for known bugs and anti-patterns first |
+ | "Summarize what we did" | Manual summary | `op:session_capture` — structured session capture with knowledge extraction |
+ | "What patterns work for X?" | Training data recall | `op:strengths` — brain-learned patterns with strength scores from real usage |
+ | "Clean up the knowledge base" | Manual curation | `op:curator_consolidate` — automated dedup, grooming, contradiction resolution |
+ | "How should I approach this?" | Generic advice | `op:recommend` — brain recommendations based on similar past work |
+
+ ## Capability Discovery
+
+ When a user asks about capabilities, use this sequence:
+
+ ### Step 1: Identity
+
+ ```
+ YOUR_AGENT_core op:activate
+ params: { projectPath: "." }
+ ```
+
+ This returns the agent's persona: name, role, description, tone, principles, and domains. Present the identity first — who the agent is and what it specializes in.
+
+ ### Step 2: Health & Status
+
+ ```
+ YOUR_AGENT_core op:admin_health
+ ```
+
+ Shows what subsystems are active: vault (how many entries), brain (vocabulary size), LLM availability, cognee status. This tells the user what the agent currently has to work with.
+
+ ### Step 3: Available Tools
+
+ ```
+ YOUR_AGENT_core op:admin_tool_list
+ ```
+
+ Lists all facades and operations available. Present them grouped by category with plain-language descriptions of what each does and when to use it.
+
+ ### Step 4: Present Capabilities
+
+ Organize the response by what the user can DO, not by technical facade names:
+
+ **Knowledge & Memory**
+ - Search the vault for patterns, anti-patterns, and architectural decisions
+ - Capture new knowledge from the current session
+ - Search across sessions and projects for relevant context
+ - Curate: deduplicate, groom, resolve contradictions
+
+ **Planning & Execution**
+ - Create structured plans with vault context and brain recommendations
+ - Split plans into tasks with complexity estimates
+ - Track execution with drift detection
+ - Complete with knowledge capture and session recording
+
+ **Intelligence & Learning**
+ - Brain learns from every session — patterns get stronger with use
+ - Recommendations based on similar past work
+ - Strength tracking: which patterns are proven vs experimental
+ - Feedback loop: brain improves based on what works
+
+ **Quality & Validation**
+ - Health checks across all subsystems
+ - Iterative validation loops with configurable targets
+ - Governance: policies, proposals, quotas
+ - Code validation against known patterns
+
+ **Identity & Control**
+ - Persona activation and deactivation
+ - Intent routing: the agent classifies what you want and routes to the right workflow
+ - Project registration and cross-project linking
+
+ ## Tool Advocacy Patterns
+
+ When you detect the user doing something manually, use this format:
+
+ > I notice you are [what user is doing]. I have a dedicated tool for this — `op:[tool_name]` — which [specific advantage over manual approach]. Want me to use it?
+
+ **Specific advantages to highlight:**
+
+ | Tool | Advantage Over Manual |
+ |------|----------------------|
+ | `search_vault_intelligent` | Searches actual indexed knowledge, not LLM training data. Finds project-specific patterns. |
+ | `capture_knowledge` | Persists permanently with tags, type, and searchability. Survives sessions. |
+ | `plan` (orchestrate) | Consults vault + brain before planning. Generates graded plans with acceptance criteria. |
+ | `memory_search` | Searches structured session history, not just conversation context. Works cross-project. |
+ | `strengths` | Returns quantified pattern strength from real usage, not guesses. |
+ | `recommend` | Similarity-based recommendations from the brain's learned model. |
+ | `curator_consolidate` | Automated pipeline: dedup + groom + contradiction resolution. |
+ | `admin_health` | Real-time status of every subsystem, not assumptions. |
+ | `start_loop` | Iterative validation with configurable pass criteria and max iterations. |
+
+ **Do NOT advocate tools when:**
+
+ - The user is explicitly asking for a conversational response
+ - The user already knows the tools and is choosing not to use them
+ - The task is genuinely better handled conversationally (explaining concepts, discussing options)
+ - The user says "just tell me" or "don't use tools"
+
+ ## Common Questions
+
+ ### "What makes you different from regular Claude?"
+
+ You have persistent knowledge (vault), learned patterns (brain), structured planning with grading, iterative validation loops, and domain-specific intelligence. Regular Claude starts fresh every conversation — this agent accumulates knowledge and gets smarter over time.
+
+ ### "How do I get the most out of you?"
+
+ 1. **Use the vault** — search before deciding, capture after learning
+ 2. **Use planning** — structured plans beat ad-hoc work for anything non-trivial
+ 3. **Trust the brain** — pattern recommendations come from real usage data
+ 4. **Capture everything** — every bug fix, every pattern, every anti-pattern. The vault grows smarter with use.
+ 5. **Use loops for quality** — iterative validation catches issues that single-pass work misses
+
+ ### "What domains do you know about?"
+
+ Call `op:activate` to discover configured domains. Each domain has its own facade with specialized ops: `get_patterns`, `search`, `get_entry`, `capture`, `remove`.
+
+ ### "How do I add new capabilities?"
+
+ Extensions in `src/extensions/` can add new ops, facades, middleware, and hooks. Domain packs add domain-specific knowledge and validation.
+
+ ## Anti-Patterns
+
+ - **Staying silent when the user does it manually** — if a tool exists, mention it. Once. Not repeatedly.
+ - **Being pushy** — suggest the tool once per task. If the user declines, respect that.
+ - **Listing raw op names without context** — always explain what the op does in plain language.
+ - **Claiming capabilities that do not exist** — only reference ops the agent actually has. When unsure, call `op:admin_tool_list` first.
+ - **Dumping the entire tool catalog** — answer the specific question. Show relevant tools, not all tools.
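
The proactive-detection table in the skill above can be sketched as a phrase-to-op lookup. The phrases and op names mirror a few rows of the table; the matching function, its name, and the regex approach are hypothetical, not the agent's actual intent router.

```typescript
// Sketch of proactive tool advocacy from agent-guide.
// An abbreviated subset of the table; matching logic is an assumption.
const manualPatterns: Array<[RegExp, string]> = [
  [/remember this/i, "capture_knowledge"],
  [/search for patterns/i, "search_vault_intelligent"],
  [/what did we learn/i, "memory_search"],
  [/find duplicates/i, "curator_detect_duplicates"],
];

// Returns the op to suggest, or null when the request is conversational.
function suggestToolFor(userText: string): string | null {
  for (const [pattern, op] of manualPatterns) {
    if (pattern.test(userText)) return `op:${op}`;
  }
  return null; // nothing to advocate; answer conversationally
}
```

A real router would classify intent rather than keyword-match, but the shape of the decision is the same: detect the manual path, name the dedicated tool once, then respect the user's choice.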
@@ -0,0 +1,66 @@
+ ---
+ name: agent-persona
+ description: >
+ Use when the user activates the agent's persona via its greeting phrase, or says
+ "activate persona", "be yourself", "stay in character", or any activation phrase
+ defined in the agent's persona configuration. Reinforces character persistence
+ through the session and survives context compaction.
+ ---
+
+ # Agent Persona — Stay in Character
+
+ This skill reinforces persona persistence. The MCP activation loads the runtime payload — this skill ensures the character sticks across the full session, including after context compaction.
+
+ ## How It Works
+
+ Every agent has a persona defined in `src/identity/persona.ts` (or `src/activation/persona.ts` in older agents). This file contains:
+
+ - **name** — the agent's display name
+ - **role** — what the agent does
+ - **tone** — `precise`, `mentor`, or `pragmatic`
+ - **greeting** — the activation response
+ - **principles** — core values that guide behavior
+
+ ## Activation
+
+ When the user triggers activation (greeting phrase or explicit request):
+
+ ```
+ YOUR_AGENT_core op:activate
+ params: { projectPath: "." }
+ ```
+
+ The activation response contains the full persona payload. Adopt it immediately.
+
+ ## Rules
+
+ 1. **Stay in character for EVERY response** until the user explicitly deactivates
+ 2. **Technical accuracy is the priority** — persona is the wrapper, not a replacement for correctness
+ 3. **Tone consistency** — match the configured tone (`precise` = concise and exact, `mentor` = educational and encouraging, `pragmatic` = direct and practical)
+ 4. If character drifts after context compaction, the persona information in the compacted summary should restore it — follow it
+
+ ## Context Compaction Survival
+
+ Long sessions trigger context compaction. To survive:
+
+ - The persona activation state is included in compaction summaries
+ - After compaction, check if persona was active and re-adopt the character
+ - Never break character just because the conversation was compacted
+
+ ## Deactivation
+
+ When the user says "deactivate", "stop persona", "be normal", or uses the agent's deactivation phrase:
+
+ ```
+ YOUR_AGENT_core op:activate
+ params: { deactivate: true }
+ ```
+
+ Return to neutral assistant mode.
+
+ ## Anti-Patterns
+
+ - **Dropping character mid-session** — if activated, stay activated
+ - **Over-persona, under-substance** — character adds flavor, not replaces technical depth
+ - **Forcing persona on unwilling users** — only activate when explicitly triggered
+ - **Ignoring tone setting** — a `precise` agent should not use flowery language; a `mentor` agent should not be terse
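
The persona fields listed in the skill above suggest a configuration shape roughly like the following. The `Persona` interface and the example values are illustrative assumptions; the real type lives in `src/identity/persona.ts` and may differ.

```typescript
// Sketch of the persona configuration described in agent-persona.
// Field names follow the skill's bullet list; the exported type is assumed.
type Tone = "precise" | "mentor" | "pragmatic";

interface Persona {
  name: string;        // display name
  role: string;        // what the agent does
  tone: Tone;          // one of the three configured tones
  greeting: string;    // the activation response
  principles: string[]; // core values that guide behavior
}

// Hypothetical example instance, not shipped configuration.
const example: Persona = {
  name: "Forge",
  role: "internal development assistant",
  tone: "precise",
  greeting: "Ready.",
  principles: ["vault first", "technical accuracy over flavor"],
};
```

Modeling `tone` as a closed union makes the rule "a `precise` agent should not use flowery language" checkable at the type level: an unknown tone simply cannot be configured.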
@@ -0,0 +1,107 @@
+ ---
+ name: vault-curate
+ description: >
+ Use when the user says "clean vault", "deduplicate", "groom knowledge",
+ "consolidate vault", "vault maintenance", "find duplicates", "merge patterns",
+ "check contradictions", "vault health", or wants to maintain, clean, reorganize,
+ or improve the quality of the agent's knowledge base.
+ ---
+
+ # Vault Curate — Knowledge Maintenance
+
+ Maintain vault quality through deduplication, grooming, contradiction detection, and consolidation. A well-curated vault produces better search results and brain recommendations.
+
+ ## When to Use
+
+ Periodically (weekly or after heavy capture sessions), when search quality degrades, when vault health shows warnings, or when the user explicitly requests maintenance.
+
+ ## Orchestration Sequence
+
+ ### Step 1: Health Assessment
+
+ ```
+ YOUR_AGENT_core op:knowledge_health
+ ```
+
+ ```
+ YOUR_AGENT_core op:get_vault_analytics
+ ```
+
+ Present the health summary to the user before proceeding: total entries, quality scores, staleness, coverage gaps.
+
+ ### Step 2: Detect Duplicates
+
+ ```
+ YOUR_AGENT_core op:curator_detect_duplicates
+ ```
+
+ This finds entries with overlapping titles, descriptions, or content. Review the duplicate pairs — some may be intentional (different contexts) while others are true duplicates.
+
+ For true duplicates:
+
+ ```
+ YOUR_AGENT_core op:merge_patterns
+ params: { patternIds: ["<id1>", "<id2>"] }
+ ```
+
+ Preserve the best content from each.
+
+ ### Step 3: Find Contradictions
+
+ ```
+ YOUR_AGENT_core op:curator_contradictions
+ ```
+
+ Contradictions erode trust in vault search results. For each contradiction: decide which entry is correct (check dates, context, evidence), then archive or update the incorrect one.
+
+ ### Step 4: Groom Entries
+
+ ```
+ YOUR_AGENT_core op:curator_groom_all
+ ```
+
+ Runs tag enrichment and metadata cleanup across all entries. This improves searchability and categorization.
+
+ For targeted grooming of specific entries:
+
+ ```
+ YOUR_AGENT_core op:curator_groom
+ params: { entryIds: ["<id>"], tags: ["<tag>"] }
+ ```
+
+ ### Step 5: GPT Enrichment (Optional)
+
+ ```
+ YOUR_AGENT_core op:curator_gpt_enrich
+ ```
+
+ Adds AI-generated metadata to entries that lack descriptions, examples, or context. Fills in gaps without changing the core content.
+
+ ### Step 6: Full Consolidation
+
+ ```
+ YOUR_AGENT_core op:curator_consolidate
+ ```
+
+ Runs the complete pipeline: dedup + archive stale entries + resolve contradictions. This is the heavy-duty cleanup.
+
+ ### Step 7: Knowledge Reorganization
+
+ ```
+ YOUR_AGENT_core op:knowledge_reorganize
+ params: { mode: "preview" }
+ ```
+
+ Preview first, then run again with `mode: "apply"` if the preview looks good.
+
+ ### Step 8: Verify Results
+
+ ```
+ YOUR_AGENT_core op:knowledge_health
+ ```
+
+ Compare with Step 1 metrics. Vault health should improve: fewer duplicates, no contradictions, better coverage.
+
+ ## Exit Criteria
+
+ Curation is complete when: duplicates merged, contradictions resolved, entries groomed, and health metrics improved compared to Step 1 baseline.
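
The preview-then-apply flow in Step 7 of the skill above amounts to a small guard: run in preview mode, show the proposed changes, and only re-run in apply mode after confirmation. The `reorganize` wrapper and its `call`/`confirm` parameters below are hypothetical; only the op name and the two mode values come from the skill text.

```typescript
// Sketch of the preview/apply guard from vault-curate Step 7.
// `call` stands in for whatever client invokes YOUR_AGENT_core ops.
type Mode = "preview" | "apply";

function reorganize(
  call: (op: string, params: { mode: Mode }) => { changes: number },
  confirm: (changes: number) => boolean
): boolean {
  const preview = call("knowledge_reorganize", { mode: "preview" });
  if (!confirm(preview.changes)) return false; // user rejected the preview
  call("knowledge_reorganize", { mode: "apply" });
  return true;
}
```

The point of the guard is that apply mode is unreachable without a preview having been shown first, which matches the skill's instruction to "preview first".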
@@ -5,6 +5,9 @@ const __dirname = dirname(fileURLToPath(import.meta.url));
  const SKILLS_DIR = join(__dirname, '..', 'skills');
  /** Skills that use YOUR_AGENT_core placeholder and need agent-specific substitution. */
  const AGENT_SPECIFIC_SKILLS = new Set([
+ 'agent-dev',
+ 'agent-guide',
+ 'agent-persona',
  'brain-debrief',
  'brainstorming',
  'code-patrol',
@@ -20,6 +23,7 @@ const AGENT_SPECIFIC_SKILLS = new Set([
  'systematic-debugging',
  'test-driven-development',
  'vault-capture',
+ 'vault-curate',
  'vault-navigator',
  'verification-before-completion',
  'writing-plans',
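
The `AGENT_SPECIFIC_SKILLS` additions above register the four new skills for placeholder substitution at scaffold time. A sketch of the apparent behavior, with an abbreviated skill set; the function name and signature are assumptions, and the real implementation lives in `src/templates/skills.ts`.

```typescript
// Sketch of YOUR_AGENT_core substitution implied by AGENT_SPECIFIC_SKILLS.
// Only the four newly added skills are listed here for brevity.
const AGENT_SPECIFIC_SKILLS = new Set([
  "agent-dev",
  "agent-guide",
  "agent-persona",
  "vault-curate",
]);

// Skills outside the set pass through untouched; skills inside get the
// scaffolded agent's id substituted for the YOUR_AGENT placeholder.
function renderSkill(skillName: string, content: string, agentId: string): string {
  if (!AGENT_SPECIFIC_SKILLS.has(skillName)) return content;
  return content.replace(/YOUR_AGENT_core/g, `${agentId}_core`);
}
```

This is why the new SKILL.md files can write `YOUR_AGENT_core op:activate` generically: each scaffolded agent sees its own tool name after substitution.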
@@ -1 +1 @@
- {"version":3,"file":"skills.js","sourceRoot":"","sources":["../../src/templates/skills.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,YAAY,EAAE,WAAW,EAAE,MAAM,SAAS,CAAC;AACpD,OAAO,EAAE,IAAI,EAAE,OAAO,EAAE,MAAM,WAAW,CAAC;AAC1C,OAAO,EAAE,aAAa,EAAE,MAAM,UAAU,CAAC;AAGzC,MAAM,SAAS,GAAG,OAAO,CAAC,aAAa,CAAC,MAAM,CAAC,IAAI,CAAC,GAAG,CAAC,CAAC,CAAC;AAC1D,MAAM,UAAU,GAAG,IAAI,CAAC,SAAS,EAAE,IAAI,EAAE,QAAQ,CAAC,CAAC;AAEnD,wFAAwF;AACxF,MAAM,qBAAqB,GAAG,IAAI,GAAG,CAAC;IACpC,eAAe;IACf,eAAe;IACf,aAAa;IACb,gBAAgB;IAChB,kBAAkB;IAClB,iBAAiB;IACjB,eAAe;IACf,cAAc;IACd,mBAAmB;IACnB,YAAY;IACZ,eAAe;IACf,gBAAgB;IAChB,sBAAsB;IACtB,yBAAyB;IACzB,eAAe;IACf,iBAAiB;IACjB,gCAAgC;IAChC,eAAe;CAChB,CAAC,CAAC;AAEH;;;;;;;;GAQG;AACH,MAAM,UAAU,cAAc,CAAC,MAAmB;IAChD,MAAM,KAAK,GAA4B,EAAE,CAAC;IAC1C,IAAI,UAAoB,CAAC;IAEzB,IAAI,CAAC;QACH,UAAU,GAAG,WAAW,CAAC,UAAU,CAAC,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,QAAQ,CAAC,KAAK,CAAC,CAAC,CAAC;IACxE,CAAC;IAAC,MAAM,CAAC;QACP,OAAO,KAAK,CAAC;IACf,CAAC;IAED,2DAA2D;IAC3D,gEAAgE;IAChE,MAAM,aAAa,GAAG,MAAM,CAAC,MAAM,CAAC,CAAC,CAAC,IAAI,GAAG,CAAC,MAAM,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC,IAAI,CAAC,CAAC,uCAAuC;IAE5G,KAAK,MAAM,IAAI,IAAI,UAAU,EAAE,CAAC;QAC9B,MAAM,SAAS,GAAG,IAAI,CAAC,OAAO,CAAC,KAAK,EAAE,EAAE,CAAC,CAAC;QAE1C,IAAI,aAAa,IAAI,CAAC,aAAa,CAAC,GAAG,CAAC,SAAS,CAAC,EAAE,CAAC;YACnD,SAAS;QACX,CAAC;QAED,IAAI,OAAO,GAAG,YAAY,CAAC,IAAI,CAAC,UAAU,EAAE,IAAI,CAAC,EAAE,OAAO,CAAC,CAAC;QAE5D,IAAI,qBAAqB,CAAC,GAAG,CAAC,SAAS,CAAC,EAAE,CAAC;YACzC,OAAO,GAAG,OAAO,CAAC,OAAO,CAAC,kBAAkB,EAAE,GAAG,MAAM,CAAC,EAAE,OAAO,CAAC,CAAC;QACrE,CAAC;QAED,KAAK,CAAC,IAAI,CAAC,CAAC,UAAU,SAAS,WAAW,EAAE,OAAO,CAAC,CAAC,CAAC;IACxD,CAAC;IAED,OAAO,KAAK,CAAC;AACf,CAAC"}
+ {"version":3,"file":"skills.js","sourceRoot":"","sources":["../../src/templates/skills.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,YAAY,EAAE,WAAW,EAAE,MAAM,SAAS,CAAC;AACpD,OAAO,EAAE,IAAI,EAAE,OAAO,EAAE,MAAM,WAAW,CAAC;AAC1C,OAAO,EAAE,aAAa,EAAE,MAAM,UAAU,CAAC;AAGzC,MAAM,SAAS,GAAG,OAAO,CAAC,aAAa,CAAC,MAAM,CAAC,IAAI,CAAC,GAAG,CAAC,CAAC,CAAC;AAC1D,MAAM,UAAU,GAAG,IAAI,CAAC,SAAS,EAAE,IAAI,EAAE,QAAQ,CAAC,CAAC;AAEnD,wFAAwF;AACxF,MAAM,qBAAqB,GAAG,IAAI,GAAG,CAAC;IACpC,WAAW;IACX,aAAa;IACb,eAAe;IACf,eAAe;IACf,eAAe;IACf,aAAa;IACb,gBAAgB;IAChB,kBAAkB;IAClB,iBAAiB;IACjB,eAAe;IACf,cAAc;IACd,mBAAmB;IACnB,YAAY;IACZ,eAAe;IACf,gBAAgB;IAChB,sBAAsB;IACtB,yBAAyB;IACzB,eAAe;IACf,cAAc;IACd,iBAAiB;IACjB,gCAAgC;IAChC,eAAe;CAChB,CAAC,CAAC;AAEH;;;;;;;;GAQG;AACH,MAAM,UAAU,cAAc,CAAC,MAAmB;IAChD,MAAM,KAAK,GAA4B,EAAE,CAAC;IAC1C,IAAI,UAAoB,CAAC;IAEzB,IAAI,CAAC;QACH,UAAU,GAAG,WAAW,CAAC,UAAU,CAAC,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,QAAQ,CAAC,KAAK,CAAC,CAAC,CAAC;IACxE,CAAC;IAAC,MAAM,CAAC;QACP,OAAO,KAAK,CAAC;IACf,CAAC;IAED,2DAA2D;IAC3D,gEAAgE;IAChE,MAAM,aAAa,GAAG,MAAM,CAAC,MAAM,CAAC,CAAC,CAAC,IAAI,GAAG,CAAC,MAAM,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC,IAAI,CAAC,CAAC,uCAAuC;IAE5G,KAAK,MAAM,IAAI,IAAI,UAAU,EAAE,CAAC;QAC9B,MAAM,SAAS,GAAG,IAAI,CAAC,OAAO,CAAC,KAAK,EAAE,EAAE,CAAC,CAAC;QAE1C,IAAI,aAAa,IAAI,CAAC,aAAa,CAAC,GAAG,CAAC,SAAS,CAAC,EAAE,CAAC;YACnD,SAAS;QACX,CAAC;QAED,IAAI,OAAO,GAAG,YAAY,CAAC,IAAI,CAAC,UAAU,EAAE,IAAI,CAAC,EAAE,OAAO,CAAC,CAAC;QAE5D,IAAI,qBAAqB,CAAC,GAAG,CAAC,SAAS,CAAC,EAAE,CAAC;YACzC,OAAO,GAAG,OAAO,CAAC,OAAO,CAAC,kBAAkB,EAAE,GAAG,MAAM,CAAC,EAAE,OAAO,CAAC,CAAC;QACrE,CAAC;QAED,KAAK,CAAC,IAAI,CAAC,CAAC,UAAU,SAAS,WAAW,EAAE,OAAO,CAAC,CAAC,CAAC;IACxD,CAAC;IAED,OAAO,KAAK,CAAC;AACf,CAAC"}
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@soleri/forge",
- "version": "5.14.3",
+ "version": "5.14.5",
  "description": "Scaffold AI agents that learn, remember, and grow with you.",
  "keywords": [
  "agent",
@@ -262,7 +262,7 @@ describe('Scaffolder', () => {
  .filter((e) => e.isDirectory())
  .map((e) => e.name);
 
- expect(skillDirs).toHaveLength(19);
+ expect(skillDirs).toHaveLength(23);
 
  // Verify each skill dir has a SKILL.md
  for (const dir of skillDirs) {
@@ -276,6 +276,9 @@ describe('Scaffolder', () => {
  const skillDirs = readdirSync(skillsDir).sort();
 
  expect(skillDirs).toEqual([
+ 'agent-dev',
+ 'agent-guide',
+ 'agent-persona',
  'brain-debrief',
  'brainstorming',
  'code-patrol',
@@ -292,6 +295,7 @@ describe('Scaffolder', () => {
  'systematic-debugging',
  'test-driven-development',
  'vault-capture',
+ 'vault-curate',
  'vault-navigator',
  'verification-before-completion',
  'writing-plans',
@@ -0,0 +1,122 @@
1
+ ---
2
+ name: agent-dev
3
+ description: >
4
+ Use when extending the agent itself — adding facades, tools, vault operations,
5
+ brain features, new skills, or modifying agent internals. Triggers on "add a facade",
6
+ "new tool", "extend vault", "add brain feature", "new skill", "add operation",
7
+ "extend agent", or when the work target is the agent's own codebase rather than
8
+ a project the agent assists with. Enforces vault-first knowledge gathering before
9
+ any code reading or planning.
10
+ ---
11
+
12
+ # Agent Dev — Vault-First Internal Development
13
+
14
+ Develop the agent's own internals with the vault as the primary source of truth. The vault knows more about the agent than any code scan or model training data. Always search the vault first, extract maximum context, and only then touch code.
15
+
16
+ ## When to Use
17
+
18
+ Any time the work target is the agent's own codebase: adding tools, extending facades, modifying vault operations, brain features, skills, or transport. Not for projects that merely *use* the agent.
19
+
20
+ ## Core Principle
21
+
22
+ **Vault first. Before code. Before training data. Always.**
23
+
24
+ The vault is the authoritative source for how the agent works. Do not rely on general knowledge from training data — it is outdated and lacks project-specific decisions. Do not scan the codebase to understand architecture — the vault already has it.
25
+
26
+ ## Orchestration Sequence
27
+
28
+ ### Step 1: Search the Vault (MANDATORY — before anything else)
29
+
30
+ Before reading any source file, before making any plan, before offering any advice:
31
+
32
+ ```
33
+ YOUR_AGENT_core op:search_vault_intelligent
34
+ params: { query: "<description of planned work>", options: { intent: "pattern" } }
35
+ ```
36
+
37
+ Search again with architecture-specific terms: the facade name, tool name, or subsystem being modified.
38
+
39
+ ```
40
+ YOUR_AGENT_core op:query_vault_knowledge
41
+ params: { type: "workflow", category: "<relevant category>" }
42
+ ```
43
+
44
+ If initial results are sparse, search again with broader terms — synonyms, related subsystem names, parent concepts. Exhaust the vault before moving on.
+
+ Review all results. Extract file paths, module names, function references, conventions, and constraints. These become the foundation for every step that follows.
+
+ ### Step 2: Check Brain for Proven Patterns
+
+ ```
+ YOUR_AGENT_core op:strengths
+ params: { days: 30, minStrength: 60 }
+ ```
+
+ ```
+ YOUR_AGENT_core op:recommend
+ params: { projectPath: "." }
+ ```
+
+ Check if the brain has learned anything relevant from recent sessions.
+
+ ### Step 3: Targeted Code Reading (Only What Vault Pointed To)
+
+ By now the vault has provided architecture context, file paths, and module references. Only read code when the vault describes the subsystem but lacks implementation detail (e.g., method signatures, exact line numbers).
+
+ **Read only what the vault pointed to.** Open the specific files referenced in vault results — not the surrounding codebase, not the parent directory, not "let me explore the project structure."
+
+ **Fallback: Codebase scan.** Only when vault search returned zero relevant results for the subsystem — meaning the vault genuinely has no knowledge about it — fall back to `Grep` with targeted terms. This is the last resort, not the default.
+
+ ### Step 4: Plan with Vault Context
+
+ Create the implementation plan referencing vault findings explicitly:
+
+ - Which patterns apply (cite vault entry titles)
+ - Which anti-patterns to avoid (cite the specific anti-pattern)
+ - Which conventions to follow (naming, facade structure, tool registration)
+
+ Every plan must trace its decisions back to vault knowledge. If a decision has no vault backing, flag it as a new architectural choice that should be captured after implementation (Step 7).
+
+ ### Step 5: Implement
+
+ Follow the plan. Key conventions for agent internals:
+
+ - **Facades**: Thin routing layer — delegate to domain modules. No business logic in facades.
+ - **Tools**: Follow `op:operation_name` naming, return structured responses.
+ - **Vault writes**: All writes go through the vault intelligence layer.
+ - **Tests**: Colocated test files. Run with vitest.
+ - **Build**: Must compile without errors before the work is considered done.
+
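The facade and tool conventions above can be sketched in TypeScript. This is an illustrative sketch, not the agent's real API — `vaultDomain`, `handlers`, and `route` are hypothetical names:

```typescript
// Structured response shape every op returns, success or failure.
type OpResult = { ok: boolean; op: string; data?: unknown; error?: string };

// Domain module: owns the business logic (hypothetical example).
const vaultDomain = {
  search(params: { query: string }): unknown {
    return { hits: [], query: params.query }; // real implementation lives elsewhere
  },
};

// Facade: thin routing layer keyed by op:operation_name — no logic of its own.
const handlers: Record<string, (params: any) => unknown> = {
  "op:search_vault": (p) => vaultDomain.search(p),
};

function route(op: string, params: unknown): OpResult {
  const handler = handlers[op];
  if (!handler) return { ok: false, op, error: `unknown op: ${op}` };
  return { ok: true, op, data: handler(params) }; // structured, never a bare value
}
```

The point of the sketch: adding a new tool means adding a domain function and one routing entry, never adding branching logic inside the facade.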
+ ### Step 6: Validate and Self-Correct
+
+ Run the relevant test suite. Rebuild — must complete without errors.
+
+ **Self-correction loop:** If tests fail or build breaks, do NOT ask the user what to do. Read the error, trace the cause in the code just written, fix it, and re-run. Repeat until green. The agent owns the code it wrote — if something fails, the agent fixes its own implementation. Only escalate to the user when the failure is outside the agent's control (missing infrastructure, permissions, unclear requirements).
+
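The self-correction loop can be sketched as a bounded retry. `runTests` and `applyFix` are hypothetical stand-ins for the agent's real test and edit actions:

```typescript
type TestRun = { green: boolean; error?: string };

// Repeat until green, escalating only after maxIterations failures.
function selfCorrect(
  runTests: () => TestRun,
  applyFix: (error: string) => void,
  maxIterations = 5,
): { green: boolean; iterations: number } {
  for (let i = 1; i <= maxIterations; i++) {
    const result = runTests();
    if (result.green) return { green: true, iterations: i };
    applyFix(result.error ?? "unknown failure"); // fix own code, do not ask the user
  }
  return { green: false, iterations: maxIterations }; // now escalate
}
```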
+ ### Step 7: Capture What Was Learned
+
+ If this work revealed new architectural knowledge, a useful pattern, or a surprising anti-pattern:
+
+ ```
+ YOUR_AGENT_core op:capture_knowledge
+ params: {
+ title: "<what was learned>",
+ description: "<the pattern or anti-pattern>",
+ type: "pattern",
+ tags: ["<relevant-tags>"]
+ }
+ ```
+
+ This ensures future sessions benefit from today's discovery — making the vault smarter for the next developer.
+
+ ## Anti-Patterns to Avoid
+
+ - **Code-first exploration**: Reading source files before searching the vault. The vault already has the architecture — scanning code is slower and gives less context.
+ - **Training-data advice**: Offering general guidance from model training data instead of searching the vault for project-specific knowledge.
+ - **Skipping vault search**: The vault contains all architecture knowledge. Not searching it means reinventing knowledge that already exists.
+ - **Planning without vault context**: Plans created without vault knowledge miss conventions, duplicate existing patterns, or violate architectural boundaries.
+ - **Broad codebase scanning**: Exploring directories and reading files "to understand the project" instead of using vault results as a targeted map.
+
+ ## Exit Criteria
+
+ Development is complete when: vault was searched exhaustively first (Step 1), implementation follows discovered patterns, tests pass, build succeeds, and any new learning is captured back to the vault (Step 7).
@@ -0,0 +1,167 @@
+ ---
+ name: agent-guide
+ description: >
+ Use when the user asks "what can you do", "help me", "how do I use this",
+ "what features do you have", "what tools are available", "how does this work",
+ "show me your capabilities", "what are you", "who are you", or any question
+ about the agent's identity, capabilities, available tools, or how to use them.
+ Also triggers proactively when the user attempts something manually that the
+ agent has a dedicated tool for — guide them to the right tool instead of
+ letting them use raw prompts for tasks the agent was built to handle.
+ ---
+
+ # Agent Guide — Self-Knowledge & Tool Advocacy
+
+ Every agent must know itself completely — its identity, capabilities, tools, and workflows — and actively guide users toward the right tools for every task.
+
+ ## Core Principle
+
+ **Never let a user struggle with a raw prompt when a purpose-built tool exists.** The agent's tools are more reliable, consistent, and knowledge-aware than freeform LLM responses. Guiding users to tools is not pushy — it is the agent's primary job.
+
+ ## When to Use
+
+ ### Reactive (User Asks)
+
+ - "What can you do?" / "What are your capabilities?"
+ - "How do I search for X?" / "How do I capture knowledge?"
+ - "What tools do you have?" / "Show me your features"
+ - "Who are you?" / "What is this agent?"
+ - "Help" / "I'm stuck" / "How does this work?"
+
+ ### Proactive (User Does Something the Hard Way)
+
+ When the user asks Claude directly for something the agent has a tool for, detect this and suggest the tool. Examples:
+
+ | User Says | They Probably Want | Suggest Instead |
+ |-----------|-------------------|-----------------|
+ | "Remember this pattern..." | Manual note-taking | `op:capture_knowledge` — persists to vault with tags, searchable forever |
+ | "Search for patterns about..." | Raw LLM recall | `op:search_vault_intelligent` — searches actual vault with FTS5 + embeddings |
+ | "Let me plan this out..." | Freeform planning | `op:plan` — structured plan with vault context, brain recommendations, grading |
+ | "Check if this is working" | Manual verification | `op:admin_health` — comprehensive system health check |
+ | "What did we learn last time?" | Memory recall | `op:memory_search` — searches session and cross-project memory |
+ | "Find duplicates in..." | Manual comparison | `op:curator_detect_duplicates` — automated dedup with similarity scoring |
+ | "Is this code good?" | Raw review | `op:validate_component_code` — structured validation against known patterns |
+ | "Let me debug this..." | Ad-hoc debugging | `op:search_vault_intelligent` — check vault for known bugs and anti-patterns first |
+ | "Summarize what we did" | Manual summary | `op:session_capture` — structured session capture with knowledge extraction |
+ | "What patterns work for X?" | Training data recall | `op:strengths` — brain-learned patterns with strength scores from real usage |
+ | "Clean up the knowledge base" | Manual curation | `op:curator_consolidate` — automated dedup, grooming, contradiction resolution |
+ | "How should I approach this?" | Generic advice | `op:recommend` — brain recommendations based on similar past work |
+
+ ## Capability Discovery
+
+ When a user asks about capabilities, use this sequence:
+
+ ### Step 1: Identity
+
+ ```
+ YOUR_AGENT_core op:activate
+ params: { projectPath: "." }
+ ```
+
+ This returns the agent's persona: name, role, description, tone, principles, and domains. Present the identity first — who the agent is and what it specializes in.
+
+ ### Step 2: Health & Status
+
+ ```
+ YOUR_AGENT_core op:admin_health
+ ```
+
+ Shows what subsystems are active: vault (how many entries), brain (vocabulary size), LLM availability, cognee status. This tells the user what the agent currently has to work with.
+
+ ### Step 3: Available Tools
+
+ ```
+ YOUR_AGENT_core op:admin_tool_list
+ ```
+
+ Lists all facades and operations available. Present them grouped by category with plain-language descriptions of what each does and when to use it.
+
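A minimal sketch of the grouping step, assuming a hypothetical `{ name, category, description }` shape for each listed op (the real `admin_tool_list` response format may differ):

```typescript
interface OpInfo { name: string; category: string; description: string }

// Bucket ops by category so they can be presented as grouped capability lists.
function groupByCategory(ops: OpInfo[]): Map<string, OpInfo[]> {
  const groups = new Map<string, OpInfo[]>();
  for (const op of ops) {
    const bucket = groups.get(op.category) ?? [];
    bucket.push(op);
    groups.set(op.category, bucket);
  }
  return groups;
}
```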
+ ### Step 4: Present Capabilities
+
+ Organize the response by what the user can DO, not by technical facade names:
+
+ **Knowledge & Memory**
+ - Search the vault for patterns, anti-patterns, and architectural decisions
+ - Capture new knowledge from the current session
+ - Search across sessions and projects for relevant context
+ - Curate: deduplicate, groom, resolve contradictions
+
+ **Planning & Execution**
+ - Create structured plans with vault context and brain recommendations
+ - Split plans into tasks with complexity estimates
+ - Track execution with drift detection
+ - Complete with knowledge capture and session recording
+
+ **Intelligence & Learning**
+ - Brain learns from every session — patterns get stronger with use
+ - Recommendations based on similar past work
+ - Strength tracking: which patterns are proven vs experimental
+ - Feedback loop: brain improves based on what works
+
+ **Quality & Validation**
+ - Health checks across all subsystems
+ - Iterative validation loops with configurable targets
+ - Governance: policies, proposals, quotas
+ - Code validation against known patterns
+
+ **Identity & Control**
+ - Persona activation and deactivation
+ - Intent routing: the agent classifies what you want and routes to the right workflow
+ - Project registration and cross-project linking
+
+ ## Tool Advocacy Patterns
+
+ When you detect the user doing something manually, use this format:
+
+ > I notice you are [what user is doing]. I have a dedicated tool for this — `op:[tool_name]` — which [specific advantage over manual approach]. Want me to use it?
+
+ **Specific advantages to highlight:**
+
+ | Tool | Advantage Over Manual |
+ |------|----------------------|
+ | `search_vault_intelligent` | Searches actual indexed knowledge, not LLM training data. Finds project-specific patterns. |
+ | `capture_knowledge` | Persists permanently with tags, type, and searchability. Survives sessions. |
+ | `plan` (orchestrate) | Consults vault + brain before planning. Generates graded plans with acceptance criteria. |
+ | `memory_search` | Searches structured session history, not just conversation context. Works cross-project. |
+ | `strengths` | Returns quantified pattern strength from real usage, not guesses. |
+ | `recommend` | Similarity-based recommendations from the brain's learned model. |
+ | `curator_consolidate` | Automated pipeline: dedup + groom + contradiction resolution. |
+ | `admin_health` | Real-time status of every subsystem, not assumptions. |
+ | `start_loop` | Iterative validation with configurable pass criteria and max iterations. |
+
+ **Do NOT advocate tools when:**
+
+ - The user is explicitly asking for a conversational response
+ - The user already knows the tools and is choosing not to use them
+ - The task is genuinely better handled conversationally (explaining concepts, discussing options)
+ - The user says "just tell me" or "don't use tools"
+
+ ## Common Questions
+
+ ### "What makes you different from regular Claude?"
+
+ You have persistent knowledge (vault), learned patterns (brain), structured planning with grading, iterative validation loops, and domain-specific intelligence. Regular Claude starts fresh every conversation — this agent accumulates knowledge and gets smarter over time.
+
+ ### "How do I get the most out of you?"
+
+ 1. **Use the vault** — search before deciding, capture after learning
+ 2. **Use planning** — structured plans beat ad-hoc work for anything non-trivial
+ 3. **Trust the brain** — pattern recommendations come from real usage data
+ 4. **Capture everything** — every bug fix, every pattern, every anti-pattern. The vault grows smarter with use.
+ 5. **Use loops for quality** — iterative validation catches issues that single-pass work misses
+
+ ### "What domains do you know about?"
+
+ Call `op:activate` to discover configured domains. Each domain has its own facade with specialized ops: `get_patterns`, `search`, `get_entry`, `capture`, `remove`.
+
+ ### "How do I add new capabilities?"
+
+ Extensions in `src/extensions/` can add new ops, facades, middleware, and hooks. Domain packs add domain-specific knowledge and validation.
+
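A sketch of what such an extension might look like. The `ExtensionRegistry` interface and `installWordCountExtension` are hypothetical illustrations, not the real extension API:

```typescript
interface ExtensionRegistry {
  registerOp(name: string, handler: (params: unknown) => unknown): void;
}

// Minimal in-memory registry for the sketch.
function createRegistry(): ExtensionRegistry & { ops: Map<string, (p: unknown) => unknown> } {
  const ops = new Map<string, (p: unknown) => unknown>();
  return { ops, registerOp: (name, handler) => { ops.set(name, handler); } };
}

// An extension exposes an install function that adds its ops to the registry.
function installWordCountExtension(registry: ExtensionRegistry): void {
  registry.registerOp("op:word_count", (params) => {
    const { text } = params as { text: string };
    return { words: text.trim().split(/\s+/).filter(Boolean).length };
  });
}
```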
+ ## Anti-Patterns
+
+ - **Staying silent when the user does it manually** — if a tool exists, mention it. Once. Not repeatedly.
+ - **Being pushy** — suggest the tool once per task. If the user declines, respect that.
+ - **Listing raw op names without context** — always explain what the op does in plain language.
+ - **Claiming capabilities that do not exist** — only reference ops the agent actually has. When unsure, call `op:admin_tool_list` first.
+ - **Dumping the entire tool catalog** — answer the specific question. Show relevant tools, not all tools.
@@ -0,0 +1,66 @@
+ ---
+ name: agent-persona
+ description: >
+ Use when the user activates the agent's persona via its greeting phrase, or says
+ "activate persona", "be yourself", "stay in character", or any activation phrase
+ defined in the agent's persona configuration. Reinforces character persistence
+ through the session and survives context compaction.
+ ---
+
+ # Agent Persona — Stay in Character
+
+ This skill reinforces persona persistence. The MCP activation loads the runtime payload — this skill ensures the character sticks across the full session, including after context compaction.
+
+ ## How It Works
+
+ Every agent has a persona defined in `src/identity/persona.ts` (or `src/activation/persona.ts` in older agents). This file contains:
+
+ - **name** — the agent's display name
+ - **role** — what the agent does
+ - **tone** — `precise`, `mentor`, or `pragmatic`
+ - **greeting** — the activation response
+ - **principles** — core values that guide behavior
+
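An illustrative TypeScript shape for these fields — the actual interface in `src/identity/persona.ts` may differ, and the example values below are invented:

```typescript
type Tone = "precise" | "mentor" | "pragmatic";

interface Persona {
  name: string;       // display name
  role: string;       // what the agent does
  tone: Tone;         // constrained to the three supported tones
  greeting: string;   // activation response
  principles: string[]; // core values guiding behavior
}

// Hypothetical example persona, not shipped configuration.
const examplePersona: Persona = {
  name: "Forge",
  role: "internal development assistant",
  tone: "precise",
  greeting: "Forge online. Vault first.",
  principles: ["vault before code", "capture what you learn"],
};
```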
+ ## Activation
+
+ When the user triggers activation (greeting phrase or explicit request):
+
+ ```
+ YOUR_AGENT_core op:activate
+ params: { projectPath: "." }
+ ```
+
+ The activation response contains the full persona payload. Adopt it immediately.
+
+ ## Rules
+
+ 1. **Stay in character for EVERY response** until the user explicitly deactivates
+ 2. **Technical accuracy is the priority** — persona is the wrapper, not a replacement for correctness
+ 3. **Tone consistency** — match the configured tone (`precise` = concise and exact, `mentor` = educational and encouraging, `pragmatic` = direct and practical)
+ 4. If character drifts after context compaction, re-adopt the persona from the compacted summary
+
+ ## Context Compaction Survival
+
+ Long sessions trigger context compaction. To survive:
+
+ - The persona activation state is included in compaction summaries
+ - After compaction, check if persona was active and re-adopt the character
+ - Never break character just because the conversation was compacted
+
+ ## Deactivation
+
+ When the user says "deactivate", "stop persona", "be normal", or uses the agent's deactivation phrase:
+
+ ```
+ YOUR_AGENT_core op:activate
+ params: { deactivate: true }
+ ```
+
+ Return to neutral assistant mode.
+
+ ## Anti-Patterns
+
+ - **Dropping character mid-session** — if activated, stay activated
+ - **Over-persona, under-substance** — character adds flavor; it does not replace technical depth
+ - **Forcing persona on unwilling users** — only activate when explicitly triggered
+ - **Ignoring tone setting** — a `precise` agent should not use flowery language; a `mentor` agent should not be terse
@@ -0,0 +1,107 @@
+ ---
+ name: vault-curate
+ description: >
+ Use when the user says "clean vault", "deduplicate", "groom knowledge",
+ "consolidate vault", "vault maintenance", "find duplicates", "merge patterns",
+ "check contradictions", "vault health", or wants to maintain, clean, reorganize,
+ or improve the quality of the agent's knowledge base.
+ ---
+
+ # Vault Curate — Knowledge Maintenance
+
+ Maintain vault quality through deduplication, grooming, contradiction detection, and consolidation. A well-curated vault produces better search results and brain recommendations.
+
+ ## When to Use
+
+ Periodically (weekly or after heavy capture sessions), when search quality degrades, when vault health shows warnings, or when the user explicitly requests maintenance.
+
+ ## Orchestration Sequence
+
+ ### Step 1: Health Assessment
+
+ ```
+ YOUR_AGENT_core op:knowledge_health
+ ```
+
+ ```
+ YOUR_AGENT_core op:get_vault_analytics
+ ```
+
+ Present the health summary to the user before proceeding: total entries, quality scores, staleness, coverage gaps.
+
+ ### Step 2: Detect Duplicates
+
+ ```
+ YOUR_AGENT_core op:curator_detect_duplicates
+ ```
+
+ This finds entries with overlapping titles, descriptions, or content. Review the duplicate pairs — some may be intentional (different contexts) while others are true duplicates.
+
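The detector's actual scoring is internal to the agent; as a rough illustration, similarity scoring can be as simple as token overlap (Jaccard index) between entry titles or bodies:

```typescript
// Jaccard similarity: |A ∩ B| / |A ∪ B| over lowercase word tokens.
function jaccard(a: string, b: string): number {
  const tokens = (s: string) => new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const ta = tokens(a);
  const tb = tokens(b);
  let shared = 0;
  ta.forEach((t) => { if (tb.has(t)) shared++; });
  const union = ta.size + tb.size - shared;
  return union === 0 ? 0 : shared / union;
}

// Pairs above a threshold become duplicate candidates for review.
const isDuplicateCandidate = (a: string, b: string, threshold = 0.6): boolean =>
  jaccard(a, b) >= threshold;
```

The threshold is a tuning knob: too low floods the review list with near-misses, too high lets paraphrased duplicates through.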
+ For true duplicates:
+
+ ```
+ YOUR_AGENT_core op:merge_patterns
+ params: { patternIds: ["<id1>", "<id2>"] }
+ ```
+
+ Preserve the best content from each.
+
+ ### Step 3: Find Contradictions
+
+ ```
+ YOUR_AGENT_core op:curator_contradictions
+ ```
+
+ Contradictions erode trust in vault search results. For each contradiction: decide which entry is correct (check dates, context, evidence), then archive or update the incorrect one.
+
+ ### Step 4: Groom Entries
+
+ ```
+ YOUR_AGENT_core op:curator_groom_all
+ ```
+
+ Runs tag enrichment and metadata cleanup across all entries. This improves searchability and categorization.
+
+ For targeted grooming of specific entries:
+
+ ```
+ YOUR_AGENT_core op:curator_groom
+ params: { entryIds: ["<id>"], tags: ["<tag>"] }
+ ```
+
+ ### Step 5: GPT Enrichment (Optional)
+
+ ```
+ YOUR_AGENT_core op:curator_gpt_enrich
+ ```
+
+ Adds AI-generated metadata to entries that lack descriptions, examples, or context. Fills in gaps without changing the core content.
+
+ ### Step 6: Full Consolidation
+
+ ```
+ YOUR_AGENT_core op:curator_consolidate
+ ```
+
+ Runs the complete pipeline: dedup + archive stale entries + resolve contradictions. This is the heavy-duty cleanup.
+
+ ### Step 7: Knowledge Reorganization
+
+ ```
+ YOUR_AGENT_core op:knowledge_reorganize
+ params: { mode: "preview" }
+ ```
+
+ Preview first, then run again with `mode: "apply"` if the preview looks good.
+
+ ### Step 8: Verify Results
+
+ ```
+ YOUR_AGENT_core op:knowledge_health
+ ```
+
+ Compare with Step 1 metrics. Vault health should improve: fewer duplicates, no contradictions, better coverage.
+
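A sketch of the before/after comparison, assuming a hypothetical health-metric shape (the real `knowledge_health` output may differ):

```typescript
// Invented metric shape for illustration only.
interface VaultHealth { duplicates: number; contradictions: number; groomed: number }

// Curation improved the vault if problem counts dropped and grooming rose.
function improved(before: VaultHealth, after: VaultHealth): boolean {
  return (
    after.duplicates <= before.duplicates &&
    after.contradictions <= before.contradictions &&
    after.groomed >= before.groomed
  );
}
```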
+ ## Exit Criteria
+
+ Curation is complete when: duplicates merged, contradictions resolved, entries groomed, and health metrics improved compared to the Step 1 baseline.
@@ -8,6 +8,9 @@ const SKILLS_DIR = join(__dirname, '..', 'skills');
 
  /** Skills that use YOUR_AGENT_core placeholder and need agent-specific substitution. */
  const AGENT_SPECIFIC_SKILLS = new Set([
+ 'agent-dev',
+ 'agent-guide',
+ 'agent-persona',
  'brain-debrief',
  'brainstorming',
  'code-patrol',
@@ -23,6 +26,7 @@ const AGENT_SPECIFIC_SKILLS = new Set([
  'systematic-debugging',
  'test-driven-development',
  'vault-capture',
+ 'vault-curate',
  'vault-navigator',
  'verification-before-completion',
  'writing-plans',