@nano-step/skill-manager 5.2.2 → 5.4.0

@@ -0,0 +1,290 @@
---
name: feature-analysis
description: "Deep code analysis of any feature or service before writing docs, diagrams, or making changes. Enforces read-everything-first discipline. Traces exact execution paths, data transformations, guard clauses, bugs, and gaps between existing docs and actual code. Produces a validated Mermaid diagram and structured analysis output. Language and framework agnostic."
compatibility: "OpenCode"
metadata:
  version: "2.0.0"
tools:
  required:
    - Read (every file in the feature)
    - Bash (find all files, run mermaid validator)
  uses:
    - mermaid-validator skill (validate any diagram produced)
triggers:
  - "analyze [feature]"
  - "how does X work"
  - "trace the flow of"
  - "understand X"
  - "what does X do"
  - "deep dive into"
  - "working on X - understand it first"
  - "update docs/brain for"
---

# Feature Analysis Skill

A disciplined protocol for deeply analyzing any feature in any codebase before producing docs, diagrams, or making changes. Framework-agnostic. Language-agnostic.

---

## The Core Rule

**READ EVERYTHING. PRODUCE NOTHING. THEN SYNTHESIZE.**

Do not write a single diagram node, doc line, or description until every file in the feature has been read. Every time you produce output before reading all files, you will miss something.

---

## Phase 1: Discovery — Find Every File

Before reading anything, map the full file set.

```bash
# Find all source files for the feature
find <feature-dir> -type f | sort

# Check imports to catch shared utilities, decorators, helpers
grep -r "import\|require" <feature-dir> | grep -v node_modules | sort -u
```

**Read in dependency order (bottom-up — foundations first):**

1. **Entry point / bootstrap** — port, env vars, startup config
2. **Schema / model files** — DB schema, columns, nullable, indexes, types
3. **Utility / helper files** — every function, every transformation, every constant
4. **Decorator / middleware files** — wrapping logic, side effects, return value handling
5. **Infrastructure services** — cache, lock, queue, external connections
6. **Core business logic** — the main service/handler files
7. **External / fetch services** — HTTP calls, filters applied, error handling
8. **Entry controllers / routers / handlers** — HTTP method, route, params, return
9. **Wiring files** — module/DI config, middleware registration

**Do not skip any file. Do not skim.**

---

## Phase 2: Per-File Checklist

For each file, answer these questions before moving to the next.

### Entry point / bootstrap
- [ ] What port or address? (default? env override?)
- [ ] Any global middleware, pipes, interceptors, or lifecycle hooks?

### Schema / model files
- [ ] Table/collection name
- [ ] Every field: type, nullable, default, constraints, indexes
- [ ] Relations / references to other entities

### Utility / helper files
- [ ] Every exported function — what does it do, step by step?
- [ ] For transformations: what inputs? what outputs? what edge cases handled?
- [ ] Where is this function called? (grep for usages)
- [ ] How many times is it called within a single method? (once per batch? once per item?)

### Decorator / middleware files
- [ ] What does it wrap?
- [ ] What side effects before / after the original method?
- [ ] **Does it `return` the result of the original method?** (missing `return` = silent discard bug)
- [ ] Does it use try/finally? What runs in finally?
- [ ] What happens on the early-exit path?

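The decorator `return` check deserves special attention because the failure mode is silent: everything runs, the result just vanishes. A minimal sketch of the bug and its fix, using a hypothetical `withLock` wrapper (illustrative only, not from any real codebase):

```javascript
// BUGGY: wraps the method, runs it, but never returns its result.
function withLockBuggy(fn) {
  return async function (...args) {
    try {
      await fn.apply(this, args); // result computed here...
      // ...but not returned — every caller receives undefined
    } finally {
      // releaseLock() would run here
    }
  };
}

// FIXED: identical shape, plus the `return`.
function withLock(fn) {
  return async function (...args) {
    try {
      return await fn.apply(this, args);
    } finally {
      // releaseLock() would run here
    }
  };
}

const compute = async () => ({ updated: 2, inserted: 1 });
// withLockBuggy(compute)() resolves to undefined;
// withLock(compute)() resolves to { updated: 2, inserted: 1 }
```

The two wrappers differ by one token, which is exactly why a line-by-line read is required to catch it.
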
### Core business logic files
- [ ] Every method: signature, return type
- [ ] For each method: trace every line — no summarizing
- [ ] Accumulator variables — where initialized, where incremented, where returned
- [ ] Loop structure: sequential or parallel?
- [ ] Every external call: what service/module, what args, what returned
- [ ] Guard clauses: every early return / continue / throw
- [ ] Every branch in conditionals

### External / fetch service files
- [ ] Exact URLs or endpoints (hardcoded or env?)
- [ ] Filters applied to response data (which calls filter, which don't?)
- [ ] Error handling on external calls

### Entry controllers / routers / handlers
- [ ] HTTP method (GET vs POST — don't assume)
- [ ] Route path
- [ ] What core method is called?
- [ ] What is returned?

### Wiring / module files
- [ ] What is imported / registered?
- [ ] What is exported / exposed?

---

## Phase 3: Execution Trace

After reading all files, produce a numbered step-by-step trace of the full execution path. This is not prose — it is a precise trace.

**Format:**
```
1. [HTTP METHOD] /route → HandlerName.methodName()
2. HandlerName.methodName() → ServiceName.methodName()
3. @DecoratorName: step A (e.g. acquire lock, check cache)
4.   → if condition X: early return [what is returned / not returned]
5. ServiceName.methodName():
6.   step 1: call externalService.fetchAll() → parallel([fetchA(), fetchB()])
7.     fetchA(): GET https://... → returns all items (no filter)
8.     fetchB(): GET https://... → filter(x => x.field !== null) → returns filtered
9.   step 2: parallel([processItems(a, 'typeA'), processItems(b, 'typeB')])
10. processItems(items, type):
11.   init: totalUpdated = 0, totalInserted = 0
12.   for loop (sequential): i = 0 to items.length, step batchSize
13.     batch = items.slice(i, i + batchSize)
14.     { updated, inserted } = await processBatch(batch)
15.     totalUpdated += updated; totalInserted += inserted
16.   return { total: items.length, updated: totalUpdated, inserted: totalInserted }
17. processBatch(batch):
18.   guard: if batch.length === 0 → return { updated: 0, inserted: 0 }
19.   step 1: names = batch.map(item => transform(item.field)) ← called ONCE per batch
20.   step 2: existing = repo.find(WHERE field IN names)
21.   step 3: map = existing.reduce(...)
22.   step 4: for each item in batch:
23.     value = transform(item.field) ← called AGAIN per item
24.     ...decision tree...
25.   repo.save(itemsToSave)
26.   return { updated, inserted }
27. @DecoratorName finally: releaseLock()
28. BUG: decorator does not return result → caller receives undefined
```

**Key things to call out in the trace:**
- When a utility function is called more than once (note the count and context)
- Every accumulator variable (where init, where increment, where return)
- Every guard clause / early exit
- Sequential vs parallel (for loop vs Promise.all / asyncio.gather / goroutines)
- Any discarded return values

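Loop structure and accumulators are worth sketching explicitly, since a diagram that merges them loses the distinction. A hedged sketch of the two shapes (hypothetical `processBatch`, mirroring the trace format above):

```javascript
// Stand-in batch processor: reports how many items it touched.
const processBatch = async (batch) => ({ updated: batch.length, inserted: 0 });

// Sequential: one batch at a time; accumulator init → increment → return.
async function runSequential(items, batchSize) {
  let totalUpdated = 0; // init
  for (let i = 0; i < items.length; i += batchSize) {
    const { updated } = await processBatch(items.slice(i, i + batchSize));
    totalUpdated += updated; // increment
  }
  return totalUpdated; // return
}

// Parallel: every batch starts at once — a different node shape in the diagram.
async function runParallel(items, batchSize) {
  const pending = [];
  for (let i = 0; i < items.length; i += batchSize) {
    pending.push(processBatch(items.slice(i, i + batchSize)));
  }
  const results = await Promise.all(pending);
  return results.reduce((sum, r) => sum + r.updated, 0);
}
```

Both return the same totals here, but their ordering guarantees and failure modes differ — which is why the trace must state which one the code actually uses.
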
---

## Phase 4: Data Transformations Audit

For every utility/transformation function used:

| Function | What it does (step by step) | Called where | Called how many times |
|----------|----------------------------|--------------|----------------------|
| `transformFn(x)` | 1. step A 2. step B 3. step C | methodName | TWICE: once in step N (batch), once per item in loop |

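To see why the call-count column matters, here is a hedged sketch of the double-call pattern from the trace (hypothetical `transform` and `matchBatch`, illustrative only):

```javascript
let calls = 0;
const transform = (s) => { calls += 1; return s.trim().toLowerCase(); };

function matchBatch(batch) {
  // Site 1: inside the batch-level map (one call per item)
  const names = batch.map((item) => transform(item.field));
  // Site 2: called AGAIN per item inside the matching loop
  const matched = batch.filter((item) => names.includes(transform(item.field)));
  return matched.length;
}

const matchedCount = matchBatch([{ field: " A " }, { field: "b" }]);
// matchedCount is 2, and calls is 4 — two call sites, both belong in the table
```

If `transform` were expensive or non-idempotent, the second site would be a performance or correctness finding, not just a bookkeeping note.
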
---

## Phase 5: Gap Analysis — Docs vs Code

Compare existing docs/brain files against what the code actually does:

| Claim in docs | What code actually does | Verdict |
|---------------|------------------------|---------|
| "POST /endpoint" | `@Get()` in controller | ❌ Wrong |
| "Port 3000" | `process.env.PORT \|\| 4001` in entrypoint | ❌ Wrong |
| "function converts X" | Also does Y (undocumented) | ⚠️ Incomplete |
| "returns JSON result" | Decorator discards return value | ❌ Bug |

---

## Phase 6: Produce Outputs

Only now, after phases 1–5 are complete, produce:

### 6a. Structured Analysis Document

```markdown
## Feature Analysis: [Feature Name]
Repo: [repo] | Date: [date]

### Files Read
- `path/to/controller.ts` — entry point, GET /endpoint, calls ServiceA.run()
- `path/to/service.ts` — core logic, orchestrates fetch + batch loop
- [... every file ...]

### Execution Trace
[numbered trace from Phase 3]

### Data Transformations
[table from Phase 4]

### Guard Clauses & Edge Cases
- processBatch: empty batch guard → returns {0,0} immediately
- fetchItems: filters items where field === null
- LockManager: if lock not acquired → returns void immediately (no error thrown)

### Bugs / Issues Found
- path/to/decorator.ts line N: `await originalMethod.apply(this, args)` missing `return`
  → result is discarded, caller always receives undefined
- [any others]

### Gaps: Docs vs Code
[table from Phase 5]

### Files to Update
- [ ] `.agents/_repos/[repo].md` — update port, endpoint method, transformation description
- [ ] `.agents/_domains/[domain].md` — if architecture changed
```

### 6b. Mermaid Diagram

Write the diagram. Then **immediately run the validator before doing anything else.**

If you have the mermaid-validator skill:
```bash
node /path/to/project/scripts/validate-mermaid.mjs [file.md]
```

Otherwise validate manually — common syntax errors:
- Labels with `()` must be wrapped in `"double quotes"`: `A["method()"]`
- No `\n` in node labels — use `<br/>` or shorten
- No HTML entities (`&amp;`, `&gt;`) in labels — use literal characters
- `end` is a reserved word in Mermaid — use `END` or `done` as node IDs

If errors → fix → re-run. Do not proceed until clean.

**Diagram must include:**
- Every step from the execution trace
- Data transformation nodes (show what the function does, not just its name)
- Guard clauses as decision nodes
- Parallel vs sequential clearly distinguished
- Bugs annotated inline (e.g. "BUG: result discarded")

### 6c. Doc / Brain File Updates

Update relevant docs with:
- Corrected facts (port, endpoint method, etc.)
- The validated Mermaid diagram
- Data transformation table
- Known bugs section

---

## Anti-Patterns (What This Skill Prevents)

| Anti-pattern | What gets missed | Rule violated |
|---|---|---|
| Drew diagram before reading utility files | Transformation called twice — not shown | READ EVERYTHING FIRST |
| Trusted existing docs for endpoint method | GET vs POST wrong in docs | GAP ANALYSIS required |
| Summarized service method instead of tracing | Guard clause (empty batch) missed | TRACE NOT SUMMARIZE |
| Trusted existing docs for port/config | Wrong values | Verify entry point |
| Read decorator without checking return | Silent result discard bug | RETURN VALUE AUDIT |
| Merged H1/H2 paths into shared loop node | Sequential vs parallel distinction lost | TRACE LOOP STRUCTURE |
| Assumed filter applies to all fetches | One fetch had no filter — skipped items | READ EVERY FETCH FILE |

---

## Quick Reference Checklist

Before producing any output, verify:

- [ ] Entry point read — port/address confirmed
- [ ] All schema/model files read — every field noted
- [ ] All utility files read — every transformation step documented
- [ ] All decorator/middleware files read — return value audited
- [ ] All core service files read — every method traced line by line
- [ ] All fetch/external services read — filters noted (which have filters, which don't)
- [ ] All controller/router/handler files read — HTTP method confirmed (not assumed)
- [ ] All wiring/module files read — dependency graph understood
- [ ] Utility functions: call count per method noted
- [ ] All guard clauses documented
- [ ] Accumulator variables traced (init → increment → return)
- [ ] Loop structure confirmed (sequential vs parallel)
- [ ] Existing docs compared against code (gap analysis done)
- [ ] Mermaid diagram validated before saving
@@ -0,0 +1,15 @@
{
  "name": "feature-analysis",
  "version": "2.0.0",
  "description": "Deep code analysis of any feature or service before writing docs, diagrams, or making changes. Enforces read-everything-first discipline with execution tracing, data transformation audits, and gap analysis.",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "analysis",
    "code-review",
    "documentation",
    "mermaid",
    "tracing"
  ]
}
@@ -0,0 +1,163 @@
---
name: mermaid-validator
description: "Validate and write correct Mermaid diagrams. Run the validator script before finalizing any .md file containing a mermaid block. Enforces syntax rules that prevent parse errors."
compatibility: "OpenCode"
metadata:
  version: "2.0.0"
tools:
  required:
    - bash (node scripts/validate-mermaid.mjs)
---

# Mermaid Validator Skill

## MANDATORY WORKFLOW

**Any time you write or edit a Mermaid diagram, you MUST:**

1. Write the diagram
2. Run the validator
3. Fix any errors reported
4. Re-run until clean

**Never** mark a documentation task complete if the validator reports errors.

---

## Validator Script Setup (Per Project)

This skill expects a zero-dependency Node.js validator script at `scripts/validate-mermaid.mjs` in the project root.

If the script doesn't exist yet, create it — see the reference implementation in any project that has already set this up, or ask to scaffold it.

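If no reference implementation is at hand, the following sketch shows the general shape of such a validator. It is illustrative only — it checks three of the rules listed later in this skill, while the real `validate-mermaid.mjs` in a project may enforce more:

```javascript
// Minimal sketch of a mermaid-source linter (subset of rules, for illustration).
function validateMermaid(source) {
  const errors = [];
  source.split("\n").forEach((line, i) => {
    // Literal \n inside a [bracketed] label
    if (/\[[^\]]*\\n[^\]]*\]/.test(line)) {
      errors.push(`line ${i + 1} [no-literal-newline]: use <br/> instead of \\n`);
    }
    // HTML entities such as &amp; or &#40;
    if (/&(amp|lt|gt|#\d+);/.test(line)) {
      errors.push(`line ${i + 1} [no-html-entity]: use the literal character`);
    }
    // Reserved word `end` used as a node ID
    if (/^\s*end\s*[\[({]/.test(line)) {
      errors.push(`line ${i + 1} [reserved-id]: rename the node (e.g. End)`);
    }
  });
  return errors;
}

// A label containing a literal \n is flagged:
const demo = validateMermaid("flowchart TD\n  A[Line1\\nLine2] --> B");
// demo holds one [no-literal-newline] error
```

A real script would additionally walk markdown files, extract `mermaid` fences, and report per-file line numbers, as the usage examples below assume.
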
```bash
# Validate all markdown files (default: scans .agents/ directory)
node scripts/validate-mermaid.mjs

# Validate a specific file
node scripts/validate-mermaid.mjs path/to/file.md

# Validate a whole directory
node scripts/validate-mermaid.mjs path/to/dir/
```

**Expected clean output:**
```
Scanned 12 file(s), 3 mermaid block(s).
✅ All diagrams passed.
```

**Error output example:**
```
❌ path/to/file.md
  Line 47 [no-literal-newline]: Literal \n inside node/edge label — use <br/> or rewrite as plain text
  > B --> C[ServiceName.method\n@Decorator]
```

---

## Mermaid Syntax Rules (Mandatory Reference)

### ✅ Safe — No quoting needed
```
Letters, digits, spaces, hyphens, underscores, colons, slashes, dots, angle brackets
```

### ⚠️ Requires `"double quotes"` around the whole label
| Character | Wrong | Right |
|-----------|-------|-------|
| Parentheses `()` | `A[label (detail)]` | `A["label (detail)"]` |
| Percent `%` | `A[100%]` | `A["100%"]` |
| Ampersand `&` | `A[foo & bar]` | `A["foo & bar"]` |
| Hash `#` | `A[#tag]` | `A["#tag"]` |
| At-sign `@` | `A[@lock]` | `A["@lock"]` |

### ❌ Never use inside a diagram block
| Pattern | Wrong | Fix |
|---------|-------|-----|
| Literal `\n` in label | `A[Line1\nLine2]` | `A["Line1<br/>Line2"]` or just `A[Line1 Line2]` |
| HTML entities | `A[foo &amp; bar]` | `A["foo & bar"]` |
| HTML numeric entities | `A[&#40;parens&#41;]` | `A["(parens)"]` |
| Reserved word `end` as node ID | `end[task]` | `End[task]` |

### Edge label quoting
```
A -- simple text --> B            ✅ fine
A -- "text with (parens)" --> B   ✅ quoted
A -- text with (parens) --> B     ❌ breaks
```

### Node shape reference
```
A[Rectangle]
A(Rounded)
A([Stadium])      ← OK to have ( inside [ here — this is shape syntax
A{Diamond}
A[(Cylinder/DB)]
A((Circle))
A>Asymmetric]
```

### Mermaid entity codes (inside `"quoted"` labels only)
```
#40; = (   #41; = )   #35; = #   #37; = %
```

---

## Writing a Mermaid Diagram — Checklist

Before saving any diagram, mentally check each line:

- [ ] No `\n` inside any label (bracket, brace, or paren)
- [ ] No `&#NN;` or `&amp;` HTML entities
- [ ] Any label containing `()` is wrapped in `"double quotes"`
- [ ] Node IDs are alphanumeric + underscore only (no hyphens in ID itself)
- [ ] No node ID named `end` (lowercase)
- [ ] Edge labels containing special chars are `"quoted"`
- [ ] Diagram has at least one node and one valid statement

Then run the validator. If it passes, you're done.

---

## Common Diagram Patterns

### Service method with decorator
```mermaid
flowchart TD
    A[Controller] --> B["Service.method - @Decorator key ttl"]
```
Note: `@` is safe after the first non-@ character. Put the whole label in quotes to be safe.

### Lock/cache decision
```mermaid
flowchart TD
    A --> B{"Redis SET NX EX - key - TTL 1800s"}
    B -- Lock held --> C([Return void])
    B -- Lock acquired --> D[Continue]
```

### DB node (cylinder)
```mermaid
flowchart TD
    A --> B[(database.table)]
```

### Parallel execution
```mermaid
flowchart TD
    A["Promise.all"] --> B[Task 1]
    A --> C[Task 2]
```

### Sequential batch loop
```mermaid
flowchart TD
    A[Start loop] --> B["for i = 0 to items.length step batchSize"]
    B --> C["batch = items.slice(i, i + batchSize)"]
    C --> D["processBatch(batch)"]
    D --> E{More batches?}
    E -- Yes --> B
    E -- No --> F[Return totals]
```
@@ -0,0 +1,15 @@
{
  "name": "mermaid-validator",
  "version": "2.0.0",
  "description": "Validate and write correct Mermaid diagrams. Run the validator script before finalizing any .md file containing a mermaid block. Enforces syntax rules that prevent parse errors.",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "mermaid",
    "validation",
    "diagrams",
    "documentation",
    "syntax"
  ]
}
@@ -0,0 +1,46 @@
<!-- OPENCODE-MEMORY:START -->
<!-- Managed block - do not edit manually. Updated by: npx nano-brain init -->

## Memory System (nano-brain)

This project uses **nano-brain** for persistent context across sessions.

### Quick Reference

nano-brain supports two access methods. Try MCP first; if unavailable, use CLI.

| I want to... | MCP Tool | CLI Fallback |
|--------------|----------|--------------|
| Recall past work on a topic | `memory_query("topic")` | `npx nano-brain query "topic"` |
| Find exact error/function name | `memory_search("exact term")` | `npx nano-brain query "exact term"` |
| Explore a concept semantically | `memory_vsearch("concept")` | `npx nano-brain query "concept"` |
| Save a decision for future sessions | `memory_write("decision context")` | Create file in `~/.nano-brain/memory/` |
| Check index health | `memory_status` | `npx nano-brain status` |

### Session Workflow

**Start of session:** Check memory for relevant past context before exploring the codebase.
```
# MCP (if available):
memory_query("what have we done regarding {current task topic}")

# CLI fallback:
npx nano-brain query "what have we done regarding {current task topic}"
```

**End of session:** Save key decisions, patterns discovered, and debugging insights.
```
# MCP (if available):
memory_write("## Summary\n- Decision: ...\n- Why: ...\n- Files: ...")

# CLI fallback: create a markdown file
# File: ~/.nano-brain/memory/YYYY-MM-DD-summary.md
```

### When to Search Memory vs Codebase

- **"Have we done this before?"** → `memory_query` or `npx nano-brain query` (searches past sessions)
- **"Where is this in the code?"** → grep / ast-grep (searches current files)
- **"How does this concept work here?"** → Both (memory for past context + grep for current code)

<!-- OPENCODE-MEMORY:END -->
@@ -0,0 +1,77 @@
# nano-brain

Persistent memory for AI coding agents. Hybrid search (BM25 + semantic + LLM reranking) across past sessions, codebase, notes, and daily logs.

## Slash Commands

| Command | When |
|---------|------|
| `/nano-brain-init` | First-time workspace setup |
| `/nano-brain-status` | Health check, embedding progress |
| `/nano-brain-reindex` | After branch switch, pull, or major changes |

## When to Use Memory

**Before work:** Recall past decisions, patterns, debugging insights, cross-session context.
**After work:** Save key decisions, architecture choices, non-obvious fixes, domain knowledge.

## Access Methods: MCP vs CLI

nano-brain can be accessed via **MCP tools** (when the MCP server is configured) or the **CLI** (always available).

**Detection:** Try calling the `memory_status` MCP tool first. If it fails with "MCP server not found", fall back to the CLI.

### MCP Tools (preferred when available)

| Need | MCP Tool |
|------|----------|
| Exact keyword (error msg, function name) | `memory_search` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_search")` |
| Conceptual ("how does auth work") | `memory_vsearch` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_vsearch")` |
| Best quality, complex question | `memory_query` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_query")` |
| Retrieve specific doc | `memory_get` / `memory_multi_get` |
| Save insight or decision (append to daily log) | `memory_write` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_write")` |
| Set/update a keyed memory (overwrites previous) | `memory_set` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_set")` |
| Delete a keyed memory | `memory_delete` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_delete")` |
| List all keyed memories | `memory_keys` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_keys")` |
| Check health | `memory_status` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_status")` |
| Rescan source files | `memory_index_codebase` |
| Refresh all indexes | `memory_update` |

### CLI Fallback (always available)

When the MCP server is not available, use the CLI via the Bash tool:

| Need | CLI Command |
|------|-------------|
| Best quality search (hybrid: BM25 + vector + reranking) | `npx nano-brain query "search terms"` |
| Search with collection filter | `npx nano-brain query "terms" -c codebase` |
| Search with more/fewer results | `npx nano-brain query "terms" -n 20` |
| Show full content of results | `npx nano-brain query "terms" --full` |
| Check health & stats | `npx nano-brain status` |
| Initialize workspace | `npx nano-brain init --root=/path/to/workspace` |
| Generate embeddings | `npx nano-brain embed` |
| Harvest sessions | `npx nano-brain harvest` |
| List collections | `npx nano-brain collection list` |

**CLI limitations vs MCP:**
- CLI only has `query` (unified hybrid search) — no separate `search` (BM25-only) or `vsearch` (vector-only)
- CLI cannot `write` notes — use MCP or manually create files in `~/.nano-brain/memory/`
- CLI cannot `get` specific docs by ID — use `query` with specific terms instead

**Default:** Use `npx nano-brain query "..."` — it combines BM25 + vector + reranking for best results.

## Collection Filtering

Works with both MCP and CLI (`-c` flag):

- `codebase` — source files only
- `sessions` — past AI sessions only
- `memory` — curated notes only
- Omit — search everything (recommended)

## Memory vs Native Tools

Memory excels at **recall and semantics** — past sessions, conceptual search, cross-project knowledge.
Native tools (grep, ast-grep, glob) excel at **precise code patterns** — exact matches, AST structure.

**They are complementary.** Use both.
@@ -0,0 +1,15 @@
{
  "name": "nano-brain",
  "version": "1.0.0",
  "description": "Persistent memory for AI coding agents. Hybrid search (BM25 + semantic + LLM reranking) across past sessions, codebase, notes, and daily logs.",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "memory",
    "persistence",
    "search",
    "context",
    "sessions"
  ]
}