wicked-brain 0.3.3 → 0.3.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/server/package.json +1 -1
- package/skills/wicked-brain-agent/SKILL.md +6 -0
- package/skills/wicked-brain-agent/platform/claude/wicked-brain-onboard.md +27 -4
- package/skills/wicked-brain-batch/SKILL.md +21 -1
- package/skills/wicked-brain-compile/SKILL.md +12 -0
- package/skills/wicked-brain-configure/SKILL.md +10 -2
- package/skills/wicked-brain-enhance/SKILL.md +14 -0
- package/skills/wicked-brain-init/SKILL.md +18 -9
- package/skills/wicked-brain-lint/SKILL.md +14 -2
- package/skills/wicked-brain-lsp/SKILL.md +9 -1
- package/skills/wicked-brain-memory/SKILL.md +38 -0
- package/skills/wicked-brain-query/SKILL.md +17 -2
- package/skills/wicked-brain-read/SKILL.md +9 -1
- package/skills/wicked-brain-retag/SKILL.md +17 -0
- package/skills/wicked-brain-search/SKILL.md +7 -1
- package/skills/wicked-brain-server/SKILL.md +7 -4
- package/skills/wicked-brain-status/SKILL.md +3 -2
- package/skills/wicked-brain-update/SKILL.md +18 -3
package/package.json  CHANGED
package/server/package.json  CHANGED
package/skills/wicked-brain-agent/SKILL.md
@@ -35,6 +35,12 @@ Read agent definitions from the `agents/` subdirectory relative to this skill fi
 
 ## Dispatch Mode
 
+Dispatching a named agent (rather than running inline) gives it isolated
+context, a longer token budget, and access to file-writing tools. This makes
+it better suited for heavy background tasks — consolidation, full project
+onboarding, or large-scale compilation — where inline execution would exhaust
+context or produce incomplete results.
+
 1. Read the requested agent's `.md` file from `agents/` at depth 2
 2. Dispatch as a subagent with those instructions using the host CLI's mechanism:
    - Claude Code: use `Agent` tool
package/skills/wicked-brain-agent/platform/claude/wicked-brain-onboard.md
@@ -24,18 +24,41 @@ Create a structured summary of what you found.
 
 ### Step 2: Trace architecture
 
-- Identify entry points
+- Identify entry points — look for: `main.js`, `index.js/ts`, `server.js`,
+  `app.py`, `main.go`, `src/main.*`, `bin/` executables, `__main__.py`
 - Map module boundaries (directories, packages, namespaces)
-- Identify API surfaces
+- Identify API surfaces — look for: Express/Fastify route definitions
+  (`app.get`, `router.post`), CLI command registrations (`program.command`,
+  `click.command`, `cobra.Command`), exported function signatures in index files
+- Identify database schemas — look for: migration files, ORM model definitions,
+  `CREATE TABLE` statements, schema files (`.prisma`, `schema.rb`, `models/`)
 - Trace primary data flows (request -> handler -> storage -> response)
 - Note external dependencies and integrations
 
 ### Step 3: Extract conventions
 
 - **Naming**: file naming, function naming, variable naming patterns
-- **Testing**: test framework, test file locations, test naming patterns
+- **Testing**: test framework, test file locations, test naming patterns —
+  look for: `*.test.*`, `*.spec.*`, `test_*.py`, files in `test/` or `__tests__/`
 - **Build/Deploy**: build commands, deploy scripts, CI/CD patterns
-- **Code style**: formatting, import ordering, comment conventions
+- **Code style**: formatting, import ordering, comment conventions — look for:
+  `.eslintrc*`, `pyproject.toml` (ruff/black config),
+  `.prettierrc`, `rustfmt.toml`, import grouping in existing source files
+
+### Monorepo guidance
+
+If the project has multiple top-level packages or apps (e.g., a `packages/`,
+`apps/`, or `services/` directory with independent `package.json` / `pyproject.toml`
+files), treat each package as a separate chunk group:
+- Write chunks to `{brain_path}/chunks/extracted/project-{safe_project_name}-{package_name}/`
+- Also write a shared overview chunk at `project-{safe_project_name}/chunk-000-overview.md`
+  that summarises the monorepo layout, inter-package relationships, and shared tooling.
+
+### Re-onboarding
+
+Re-onboarding is triggered **manually** by the user saying "re-onboard this
+project" (or equivalent). It does not run automatically. When re-onboarding,
+follow the archive-then-replace pattern described in Step 4 below.
 
 ### Step 4: Ingest findings
 
package/skills/wicked-brain-batch/SKILL.md
@@ -21,6 +21,21 @@ of executing repetitive tool calls inline.
 - Bulk search across many terms
 - Any operation touching more than 5 files
 
+## When NOT to use batch
+
+If processing fewer than ~10 files, inline ingest is simpler and easier to
+debug — write a batch script only when the repetition would noticeably flood
+context or when a single script run is meaningfully faster.
+
+## Error recovery
+
+If a batch script crashes mid-way, do not re-index everything from scratch.
+Check which files were already indexed using the server's `search` or `stats`
+action, then resume from the first unprocessed file. The `stats` action returns
+the total indexed document count; `search` can confirm whether a specific file
+path is already in the index. Design batch scripts to skip files whose chunk IDs
+are already present in the index (check before writing).
+
 ## Why scripts over tool calls
 
 | Approach | Context cost | Speed | Reliability |
@@ -75,7 +90,9 @@ Or keep it for re-runs — the user can run it manually too.
 - Node.js scripts are fully cross-platform (same code on macOS/Linux/Windows)
 - Python scripts are fully cross-platform
 - Shell scripts need macOS/Linux + Windows variants — avoid if Node or Python available
-- Use `fetch()` (Node 18+) instead of `curl` in scripts — it's native and cross-platform
+- Use `fetch()` (Node 18+) instead of `curl` in scripts — it's native and cross-platform.
+  `fetch()` requires Node 18 or later. If running Node 16 or older, install `node-fetch`
+  (`npm install node-fetch`) and import it: `import fetch from 'node-fetch';`
 - Use `node:fs` and `node:path` — they handle platform differences
 
 ## Template: Node.js batch script
@@ -84,6 +101,9 @@ See wicked-brain:ingest for a complete example. The key structure:
 
 ```javascript
 #!/usr/bin/env node
+// Note: fetch() is built-in from Node 18+. For Node 16 or older, run:
+// npm install node-fetch and change the line below to:
+// import fetch from 'node-fetch';
 import { ... } from "node:fs";
 import { ... } from "node:path";
 
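The error-recovery and template guidance above can be sketched together as a resume-safe batch loop. This is a minimal illustration, not the package's actual script: the port, the `search` action's response shape, and the chunk-ID scheme are assumptions.

```javascript
#!/usr/bin/env node
// Sketch only: resume-safe batch ingest. Assumes Node 18+ (built-in fetch())
// and a local server whose API resembles the curl examples in this skill.
const API = "http://localhost:3000/api"; // assumed port

// Derive a stable chunk ID from a file path so a re-run can skip finished work.
function chunkIdFor(filePath) {
  return filePath.replace(/[\\/]/g, "-").replace(/\.md$/, "");
}

// Check the index before writing (hypothetical use of the `search` action).
async function alreadyIndexed(id) {
  const res = await fetch(API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action: "search", params: { query: id, limit: 1 } }),
  });
  const { results = [] } = await res.json();
  return results.some((r) => r.id === id);
}

// Skip-then-ingest loop: a crash mid-run loses nothing already indexed.
async function ingestAll(files, ingestOne) {
  for (const file of files) {
    if (await alreadyIndexed(chunkIdFor(file))) continue; // resume: skip done work
    await ingestOne(file);
  }
}
```

The skip check costs one extra request per file but makes the script idempotent, which is what the error-recovery section asks for.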
package/skills/wicked-brain-compile/SKILL.md
@@ -69,6 +69,18 @@ Shell fallback:
 
 Focus on chunks NOT referenced by any wiki article.
 
+**Chunk prioritization:** When there are many uncovered chunks, process them in
+this order:
+1. Most recently modified (check file mtime or `authored_at` frontmatter field)
+2. Highest backlink count (use `{"action":"backlinks","params":{"id":"{chunk-path}"}}` — more backlinks = more referenced by other content)
+
+**Existing wiki articles:** For each existing wiki article, compare the
+`source_hashes` in its frontmatter against the current content hash of each
+source chunk (first 8 chars of the chunk body's SHA-256). If all hashes match,
+the source chunks are unchanged — skip re-compilation for that article. If any
+hash has changed, re-compile the article and update it in place (overwrite the
+file, update `authored_at` and `source_hashes`).
+
 ## Step 3: Read uncovered chunks
 
 Read uncovered chunks (frontmatter + body) to understand their content.
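The staleness check described above (first 8 hex chars of the chunk body's SHA-256) can be sketched as follows. The helper names and the callback are illustrative, not part of the package:

```javascript
// Sketch: short content hash per the rule above (first 8 hex chars of SHA-256).
import { createHash } from "node:crypto";

function shortHash(body) {
  return createHash("sha256").update(body, "utf8").digest("hex").slice(0, 8);
}

// sourceHashes: { "chunks/extracted/auth/chunk-001.md": "9f86d081", ... }
// readChunkBody: hypothetical callback returning a chunk's current body text.
function needsRecompile(sourceHashes, readChunkBody) {
  return Object.entries(sourceHashes).some(
    ([path, recorded]) => shortHash(readChunkBody(path)) !== recorded
  );
}
```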
package/skills/wicked-brain-configure/SKILL.md
@@ -50,11 +50,19 @@ Check for these signals in order (first match wins):
 | `COPILOT_CLI` env var or `.github/` exists | Copilot CLI | `.github/copilot-instructions.md` |
 | `.cursor/` exists | Cursor | `.cursor/rules/wicked-brain.md` |
 | `.antigravity/` exists | Antigravity | `.antigravity/rules/wicked-brain.md` |
-| None | Fallback |
+| None matched | Fallback | ask the user |
+
+If no signal matches, tell the user: "I couldn't detect your CLI automatically.
+Which agent config file should I write to?" Accept a user-specified path and
+write to that file directly.
 
 ### Step 3: Write config section
 
-Read the target config file.
+Read the target config file. If a `## wicked-brain` section already exists,
+update it in place — replace from the `## wicked-brain` heading to the next
+`##`-level heading (or end of file) with the new content. Do NOT append a
+duplicate section. If no `## wicked-brain` section exists, append it at the
+end of the file.
 
 Write a section like this (adapt content to actual brain state):
 
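The update-in-place rule above amounts to a bounded text replacement. A minimal sketch, with an illustrative function name and regex (not part of the package):

```javascript
// Replace the "## wicked-brain" section up to the next "##" heading (or EOF);
// if no such section exists, append one at the end of the file.
function upsertSection(fileText, sectionBody) {
  const re = /^## wicked-brain\n[\s\S]*?(?=^## |(?![\s\S]))/m;
  const section = `## wicked-brain\n${sectionBody}\n`;
  return re.test(fileText)
    ? fileText.replace(re, section)
    : fileText.trimEnd() + "\n\n" + section;
}
```

The lookahead stops the match at the next `##`-level heading, so sibling sections are never touched and no duplicate section is appended.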
package/skills/wicked-brain-enhance/SKILL.md
@@ -76,11 +76,25 @@ inferences on content from `chunks/inferred/` — that would be inference-of-inf
 which causes confidence laundering (inferred content cites inferred content, making
 unreliable chains appear well-sourced).
 
+**Why this matters:** Each inference step introduces uncertainty. When you infer from
+already-inferred content, errors and assumptions compound silently across the chain.
+The result is chunks that sound authoritative but are actually several degrees removed
+from any real source. A `confidence: 0.6` chunk derived from another `confidence: 0.6`
+chunk is not `confidence: 0.6` — it is substantially less reliable, but the metadata
+won't show it. Only raw source material (extracted chunks, actual files) provides a
+trustworthy evidence base.
+
 - If you find relevant content in `chunks/inferred/`, you may note it as background
   context but do NOT cite it as a `source_chunk` or use it as evidence for new inferences.
 - Every entry in `source_chunks` in your output MUST start with `chunks/extracted/`.
 - If a gap cannot be filled using only extracted chunks as evidence, skip it.
 
+**Bootstrapping caveat:** If the brain contains no `chunks/extracted/` files at all
+(e.g., very early stage with only inferred chunks), do NOT proceed silently. Stop and
+inform the user: "The brain has no extracted source chunks — enhancement requires raw
+source material. Run wicked-brain:ingest to add source files first." This prevents
+a silent chain of inferences with no grounding.
+
 ## Step 3: Write inferred chunks
 
 For each gap, write a new chunk to `{brain_path}/chunks/inferred/{topic}/chunk-NNN.md`:
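As a back-of-envelope illustration of that compounding (this is not a formula the skill defines, just the standard independence argument): if each step is treated as independently correct with probability equal to its confidence, chained confidence multiplies.

```javascript
// Illustration only: treat each step's confidence as an independent probability.
const chainedConfidence = (...confidences) =>
  confidences.reduce((acc, c) => acc * c, 1);
// Two 0.6-confidence steps compound to roughly 0.36, not 0.6,
// and nothing in the per-chunk metadata reveals that drop.
```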
package/skills/wicked-brain-init/SKILL.md
@@ -38,7 +38,19 @@ Ask these questions (provide defaults):
    - Default (Windows): `%USERPROFILE%\.wicked-brain`
 2. "What should this brain be called?" — Default: directory name
 
-### Step 2:
+### Step 2: Dispatch onboard agent (fire and continue)
+
+Immediately dispatch the `wicked-brain-onboard` agent for the current project — don't wait for Steps 3–6 to finish first.
+
+Pass it:
+- `brain_path`: the path confirmed in Step 1
+- `project_path`: the current working directory
+
+**Sequencing rationale:** Onboard starts with a read-only scanning phase (Glob, Grep, Read across the project). That scanning takes meaningful time. Steps 3–6 below are fast — just creating a handful of files and directories. They will complete well before onboard finishes scanning and reaches its write phase (where it needs `brain_path` dirs to exist). So it is safe to fire onboard now and proceed immediately with Steps 3–6; the brain dirs will be in place long before onboard needs them.
+
+Continue with Steps 3–6 immediately after dispatching.
+
+### Step 3: Create directory structure
 
 Use your native Write/mkdir tools to create these directories and files.
 
@@ -61,7 +73,7 @@ mkdir -p {brain_path}/raw {brain_path}/chunks/extracted {brain_path}/chunks/infe
 New-Item -ItemType Directory -Force -Path "{brain_path}\raw","{brain_path}\chunks\extracted","{brain_path}\chunks\inferred","{brain_path}\wiki\concepts","{brain_path}\wiki\topics","{brain_path}\_meta"
 ```
 
-### Step
+### Step 4: Write brain.json
 
 Write to `{brain_path}/brain.json`:
 ```json
@@ -76,7 +88,7 @@ Write to `{brain_path}/brain.json`:
 
 Where `{id}` is the directory name (lowercase, hyphens for spaces) and `{name}` is what the user provided.
 
-### Step
+### Step 5: Write config
 
 Write to `{brain_path}/_meta/config.json`:
 ```json
@@ -87,7 +99,7 @@ Write to `{brain_path}/_meta/config.json`:
 }
 ```
 
-### Step
+### Step 6: Initialize the event log
 
 Use your Write tool to create an empty file at `{brain_path}/_meta/log.jsonl`.
 
@@ -101,10 +113,7 @@ touch {brain_path}/_meta/log.jsonl
 New-Item -ItemType File -Force -Path "{brain_path}\_meta\log.jsonl"
 ```
 
-### Step
+### Step 7: Confirm
 
 Tell the user:
-"Brain initialized at `{brain_path}`.
-- `wicked-brain:ingest` to add source files
-- `wicked-brain:search` to search content
-- `wicked-brain:status` to check brain health"
+"Brain initialized at `{brain_path}`. Onboarding agent is running in the background to index the project."
package/skills/wicked-brain-lint/SKILL.md
@@ -46,7 +46,8 @@ Use your Grep tool (preferred):
 
 Shell fallback:
 - macOS/Linux: `grep -roh '\[\[[^]]*\]\]' {brain_path}/wiki/ {brain_path}/chunks/ 2>/dev/null | sort -u`
-- Windows: `findstr /s /r "\[\[" "{brain_path}\wiki\*.md" "{brain_path}\chunks\*.md" 2>nul`
+- Windows (findstr): `findstr /s /r "\[\[" "{brain_path}\wiki\*.md" "{brain_path}\chunks\*.md" 2>nul`
+- Windows (PowerShell preferred): `Get-ChildItem -Recurse -Path "{brain_path}\wiki","{brain_path}\chunks" -Filter "*.md" | Select-String -Pattern '\[\[' | Select-Object Path,LineNumber,Line`
 
 For each link, use the Read tool to check if the target file exists.
 
@@ -63,7 +64,8 @@ Shell fallback:
 - Windows:
 ```powershell
 Get-ChildItem -Recurse -Filter "chunk-*.md" "{brain_path}\chunks"
-findstr /s /m "chunk-" "{brain_path}\wiki\*.md" 2>nul
+findstr /s /r /m "chunk-" "{brain_path}\wiki\*.md" 2>nul
+# PowerShell preferred: Get-ChildItem -Recurse -Path "{brain_path}\wiki" -Filter "*.md" | Select-String -Pattern "chunk-" -List | Select-Object -ExpandProperty Path
 ```
 
 ### Stale entries
@@ -103,5 +105,15 @@ For each issue found:
 - **message**: what's wrong
 - **fix**: suggested fix (or "auto-fixed" if you fixed it)
 
+Auto-fix items include (apply silently, then report as "auto-fixed"):
+- Missing frontmatter fields: fill with safe defaults (e.g., `confidence: low`, `indexed_at: now`)
+- Orphaned index entries: remove entries from the SQLite index whose source file no longer exists
+
+Manual review required (flag as error or warning — do NOT auto-fix):
+- Factual contradictions between articles
+- Duplicate content covering the same concept in different files
+- Broken wikilinks where the correct target is ambiguous
+- Stale wiki articles where the underlying chunk content has changed substantially
+
 Auto-fix broken links and missing fields where possible. Report everything else.
 ```
package/skills/wicked-brain-lsp/SKILL.md
@@ -26,10 +26,18 @@ For the brain path default:
 
 - `curl` works on macOS, Linux, and Windows 10+
 - File paths must be absolute
-- On Windows, use forward slashes in
+- On Windows, use forward slashes in file URIs passed to the server. Most LSP
+  servers accept `file:///C:/Users/me/project/file.ts` (forward slashes, three
+  leading slashes). Do not use backslashes in URIs even on Windows.
 - Language server install commands assume the package manager is in PATH
 - For Windows PowerShell without npm/pip in PATH, guide the user to install manually
 
+**Debugging LSP issues:** If an LSP server appears to hang or returns no
+results on the first request, it may require a workspace initialization
+sequence before accepting queries. Try calling `lsp-health` first — the server
+layer sends an `initialize` handshake on first contact. If the hang persists,
+check whether the language server process is running and review its stderr logs.
+
 ## Config
 
 Read `_meta/config.json` for brain path and server port.
package/skills/wicked-brain-memory/SKILL.md
@@ -99,6 +99,44 @@ indexed_at: "{ISO 8601 timestamp}"
 {memory content}
 ```
 
+#### Tier definitions
+
+- **working**: Active, session-specific context. Expires quickly (hours to days). Use for in-progress decisions, temporary notes, and things only relevant to the current task.
+- **episodic**: Specific events or decisions from past sessions. Medium longevity. Use for "we decided X on date Y" or "this happened in project Z".
+- **semantic**: Generalized patterns and facts extracted from experience. Permanent by default. Use for stable conventions, recurring patterns, and distilled knowledge that transcends any single session.
+
+New memories always start at `tier: working`. Consolidation (wicked-brain:consolidate) promotes them to `episodic` or `semantic` based on access frequency and age.
+
+#### Complete example
+
+```yaml
+---
+type: decision
+tier: working
+confidence: 0.9
+importance: 7
+ttl_days: null
+session_origin: "2026-04-07T14:23:00Z"
+contains:
+  - jwt
+  - json-web-token
+  - authentication
+  - auth
+  - tokens
+  - session
+  - security
+  - expiry
+  - 15-minutes
+  - access-control
+entities:
+  people: []
+  systems: ["auth-service", "api-gateway"]
+indexed_at: "2026-04-07T14:23:01Z"
+---
+
+Decided to use JWT with a 15-minute expiry for the auth-service API. Refresh tokens stored in HttpOnly cookies with 7-day TTL. Rationale: short access token lifetime limits blast radius if a token is leaked.
+```
+
 The server's file watcher will auto-index this file.
 
 ### Step 6: Log the store event
package/skills/wicked-brain-query/SKILL.md
@@ -44,7 +44,10 @@ Example:
 ### Check learned synonyms first
 
 Before generating synonyms, check if `{brain_path}/_meta/synonyms.json` exists.
-If it does, read it.
+If it does, read it. If it does not exist, skip synonym expansion and proceed
+with LLM-generated synonyms only — `synonyms.json` is auto-generated by
+`wicked-brain:retag` and will be absent on fresh brains.
+Format:
 
 ```json
 {
@@ -89,6 +92,9 @@ curl -s -X POST http://localhost:{port}/api \
 
 Pass a session_id with every search call. This enables access tracking for
 consolidation. Use a consistent session_id for the entire conversation.
+`session_id` is any string that identifies the current conversation or session
+(e.g., a timestamp like `"1712345678"` or a UUID like `"a1b2-c3d4"`). It is
+used for access-log tracking and diversity ranking across repeated searches.
 
 If the question implies recency ("recently", "this week", "latest"), add a `since` parameter to the search with an ISO 8601 timestamp. For example, for "this week" use the date 7 days ago:
 ```bash
@@ -97,11 +103,20 @@ curl -s -X POST http://localhost:{port}/api \
 -d '{"action":"search","params":{"query":"{term}","limit":10,"session_id":"{session_id}","since":"{iso8601_date}"}}'
 ```
 
-Also search with grep for exact phrases
+Also search with grep for exact phrases (use your Grep tool when available —
+it is cross-platform and preferred over shell commands):
+
+macOS/Linux shell fallback:
 ```bash
 grep -rl "{key_terms}" {brain_path}/chunks/ {brain_path}/wiki/ 2>/dev/null | head -10
 ```
 
+Windows PowerShell fallback:
+```powershell
+Get-ChildItem -Recurse -Path "{brain_path}\chunks","{brain_path}\wiki" -Filter "*.md" |
+Select-String -Pattern "{key_terms}" -List | Select-Object -First 10 -ExpandProperty Path
+```
+
 ### Log synonym effectiveness
 
 For each synonym-expanded search term, log whether it produced results:
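Building the `since` value ("the date 7 days ago", in ISO 8601) is plain date arithmetic; a sketch with an illustrative helper name:

```javascript
// Produce an ISO 8601 `since` timestamp N days in the past (UTC).
function sinceDaysAgo(days, now = new Date()) {
  return new Date(now.getTime() - days * 24 * 60 * 60 * 1000).toISOString();
}
// e.g. use sinceDaysAgo(7) for queries that imply "this week"
```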
package/skills/wicked-brain-read/SKILL.md
@@ -44,11 +44,19 @@ Body content here...
 
 Split the file on the second `---` line. Everything before is frontmatter, everything after is body.
 
+If the frontmatter delimiters are missing or the YAML is unparseable, do not
+fail — return the raw file content with a warning: "Note: frontmatter missing
+or unparseable. Showing raw content." Continue with empty frontmatter and the
+full file text as body.
+
 ### Step 3: Count stats
 
 - **word_count**: split body on whitespace, count words
 - **link_count**: count occurrences of `[[` in the body (each `[[...]]` is one link)
-- **related**: extract all
+- **related**: extract all wikilink targets from the body. Wikilink formats:
+  - `[[path/to/chunk]]` — path is relative to the brain root
+  - `[[label|path/to/chunk]]` — display label with explicit path (label is shown to the reader; path is the target)
+  - `[[brain-id::path/to/chunk]]` — cross-brain link
 
 ### Step 4: Return at requested depth
 
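The split-with-fallback and wikilink extraction described above can be sketched as follows (function names illustrative; the three link formats follow the list in this skill):

```javascript
// Sketch: split frontmatter on the second "---" line; fall back to raw content
// with a warning when the delimiters are missing or malformed.
function splitFrontmatter(text) {
  const m = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) {
    return { frontmatter: "", body: text, warning: "frontmatter missing or unparseable" };
  }
  return { frontmatter: m[1], body: m[2] };
}

// Extract wikilink targets: [[path]], [[label|path]], [[brain-id::path]].
function wikilinkTargets(body) {
  return [...body.matchAll(/\[\[([^\]]+)\]\]/g)].map(
    ([, inner]) => (inner.includes("|") ? inner.split("|").pop() : inner)
  );
}
```

For `[[brain-id::path]]` the whole `brain-id::path` string is kept as the target, so a caller can split on `::` to resolve cross-brain links.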
package/skills/wicked-brain-retag/SKILL.md
@@ -55,6 +55,12 @@ For each existing tag, look it up in the synonym map. Learned expansions take
 priority over LLM-generated ones. Supplement with LLM expansions for tags
 not in the map.
 
+**Fallback behavior:** If `synonyms.json` is missing or cannot be parsed (malformed
+JSON, permission error, etc.), do not fail. Simply skip the synonym-map lookup for
+that document and proceed with LLM-only expansion. Never abort the retag operation
+because of a missing or broken synonyms file — it is an optional accelerant, not a
+hard dependency.
+
 - For each existing tag, add 1-3 synonyms or related terms
 - Extract additional keywords from the content summary
 - Apply the same expansion rules as wicked-brain:memory store:
@@ -77,3 +83,14 @@ Report:
 - Files under threshold
 - Files updated (or would-be-updated in dry run)
 - Sample of expanded tags for verification
+
+## Performance Guidance
+
+For brains with 1000+ chunks, process files in batches of 100 rather than all at once.
+After each batch, report progress (e.g., "Updated 100/1247 files...") so the user can
+see the operation is running.
+
+The operation is safe to interrupt and resume: retag only updates existing chunks in
+place and never deletes data. On resume, Step 2's tag-count filter will naturally skip
+files that were already expanded in a previous run (they will meet the `min_tags`
+threshold). No state file or checkpoint is needed.
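The batch-and-report pattern above is a plain slice loop; a minimal sketch (helper names and the progress-message format are illustrative):

```javascript
// Yield fixed-size slices of the work list.
function* batches(items, size = 100) {
  for (let i = 0; i < items.length; i += size) yield items.slice(i, i + size);
}

// Process each batch, then report cumulative progress so the user sees activity.
async function retagInBatches(files, retagOne, report = console.log) {
  let done = 0;
  for (const batch of batches(files)) {
    for (const file of batch) await retagOne(file);
    done += batch.length;
    report(`Updated ${done}/${files.length} files...`);
  }
}
```

Because each file is updated in place, interrupting between batches loses nothing; rerunning simply skips already-expanded files via the tag-count filter.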
package/skills/wicked-brain-search/SKILL.md
@@ -56,7 +56,13 @@ If connection refused, trigger wicked-brain:server auto-start pattern.
 
 ### Step 3: Dispatch search subagents in parallel
 
-Launch one subagent per accessible brain
+Launch one subagent per accessible brain using parallel dispatch:
+
+- **Claude Code:** use the Agent tool, launching all subagents in a single message so they run concurrently.
+- **Other CLIs with subagent support:** use the CLI's native parallel dispatch mechanism (e.g., Gemini CLI's parallel tool calls).
+- **No subagent support:** run each brain search sequentially and collect results before merging.
+
+Each subagent call passes the brain-specific instructions below.
 
 Each search subagent receives these instructions:
 
package/skills/wicked-brain-server/SKILL.md
@@ -36,10 +36,10 @@ For the brain path default:
 -H "Content-Type: application/json" \
 -d '{"action":"health","params":{}}'
 ```
+A successful response returns `{"status":"ok"}`. If the request succeeds,
+the server is running — report the status and stop.
 
-3. If
-
-4. If connection refused:
+3. If connection refused:
 a. Read the file at `{brain_path}/_meta/server.pid` to get the PID.
 
 b. Check if the process is running:
@@ -51,7 +51,10 @@ For the brain path default:
 ```bash
 npx wicked-brain-server --brain {brain_path} --port {port} &
 ```
-On Windows (PowerShell):
+On Windows (PowerShell):
+```powershell
+Start-Process -FilePath "npx" -ArgumentList "wicked-brain-server", "--brain", "{brain_path}", "--port", "{port}" -NoNewWindow
+```
 
 d. Wait 2 seconds, then retry the health check.
 e. If still failing, tell the user:
package/skills/wicked-brain-status/SKILL.md
@@ -46,7 +46,7 @@ curl -s -X POST http://localhost:{port}/api \
 -d '{"action":"stats","params":{}}'
 ```
 
-If connection refused,
+If connection refused, invoke the `wicked-brain:server` skill to start the server, then retry.
 
 ### Step 3: Return at requested depth
 
@@ -67,7 +67,8 @@ Check parent/link accessibility by using the Read tool on
 Depth 0 plus:
 - Use the Read tool on `_meta/log.jsonl` (last 50 lines) to identify topic distribution from recent tag events.
 Shell fallback: `tail -50 {brain_path}/_meta/log.jsonl` (macOS/Linux) or `Get-Content "{brain_path}\_meta\log.jsonl" -Tail 50` (Windows PowerShell)
-- Show topic distribution for the last 7 days by searching with a `since` filter
+- Show topic distribution for the last 7 days by searching with a `since` filter.
+The `since` value must be ISO 8601 format (e.g., `2025-01-15T00:00:00Z`):
 ```bash
 curl -s -X POST http://localhost:{port}/api \
 -H "Content-Type: application/json" \
package/skills/wicked-brain-update/SKILL.md
@@ -103,6 +103,11 @@ For each pid file found:
 
 ### Step 6: Verify migration
 
+`npx wicked-brain-server` automatically applies all pending schema migrations on
+startup — users do not need to run a separate migration command. Each new server
+version may add tables or columns to the SQLite database; migrations are numbered,
+run in order, and are idempotent (safe to re-run).
+
 After server restart, verify the server started successfully and migrations ran:
 
 ```bash
@@ -111,9 +116,19 @@ curl -s -X POST http://localhost:{port}/api \
 -d '{"action":"health"}'
 ```
 
-If health check fails,
-
-
+If the health check fails, the migration may have errored. To diagnose:
+1. Check whether the server process is actually running:
+   - Read `{brain_path}/_meta/server.pid` to get the PID.
+   - macOS/Linux: `ps -p {pid}` — if the process is absent, the server crashed on startup.
+   - Windows: `tasklist /FI "PID eq {pid}"`
+2. The server logs migration errors to **stderr**. If you launched it in the
+   foreground, the error will be visible in the terminal. If launched in the
+   background, redirect stderr to a file:
+   `npx wicked-brain-server --brain "{brain_path}" --port {port} 2>{brain_path}/_meta/server-error.log`
+   then read `{brain_path}/_meta/server-error.log`.
+3. Common causes: the SQLite file is locked by another process, or the database
+   file is corrupted. Stop all server instances and retry, or delete `.brain.db`
+   to force a clean rebuild (data is re-indexed from source files on next ingest).
 
 ### Step 7: Report
 