wicked-brain 0.1.2 → 0.3.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/install.mjs +57 -8
- package/package.json +1 -1
- package/server/bin/wicked-brain-server.mjs +54 -7
- package/server/lib/file-watcher.mjs +152 -6
- package/server/lib/lsp-client.mjs +278 -0
- package/server/lib/lsp-helpers.mjs +133 -0
- package/server/lib/lsp-manager.mjs +164 -0
- package/server/lib/lsp-protocol.mjs +123 -0
- package/server/lib/lsp-servers.mjs +290 -0
- package/server/lib/sqlite-search.mjs +216 -10
- package/server/lib/wikilinks.mjs +20 -4
- package/server/package.json +1 -1
- package/skills/wicked-brain-agent/SKILL.md +52 -0
- package/skills/wicked-brain-agent/agents/consolidate.md +138 -0
- package/skills/wicked-brain-agent/agents/context.md +88 -0
- package/skills/wicked-brain-agent/agents/onboard.md +88 -0
- package/skills/wicked-brain-agent/agents/session-teardown.md +84 -0
- package/skills/wicked-brain-agent/hooks/claude-hooks.json +12 -0
- package/skills/wicked-brain-agent/hooks/copilot-hooks.json +10 -0
- package/skills/wicked-brain-agent/hooks/gemini-hooks.json +12 -0
- package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-consolidate.md +103 -0
- package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-context.md +67 -0
- package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-onboard.md +74 -0
- package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-session-teardown.md +72 -0
- package/skills/wicked-brain-agent/platform/claude/wicked-brain-consolidate.md +106 -0
- package/skills/wicked-brain-agent/platform/claude/wicked-brain-context.md +70 -0
- package/skills/wicked-brain-agent/platform/claude/wicked-brain-onboard.md +77 -0
- package/skills/wicked-brain-agent/platform/claude/wicked-brain-session-teardown.md +75 -0
- package/skills/wicked-brain-agent/platform/codex/wicked-brain-consolidate.toml +104 -0
- package/skills/wicked-brain-agent/platform/codex/wicked-brain-context.toml +68 -0
- package/skills/wicked-brain-agent/platform/codex/wicked-brain-onboard.toml +75 -0
- package/skills/wicked-brain-agent/platform/codex/wicked-brain-session-teardown.toml +73 -0
- package/skills/wicked-brain-agent/platform/copilot/wicked-brain-consolidate.agent.md +105 -0
- package/skills/wicked-brain-agent/platform/copilot/wicked-brain-context.agent.md +69 -0
- package/skills/wicked-brain-agent/platform/copilot/wicked-brain-onboard.agent.md +76 -0
- package/skills/wicked-brain-agent/platform/copilot/wicked-brain-session-teardown.agent.md +74 -0
- package/skills/wicked-brain-agent/platform/cursor/wicked-brain-consolidate.md +104 -0
- package/skills/wicked-brain-agent/platform/cursor/wicked-brain-context.md +68 -0
- package/skills/wicked-brain-agent/platform/cursor/wicked-brain-onboard.md +75 -0
- package/skills/wicked-brain-agent/platform/cursor/wicked-brain-session-teardown.md +73 -0
- package/skills/wicked-brain-agent/platform/gemini/wicked-brain-consolidate.md +107 -0
- package/skills/wicked-brain-agent/platform/gemini/wicked-brain-context.md +71 -0
- package/skills/wicked-brain-agent/platform/gemini/wicked-brain-onboard.md +78 -0
- package/skills/wicked-brain-agent/platform/gemini/wicked-brain-session-teardown.md +76 -0
- package/skills/wicked-brain-agent/platform/kiro/wicked-brain-consolidate.json +17 -0
- package/skills/wicked-brain-agent/platform/kiro/wicked-brain-context.json +16 -0
- package/skills/wicked-brain-agent/platform/kiro/wicked-brain-onboard.json +17 -0
- package/skills/wicked-brain-agent/platform/kiro/wicked-brain-session-teardown.json +17 -0
- package/skills/wicked-brain-compile/SKILL.md +8 -0
- package/skills/wicked-brain-configure/SKILL.md +99 -0
- package/skills/wicked-brain-enhance/SKILL.md +19 -0
- package/skills/wicked-brain-ingest/SKILL.md +68 -5
- package/skills/wicked-brain-lint/SKILL.md +14 -0
- package/skills/wicked-brain-lsp/SKILL.md +172 -0
- package/skills/wicked-brain-memory/SKILL.md +144 -0
- package/skills/wicked-brain-query/SKILL.md +78 -1
- package/skills/wicked-brain-retag/SKILL.md +79 -0
- package/skills/wicked-brain-search/SKILL.md +3 -11
- package/skills/wicked-brain-status/SKILL.md +7 -0
- package/skills/wicked-brain-update/SKILL.md +20 -1
@@ -0,0 +1,99 @@
---
name: wicked-brain-configure
description: Read brain state and write contextual instructions into the active CLI's agent config file. Run after onboarding, major ingests, or consolidation.
---

# wicked-brain:configure

Writes a contextual `## wicked-brain` section into the active CLI/IDE's agent config file.

## Config

Read `_meta/config.json` for brain path and server port.
If it doesn't exist, trigger wicked-brain:init.

## Process

### Step 1: Gather brain state

1. Call server stats:
   ```bash
   curl -s -X POST http://localhost:{port}/api \
     -H "Content-Type: application/json" \
     -d '{"action":"stats"}'
   ```

2. Search for top topics — run a broad search to identify dominant tags:
   ```bash
   curl -s -X POST http://localhost:{port}/api \
     -H "Content-Type: application/json" \
     -d '{"action":"search","params":{"query":"*","limit":50}}'
   ```
   Read frontmatter of top results, count `contains:` tag frequency. Top 10 tags = brain expertise.

3. Read `{brain_path}/brain.json` for brain identity and linked brains.

4. Read `{brain_path}/_meta/log.jsonl` (last 50 lines) for recent `search_miss` entries — these are knowledge gaps.

5. List available agents by reading `skills/wicked-brain-agent/agents/` at depth 0.

### Step 2: Detect CLI/IDE

Check for these signals in order (first match wins):

| Signal | Platform | Config File |
|--------|----------|------------|
| `CLAUDE_CODE` env var or `.claude/` exists | Claude Code | `CLAUDE.md` |
| `CODEX_CLI` env var or `.codex/` exists | Codex | `.codex/instructions.md` |
| `.kiro/` exists | Kiro | `KIRO.md` |
| `GEMINI_CLI` env var or `.gemini/` exists | Gemini CLI | `GEMINI.md` |
| `COPILOT_CLI` env var or `.github/` exists | Copilot CLI | `.github/copilot-instructions.md` |
| `.cursor/` exists | Cursor | `.cursor/rules/wicked-brain.md` |
| `.antigravity/` exists | Antigravity | `.antigravity/rules/wicked-brain.md` |
| None | Fallback | `CLAUDE.md` |
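The first-match-wins walk over the table above can be sketched as follows. A minimal sketch: `detectPlatform` is a hypothetical helper, and `env` and `exists` are injected so the check order is testable (real use would pass `process.env` and `fs.existsSync`):

```javascript
// Walks the detection table in order; the first matching signal wins.
// `env` is an object of environment variables, `exists` a path predicate.
function detectPlatform(env, exists) {
  const checks = [
    [() => env.CLAUDE_CODE || exists(".claude"), "Claude Code", "CLAUDE.md"],
    [() => env.CODEX_CLI || exists(".codex"), "Codex", ".codex/instructions.md"],
    [() => exists(".kiro"), "Kiro", "KIRO.md"],
    [() => env.GEMINI_CLI || exists(".gemini"), "Gemini CLI", "GEMINI.md"],
    [() => env.COPILOT_CLI || exists(".github"), "Copilot CLI", ".github/copilot-instructions.md"],
    [() => exists(".cursor"), "Cursor", ".cursor/rules/wicked-brain.md"],
    [() => exists(".antigravity"), "Antigravity", ".antigravity/rules/wicked-brain.md"],
  ];
  for (const [signal, platform, file] of checks) {
    if (signal()) return { platform, file };
  }
  return { platform: "Fallback", file: "CLAUDE.md" };
}
```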
### Step 3: Write config section

Read the target config file. Find the `## wicked-brain` section (if it exists, replace it). If it doesn't exist, append it.

Write a section like this (adapt content to actual brain state):

```markdown
## wicked-brain

Digital brain: {brain_id} | {total} indexed items | {chunks} chunks, {wiki} wiki articles, {memory} memories

**Domain expertise:** {top 10 tags from step 1}

**Knowledge gaps:** {recent search_miss topics, if any}

**Linked brains:** {list from brain.json, or "none"}

### How to use

- **Before responding**: call `wicked-brain:agent` (context) to surface relevant knowledge
- **Capture learnings**: call `wicked-brain:agent` (session-teardown) at session end
- **Store a decision/pattern/gotcha**: call `wicked-brain:memory` (store mode)
- **Ask the brain**: call `wicked-brain:query` for cited answers
- **Available agents**: consolidate, context, session-teardown, onboard (via `wicked-brain:agent`)

### Rules

- Do NOT read brain files directly — use skills and agents
- Always pass session_id with search calls for access tracking
- Capture non-obvious decisions and gotchas as memories
```
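The replace-or-append step can be sketched as plain string manipulation. A minimal sketch, assuming the section runs from its `## wicked-brain` heading to the next `## ` heading (or end of file); `upsertSection` is a hypothetical helper name:

```javascript
// Replaces an existing `## wicked-brain` section in configText, or appends
// one if absent. sectionText is the fully rendered section to write.
function upsertSection(configText, sectionText) {
  const lines = configText.split("\n");
  const start = lines.findIndex((l) => l.trim() === "## wicked-brain");
  if (start === -1) {
    // Not present: append, separated by a blank line.
    return configText.replace(/\n*$/, "\n\n") + sectionText + "\n";
  }
  // Section ends at the next `## ` heading (`###` does not match) or EOF.
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    if (/^## /.test(lines[i])) { end = i; break; }
  }
  lines.splice(start, end - start, ...sectionText.split("\n"));
  return lines.join("\n");
}
```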
### Step 4: Confirm

Report what was written and where:
- Config file: {path}
- Brain stats: {total} items, {expertise summary}
- Gaps noted: {N} search misses

## Cross-Platform Notes

- Uses Bash to check for env vars and directories
- Uses Read/Edit tools for config file management
- All paths use forward slashes
- On Windows, check `%USERPROFILE%` equivalents for home directory paths
@@ -49,6 +49,13 @@ curl -s -X POST http://localhost:{port}/api \

Read `_meta/log.jsonl`. Look for:

1. `op: "search_miss"` entries — these are topics users searched for but couldn't find. Prioritize these as highest-value gaps.
2. Low-frequency tags in chunks (existing approach) — secondary signal.

Group search misses by topic and address the most frequent ones first.
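The grouping can be sketched as a line-by-line parse of the log. A minimal sketch; the `query` field name on `search_miss` entries is an assumption, and malformed lines are skipped:

```javascript
// Counts search_miss entries in the raw text of _meta/log.jsonl and
// returns [topic, count] pairs, most frequent first.
function topSearchMisses(logText, limit = 10) {
  const counts = new Map();
  for (const line of logText.split("\n")) {
    if (!line.trim()) continue;
    let entry;
    try { entry = JSON.parse(line); } catch { continue; } // skip bad lines
    if (entry.op !== "search_miss") continue;
    const topic = (entry.query ?? "").toLowerCase();
    counts.set(topic, (counts.get(topic) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, limit);
}
```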
Search for thin areas — topics mentioned in existing chunks but with few entries.
Use your Grep tool on `{brain_path}/chunks/` to find all `contains:` fields and count occurrences. Shell fallback:
@@ -62,6 +69,18 @@ Based on existing content, reason about:
- Connections between concepts that exist but aren't documented
- Questions the brain can't currently answer

## Source Material Rules (CRITICAL)

ONLY read from `chunks/extracted/` as source material for new inferences. Never base inferences on content from `chunks/inferred/` — that would be inference-of-inference, which causes confidence laundering (inferred content cites inferred content, making unreliable chains appear well-sourced).

- If you find relevant content in `chunks/inferred/`, you may note it as background context but do NOT cite it as a `source_chunk` or use it as evidence for new inferences.
- Every entry in `source_chunks` in your output MUST start with `chunks/extracted/`.
- If a gap cannot be filled using only extracted chunks as evidence, skip it.
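The guard above reduces to a path-prefix check; `validSourceChunks` is a hypothetical helper, not part of the package:

```javascript
// Accepts a proposed source_chunks list only if it is non-empty and every
// entry cites an extracted (never inferred) chunk.
function validSourceChunks(sourceChunks) {
  return (
    sourceChunks.length > 0 &&
    sourceChunks.every((p) => p.startsWith("chunks/extracted/"))
  );
}
```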
## Step 3: Write inferred chunks

For each gap, write a new chunk to `{brain_path}/chunks/inferred/{topic}/chunk-NNN.md`:
@@ -108,6 +108,21 @@ narrative_theme: {the "so what" in 8 words or fewer}

{Extracted content in markdown format}

## Tag Expansion

After generating the initial `contains:` tags, expand each keyword with 1-3 synonyms or related terms:

- **Abbreviations**: JWT → "json-web-token", K8s → "kubernetes", API → "application-programming-interface"
- **Synonyms**: "auth" → "authentication", "DB" → "database", "config" → "configuration"
- **Related concepts**: "JWT" → "tokens", "session", "security"; "PostgreSQL" → "database", "RDBMS"
- **Domain hierarchy**: specific terms get their general category added

Add expanded tags to `contains:` alongside originals. Deduplicate. Cap total tags at 15 per chunk.

Example:
Original tags: ["jwt", "session", "expiry"]
After expansion: ["jwt", "json-web-token", "tokens", "security", "session", "session-management", "expiry", "timeout", "ttl"]
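The expansion can be sketched with a small synonym table. A minimal sketch: the `SYNONYMS` map here is illustrative (a real run would rely on the LLM or a much larger mapping); originals are kept first, duplicates dropped, and the 15-tag cap applied last:

```javascript
// Illustrative synonym/related-term table (not the package's actual data).
const SYNONYMS = {
  jwt: ["json-web-token", "tokens", "security"],
  auth: ["authentication"],
  session: ["session-management"],
  expiry: ["timeout", "ttl"],
};

// Expands each tag with its synonyms, deduplicates, caps at `cap` tags.
function expandTags(tags, cap = 15) {
  const out = [];
  for (const tag of tags) {
    for (const t of [tag, ...(SYNONYMS[tag] ?? [])]) {
      if (!out.includes(t)) out.push(t); // originals stay ahead of synonyms
    }
  }
  return out.slice(0, cap);
}
```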
## After writing chunks, index them in the server:

curl -s -X POST http://localhost:{port}/api \
@@ -240,10 +255,34 @@ function splitText(text) {
  return chunks.length > 0 ? chunks : [text];
}

async function removeOldChunks(name) {
  const chunkDir = join(BRAIN, "chunks", "extracted", name);
  if (!existsSync(chunkDir)) return;
  const ts = Math.floor(Date.now() / 1000);
  // Remove each chunk from the search index before archiving
  for (const f of readdirSync(chunkDir).filter(f => f.endsWith(".md"))) {
    const id = `chunks/extracted/${name}/${f}`;
    try {
      await fetch(`http://localhost:${PORT}/api`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ action: "remove", params: { id } }),
      });
    } catch (e) {
      console.error(`  Failed to remove ${id}: ${e.message}`);
    }
  }
  // Archive the old directory
  const { renameSync } = await import("node:fs");
  renameSync(chunkDir, `${chunkDir}.archived-${ts}`);
  console.log(`  Archived old chunks: ${name}`);
}

async function ingestFile(filePath) {
  const ext = extname(filePath).toLowerCase();
  const rel = relative(SOURCE_DIR, filePath);
  const name = safeName(rel);
  await removeOldChunks(name);
  const chunkDir = join(BRAIN, "chunks", "extracted", name);
  mkdirSync(chunkDir, { recursive: true });
@@ -254,7 +293,22 @@ async function ingestFile(filePath) {
    const chunkId = `${name}/chunk-${String(i + 1).padStart(3, "0")}`;
    const chunkPath = `chunks/extracted/${chunkId}.md`;
    const ts = new Date().toISOString();
    const STOP = new Set([
      "should","would","could","their","about","which","these","those",
      "there","where","other","after","before","during","while","being",
      "having","because","through","between","without","against","itself",
      "become","becomes","another","however","already","always","around"
    ]);

    // Note: These keywords are for FTS indexing. The LLM-based ingest
    // generates richer synonym-expanded tags in the contains: field.
    // This batch script extracts basic keywords only.
    const keywords = [...new Set(
      chunks[i].toLowerCase()
        .replace(/[^a-z0-9\s-]/g, "")
        .split(/\s+/)
        .filter(w => w.length > 5 && !STOP.has(w))
    )].slice(0, 10);

    const frontmatter = [
      "---",
@@ -318,10 +372,19 @@ This pattern should be used **whenever more than 5 files need processing**. It:

### Step 4: Archive on re-ingest

If previous chunks exist for a source, **remove them from the search index before archiving**.
Archived files are invisible to the file watcher, so the server won't clean them up automatically.

1. List all .md files in `{brain_path}/chunks/extracted/{safe_name}/`
2. For each file, call the server to remove it from the index:
   ```bash
   curl -s -X POST http://localhost:{port}/api \
     -H "Content-Type: application/json" \
     -d '{"action":"remove","params":{"id":"chunks/extracted/{safe_name}/chunk-NNN.md"}}'
   ```
3. Then rename the directory to archive it:
   - macOS/Linux: `mv "{brain_path}/chunks/extracted/{safe_name}" "{brain_path}/chunks/extracted/{safe_name}.archived-$(date +%s)"`
   - Windows: `Rename-Item "{brain_path}\chunks\extracted\{safe_name}" "{safe_name}.archived-{timestamp}"`

### Step 5: Report to user
@@ -69,6 +69,12 @@ Shell fallback:
### Stale entries
Compare source file modification times with chunk creation times.

### Stale wiki articles
For each wiki article with `source_hashes` in frontmatter:
- Read each source chunk
- Compare its current content hash prefix against the stored hash
- If any hash mismatches or a chunk is missing: flag as stale
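The check reduces to a prefix comparison per cited chunk. A minimal sketch, assuming `source_hashes` maps chunk paths to stored hash prefixes; the hash function is injected since the exact algorithm behind `source_hashes` isn't specified here (real use would likely be sha256 via `node:crypto`):

```javascript
// Returns true if any cited source chunk changed or disappeared.
// `readChunk` returns a chunk body or null if the chunk is missing;
// `hashFn` computes the same hash that produced the stored prefixes.
function isWikiArticleStale(sourceHashes, readChunk, hashFn) {
  for (const [path, storedPrefix] of Object.entries(sourceHashes)) {
    const body = readChunk(path);
    if (body === null) return true; // source chunk is gone
    if (!hashFn(body).startsWith(storedPrefix)) return true; // content changed
  }
  return false;
}
```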
### Missing frontmatter
Check each chunk has required frontmatter fields (source, chunk_id, confidence, indexed_at).
@@ -77,6 +83,14 @@ Check each chunk has required frontmatter fields (source, chunk_id, confidence,
Read a sample of chunks and wiki articles. Check:
- Are tags consistent? (same concept tagged differently in different chunks)
- Are there factual contradictions between articles?
- Check for `contradicts` typed links — query the server:
  ```bash
  curl -s -X POST http://localhost:{port}/api \
    -H "Content-Type: application/json" \
    -d '{"action":"contradictions","params":{}}'
  ```
  For each contradiction link, read both the source and target to determine if the contradiction is resolved. Flag unresolved contradictions as warnings.
- Are there implicit connections that should be explicit [[links]]?
- What topics have chunks but no wiki article? (coverage gaps)
@@ -0,0 +1,172 @@
---
name: wicked-brain:lsp
description: |
  Universal code intelligence via LSP. Queries language servers for definitions,
  references, symbols, call hierarchies, hover info, and diagnostics. Auto-installs
  language servers when missing.

  Use when: "where is X defined", "who uses X", "what type is X",
  "list symbols in", "find symbol", "who calls X", "blast radius",
  "architecture map", "code diagnostics", "lsp health".
---

# wicked-brain:lsp

Universal code intelligence for any CLI/IDE via the brain's LSP client layer.

## Cross-Platform Notes

Commands in this skill work on macOS, Linux, and Windows. When a command has platform differences, alternatives are shown. Your native tools (Read, Write, Grep, Glob) work everywhere — prefer them over shell commands when possible.

For the brain path default:
- macOS/Linux: `~/.wicked-brain`
- Windows: `%USERPROFILE%\.wicked-brain`

- `curl` works on macOS, Linux, and Windows 10+
- File paths must be absolute
- On Windows, use forward slashes in JSON: `"file":"C:/Users/me/project/file.ts"`
- Language server install commands assume the package manager is in PATH
- For Windows PowerShell without npm/pip in PATH, guide the user to install manually

## Config

Read `_meta/config.json` for brain path and server port.
If it doesn't exist, trigger wicked-brain:init.

## When to Use

| You want to... | Action | Example |
|----------------|--------|---------|
| Find where something is defined | `lsp-definition` | "Where is UserService defined?" |
| Find all usages of something | `lsp-references` | "Who uses the validateCredentials function?" |
| See what type something is | `lsp-hover` | "What's the type of this variable?" |
| List symbols in a file | `lsp-symbols` | "What classes and functions are in auth/login.ts?" |
| Find a symbol by name | `lsp-workspace-symbols` | "Find the class PaymentService" |
| See what implements an interface | `lsp-implementation` | "What classes implement AuthProvider?" |
| Find who calls a function | `lsp-call-hierarchy-in` | "Who calls processPayment?" |
| See what a function calls | `lsp-call-hierarchy-out` | "What does processPayment call?" |
| Check for errors | `lsp-diagnostics` | "Any type errors in this project?" |
| Analyze blast radius | `lsp-call-hierarchy-in` (recursive) | "What breaks if I change handleRequest?" |
| Map architecture | `lsp-workspace-symbols` + `lsp-symbols` | "Give me an overview of this codebase" |
| Check server status | `lsp-health` | "Are language servers running?" |

## Process

### Step 1: Identify the action

Based on what the user/agent needs, pick the appropriate action from the table above.

### Step 2: Call the server

```bash
curl -s -X POST http://localhost:{port}/api \
  -H "Content-Type: application/json" \
  -d '{"action":"{action}","params":{params}}'
```

**Position-based actions** (definition, references, hover, implementation, call-hierarchy):
```json
{"action":"lsp-definition","params":{"file":"/absolute/path/to/file.ts","line":15,"col":10}}
```
Note: `line` and `col` are 0-indexed.

**File-based actions** (symbols):
```json
{"action":"lsp-symbols","params":{"file":"/absolute/path/to/file.ts"}}
```

**Query-based actions** (workspace-symbols):
```json
{"action":"lsp-workspace-symbols","params":{"query":"PaymentService"}}
```

**No-params actions** (health, diagnostics without file):
```json
{"action":"lsp-health"}
{"action":"lsp-diagnostics"}
```

### Step 3: Handle errors — auto-install

If the response contains `"error": "language_server_not_found"`, the language server isn't installed. The response includes install instructions:

```json
{"error":"language_server_not_found","language":"typescript","install":{"method":"npm","package":"typescript-language-server typescript"}}
```

Attempt installation based on the method:

| Method | Command |
|--------|---------|
| npm | `npm install -g {package}` |
| pip | `pip install {package} 2>/dev/null \|\| pip3 install {package}` |
| cargo | `cargo install {package}` |
| gem | `gem install {package}` |
| go | `go install {package}@latest` |
| brew | `brew install {package}` |
| dotnet | `dotnet tool install -g {package}` |
| rustup | `rustup component add {package}` |
| manual | Tell the user: "Install {package} manually" |

After installation, retry the original LSP request.

If installation fails, report to the user:
"Could not auto-install {language} language server. Install manually: {instructions}"

### Step 4: Handle other errors

| Error | What to do |
|-------|-----------|
| `language_server_crashed` | The server crashed 3 times. Report to user, suggest checking the language server logs. |
| `unsupported_language` | No known language server for this file extension. |
| `lsp_timeout` | The language server took too long. It may be initializing a large project. Retry once. |
| `file_outside_workspace` | The file isn't in a registered project. Use `wicked-brain:onboard` to register the project first. |

### Step 5: Use results

**Definitions/References/Implementation** return locations:
```json
{"locations":[{"file":"/path/to/file.ts","line":15,"col":2}]}
```
Read the file at that location to show the user the relevant code.

**Symbols** return a hierarchy:
```json
{"symbols":[{"name":"UserService","kind":"class","line":15,"endLine":45,"children":[...]}]}
```
Useful for understanding file structure.

**Hover** returns type info:
```json
{"content":"(method) UserService.validate(token: string): boolean","language":"typescript"}
```

**Call hierarchy** returns call chains:
```json
{"calls":[{"from":{"name":"handleLogin","file":"/path/auth.ts","line":30}}]}
```

**Diagnostics** return errors/warnings:
```json
{"diagnostics":[{"line":23,"col":5,"severity":"error","message":"Type 'string' is not assignable to type 'number'"}],"errors":1,"warnings":0}
```

## Blast Radius Analysis

To analyze the blast radius of changing a function:

1. Call `lsp-call-hierarchy-in` for the function → get direct callers
2. For each caller, call `lsp-call-hierarchy-in` again → get indirect callers
3. Continue until the chain stabilizes (usually 2-3 levels)
4. Report the full call chain with file locations
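The steps above can be sketched as a depth-limited recursive walk. A minimal sketch: `getCallers` is injected (a real implementation would POST `lsp-call-hierarchy-in` to the server and await the response), the `seen` set stops cycles, and the default depth mirrors the 2-3 level guidance:

```javascript
// Collects direct and indirect callers of `symbol`, up to maxDepth levels.
// `getCallers(name)` returns caller entries like { name, file, line }.
function blastRadius(symbol, getCallers, maxDepth = 3, seen = new Set()) {
  if (maxDepth === 0 || seen.has(symbol)) return [];
  seen.add(symbol);
  const direct = getCallers(symbol);
  const all = [...direct];
  for (const caller of direct) {
    // Walk one level further up for each direct caller.
    all.push(...blastRadius(caller.name, getCallers, maxDepth - 1, seen));
  }
  return all;
}
```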
## Architecture Mapping

To map a codebase's architecture:

1. Call `lsp-workspace-symbols` with `query=""` to get all symbols
2. For key files (entry points, main modules), call `lsp-symbols` for detailed hierarchies
3. Combine with the brain's existing wikilinks and backlinks for a complete picture
4. Use `wicked-brain:compile` to synthesize into a wiki article
@@ -0,0 +1,144 @@
---
name: wicked-brain:memory
description: |
  Store and recall experiential learnings (decisions, patterns, preferences,
  gotchas, discoveries) in the brain's memory system.

  Use when: "remember this", "store this decision", "recall what we decided",
  "what do I know about", "brain memory".
---

# wicked-brain:memory

Store and recall experiential learnings in the brain's memory system.

## Cross-Platform Notes

- Uses `curl` for server API calls (available on Windows 10+, macOS, Linux)
- File writes use agent-native tools (Write/Edit), not shell commands
- Path separator: always use forward slashes in `contains:` and `path` fields
- Brain path default: `~/.wicked-brain` (macOS/Linux), `%USERPROFILE%\.wicked-brain` (Windows)

## Config

Read `_meta/config.json` for brain path and server port.
If it doesn't exist, trigger wicked-brain:init.

## Parameters

- **mode** (required): `store` or `recall`
- **content** (store mode): the memory content to store
- **type** (store mode, optional): `decision`, `pattern`, `preference`, `gotcha`, or `discovery`. Auto-detected if omitted.
- **ttl_days** (store mode, optional): number of days before this memory expires. Defaults by type.
- **query** (recall mode): search term for finding memories
- **filter_type** (recall mode, optional): filter by memory type
- **filter_tier** (recall mode, optional): filter by tier (`working`, `episodic`, `semantic`)

## Store Mode

### Step 1: Detect type

If type is not provided, classify the content:
- Contains "decided", "chose", "will use", "going with" → `decision`
- Contains "pattern", "always", "tends to", "convention" → `pattern`
- Contains "prefer", "like", "want", "should always" → `preference`
- Contains "watch out", "careful", "gotcha", "trap", "bug" → `gotcha`
- Otherwise → `discovery`
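The classifier above can be sketched as a first-match keyword scan; `classifyMemoryType` is a hypothetical helper name, and earlier types in the list take precedence when signals overlap:

```javascript
// Signal phrases per type, checked in order; first matching type wins.
const TYPE_SIGNALS = [
  ["decision", ["decided", "chose", "will use", "going with"]],
  ["pattern", ["pattern", "always", "tends to", "convention"]],
  ["preference", ["prefer", "like", "want", "should always"]],
  ["gotcha", ["watch out", "careful", "gotcha", "trap", "bug"]],
];

// Case-insensitive substring match; falls back to `discovery`.
function classifyMemoryType(content) {
  const text = content.toLowerCase();
  for (const [type, signals] of TYPE_SIGNALS) {
    if (signals.some((s) => text.includes(s))) return type;
  }
  return "discovery";
}
```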
### Step 2: Apply type defaults

| Type | Default importance | Default ttl_days |
|------|-------------------|-----------------|
| decision | 7 | null (permanent) |
| pattern | 6 | null (permanent) |
| preference | 6 | null (permanent) |
| gotcha | 5 | 30 |
| discovery | 4 | 14 |

Agent-provided overrides take precedence.

### Step 3: Generate tags with synonym expansion

Extract 5-10 keyword tags from the content. For each tag, add 1-3 synonyms:
- Abbreviations: JWT → "json-web-token", K8s → "kubernetes"
- Synonyms: "auth" → "authentication", "DB" → "database"
- Related concepts: "JWT" → "tokens", "session", "security"
- Domain hierarchy: specific terms get their general category added

Cap total tags at 15. Deduplicate.

### Step 4: Generate safe filename

Slugify a summary of the content:
- Lowercase, replace spaces with hyphens, remove special chars
- Max 60 characters
- Example: "Decided to use JWT with 15-min expiry" → `jwt-15min-expiry-decision.md`
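The mechanical part of the step can be sketched as a chain of string transforms. A minimal sketch; note the agent first condenses the content to a short summary (as in the example above), then slugifies it:

```javascript
// Lowercase, strip special characters, spaces → hyphens, cap at `max` chars.
function slugify(summary, max = 60) {
  return summary
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // drop special characters
    .trim()
    .replace(/\s+/g, "-")         // spaces become hyphens
    .replace(/-+/g, "-")          // collapse runs of hyphens
    .slice(0, max)
    .replace(/-$/, "");           // no trailing hyphen after truncation
}
```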
### Step 5: Write memory file

Write to `{brain_path}/memory/{safe_name}.md`:

```yaml
---
type: {detected or provided type}
tier: working
confidence: 0.5
importance: {from type defaults or override}
ttl_days: {from type defaults or override, null if permanent}
session_origin: "{current session identifier or ISO timestamp}"
contains:
  - {tag1}
  - {tag2}
  - {synonym-expanded tags...}
entities:
  people: [{if mentioned}]
  systems: [{if mentioned}]
indexed_at: "{ISO 8601 timestamp}"
---

{memory content}
```

The server's file watcher will auto-index this file.

### Step 6: Log the store event

Append to `{brain_path}/_meta/log.jsonl`:

```json
{"ts":"{ISO}","op":"memory_store","path":"memory/{safe_name}.md","type":"{type}","tier":"working","author":"agent:memory"}
```

## Recall Mode

### Progressive loading

- **Depth 0**: frontmatter only — type, tier, confidence, importance, contains tags
- **Depth 1**: + first 3 lines of content (summary)
- **Depth 2**: full content

### Step 1: Search

```bash
curl -s -X POST http://localhost:{port}/api \
  -H "Content-Type: application/json" \
  -d '{"action":"search","params":{"query":"{query}","limit":10,"session_id":"{session_id}"}}'
```

Pass a session_id with every search call. This enables access tracking for consolidation. Use a consistent session_id for the entire conversation.

### Step 2: Filter results

Filter to paths starting with `memory/`. If filter_type or filter_tier is provided, read frontmatter and filter accordingly.

### Step 3: Apply tier weighting

Re-rank results by applying tier multipliers:
- `semantic`: score x 1.3
- `episodic`: score x 1.0
- `working`: score x 0.8
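The re-ranking is a score multiply followed by a sort. A minimal sketch, assuming each search hit already carries its `score` and the memory's `tier` read from frontmatter:

```javascript
// Tier multipliers from the list above; unknown tiers pass through unchanged.
const TIER_WEIGHT = { semantic: 1.3, episodic: 1.0, working: 0.8 };

// Returns a new list, weighted and sorted by descending adjusted score.
function rerankByTier(results) {
  return [...results]
    .map((r) => ({ ...r, score: r.score * (TIER_WEIGHT[r.tier] ?? 1.0) }))
    .sort((a, b) => b.score - a.score);
}
```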
### Step 4: Return at requested depth

Return results at the requested depth level. Default to depth 0.