wicked-brain 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,331 @@
---
name: wicked-brain:ingest
description: |
  Ingest source files into the brain as structured chunks. Handles text files
  (md, txt, csv, html) deterministically and binary files (pdf, docx, pptx,
  xlsx, images) via LLM vision. Dispatches a subagent for the heavy lifting.

  Use when: "ingest this file", "add to brain", "learn from this document",
  "index this file", "brain ingest", "ingest this directory".
---

# wicked-brain:ingest

You ingest source files into the brain by dispatching an ingest subagent.

## Cross-Platform Notes

Commands in this skill work on macOS, Linux, and Windows. When a command has
platform differences, alternatives are shown. Your native tools (Read, Write,
Grep, Glob) work everywhere — prefer them over shell commands when possible.

For the brain path default:
- macOS/Linux: `~/.wicked-brain`
- Windows: `%USERPROFILE%\.wicked-brain`

## Config

Read `_meta/config.json` for the brain path and server port.
If it doesn't exist, trigger wicked-brain:init.

## Parameters

- **source** (required): path to a file or directory to ingest

## Process

### Step 1: Assess scope

Determine whether the source is a single file or a directory.

- **Single file**: dispatch a subagent to ingest it directly (Step 3a)
- **Directory or multiple files**: use the batch script pattern (Step 3b)

### Step 2: Copy source to raw/ (if not already there)

If the source is outside the brain directory, symlink or copy it into `{brain_path}/raw/`.
Prefer symlinks to avoid duplicating large files:
- macOS/Linux: `ln -s "{source}" "{brain_path}/raw/{name}"`
- Windows: `New-Item -ItemType SymbolicLink -Path "{brain_path}\raw\{name}" -Target "{source}"` (creating symlinks may require an elevated prompt or Developer Mode; fall back to `Copy-Item` if it fails)
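For reference, this symlink-with-copy-fallback step can be sketched in Node.js (a minimal sketch; `linkOrCopy` is an illustrative helper name, not part of the skill API):

```javascript
import { symlinkSync, copyFileSync, mkdirSync } from "node:fs";
import { join, basename } from "node:path";

// Link a source file into {brain_path}/raw/, falling back to a plain copy
// when symlinks are unavailable (e.g. Windows without elevation).
function linkOrCopy(source, rawDir) {
  mkdirSync(rawDir, { recursive: true });
  const target = join(rawDir, basename(source));
  try {
    symlinkSync(source, target);
  } catch {
    copyFileSync(source, target);
  }
  return target;
}
```

Either way, the file ends up reachable under `raw/` for later chunking.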

### Step 3a: Single file ingest (subagent)

Dispatch an ingest subagent with these instructions:

```
You are an ingest agent for the digital brain at {brain_path}.

Source file: {source_path}
Source name: {safe_name} (lowercase, hyphens for special chars)
Server: http://localhost:{port}/api

## Detect file type

Check the file extension:
- Text files (md, txt, csv, html, json): use deterministic extraction
- Binary files (pdf, docx, pptx, xlsx, png, jpg, jpeg, gif, webp): use vision extraction

## For TEXT files:

1. Read the file content
2. Split into chunks:
   - Markdown: split on H1/H2 headings. If a section > 800 words, split further at paragraph breaks.
   - Code files (.py, .js, .jsx, .ts): one chunk per file. Tag with language.
   - Other text: split into paragraph groups of ~800 words.
3. For each chunk, write to `{brain_path}/chunks/extracted/{safe_name}/chunk-NNN.md`

## For BINARY files:

1. Read the file (the LLM receives it natively as an attachment)
2. Extract content by examining the document visually
3. For PDFs: one chunk per logical section or every 3-5 pages
4. For PPTX: one chunk per slide or slide group
5. For DOCX: one chunk per section heading
6. For XLSX: one chunk per sheet, render data as markdown tables
7. For images: one chunk describing the visual content

## Chunk format

Each chunk file must have this structure:

---
source: {safe_name}
source_type: {extension}
chunk_id: {safe_name}/chunk-{NNN}
content_type:
  - text
contains:
  - {topic tags extracted from content}
entities:
  systems: [{named systems/platforms}]
  people: [{people/roles}]
  programs: [{programs/initiatives}]
  metrics: ["{metric}: {value}"]
confidence: {0.7 for text, 0.85 for vision}
indexed_at: {current ISO timestamp}
narrative_theme: {the "so what" in 8 words or fewer}
---

{Extracted content in markdown format}

## After writing chunks, index them in the server:

curl -s -X POST http://localhost:{port}/api \
  -H "Content-Type: application/json" \
  -d '{"action":"index","params":{"id":"{chunk_path}","path":"{chunk_path}","content":"{chunk_content}","brain_id":"{brain_id}"}}'

## Report back

State how many chunks were created and from what file.
```

### Step 3b: Batch ingest (script generation)

When ingesting a directory or many files, **do not ingest files one-by-one in conversation**.
Instead, write a batch script and run it. This preserves context and is dramatically faster.

**The pattern:**

1. Detect which runtime is available. Check in order:
   - Node.js: `node --version`
   - Python: `python3 --version` or `python --version`
   - Shell: always available as a fallback

2. Write a script to the brain's `_meta/` directory that:
   - Walks the source directory
   - Filters by file extension (text types for deterministic extraction, binary types for listing)
   - For each text file: reads the content, splits it into chunks, writes the chunk .md files, and calls the index API
   - For binary files: lists them for separate vision-based ingest
   - Logs progress to stdout
   - Writes a summary at the end

3. Run the script:
   ```bash
   node {brain_path}/_meta/batch-ingest.mjs  # or .py or .sh
   ```

4. Read the output and report results to the user

5. For any binary files identified, either:
   - Dispatch individual vision ingest subagents for the most important ones
   - Report the list and let the user choose which to vision-ingest

**Example Node.js batch script structure:**

```javascript
#!/usr/bin/env node
import { readFileSync, writeFileSync, mkdirSync, readdirSync } from "node:fs";
import { join, extname, basename, relative } from "node:path";
import { createHash } from "node:crypto";

const BRAIN = "{brain_path}";
const PORT = {port};
const BRAIN_ID = "{brain_id}";
const SOURCE_DIR = "{source_dir}";

const TEXT_EXT = new Set([".md",".txt",".csv",".html",".htm",".json",".py",".js",".jsx",".ts",".tsx",".sh"]);
const BINARY_EXT = new Set([".pdf",".docx",".pptx",".xlsx",".png",".jpg",".jpeg",".gif",".webp"]);

const textFiles = [];
const binaryFiles = [];
let totalChunks = 0;
let totalFiles = 0;

function safeName(name) {
  return name.toLowerCase().replace(/[^a-z0-9.-]/g, "-").replace(/-+/g, "-").replace(/^-|-$/g, "");
}

function hash(content) {
  return createHash("sha256").update(content).digest("hex").slice(0, 16);
}

async function indexChunk(id, path, content) {
  try {
    await fetch(`http://localhost:${PORT}/api`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ action: "index", params: { id, path, content, brain_id: BRAIN_ID } }),
    });
  } catch (e) {
    console.error(`  Failed to index ${id}: ${e.message}`);
  }
}

function splitMarkdown(text) {
  const sections = text.split(/(?=^#{1,2}\s)/m).filter(s => s.trim());
  if (sections.length === 0) return [text];
  // Sub-split sections > 800 words
  const result = [];
  for (const section of sections) {
    const words = section.split(/\s+/).length;
    if (words > 800) {
      const paragraphs = section.split(/\n\n+/);
      let current = [];
      let count = 0;
      for (const p of paragraphs) {
        const w = p.split(/\s+/).length;
        if (count + w > 800 && current.length > 0) {
          result.push(current.join("\n\n"));
          current = [p];
          count = w;
        } else {
          current.push(p);
          count += w;
        }
      }
      if (current.length > 0) result.push(current.join("\n\n"));
    } else {
      result.push(section);
    }
  }
  return result;
}

function splitText(text) {
  const paragraphs = text.split(/\n\n+/);
  const chunks = [];
  let current = [];
  let count = 0;
  for (const p of paragraphs) {
    const w = p.split(/\s+/).length;
    if (count + w > 800 && current.length > 0) {
      chunks.push(current.join("\n\n"));
      current = [p];
      count = w;
    } else {
      current.push(p);
      count += w;
    }
  }
  if (current.length > 0) chunks.push(current.join("\n\n"));
  return chunks.length > 0 ? chunks : [text];
}

async function ingestFile(filePath) {
  const ext = extname(filePath).toLowerCase();
  const rel = relative(SOURCE_DIR, filePath);
  const name = safeName(rel);
  const chunkDir = join(BRAIN, "chunks", "extracted", name);
  mkdirSync(chunkDir, { recursive: true });

  const content = readFileSync(filePath, "utf-8");
  const chunks = ext === ".md" ? splitMarkdown(content) : splitText(content);

  for (let i = 0; i < chunks.length; i++) {
    const chunkId = `${name}/chunk-${String(i + 1).padStart(3, "0")}`;
    const chunkPath = `chunks/extracted/${chunkId}.md`;
    const ts = new Date().toISOString();
    const keywords = [...new Set(chunks[i].toLowerCase().replace(/[^a-z0-9\s-]/g, "").split(/\s+/).filter(w => w.length > 5))].slice(0, 10);

    const frontmatter = [
      "---",
      `source: ${basename(filePath)}`,
      `source_type: ${ext.slice(1)}`,
      `chunk_id: ${chunkId}`,
      "content_type:",
      "  - text",
      "contains:",
      ...keywords.map(k => `  - ${k}`),
      `confidence: 0.7`,
      `indexed_at: "${ts}"`,
      "---",
    ].join("\n");

    const fullContent = `${frontmatter}\n\n${chunks[i]}`;
    writeFileSync(join(BRAIN, chunkPath), fullContent);
    await indexChunk(chunkPath, chunkPath, chunks[i]);
    totalChunks++;
  }

  totalFiles++;
  console.log(`  ${name}: ${chunks.length} chunks`);
}

function walk(dir, callback) {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.name.startsWith(".") || entry.name === "node_modules" || entry.name === "__pycache__" || entry.name === "package-lock.json") continue;
    if (entry.isDirectory()) walk(full, callback);
    else if (entry.isFile()) callback(full);
  }
}

console.log(`Ingesting from ${SOURCE_DIR}...`);

// Collect files first, then ingest sequentially so every index call is awaited
// before the summary prints (no fire-and-forget race).
walk(SOURCE_DIR, (filePath) => {
  const ext = extname(filePath).toLowerCase();
  if (TEXT_EXT.has(ext)) textFiles.push(filePath);
  else if (BINARY_EXT.has(ext)) binaryFiles.push(filePath);
});

for (const filePath of textFiles) {
  try {
    await ingestFile(filePath);
  } catch (e) {
    console.error(`  Error: ${filePath}: ${e.message}`);
  }
}

console.log(`\nDone: ${totalFiles} files, ${totalChunks} chunks indexed`);
if (binaryFiles.length > 0) {
  console.log(`\nBinary files needing vision ingest (${binaryFiles.length}):`);
  for (const f of binaryFiles) console.log(`  ${f}`);
}
```

This pattern should be used **whenever more than 5 files need processing**. It:
- Preserves agent context (no 50 Read/Write/Bash cycles)
- Runs fast (single process, no round-trips)
- Works cross-platform (Node.js is available everywhere wicked-brain runs)
- Reports results the agent can summarize

### Step 4: Archive on re-ingest

If previous chunks exist for a source, archive them first.
Use the agent's native move/rename capability, or the shell equivalents:
- macOS/Linux: `mv "{brain_path}/chunks/extracted/{safe_name}" "{brain_path}/chunks/extracted/{safe_name}.archived-$(date +%s)"`
- Windows: `Rename-Item "{brain_path}\chunks\extracted\{safe_name}" "{safe_name}.archived-{timestamp}"`

### Step 5: Report to user

After the subagent or batch script completes, summarize:
- "{N} text files ingested, {M} chunks created"
- "{K} binary files identified for vision ingest" (if any)
- Offer to vision-ingest the most important binary files
@@ -0,0 +1,110 @@
---
name: wicked-brain:init
description: |
  Initialize a new digital brain. Creates the directory structure, brain.json,
  and config. Auto-triggered on first use of any brain skill when no config exists.

  Use when: "set up a brain", "create a brain", "brain init", or when any brain
  skill detects no config.
---

# wicked-brain:init

You initialize a new digital brain on the filesystem.

## Cross-Platform Notes

Commands in this skill work on macOS, Linux, and Windows. When a command has
platform differences, alternatives are shown. Your native tools (Read, Write,
Grep, Glob) work everywhere — prefer them over shell commands when possible.

For the brain path default:
- macOS/Linux: `~/.wicked-brain`
- Windows: `%USERPROFILE%\.wicked-brain`

## When to use

- User explicitly asks to create/initialize a brain
- Another brain skill detected no `_meta/config.json` and redirected here

## Process

### Step 1: Ask the user

Ask these questions (provide defaults):

1. "Where should your brain live?"
   - Default (macOS/Linux): `~/.wicked-brain`
   - Default (Windows): `%USERPROFILE%\.wicked-brain`
2. "What should this brain be called?" — Default: directory name

### Step 2: Create directory structure

Use your native Write/mkdir tools to create these directories and files.

Directories to create (create each with its parent directories):
- `{brain_path}/raw`
- `{brain_path}/chunks/extracted`
- `{brain_path}/chunks/inferred`
- `{brain_path}/wiki/concepts`
- `{brain_path}/wiki/topics`
- `{brain_path}/_meta`

Shell equivalents if needed:
```bash
# macOS/Linux
mkdir -p {brain_path}/raw {brain_path}/chunks/extracted {brain_path}/chunks/inferred \
  {brain_path}/wiki/concepts {brain_path}/wiki/topics {brain_path}/_meta
```
```powershell
# Windows PowerShell
New-Item -ItemType Directory -Force -Path "{brain_path}\raw","{brain_path}\chunks\extracted","{brain_path}\chunks\inferred","{brain_path}\wiki\concepts","{brain_path}\wiki\topics","{brain_path}\_meta"
```

### Step 3: Write brain.json

Write to `{brain_path}/brain.json`:
```json
{
  "schema": 1,
  "id": "{id}",
  "name": "{name}",
  "parents": [],
  "links": []
}
```

Where `{id}` is the directory name (lowercase, hyphens for spaces) and `{name}` is what the user provided.
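That `{id}` rule fits in one line of Node.js (a sketch; `brainId` is an illustrative helper name):

```javascript
import { basename } from "node:path";

// Brain id = directory name, lowercased, with spaces turned into hyphens.
function brainId(brainPath) {
  return basename(brainPath).toLowerCase().replace(/\s+/g, "-");
}
```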

### Step 4: Write config

Write to `{brain_path}/_meta/config.json`:
```json
{
  "brain_path": "{absolute_path}",
  "server_port": 4242,
  "installed_clis": []
}
```

### Step 5: Initialize the event log

Use your Write tool to create an empty file at `{brain_path}/_meta/log.jsonl`.

Shell equivalents if needed:
```bash
# macOS/Linux
touch {brain_path}/_meta/log.jsonl
```
```powershell
# Windows PowerShell
New-Item -ItemType File -Force -Path "{brain_path}\_meta\log.jsonl"
```

### Step 6: Confirm

Tell the user:
"Brain initialized at `{brain_path}`. You can now:
- `wicked-brain:ingest` to add source files
- `wicked-brain:search` to search content
- `wicked-brain:status` to check brain health"
@@ -0,0 +1,93 @@
---
name: wicked-brain:lint
description: |
  Check brain health and fix issues. Dispatches a lint subagent that runs
  deterministic checks (broken links, orphans, stale entries) and semantic
  analysis (inconsistencies, gaps, tag misalignment).

  Use when: "lint the brain", "check brain health", "brain lint",
  "find issues in the brain".
---

# wicked-brain:lint

You check brain quality by dispatching a lint subagent.

## Cross-Platform Notes

Commands in this skill work on macOS, Linux, and Windows. When a command has
platform differences, alternatives are shown. Your native tools (Read, Write,
Grep, Glob) work everywhere — prefer them over shell commands when possible.

For the brain path default:
- macOS/Linux: `~/.wicked-brain`
- Windows: `%USERPROFILE%\.wicked-brain`

## Config

Read `_meta/config.json` for the brain path and server port.
If it doesn't exist, trigger wicked-brain:init.

## Process

Dispatch a lint subagent with these instructions:

````
You are a quality assurance agent for the digital brain at {brain_path}.
Server: http://localhost:{port}/api

## Pass 1: Deterministic checks

### Broken wikilinks
Find all [[wikilinks]] in wiki and chunk files, check if targets exist.
Use your Grep tool (preferred):
- Pattern: `\[\[[^\]]*\]\]`
- Search in: `{brain_path}/wiki/` and `{brain_path}/chunks/`

Shell fallback:
- macOS/Linux: `grep -roh '\[\[[^]]*\]\]' {brain_path}/wiki/ {brain_path}/chunks/ 2>/dev/null | sort -u`
- Windows: `findstr /s /r "\[\[" "{brain_path}\wiki\*.md" "{brain_path}\chunks\*.md" 2>nul`

For each link, use the Read tool to check if the target file exists.

### Orphan chunks
Use your Glob tool to find all chunk files in `{brain_path}/chunks/**/*.md`.
Then use your Grep tool to check which chunk IDs appear in wiki files.

Shell fallback:
- macOS/Linux:
  ```bash
  find {brain_path}/chunks -name "chunk-*.md" -type f
  grep -rl "chunk-" {brain_path}/wiki/ 2>/dev/null
  ```
- Windows:
  ```powershell
  Get-ChildItem -Recurse -Filter "chunk-*.md" "{brain_path}\chunks"
  findstr /s /m "chunk-" "{brain_path}\wiki\*.md" 2>nul
  ```

### Stale entries
Compare source file modification times with chunk creation times.

### Missing frontmatter
Check each chunk has the required frontmatter fields (source, chunk_id, confidence, indexed_at).

## Pass 2: Semantic analysis

Read a sample of chunks and wiki articles. Check:
- Are tags consistent? (same concept tagged differently in different chunks)
- Are there factual contradictions between articles?
- Are there implicit connections that should be explicit [[links]]?
- What topics have chunks but no wiki article? (coverage gaps)

## Report

For each issue found:
- **severity**: error | warning | info
- **type**: broken_link | orphan | stale | missing_field | inconsistency | gap
- **path**: which file
- **message**: what's wrong
- **fix**: suggested fix (or "auto-fixed" if you fixed it)

Auto-fix broken links and missing fields where possible. Report everything else.
````
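The wikilink scan in Pass 1 can be sketched in Node.js (a sketch mirroring the grep pattern above; how a target resolves to a file path is brain-specific, so existence checking stays with the Read tool):

```javascript
// Collect unique [[wikilink]] targets from file text, matching the
// \[\[[^\]]*\]\] pattern used in Pass 1.
function wikilinkTargets(text) {
  return [...new Set([...text.matchAll(/\[\[([^\]]*)\]\]/g)].map(m => m[1]))];
}
```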
@@ -0,0 +1,85 @@
---
name: wicked-brain:query
description: |
  Answer questions by searching and synthesizing brain content. Dispatches a
  query subagent that searches, reads, follows links, and produces a cited answer.

  Use when: user asks a question that could be answered from brain content,
  "ask the brain", "brain query", "what does my brain say about".
---

# wicked-brain:query

You answer questions from the brain's content by dispatching a query subagent.

## Config

Read `_meta/config.json` for the brain path and server port.
If it doesn't exist, trigger wicked-brain:init.

## Parameters

- **question** (required): the question to answer

## Process

Dispatch a query subagent with these instructions:

````
You are a research agent for the digital brain at {brain_path}.
Server: http://localhost:{port}/api

Question: "{question}"

## Step 1: Search

Search the brain for relevant content:
```bash
curl -s -X POST http://localhost:{port}/api \
  -H "Content-Type: application/json" \
  -d '{"action":"search","params":{"query":"{question}","limit":10}}'
```

Also search with grep for exact phrases:
```bash
grep -rl "{key_terms}" {brain_path}/chunks/ {brain_path}/wiki/ 2>/dev/null | head -10
```

## Step 2: Progressive read

Read the top 3-5 results at depth 1 first (just frontmatter + summary).
Then read the most promising 1-3 at depth 2 (full content).

Use the Read tool for each file. Parse frontmatter between `---` lines.

## Step 3: Follow links

Check the content for [[wikilinks]]. If following them would provide useful context:
- For local links [[path]]: read that file
- For cross-brain links [[brain::path]]: check if that brain is accessible

Check backlinks — what else references the content you found:
```bash
curl -s -X POST http://localhost:{port}/api \
  -H "Content-Type: application/json" \
  -d '{"action":"backlinks","params":{"id":"{result_path}"}}'
```

## Step 4: Synthesize answer

Combine what you found into a clear answer. Requirements:
- Cite sources: [source: {path}] for every factual claim
- If evidence is insufficient, say so explicitly
- If sources conflict, note the contradiction
- Keep the answer concise — the user asked a question, not for a report

## Report format

Answer the question directly, then list sources:

"{Answer text with [source: path] citations}"

Sources:
- {path}: {one-line description of what it contributed}
- {path}: {one-line description}
````
@@ -0,0 +1,73 @@
---
name: wicked-brain:read
description: |
  Read a chunk or wiki article from the brain with progressive loading.
  Depth 0: frontmatter + stats. Depth 1: + summary + headings. Depth 2: full content.

  Use when: "read this chunk", "show me the article", "brain read", following up
  from search results, or when needing to inspect brain content.
---

# wicked-brain:read

You read content from the digital brain with progressive loading. Never return
more than the user or calling skill needs.

## Config

Read the brain path from `_meta/config.json`. If it doesn't exist, trigger wicked-brain:init.

## Parameters

- **path** (required): relative path within the brain (e.g., `wiki/concepts/kg.md`)
- **depth** (default: 1): how much to return
  - 0: frontmatter + word count + link count (~5 tokens)
  - 1: frontmatter + first paragraph + section headings (~50-100 tokens)
  - 2: full content (variable)
- **sections** (optional, depth 2 only): list of section headings to extract (e.g., `## Methods`)

## Process

### Step 1: Read the file

Use the Read tool to read `{brain_path}/{path}`.

### Step 2: Parse frontmatter

The file starts with YAML between `---` delimiters:
```
---
key: value
---
Body content here...
```

Split the file on the second `---` line. Everything before it is frontmatter; everything after is the body.
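A minimal sketch of this split (assumes the file begins with `---` on its own line; `splitFrontmatter` is an illustrative name):

```javascript
// Split a brain file into { frontmatter, body } at the second --- line.
function splitFrontmatter(raw) {
  const m = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) return { frontmatter: "", body: raw };
  return { frontmatter: m[1], body: m[2] };
}
```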

### Step 3: Count stats

- **word_count**: split body on whitespace, count words
- **link_count**: count occurrences of `[[` in the body (each `[[...]]` is one link)
- **related**: extract all `[[target]]` and `[[brain::target]]` patterns
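All three stats can be computed from the body in a few lines (a sketch; `chunkStats` is an illustrative name):

```javascript
// word_count, link_count, and related targets for a chunk/article body.
function chunkStats(body) {
  const related = [...body.matchAll(/\[\[([^\]]+)\]\]/g)].map(m => m[1]);
  return {
    word_count: body.split(/\s+/).filter(Boolean).length,
    link_count: related.length,
    related,
  };
}
```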

### Step 4: Return at requested depth

**Depth 0:**
Report only:
- Frontmatter fields
- Word count: {N}
- Links: {N}
- Related: [list of link targets]

**Depth 1:**
Report depth 0 plus:
- **Summary**: the first non-empty paragraph after frontmatter (skip headings)
- **Sections**: list all lines starting with `#`, `##`, `###`

**Depth 2:**
Report the full body content. If the `sections` parameter is provided, extract only
the requested sections (from heading to next heading of same or higher level).
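Extracting one requested section can be sketched as (assumes ATX-style `#` headings as used above; `extractSection` is an illustrative name):

```javascript
// Extract one section: from its heading line to the next heading of the
// same or higher level (fewer-or-equal # characters).
function extractSection(body, heading) {
  const lines = body.split("\n");
  const start = lines.findIndex(l => l.trim() === heading);
  if (start === -1) return null;
  const level = heading.match(/^#+/)[0].length;
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    const m = lines[i].match(/^(#+)\s/);
    if (m && m[1].length <= level) { end = i; break; }
  }
  return lines.slice(start, end).join("\n");
}
```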

## Always include deeper hints

If returning at depth 0 or 1, suggest: "Use wicked-brain:read at depth {next} for more detail."