wicked-brain 0.9.1 → 0.10.0

This diff shows the published content of these package versions as they appear in their respective public registries; it is provided for informational purposes only.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "wicked-brain",
- "version": "0.9.1",
+ "version": "0.10.0",
  "type": "module",
  "description": "Digital brain as skills for AI coding CLIs — no vector DB, no embeddings, no infrastructure",
  "keywords": [
@@ -1,6 +1,6 @@
  {
  "name": "wicked-brain-server",
- "version": "0.9.1",
+ "version": "0.10.0",
  "type": "module",
  "description": "SQLite FTS5 search server for wicked-brain digital knowledge bases",
  "keywords": [
@@ -1,14 +1,14 @@
  # onboard

  ## Depth 0 — Summary
- Full project understanding pipeline. Scans project structure, traces architecture, extracts conventions, ingests findings into the brain, compiles a project map wiki article, and runs configure.
+ Full project understanding pipeline. Scans the project, extracts findings from 5 perspectives (product, engineering, quality, ops, data), ingests them as structured chunks, compiles a progressive-loading support wiki, and configures the CLI.

  ## Depth 1 — Pipeline Steps
  1. Scan: directory structure, key files, languages, frameworks, dependencies
- 2. Trace: entry points, data flow, module boundaries, API surfaces
- 3. Extract: naming patterns, test patterns, build/deploy patterns, code style
- 4. Ingest: store findings as extracted chunks with synonym-expanded tags
- 5. Compile: synthesize a wiki article summarizing architecture and conventions
+ 2. Investigate: gather facts from each of the 5 perspectives
+ 3. Extract symbols: LSP workspace symbols or grep fallback (JS/TS)
+ 4. Ingest: write 6 perspective-based chunks with support-wiki frontmatter
+ 5. Compile: produce 5 depth-aware wiki articles under wiki/projects/{name}/
  6. Configure: call wicked-brain:configure to update CLI agent config

  Parameters: brain_path, port, project_path (defaults to cwd)
@@ -20,7 +20,7 @@ You are an onboarding agent for the digital brain at {brain_path}.
  Server: http://localhost:{port}/api
  Project: {project_path}

- Your job: deeply understand a project and ingest that understanding into the brain.
+ Your job: deeply understand a project from 5 perspectives and produce a support wiki that serves engineers, testers, ops, and product owners — all through progressive loading, so agents load only what they need.

  ### Step 1: Scan project structure

@@ -33,31 +33,122 @@ Use Glob and Read tools to survey:

  Create a structured summary of what you found.

- ### Step 2: Trace architecture
+ ### Step 2: Investigate from 5 perspectives
+
+ Gather facts for each perspective. You'll write these as chunks in Step 4.
+
+ #### Product perspective
+ - What does this project do? Who is it for?
+ - Feature catalog: list every user-facing capability (CLI commands, API endpoints, skills, UI features)
+ - Capabilities with examples: how to exercise each feature
+ - Limitations: what it explicitly doesn't do, scale boundaries, known gaps
+ - Version history: recent git tags and what shipped (use `git tag --sort=-v:refname | head -10` and `git log --oneline {tag}..{next_tag}`)
+
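The version-history commands above can be combined into a short runnable sketch. This uses a throwaway repo with illustrative tag names so it runs anywhere; on a real project you would run only the last three commands inside its working tree:

```bash
# Sketch: find the two most recent tags and list what shipped between them.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first release"
git tag v0.9.1
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "add support wiki"
git tag v0.10.0
latest=$(git tag --sort=-v:refname | sed -n 1p)   # newest tag (version sort)
prev=$(git tag --sort=-v:refname | sed -n 2p)     # previous tag
git log --oneline "$prev..$latest"                # commits that shipped in the newest tag
```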
+ #### Engineering perspective
+ - Architecture: components and how they connect
+ - Dependencies: runtime, build, optional — with why each exists
+ - Entry and exit points (broader than APIs):
+   - HTTP endpoints, CLI commands/flags
+   - File system triggers (watchers, config file conventions)
+   - Events (bus, pub/sub, webhooks)
+   - Signals (process signals, IPC, PID files)
+ - Module map: which file owns what responsibility
+ - Data flow: request lifecycle from entry to storage to response
+ - Extension points: where to add new functionality (new action, new migration, new skill)
+
+ #### Quality perspective
+ - Test infrastructure: framework, runner command, test file locations
+ - Test coverage: what's tested, what's manual-only
+ - Functional capabilities: every feature × how to verify it works
+ - Regression requirements: what MUST pass before a release
+ - Edge cases: what breaks at boundaries (empty state, concurrent access, missing deps)
+
+ #### Operations perspective
+ - Configuration: all config files, env vars, CLI flags with defaults
+ - Startup/shutdown: how the system starts, process management
+ - Health checks: what endpoints exist, what "healthy" looks like
+ - Troubleshooting: common failure modes with symptom → diagnosis → fix
+ - Upgrade path: how to update, what migrates automatically
+ - Backup/recovery: what's rebuildable vs precious
+
+ #### Data perspective
+ - Sources: what data enters the system (files, API input, events)
+ - Storage: where data lives on disk, what format
+ - Schema: database tables, columns, indexes (if applicable)
+ - Constraints: size limits, format requirements, naming conventions
+ - Data lifecycle: creation → access → decay → archive → deletion
+ - Integrity: what's rebuildable vs authoritative, dedup mechanisms
+
+ ### Step 3: Extract symbols (JS/TS projects)
+
+ If the brain server has an LSP running, query it for exported symbols:
+
+ ```bash
+ curl -s -X POST http://localhost:{port}/api \
+   -H "Content-Type: application/json" \
+   -d '{"action":"lsp-workspace-symbols","params":{"query":""}}'
+ ```
+
+ If LSP is unavailable (check `{"action":"lsp-health"}`), fall back to reading
+ key source files directly and listing their exports with Grep:
+ - `export function`, `export class`, `export const`, `export default`
+ - `module.exports`, `exports.`
+
+ For each major module/directory, record:
+ - **File inventory**: files with approximate LOC
+ - **Exported symbols**: class names, function names, const names with their file paths
+ - **Signatures**: parameter types and return types when visible
+
+ Be specific — write `search({ query, limit, offset, since, session_id })` not
+ "searches the index". Include types when visible.

- - Identify entry points (main files, server start, CLI entry)
- - Map module boundaries (directories, packages, namespaces)
- - Identify API surfaces (HTTP routes, CLI commands, exported functions)
- - Trace primary data flows (request → handler → storage → response)
- - Note external dependencies and integrations
+ ### Step 4: Ingest findings

- ### Step 3: Extract conventions
+ Write chunks to `{brain_path}/chunks/extracted/project-{safe_project_name}/`:

- - **Naming**: file naming, function naming, variable naming patterns
- - **Testing**: test framework, test file locations, test naming patterns
- - **Build/Deploy**: build commands, deploy scripts, CI/CD patterns
- - **Code style**: formatting, import ordering, comment conventions
+ - `chunk-product.md` — product perspective (from Step 2)
+ - `chunk-engineering.md` — engineering perspective (from Step 2)
+ - `chunk-quality.md` — quality perspective (from Step 2)
+ - `chunk-operations.md` — operations perspective (from Step 2)
+ - `chunk-data.md` — data perspective (from Step 2)
+ - `chunk-symbols.md` — exported symbols per module (from Step 3)

- ### Step 4: Ingest findings
+ #### Chunk frontmatter
+
+ Each chunk MUST include `type: support-wiki` and `perspective:` so compile routes
+ them correctly:
+
+ ```yaml
+ ---
+ type: support-wiki
+ perspective: engineering
+ authored_by: onboard
+ authored_at: {ISO timestamp}
+ contains:
+   - {synonym-expanded tags}
+ ---
+ ```

- For each major finding (architecture, conventions, dependencies), write a chunk to `{brain_path}/chunks/extracted/project-{safe_project_name}/`:
+ #### chunk-symbols.md format

- Each chunk should be a focused topic:
- - `chunk-001-structure.md` — project structure and layout
- - `chunk-002-architecture.md` — architecture and data flow
- - `chunk-003-conventions.md` — coding conventions and patterns
- - `chunk-004-dependencies.md` — key dependencies and integrations
- - `chunk-005-build-deploy.md` — build, test, and deployment
+ List symbols grouped by module/directory:
+
+ ```markdown
+ ## server/lib/
+
+ ### sqlite-search.mjs (878 LOC)
+ - `class SqliteSearch` — FTS5 search engine wrapping better-sqlite3
+   - `search({ query, limit, offset, since, session_id })` → `{ results, total_matches, showing }`
+   - `wikiList({ query, limit })` → `{ articles: [{ path, title, description, tags, word_count }] }`
+   - `index(doc)` — upsert document + FTS + wikilinks
+   - `stats()` → `{ total, chunks, wiki, memory, ... }`
+ - `function deriveSourceType(path)` → `"wiki" | "memory" | "chunk"`
+
+ ### file-watcher.mjs (330 LOC)
+ - `class FileWatcher` — recursive fs.watch with polling fallback
+   - `start()` / `stop()` — lifecycle
+   - `onFileChange(callback)` — hook for LSP integration
+ ```

  Use standard chunk frontmatter with rich synonym-expanded `contains:` tags.

@@ -66,14 +157,60 @@ If re-onboarding (chunks already exist), follow the archive-then-replace pattern
  2. Archive old chunk directory with `.archived-{timestamp}` suffix
  3. Write new chunks

- ### Step 5: Compile project map
-
- Invoke `wicked-brain:compile` (or write directly) to create a wiki article at `{brain_path}/wiki/projects/{safe_project_name}.md` that synthesizes:
- - Project overview (what it does, who it's for)
- - Architecture summary with module map
- - Key conventions
- - Build/test/deploy quickstart
- - Links to detailed chunks via [[wikilinks]]
+ ### Step 5: Compile support wiki
+
+ Create 5 wiki articles under `{brain_path}/wiki/projects/{safe_project_name}/`:
+
+ - `product.md` — from chunk-product
+ - `engineering.md` — from chunk-engineering + chunk-symbols
+ - `quality.md` — from chunk-quality
+ - `operations.md` — from chunk-operations
+ - `data.md` — from chunk-data
+
+ #### Wiki article format
+
+ Each article must have structured frontmatter for progressive loading:
+
+ ```yaml
+ ---
+ title: {Perspective}
+ type: support-wiki
+ perspective: {perspective}
+ project: {project-name}
+ authored_by: onboard
+ authored_at: {ISO timestamp}
+ stats:
+   {perspective-specific numeric summary}
+ sections:
+   - name: {Section Name}
+     line: {line number}
+     summary: "{one-line summary}"
+ contains:
+   - {tags}
+ ---
+ ```
+
+ The `stats` block enables depth-0 retrieval (~5 tokens per article).
+ The `sections` block enables depth-1 retrieval (~50-100 tokens).
+ The body is depth-2 (full content, loaded on demand).
+
+ **Key rule for body structure:** the first paragraph under each `##` heading
+ must be a self-contained summary of that section. This is what `brain:read`
+ returns at depth 1. Put detail after the first paragraph.
+
+ #### What each article should answer
+
+ | Article | Depth 0 answers | Depth 1 answers | Depth 2 answers |
+ |---------|-----------------|-----------------|-----------------|
+ | product.md | "How many features?" | "What are the features?" | "How do I use each one? What are the limits?" |
+ | engineering.md | "How many modules/deps?" | "What are the components and how do they connect?" | "What symbols does module X export? How do I extend it?" |
+ | quality.md | "What's the test coverage?" | "What capabilities need testing?" | "How do I verify capability X works? What are the edge cases?" |
+ | operations.md | "How many config files?" | "What can go wrong?" | "How do I fix X? Full troubleshooting playbook." |
+ | data.md | "What data sources?" | "What's the schema?" | "What are the constraints? How does the lifecycle work?" |
+
+ Include `[[wikilinks]]` between articles where relevant (e.g., engineering.md
+ links to data.md for schema details, quality.md links to product.md for the
+ feature list it verifies).

  ### Step 6: Configure

@@ -83,6 +220,6 @@ Invoke `wicked-brain:configure` to update the CLI's agent config file with brain

  Report what was onboarded:
  - Project: {name}
- - Chunks created: {N}
- - Wiki article: {path}
+ - Chunks created: {N} (6 perspective-based)
+ - Wiki articles: {list of 5 articles}
  - CLI config updated: {file}
@@ -66,6 +66,48 @@ Shell fallback:
  - macOS/Linux: `find {brain_path}/chunks/extracted -name "*.md" -type f 2>/dev/null`
  - Windows: `Get-ChildItem -Recurse -Filter "*.md" "{brain_path}\chunks\extracted" 2>nul`

+ ## Step 1b: Route support-wiki chunks
+
+ Check if any chunks have `type: support-wiki` in their frontmatter. Use Grep:
+ - macOS/Linux: `grep -rl "type: support-wiki" {brain_path}/chunks/extracted/ 2>/dev/null`
+ - Windows: `findstr /s /m "type: support-wiki" "{brain_path}\chunks\extracted\*.md" 2>nul`
+
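On macOS/Linux, the detection step plus the grouping it feeds can be sketched in a few lines. The `sed` extraction of the `perspective:` value is an illustrative assumption (it assumes a top-level `perspective: <value>` frontmatter line), not part of the skill itself; a throwaway brain directory stands in for `{brain_path}`:

```bash
# Sketch: find support-wiki chunks and print each one's perspective.
brain=$(mktemp -d) && mkdir -p "$brain/chunks/extracted/project-demo"
printf -- '---\ntype: support-wiki\nperspective: engineering\n---\nbody\n' \
  > "$brain/chunks/extracted/project-demo/chunk-engineering.md"
grep -rl "type: support-wiki" "$brain/chunks/extracted" 2>/dev/null |
while read -r f; do
  p=$(sed -n 's/^perspective: *//p' "$f")   # pull the perspective value
  echo "$p  $f"
done
```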
+ If found, handle them separately from concept chunks:
+
+ 1. Group by `perspective` field (product, engineering, quality, operations, data)
+ 2. If `chunk-symbols.md` exists, pull it in as well (it feeds the engineering article)
+ 3. Write to `{brain_path}/wiki/projects/{project-name}/{perspective}.md` (NOT to `wiki/concepts/`)
+ 4. Use **structured assembly** for symbol-heavy sections (engineering symbols, data schema, quality capability matrix) — list the actual content from chunks, don't narrativize it
+ 5. Use **narrative synthesis** for context sections (product overview, engineering architecture, ops troubleshooting)
+ 6. Generate structured frontmatter with `stats` and `sections` blocks:
+
+ ```yaml
+ ---
+ title: {Perspective}
+ type: support-wiki
+ perspective: {perspective}
+ project: {project-name}
+ authored_by: compile
+ authored_at: {ISO timestamp}
+ stats:
+   {count key items — e.g., components: 4, test_files: 10, config_files: 3}
+ sections:
+   - name: {Section Name}
+     line: {line number in body}
+     summary: "{one-line summary of this section}"
+     source_chunks:
+       - {chunk-path}
+ contains:
+   - {tags}
+ ---
+ ```
+
+ **Key formatting rule:** the first paragraph under each `##` heading must be a
+ self-contained summary. `brain:read` at depth 1 returns section headings + first
+ paragraphs, so these summaries are what agents see before deciding to load full content.
+
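What a depth-1 read sees can be emulated under that rule. This is a sketch of the stated behavior (headings plus first paragraphs), not the actual `brain:read` implementation; the article content is made up for illustration:

```bash
# Sketch: emulate depth 1 — print each "## " heading and its first paragraph.
a=$(mktemp)
cat > "$a" <<'EOF'
## Troubleshooting

Most failures are port conflicts or a locked DB file; fixes below.

### Port already in use
Kill the stale process and restart.

## Upgrade path

Upgrades are in-place; the search index rebuilds automatically.
EOF
awk '
  /^## /         { print; mode=1; next }    # section heading
  mode==1 && NF  { mode=2 }                 # first non-blank line starts the summary
  mode==2 && !NF { mode=0; print ""; next } # blank line ends the summary paragraph
  mode==2        { print }
' "$a"
```

Note that subheadings (`###`) and everything after the first paragraph are skipped, which is exactly what makes the first-paragraph rule matter.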
+ After writing support-wiki articles, continue to Step 2 for any remaining non-support-wiki chunks.
+
  ## Step 2: Find uncovered chunks

  For each chunk directory, check if a wiki article references those chunks.