wicked-brain 0.9.2 → 0.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  {
  "name": "wicked-brain-server",
- "version": "0.9.2",
+ "version": "0.11.0",
  "type": "module",
  "description": "SQLite FTS5 search server for wicked-brain digital knowledge bases",
  "keywords": [
@@ -18,7 +18,8 @@
  "directory": "server"
  },
  "bin": {
- "wicked-brain-server": "./bin/wicked-brain-server.mjs"
+ "wicked-brain-server": "./bin/wicked-brain-server.mjs",
+ "wicked-brain-onboard-wiki": "./bin/onboard-wiki.mjs"
  },
  "files": [
  "bin/",
@@ -27,7 +28,11 @@
  ],
  "scripts": {
  "test": "node --test test/*.test.mjs",
- "start": "node bin/wicked-brain-server.mjs"
+ "start": "node bin/wicked-brain-server.mjs",
+ "gen:wiki": "node scripts/gen-wiki.mjs",
+ "gen:wiki:check": "node scripts/gen-wiki.mjs --check",
+ "lint:wiki": "node scripts/lint-wiki.mjs",
+ "lint:wiki:strict": "node scripts/lint-wiki.mjs --strict"
  },
  "dependencies": {
  "better-sqlite3": "^12.0.0",
@@ -1,14 +1,15 @@
  # onboard

  ## Depth 0 — Summary
- Full project understanding pipeline. Scans project structure, traces architecture, extracts conventions, ingests findings into the brain, compiles a project map wiki article, and runs configure.
+ Full project understanding pipeline. Scans project, extracts findings from 5 perspectives (product, engineering, quality, ops, data), ingests as structured chunks, compiles a progressive-loading support wiki, and configures the CLI.

  ## Depth 1 — Pipeline Steps
+ 0. Detect: run `wicked-brain-onboard-wiki` to classify repo mode, write `.wicked-brain/mode.json`, and stamp the contributor-wiki pointer into CLAUDE.md / AGENTS.md if present
  1. Scan: directory structure, key files, languages, frameworks, dependencies
- 2. Trace: entry points, data flow, module boundaries, API surfaces
- 3. Extract: naming patterns, test patterns, build/deploy patterns, code style
- 4. Ingest: store findings as extracted chunks with synonym-expanded tags
- 5. Compile: synthesize a wiki article summarizing architecture and conventions
+ 2. Investigate: gather facts from each of the 5 perspectives
+ 3. Extract symbols: LSP workspace symbols or grep fallback (JS/TS)
+ 4. Ingest: write 6 perspective-based chunks with support-wiki frontmatter
+ 5. Compile: produce 5 depth-aware wiki articles under wiki/projects/{name}/
  6. Configure: call wicked-brain:configure to update CLI agent config

  Parameters: brain_path, port, project_path (defaults to cwd)
@@ -20,7 +21,31 @@ You are an onboarding agent for the digital brain at {brain_path}.
  Server: http://localhost:{port}/api
  Project: {project_path}

- Your job: deeply understand a project and ingest that understanding into the brain.
+ Your job: deeply understand a project from 5 perspectives and produce a support wiki that serves engineers, testers, ops, and product owners — all through progressive loading so only what's needed gets loaded.
+
+ ### Step 0: Detect repo mode and stamp wiki pointer
+
+ Before scanning, classify the repo and establish the contributor-wiki location.
+ This runs the `wicked-brain-onboard-wiki` CLI (bundled with `wicked-brain-server`),
+ which:
+
+ - Runs mode detection (code / content / mixed / unknown).
+ - Writes `.wicked-brain/mode.json` unless an `override:true` file is already there.
+ - Stamps `Contributor wiki: ./<path>` into `CLAUDE.md` and/or `AGENTS.md` if either exists.
+
+ ```bash
+ npx wicked-brain-onboard-wiki --repo-root "{project_path}" 2>&1 || \
+ node "{wicked_brain_install}/server/bin/onboard-wiki.mjs" --repo-root "{project_path}"
+ ```
+
+ Capture the output — the reported mode drives how Step 2 interprets the 5
+ perspectives (content-mode repos put more weight on product/data, less on
+ engineering specifics). If the file reports `override:true`, respect it and
+ report the preserved mode rather than forcing a rewrite.
+
+ If neither `CLAUDE.md` nor `AGENTS.md` exists, the CLI reports `absent` for
+ both — surface that in the summary so the user can decide whether to create
+ one. Do NOT create either file yourself unless the user asks.
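The override check above can be sketched in shell. This is an illustrative helper, not the shipped CLI; `detect_mode` and the exact `mode.json` shape (`"mode"` and `"override"` keys) are assumptions:

```shell
# Hypothetical helper (not the shipped CLI): honor a pinned override in
# .wicked-brain/mode.json before re-running detection.
detect_mode() {
  mode_file="$1/.wicked-brain/mode.json"
  if [ -f "$mode_file" ] && grep -q '"override"[[:space:]]*:[[:space:]]*true' "$mode_file"; then
    # Report the preserved mode instead of rewriting the file.
    sed -n 's/.*"mode"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$mode_file"
  else
    npx wicked-brain-onboard-wiki --repo-root "$1"
  fi
}
# usage: detect_mode "{project_path}"
```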

  ### Step 1: Scan project structure

@@ -33,15 +58,53 @@ Use Glob and Read tools to survey:

  Create a structured summary of what you found.

- ### Step 2: Trace architecture
-
- - Identify entry points (main files, server start, CLI entry)
- - Map module boundaries (directories, packages, namespaces)
- - Identify API surfaces (HTTP routes, CLI commands, exported functions)
- - Trace primary data flows (request → handler → storage → response)
- - Note external dependencies and integrations
-
- #### Step 2b: Extract symbols (JS/TS projects)
+ ### Step 2: Investigate from 5 perspectives
+
+ Gather facts for each perspective. You'll write these as chunks in Step 4.
+
+ #### Product perspective
+ - What does this project do? Who is it for?
+ - Feature catalog: list every user-facing capability (CLI commands, API endpoints, skills, UI features)
+ - Capabilities with examples: how to exercise each feature
+ - Limitations: what it explicitly doesn't do, scale boundaries, known gaps
+ - Version history: recent git tags and what shipped (use `git tag --sort=-v:refname | head -10` and `git log --oneline {tag}..{next_tag}`)
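The version-history bullet can be gathered with a small loop over adjacent tags. A sketch only; `version_history` is a hypothetical helper name, and it assumes semver-style tags exist:

```shell
# Hypothetical helper: print what shipped between each adjacent pair of
# the ten most recent tags (newest first).
version_history() {
  prev=""
  for t in $(git tag --sort=-v:refname | head -10); do
    # Once $prev holds the newer tag, $t is the older one.
    [ -n "$prev" ] && { echo "== $t -> $prev =="; git log --oneline "$t..$prev"; }
    prev="$t"
  done
}
```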
+
+ #### Engineering perspective
+ - Architecture: components and how they connect
+ - Dependencies: runtime, build, optional — with why each exists
+ - Entry and exit points (broader than APIs):
+   - HTTP endpoints, CLI commands/flags
+   - File system triggers (watchers, config file conventions)
+   - Events (bus, pub/sub, webhooks)
+   - Signals (process signals, IPC, PID files)
+ - Module map: which file owns what responsibility
+ - Data flow: request lifecycle from entry to storage to response
+ - Extension points: where to add new functionality (new action, new migration, new skill)
+
+ #### Quality perspective
+ - Test infrastructure: framework, runner command, test file locations
+ - Test coverage: what's tested, what's manual-only
+ - Functional capabilities: every feature × how to verify it works
+ - Regression requirements: what MUST pass before a release
+ - Edge cases: what breaks at boundaries (empty state, concurrent access, missing deps)
+
+ #### Operations perspective
+ - Configuration: all config files, env vars, CLI flags with defaults
+ - Startup/shutdown: how the system starts, process management
+ - Health checks: what endpoints exist, what "healthy" looks like
+ - Troubleshooting: common failure modes with symptom → diagnosis → fix
+ - Upgrade path: how to update, what migrates automatically
+ - Backup/recovery: what's rebuildable vs precious
+
+ #### Data perspective
+ - Sources: what data enters the system (files, API input, events)
+ - Storage: where data lives on disk, what format
+ - Schema: database tables, columns, indexes (if applicable)
+ - Constraints: size limits, format requirements, naming conventions
+ - Data lifecycle: creation → access → decay → archive → deletion
+ - Integrity: what's rebuildable vs authoritative, dedup mechanisms
+
+ ### Step 3: Extract symbols (JS/TS projects)

  If the brain server has an LSP running, query it for exported symbols:

@@ -59,33 +122,41 @@ key source files directly and listing their exports with Grep:
  For each major module/directory, record:
  - **File inventory**: files with approximate LOC
  - **Exported symbols**: class names, function names, const names with their file paths
- - **Public API surface**: which symbols are entry points vs internal helpers
-
- Be specific — write `analyzeProject(desc: string): SignalAnalysis` not just
- "analyzes projects". Include parameter types and return types when visible.
-
- ### Step 3: Extract conventions
+ - **Signatures**: parameter types and return types when visible

- - **Naming**: file naming, function naming, variable naming patterns
- - **Testing**: test framework, test file locations, test naming patterns
- - **Build/Deploy**: build commands, deploy scripts, CI/CD patterns
- - **Code style**: formatting, import ordering, comment conventions
+ Be specific — write `search({ query, limit, offset, since, session_id })` not
+ "searches the index". Include types when visible.

  ### Step 4: Ingest findings

- For each major finding (architecture, conventions, dependencies), write a chunk to `{brain_path}/chunks/extracted/project-{safe_project_name}/`:
+ Write chunks to `{brain_path}/chunks/extracted/project-{safe_project_name}/`:
+
+ - `chunk-product.md` — product perspective (from Step 2)
+ - `chunk-engineering.md` — engineering perspective (from Step 2)
+ - `chunk-quality.md` — quality perspective (from Step 2)
+ - `chunk-operations.md` — operations perspective (from Step 2)
+ - `chunk-data.md` — data perspective (from Step 2)
+ - `chunk-symbols.md` — exported symbols per module (from Step 3)
+
+ #### Chunk frontmatter
+
+ Each chunk MUST include `type: support-wiki` and `perspective:` so compile routes
+ them correctly:
+
+ ```yaml
+ ---
+ type: support-wiki
+ perspective: engineering
+ authored_by: onboard
+ authored_at: {ISO timestamp}
+ contains:
+   - {synonym-expanded tags}
+ ---
+ ```
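A quick way to verify those required keys before compile runs. Illustrative sketch; `check_chunks` is a hypothetical helper, not part of the skill:

```shell
# Hypothetical check: flag chunks missing the frontmatter keys that
# compile routes on (type and perspective).
check_chunks() {
  for f in "$1"/chunk-*.md; do
    [ -f "$f" ] || continue
    grep -q '^type: support-wiki$' "$f" || echo "missing type: $f"
    grep -q '^perspective: ' "$f" || echo "missing perspective: $f"
  done
}
```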

- Each chunk should be a focused topic:
- - `chunk-001-structure.md` — project structure and layout (directory tree with file counts and LOC)
- - `chunk-002-architecture.md` — architecture and data flow
- - `chunk-003-conventions.md` — coding conventions and patterns
- - `chunk-004-dependencies.md` — key dependencies and integrations
- - `chunk-005-build-deploy.md` — build, test, and deployment
- - `chunk-006-symbols.md` — exported symbols per module (from Step 2b)
+ #### chunk-symbols.md format

- **chunk-006-symbols.md format:** List symbols grouped by module/directory. For each
- symbol include: name, kind (class/function/const/interface), file path, and signature
- when available. Example:
+ List symbols grouped by module/directory:

  ```markdown
  ## server/lib/
@@ -104,8 +175,6 @@ when available. Example:
  - `onFileChange(callback)` — hook for LSP integration
  ```

- This gives compile enough structural detail to weave into wiki articles.
-
  Use standard chunk frontmatter with rich synonym-expanded `contains:` tags.

  If re-onboarding (chunks already exist), follow the archive-then-replace pattern:
@@ -113,18 +182,60 @@ If re-onboarding (chunks already exist), follow the archive-then-replace pattern
  2. Archive old chunk directory with `.archived-{timestamp}` suffix
  3. Write new chunks

- ### Step 5: Compile project map
+ ### Step 5: Compile support wiki
+
+ Create 5 wiki articles under `{brain_path}/wiki/projects/{safe_project_name}/`:
+
+ - `product.md` — from chunk-product
+ - `engineering.md` — from chunk-engineering + chunk-symbols
+ - `quality.md` — from chunk-quality
+ - `operations.md` — from chunk-operations
+ - `data.md` — from chunk-data
+
+ #### Wiki article format
+
+ Each article must have structured frontmatter for progressive loading:
+
+ ```yaml
+ ---
+ title: {Perspective}
+ type: support-wiki
+ perspective: {perspective}
+ project: {project-name}
+ authored_by: onboard
+ authored_at: {ISO timestamp}
+ stats:
+   {perspective-specific numeric summary}
+ sections:
+   - name: {Section Name}
+     line: {line number}
+     summary: "{one-line summary}"
+ contains:
+   - {tags}
+ ---
+ ```
+
+ The `stats` block enables depth-0 retrieval (~5 tokens per article).
+ The `sections` block enables depth-1 retrieval (~50-100 tokens).
+ The body is depth-2 (full content, loaded on demand).
+
+ **Key rule for body structure:** the first paragraph under each `##` heading
+ must be a self-contained summary of that section. This is what `brain:read`
+ returns at depth 1. Put detail after the first paragraph.
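That depth-1 contract can be spot-checked locally. A sketch; `depth1` is a hypothetical helper (real retrieval goes through `brain:read`), printing each `##` heading plus the first non-blank line after it:

```shell
# Hypothetical sketch: emulate depth-1 — each "## " heading plus the
# first non-blank line after it (the start of its summary paragraph).
depth1() {
  awk '/^## / { print
    while ((getline line) > 0 && line == "") { }
    print line
    print "" }' "$1"
}
```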
+
+ #### What each article should answer

- Invoke `wicked-brain:compile` (or write directly) to create a wiki article at `{brain_path}/wiki/projects/{safe_project_name}.md` that synthesizes:
- - Project overview (what it does, who it's for)
- - Architecture summary with module map
- - **API surface** — key exported symbols per module (from chunk-006-symbols), with signatures
- - **File inventory** — directories with file counts and total LOC
- - Key conventions
- - Build/test/deploy quickstart
- - Links to detailed chunks via [[wikilinks]]
+ | Article | Depth 0 answers | Depth 1 answers | Depth 2 answers |
+ |---------|-----------------|-----------------|-----------------|
+ | product.md | "How many features?" | "What are the features?" | "How do I use each one? What are the limits?" |
+ | engineering.md | "How many modules/deps?" | "What are the components and how do they connect?" | "What symbols does module X export? How do I extend it?" |
+ | quality.md | "What's the test coverage?" | "What capabilities need testing?" | "How do I verify capability X works? What are the edge cases?" |
+ | operations.md | "How many config files?" | "What can go wrong?" | "How do I fix X? Full troubleshooting playbook." |
+ | data.md | "What data sources?" | "What's the schema?" | "What are the constraints? How does the lifecycle work?" |

- The wiki article should answer both "how does X work?" (narrative) and "what does X export?" (structural). Include actual function names, class names, and signatures — not just descriptions.
+ Include `[[wikilinks]]` between articles where relevant (e.g., engineering.md
+ links to data.md for schema details, quality.md links to product.md for the
+ feature list it verifies).

  ### Step 6: Configure

@@ -134,6 +245,6 @@ Invoke `wicked-brain:configure` to update the CLI's agent config file with brain

  Report what was onboarded:
  - Project: {name}
- - Chunks created: {N}
- - Wiki article: {path}
+ - Chunks created: {N} (6 perspective-based)
+ - Wiki articles: {list of 5 articles}
  - CLI config updated: {file}
@@ -66,6 +66,48 @@ Shell fallback:
  - macOS/Linux: `find {brain_path}/chunks/extracted -name "*.md" -type f 2>/dev/null`
  - Windows: `Get-ChildItem -Recurse -Filter "*.md" "{brain_path}\chunks\extracted" 2>nul`

+ ## Step 1b: Route support-wiki chunks
+
+ Check if any chunks have `type: support-wiki` in their frontmatter. Use Grep:
+ - macOS/Linux: `grep -rl "type: support-wiki" {brain_path}/chunks/extracted/ 2>/dev/null`
+ - Windows: `findstr /s /m "type: support-wiki" "{brain_path}\chunks\extracted\*.md" 2>nul`
+
+ If found, handle them separately from concept chunks:
+
+ 1. Group by `perspective` field (product, engineering, quality, operations, data)
+ 2. For each perspective, also pull in `chunk-symbols.md` if it exists (feeds into engineering)
+ 3. Write to `{brain_path}/wiki/projects/{project-name}/{perspective}.md` (NOT to `wiki/concepts/`)
+ 4. Use **structured assembly** for symbol-heavy sections (engineering symbols, data schema, quality capability matrix) — list the actual content from chunks, don't narrativize it
+ 5. Use **narrative synthesis** for context sections (product overview, engineering architecture, ops troubleshooting)
+ 6. Generate structured frontmatter with `stats` and `sections` blocks:
+
+ ```yaml
+ ---
+ title: {Perspective}
+ type: support-wiki
+ perspective: {perspective}
+ project: {project-name}
+ authored_by: compile
+ authored_at: {ISO timestamp}
+ stats:
+   {count key items — e.g., components: 4, test_files: 10, config_files: 3}
+ sections:
+   - name: {Section Name}
+     line: {line number in body}
+     summary: "{one-line summary of this section}"
+     source_chunks:
+       - {chunk-path}
+ contains:
+   - {tags}
+ ---
+ ```
+
+ **Key formatting rule:** the first paragraph under each `##` heading must be a
+ self-contained summary. `brain:read` at depth 1 returns section headings + first
+ paragraphs, so these summaries are what agents see before deciding to load full content.
+
+ After writing support-wiki articles, continue to Step 2 for any remaining non-support-wiki chunks.
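Steps 1 and 2 of the routing can be sketched with the same grep plus a sed over the frontmatter. Illustrative only; `group_chunks` is a hypothetical helper:

```shell
# Hypothetical sketch of the routing pass: emit "<perspective> <path>"
# for every support-wiki chunk, sorted so perspectives group together.
group_chunks() {
  grep -rl "type: support-wiki" "$1" 2>/dev/null | while read -r f; do
    p=$(sed -n 's/^perspective:[[:space:]]*//p' "$f" | head -1)
    echo "${p:-unknown} $f"
  done | sort
}
```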
+

  ## Step 2: Find uncovered chunks
  For each chunk directory, check if a wiki article references those chunks.
@@ -0,0 +1,137 @@
+ ---
+ name: wicked-brain:ui
+ description: |
+   Open the read-only brain viewer (Material-styled search + wiki browser) in
+   the default web browser. Use when the user says "open the brain viewer",
+   "show me the wiki", "open the brain in a browser", "open the UI",
+   "launch the viewer", or similar. Also use when the user wants to explore
+   a specific doc visually rather than through search tool calls — the URL
+   supports deep-linking via `#<path>`.
+
+   Works against any wicked-brain server on the local machine. If the server
+   isn't running, auto-starts it before opening.
+ ---
+
+ # wicked-brain:ui
+
+ You open the read-only HTML viewer served at `GET /` by the wicked-brain
+ server for the current project's brain (or a named brain).
+
+ ## Cross-Platform Notes
+
+ The only platform-specific piece is the "open a URL in the default browser"
+ command. Everything else is curl + Read/Write. Fallbacks are provided for all
+ three major platforms.
+
+ For the brain path default:
+ - macOS/Linux: `~/.wicked-brain/projects/{project-name}`
+ - Windows: `%USERPROFILE%\.wicked-brain\projects\{project-name}`
+
+ ## Parameters
+
+ - **brain** (optional): brain id or absolute brain path. Defaults to the
+   current working directory's project brain (per the resolution in
+   wicked-brain:init § "Resolving the brain config").
+ - **path** (optional): a repo-relative doc path to deep-link to
+   (e.g., `wiki/projects/foo/engineering.md`). The viewer loads this
+   document on open via URL fragment.
+
+ ## Process
+
+ ### Step 1: Resolve brain config
+
+ Use the shared resolution in wicked-brain:init § "Resolving the brain config".
+ In short: try
+ `~/.wicked-brain/projects/{cwd_basename}/_meta/config.json` first, fall back
+ to `~/.wicked-brain/_meta/config.json`, else trigger wicked-brain:init.
+
+ If `brain` was supplied explicitly, use that instead:
+ - If it looks like a path (contains `/` or starts with `~`), expand and use.
+ - Otherwise, treat it as a brain id and look for
+   `~/.wicked-brain/projects/{brain}/_meta/config.json`.
+
+ Read the resolved config to get `server_port`.
+
+ ### Step 2: Verify the server is running
+
+ ```bash
+ curl -s -f -X POST http://localhost:{port}/api \
+   -H "Content-Type: application/json" \
+   -d '{"action":"health","params":{}}'
+ ```
+
+ If the call fails with connection refused, invoke the wicked-brain:server
+ auto-start pattern — start the server against the resolved brain path and
+ wait for health to return ok before continuing. Never open a browser at a
+ URL that isn't serving yet; that produces a scary "can't connect" page the
+ user then has to refresh.
+
+ ### Step 3: Build the URL
+
+ Base: `http://localhost:{port}/`
+
+ If `path` was supplied, append `#{url-encoded path}`:
+
+ ```
+ http://localhost:4245/#wiki%2Fprojects%2Fgcp-repo-analyzer%2Fengineering.md
+ ```
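That fragment encoding can be produced without extra tooling. A minimal sketch that only escapes `/` (full percent-encoding is safer if paths can contain other special characters; `encode_frag` is a hypothetical helper):

```shell
# Minimal fragment encoder (sketch): the viewer's deep link needs "/"
# escaped as %2F.
encode_frag() { printf '%s' "$1" | sed 's|/|%2F|g'; }
echo "http://localhost:4245/#$(encode_frag "wiki/projects/foo/engineering.md")"
# → http://localhost:4245/#wiki%2Fprojects%2Ffoo%2Fengineering.md
```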
+
+ ### Step 4: Open in the default browser
+
+ macOS:
+ ```bash
+ open "{url}"
+ ```
+
+ Linux:
+ ```bash
+ xdg-open "{url}" 2>/dev/null || sensible-browser "{url}" 2>/dev/null || echo "Open manually: {url}"
+ ```
+
+ Windows (PowerShell):
+ ```powershell
+ Start-Process "{url}"
+ ```
+
+ Windows (Git Bash / WSL):
+ ```bash
+ start "" "{url}" 2>/dev/null || explorer.exe "{url}" 2>/dev/null || echo "Open manually: {url}"
+ ```
+
+ If opening the browser fails (no DISPLAY on a headless server, no xdg-utils
+ installed, etc.), don't treat it as fatal — just print the URL and tell the
+ user to open it themselves. The URL is the deliverable.
+
+ ### Step 5: Report
+
+ Tell the user:
+
+ > Opened the brain viewer at `{url}`.
+ >
+ > It supports:
+ > - Search across all indexed docs (AppBar input)
+ > - Source-type filters (wiki / chunk / memory) in the left drawer
+ > - Wiki article browser in the left drawer
+ > - Deep-linking via URL fragment (`#<path>`)
+ > - Back button to return from doc view to results
+
+ If the server had to be auto-started, mention that in the report so the user
+ knows a new process is running.
+
+ ## When to use
+
+ - User explicitly asks to open the viewer / UI / browser / wiki.
+ - User wants to visually explore a doc they've been working with — offer to
+   open it with the `path` param pre-filled.
+ - User asks "what's in the brain?" in an exploratory way — a visual browser
+   is often faster than a sequence of search calls.
+
+ ## When NOT to use
+
+ - When the user wants a specific answer from the brain: use
+   wicked-brain:search or wicked-brain:query instead. Opening a browser is
+   higher friction than a tool call with a direct answer.
+ - In a remote / headless environment: no browser is available. Just print
+   the URL so the user can forward it.
+ - When multiple brains could be relevant: ask which one first rather than
+   guessing, so you don't open the wrong brain's viewer.