wicked-brain 0.10.0 → 0.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -45,28 +45,29 @@ wicked-brain is a set of **skills** — markdown instruction files that teach yo
  A lightweight background server handles the one thing that needs a database: full-text search via SQLite FTS5.
 
  ```
- ┌─────────────────────────────────────────┐
- │ Your AI CLI (Claude / Gemini / ...) │
- │ │
- │ Skills: │
- │ wicked-brain:ingest → add sources │
- │ wicked-brain:search → find content │
- │ wicked-brain:query → ask questions│
- │ wicked-brain:compile → build wiki │
- │ wicked-brain:lint → check quality│
+ ┌─────────────────────────────────────────┐ ┌──────────────────────┐
+ │ Your AI CLI (Claude / Gemini / ...) │ │ Browser (you) │
+ │ │ │ http://localhost/… │
+ │ Skills: │ │ Search · Wiki viewer │
+ │ wicked-brain:ingest → add sources │ └──────────┬───────────┘
+ │ wicked-brain:search → find content │ │ GET /
+ │ wicked-brain:query → ask questions│
+ │ wicked-brain:compile → build wiki │ ┌──────────────────────┐
+ │ wicked-brain:lint → check quality│ │ Viewer HTML │
+ │ wicked-brain:ui → open viewer │ └──────────────────────┘
  │ │
  │ Your agent uses its own tools: │
  │ Read, Write, Grep — no special APIs │
  └────────────┬────────────────────────────┘
- curl localhost (search only)
+ POST /api (search + index + …)
 
  ┌─────────────────────────────────────────┐
  │ SQLite FTS5 server (auto-starts) │
- ~300 lines, one dependency
+ Optional: --read-only for shared mode
  └─────────────────────────────────────────┘
  ```
 
- **The server is invisible.** It auto-starts when needed and auto-reindexes when files change. You never think about it.
+ **The server is invisible.** It auto-starts when needed and auto-reindexes when files change. You never think about it — unless you want to: `open http://localhost:4242/` drops you into a browser viewer that lets you search and browse the wiki without going through an agent.
 
  ## Install
 
@@ -109,6 +110,19 @@ Once installed, just talk to your agent:
 
  Every operation uses **progressive loading** — the agent never pulls more than it needs. Search returns one-line summaries first. You drill down only when something looks relevant.
 
+ ## Browser Viewer
+
+ Every running brain server serves a read-only HTML viewer at `GET /`. Open `http://localhost:4242/` (or whatever port the brain bound to) and you get:
+
+ - **Search tab** — type-to-refine with live results and source-type filter chips. Empty query shows every doc most-recent-first so you can browse even when you don't know what to search for.
+ - **Wiki tab** — grid of cards for each wiki article with tags and word counts.
+ - **Header actions** — refresh (re-detect mode + re-index from disk) and delete (purge content; typed confirmation required).
+ - **Read-only mode** — start the server with `--read-only` and destructive actions return 403; the viewer greys out the buttons and tells you why.
+
+ Or let an agent do it: say *"open the brain viewer"* and the `wicked-brain:ui` skill resolves the brain, health-checks the server, and launches your default browser.
+
+ The viewer has no auth. It's localhost-only, same-machine trust.
+
  ## What Makes It Different
 
  | | Vector DB / RAG | wicked-brain |
@@ -145,6 +159,7 @@ Every operation uses **progressive loading** — the agent never pulls more than
  | `wicked-brain:retag` | Backfill synonym-expanded tags across all chunks for better search recall |
  | `wicked-brain:update` | Check npm for updates and reinstall skills across all detected CLIs |
  | `wicked-brain:lsp` | Universal code intelligence via LSP — hover, go-to-definition, diagnostics, completions |
+ | `wicked-brain:ui` | Open the read-only browser viewer — Material-styled Search + Wiki tabs over `http://localhost:<port>/` |
 
  ## Multi-Brain Federation
 
@@ -185,6 +200,8 @@ If you really want one brain for everything, you can pass a custom path to
 
  ## What's on Disk
 
+ Two places. The **brain** lives in your home directory:
+
  ```
  ~/.wicked-brain/projects/{project-name}/
    brain.json # Identity and brain links
@@ -199,7 +216,18 @@ If you really want one brain for everything, you can pass a custom path to
    .brain.db # SQLite search index (auto-managed)
  ```
 
- Everything is markdown. Everything is git-committable. Everything is human-readable. The SQLite file is a rebuildable cache — delete it and the server recreates it from your markdown files.
+ And a small per-repo marker lives **in the project itself**:
+
+ ```
+ <your-repo>/
+   .wicked-brain/
+     mode.json # Detected repo mode + wiki location (gitignore this)
+   CLAUDE.md # Gets a `Contributor wiki: <path>` pointer stamped in
+ ```
+
+ `mode.json` records whether the repo is `code` / `content` / `mixed` / `unknown` and where the contributor wiki lives (`wiki/`, `docs/wiki/`, etc.). Agents read it first when they need to find the wiki — it's the canonical discovery hint.
+
+ Everything is markdown. Everything is git-committable (except the brain's `.brain.db` and the per-repo `.wicked-brain/`). Everything is human-readable. The SQLite file is a rebuildable cache — delete it and the server recreates it from your markdown files.
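The `mode.json` contract described above is small enough to sketch. A hypothetical consumer, where the field names `mode` and `wiki_root` are inferred from the README prose rather than a verified schema:

```javascript
// Hypothetical sketch of reading .wicked-brain/mode.json as the README
// describes it. Field names are assumptions, not the package's actual schema.
const VALID_MODES = new Set(["code", "content", "mixed", "unknown"]);

function readModeHint(jsonText) {
  const data = JSON.parse(jsonText);
  return {
    // Fall back to "unknown" rather than failing on an unrecognized mode.
    mode: VALID_MODES.has(data.mode) ? data.mode : "unknown",
    // Where the contributor wiki lives, relative to the repo root.
    wikiRoot: typeof data.wiki_root === "string" ? data.wiki_root : null,
  };
}

const hint = readModeHint('{"mode":"code","wiki_root":"docs/wiki"}');
```

An agent following the "read it first" guidance would use `wikiRoot` as the wiki search root and only fall back to scanning when the file is absent.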
 
  ## The Agent is the Parser
 
@@ -209,22 +237,25 @@ Modern LLMs read PDF, DOCX, PPTX, and XLSX natively. When you ingest a binary do
 
  ## Architecture
 
- **~300 lines of server JavaScript** (SQLite FTS5 + file watcher) + **~1,400 lines of skill markdown** (agent instructions).
+ Plain Node.js server (SQLite FTS5 + file watcher + optional LSP client + HTML viewer) plus markdown skill instructions your AI CLI consumes. **One runtime dependency** (`better-sqlite3`); LSP layer is hand-rolled JSON-RPC; the viewer is vanilla JS with no build.
 
- That's the entire system. Compare that to a typical RAG stack:
+ Compare that to a typical RAG stack:
 
  ```
- Typical RAG: wicked-brain:
- - Embedding model API - SQLite (one file)
+ Typical RAG: wicked-brain:
+ - Embedding model API - SQLite (one file)
  - Vector database (Pinecone/Weaviate) - Markdown files
  - Chunking pipeline - Agent's native tools
- - Retrieval service - curl localhost
+ - Retrieval service - curl localhost (POST /api)
  - Re-ranking model - LLM reasoning
  - Orchestration layer - Skills (markdown)
+ - Admin UI - GET / (vanilla HTML, read-only)
  ───────────────── ─────────────────
- ~5,000+ lines, 10+ deps ~1,700 lines, 1 dep
+ 10+ deps, opaque vectors 1 runtime dep, plain markdown
  ```
 
+ See [ARCHITECTURE.md](ARCHITECTURE.md) for component diagrams and the schema layout.
+
  ## Supported CLIs
 
  | CLI | Status |
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "wicked-brain",
-   "version": "0.10.0",
+   "version": "0.11.0",
    "type": "module",
    "description": "Digital brain as skills for AI coding CLIs — no vector DB, no embeddings, no infrastructure",
    "keywords": [
@@ -0,0 +1,36 @@
+ #!/usr/bin/env node
+ /**
+  * wicked-brain-onboard-wiki
+  *
+  * Detect repo mode, write `.wicked-brain/mode.json`, and stamp the
+  * `Contributor wiki: <path>` pointer into CLAUDE.md / AGENTS.md.
+  *
+  * Usage:
+  *   wicked-brain-onboard-wiki                     # runs against cwd
+  *   wicked-brain-onboard-wiki --repo-root <path>  # runs against <path>
+  *   wicked-brain-onboard-wiki --force             # overrides override:true
+  */
+
+ import process from "node:process";
+ import { runOnboardWiki, formatOnboardResult } from "../lib/onboard-wiki.mjs";
+
+ const args = process.argv.slice(2);
+
+ function getFlag(name) {
+   const idx = args.indexOf(`--${name}`);
+   return idx !== -1 && args[idx + 1] && !args[idx + 1].startsWith("--")
+     ? args[idx + 1]
+     : null;
+ }
+
+ const repoRoot = getFlag("repo-root") ?? process.cwd();
+ const force = args.includes("--force");
+
+ try {
+   const result = await runOnboardWiki(repoRoot, { force });
+   console.log(formatOnboardResult(result));
+   process.exit(0);
+ } catch (err) {
+   console.error(`onboard-wiki failed: ${err.message}`);
+   process.exit(1);
+ }
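The `getFlag` helper above is easy to exercise on its own. A standalone copy run against a fixed argv sample (instead of `process.argv`) shows the edge case it guards: a flag immediately followed by another flag has no value and yields `null`.

```javascript
// Standalone copy of the bin script's getFlag logic, with a hypothetical
// argv sample in place of process.argv.slice(2).
const args = ["--repo-root", "/tmp/demo", "--force", "--empty", "--other"];

function getFlag(name) {
  const idx = args.indexOf(`--${name}`);
  // A flag only carries a value if the next token exists and isn't itself a flag.
  return idx !== -1 && args[idx + 1] && !args[idx + 1].startsWith("--")
    ? args[idx + 1]
    : null;
}
```

Boolean switches like `--force` are handled separately via `args.includes`, which is why `getFlag("force")` returning `null` is harmless.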
@@ -8,6 +8,10 @@ import { SqliteSearch } from "../lib/sqlite-search.mjs";
  import { LspClient } from "../lib/lsp-client.mjs";
  import { emitEvent, waitForBus } from "../lib/bus.mjs";
  import { startMemorySubscriber } from "../lib/memory-subscriber.mjs";
+ import { renderViewerHtml } from "../lib/viewer-page.mjs";
+ import { walkBrainContent, purgeBrainContent } from "../lib/brain-walker.mjs";
+ import { runOnboardWiki } from "../lib/onboard-wiki.mjs";
+ import { readFile as readFileAsync } from "node:fs/promises";
 
  // Parse args
  const args = argv.slice(2);
@@ -29,6 +33,10 @@ const brainPath = resolve(getArg("brain") || ".");
  const preferredPort = parseInt(getArg("port") || "4242", 10);
  // Explicit --port means "bind this exact port or fail" — no probe.
  const portExplicit = args.includes("--port");
+ // Read-only mode disables write + destructive actions at the API layer.
+ // Intended for shared / exposed brains where only search and read should
+ // be reachable. Default is off to preserve the existing ingest story.
+ const readOnly = args.includes("--read-only");
  const configPath = join(brainPath, "brain.json");
  // Source path for LSP workspace root — prefer --source flag, fall back to config, then brainPath
  const sourceArgRaw = getArg("source");
@@ -119,7 +127,7 @@ process.on("SIGINT", () => shutdown());
 
  // Action dispatch
  const actions = {
-   health: () => db.health(),
+   health: () => ({ ...db.health(), read_only: readOnly }),
    search: (p) => {
      const result = db.search(p);
      emitEvent("wicked.search.executed", "brain.search", {
@@ -154,6 +162,10 @@ const actions = {
    },
    backlinks: (p) => ({ links: db.backlinks(p.id) }),
    forward_links: (p) => ({ links: db.forwardLinks(p.id) }),
+   get_document: (p) => ({
+     document: p.id ? db.getDocument(p.id) : p.path ? db.getDocumentByPath(p.path) : null,
+   }),
+   list_docs: (p = {}) => db.listDocuments(p),
    stats: () => db.stats(),
    memory_stats: () => db.memoryStats(),
    candidates: (p) => ({ candidates: db.candidates(p) }),
@@ -206,13 +218,76 @@ const actions = {
    "lsp-call-hierarchy-in": (p) => lsp.callHierarchyIn(p),
    "lsp-call-hierarchy-out": (p) => lsp.callHierarchyOut(p),
    "lsp-diagnostics": (p) => lsp.diagnostics(p),
+   reonboard: async () => {
+     // Detect mode + stamp the CLAUDE.md/AGENTS.md pointer, then rebuild the
+     // search index from whatever content is on disk in this brain. Does NOT
+     // re-write chunks or wiki — authored content is preserved.
+     const onboardTarget = sourceArg ?? brainPath;
+     const onboard = await runOnboardWiki(onboardTarget).catch(() => null);
+     const entries = await walkBrainContent(brainPath);
+     const docs = [];
+     for (const entry of entries) {
+       const content = await readFileAsync(entry.abs, "utf8");
+       docs.push({ id: entry.rel, path: entry.rel, content });
+     }
+     db.reindex(docs);
+     emitEvent("wicked.brain.reonboarded", "brain", {
+       brain_id: brainId,
+       indexed: docs.length,
+       mode: onboard?.detection?.mode ?? null,
+     });
+     return {
+       indexed: docs.length,
+       onboard: onboard
+         ? {
+             mode: onboard.detection.mode,
+             wiki_root: onboard.wiki_root,
+             mode_write: onboard.mode_write.action,
+             stamps: onboard.stamps,
+           }
+         : { skipped: true },
+     };
+   },
+   purge_brain: async (p = {}) => {
+     // Destructive. Wipes chunks/, wiki/, and memory/ content and clears the
+     // SQLite index. Requires p.confirm === "DELETE" to execute — typed
+     // confirmation from the UI keeps accidental clicks from being catastrophic.
+     if (p.confirm !== "DELETE") {
+       return { error: 'confirmation missing: pass {"confirm":"DELETE"}' };
+     }
+     const removed = await purgeBrainContent(brainPath);
+     db.reindex([]);
+     emitEvent("wicked.brain.purged", "brain", { brain_id: brainId, removed });
+     return { removed };
+   },
  };
 
+ // Actions that mutate state. Blocked when the server was started with
+ // `--read-only`, plus the two destructive newcomers. The list is explicit
+ // rather than heuristic so additions are deliberate.
+ const WRITE_ACTIONS = new Set([
+   "index",
+   "remove",
+   "reindex",
+   "confirm_link",
+   "reonboard",
+   "purge_brain",
+ ]);
+
  // HTTP server
  const server = createServer((req, res) => {
+   // Read-only viewer at GET /
+   if (req.method === "GET" && (req.url === "/" || req.url.startsWith("/?") || req.url.startsWith("/#"))) {
+     res.writeHead(200, {
+       "Content-Type": "text/html; charset=utf-8",
+       "Cache-Control": "no-store",
+     });
+     res.end(renderViewerHtml({ brainId }));
+     return;
+   }
    if (req.method !== "POST" || req.url !== "/api") {
      res.writeHead(404, { "Content-Type": "application/json" });
-     res.end(JSON.stringify({ error: "Not found. Use POST /api" }));
+     res.end(JSON.stringify({ error: "Not found. Use POST /api or GET /" }));
      return;
    }
    let body = "";
@@ -226,6 +301,11 @@ const server = createServer((req, res) => {
      res.end(JSON.stringify({ error: `Unknown action: ${action}` }));
      return;
    }
+   if (readOnly && WRITE_ACTIONS.has(action)) {
+     res.writeHead(403, { "Content-Type": "application/json" });
+     res.end(JSON.stringify({ error: `Action blocked in --read-only mode: ${action}` }));
+     return;
+   }
    // Handle both sync and async results
    const result = handler(params);
    Promise.resolve(result)
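The two safety layers in this hunk, the explicit `WRITE_ACTIONS` block list and the typed purge confirmation, compose like this. A standalone sketch mirroring the guard logic above; the function names `gate` and `purgeGuard` are illustrative, not the server's actual API:

```javascript
// Sketch of the server's two guards: the --read-only action gate and the
// typed confirmation required by purge_brain. Mirrors the diff's logic only.
const WRITE_ACTIONS = new Set([
  "index", "remove", "reindex", "confirm_link", "reonboard", "purge_brain",
]);

// Returns an HTTP-ish status: 403 when blocked, 200 otherwise.
function gate(action, readOnly) {
  return readOnly && WRITE_ACTIONS.has(action) ? 403 : 200;
}

// purge_brain additionally refuses to run without the literal string "DELETE".
function purgeGuard(params) {
  return params.confirm === "DELETE" ? "purged" : "confirmation missing";
}
```

Note the ordering matters: read-only mode rejects `purge_brain` at the gate before the confirmation check is ever reached, so a shared brain cannot be purged even with the magic word.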
@@ -0,0 +1,78 @@
+ import fs from "node:fs/promises";
+ import path from "node:path";
+
+ /**
+  * Walk a brain path and surface every authored `.md` file under the content
+  * subdirectories (chunks/, wiki/, memory/). Deliberately excludes `_meta/`,
+  * `raw/`, `.brain.db`, and any dotfile/dotdir. Paths returned are relative to
+  * the brain path and use forward slashes per INV-PATHS-FORWARD.
+  */
+ const CONTENT_DIRS = ["chunks", "wiki", "memory"];
+
+ export async function walkBrainContent(brainPath) {
+   const out = [];
+   for (const rel of CONTENT_DIRS) {
+     const abs = path.join(brainPath, rel);
+     await walk(abs, rel, out);
+   }
+   out.sort((a, b) => a.rel.localeCompare(b.rel));
+   return out;
+ }
+
+ /**
+  * Remove everything under chunks/, wiki/, memory/ in the brain path. Leaves
+  * the directories themselves (with empty .gitkeep placeholders) so the shape
+  * of the brain survives a purge. Returns a per-dir file count.
+  */
+ export async function purgeBrainContent(brainPath) {
+   const counts = {};
+   for (const rel of CONTENT_DIRS) {
+     const abs = path.join(brainPath, rel);
+     counts[rel] = await removeDirContents(abs);
+     await fs.mkdir(abs, { recursive: true });
+     await fs.writeFile(path.join(abs, ".gitkeep"), "", "utf8").catch(() => {});
+   }
+   return counts;
+ }
+
+ // --- internals ---
+
+ async function walk(absDir, relDir, out) {
+   let entries;
+   try {
+     entries = await fs.readdir(absDir, { withFileTypes: true });
+   } catch {
+     return;
+   }
+   for (const e of entries) {
+     if (e.name.startsWith(".")) continue;
+     const absChild = path.join(absDir, e.name);
+     const relChild = path.posix.join(relDir, e.name);
+     if (e.isDirectory()) {
+       await walk(absChild, relChild, out);
+     } else if (e.isFile() && e.name.endsWith(".md")) {
+       out.push({ abs: absChild, rel: relChild.replace(/\\/g, "/") });
+     }
+   }
+ }
+
+ async function removeDirContents(absDir) {
+   let count = 0;
+   let entries;
+   try {
+     entries = await fs.readdir(absDir, { withFileTypes: true });
+   } catch {
+     return 0;
+   }
+   for (const e of entries) {
+     const abs = path.join(absDir, e.name);
+     if (e.isDirectory()) {
+       count += await removeDirContents(abs);
+       await fs.rm(abs, { recursive: true, force: true });
+     } else if (e.isFile()) {
+       await fs.rm(abs, { force: true });
+       if (!e.name.startsWith(".")) count += 1;
+     }
+   }
+   return count;
+ }
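The walker's include rule, markdown only, no dotfiles, forward-slash relative paths, can be captured in two tiny functions. A sketch of the behavior described in the doc comment above, not the module's exported API:

```javascript
// Sketch of walkBrainContent's file filter and path normalization rules.
// Dotfiles and dot-directories are skipped; only .md files are indexed.
function includesFile(name) {
  return !name.startsWith(".") && name.endsWith(".md");
}

// Relative paths come back with forward slashes (per INV-PATHS-FORWARD),
// so the same ids work even when the walk happened on Windows.
function normalizeRel(rel) {
  return rel.replace(/\\/g, "/");
}
```

This is why `.brain.db` and `.gitkeep` never show up in the index: both start with a dot and neither ends in `.md`.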
@@ -0,0 +1,128 @@
+ import fs from "node:fs/promises";
+ import path from "node:path";
+ import { parseFrontmatter } from "./frontmatter.mjs";
+
+ /**
+  * Canonical registry: maps canonical IDs (e.g. "INV-PATHS-FORWARD") to the
+  * single page that owns them. Detects violations of the "one page per ID"
+  * rule and broken references.
+  *
+  * The registry is built once from a list of pages (each with frontmatter
+  * data) and is cheap to query. It does not touch the DB — it is a pure
+  * map-building function. Persistence and search integration live in
+  * sqlite-search.
+  */
+
+ /**
+  * Build a registry from an array of { path, data } entries where `data`
+  * comes from parseFrontmatter.
+  *
+  * Returns:
+  *   {
+  *     byId: Map<id, path>    canonical pages (first claimant wins)
+  *     duplicates: Array<{ id, paths: string[] }>
+  *     pages: Array<{ path, canonical_for: string[], references: string[] }>
+  *   }
+  */
+ export function buildRegistry(entries) {
+   const byId = new Map();
+   const duplicateHits = new Map(); // id -> Set of paths
+   const pages = [];
+
+   for (const { path: p, data } of entries) {
+     const claimed = normalizeList(data?.canonical_for);
+     const refs = normalizeList(data?.references);
+     pages.push({ path: p, canonical_for: claimed, references: refs });
+
+     for (const id of claimed) {
+       if (byId.has(id)) {
+         if (!duplicateHits.has(id)) {
+           duplicateHits.set(id, new Set([byId.get(id)]));
+         }
+         duplicateHits.get(id).add(p);
+       } else {
+         byId.set(id, p);
+       }
+     }
+   }
+
+   const duplicates = [];
+   for (const [id, pathSet] of duplicateHits) {
+     duplicates.push({ id, paths: [...pathSet].sort() });
+   }
+   duplicates.sort((a, b) => a.id.localeCompare(b.id));
+
+   return { byId, duplicates, pages };
+ }
+
+ /**
+  * Find references that don't resolve.
+  *
+  * A reference is resolvable if:
+  *   - it matches a canonical ID in the registry
+  *   - OR it matches a known path (present in `knownPaths`)
+  *   - OR it is an anchor-style link to a canonical ID (e.g. "wiki/invariants.md#INV-A")
+  *
+  * Returns Array<{ page, ref, reason }>.
+  */
+ export function findBrokenReferences(registry, knownPaths = new Set()) {
+   const broken = [];
+   const canonicalIds = new Set(registry.byId.keys());
+   for (const page of registry.pages) {
+     for (const ref of page.references) {
+       if (isResolvable(ref, canonicalIds, knownPaths)) continue;
+       broken.push({ page: page.path, ref, reason: "unresolved reference" });
+     }
+   }
+   return broken;
+ }
+
+ /**
+  * Walk a wiki root and load every .md file's frontmatter.
+  * Returns entries ready for buildRegistry.
+  */
+ export async function loadWikiEntries(wikiRoot) {
+   const entries = [];
+   await walkMarkdown(wikiRoot, wikiRoot, entries);
+   return entries;
+ }
+
+ async function walkMarkdown(absRoot, absDir, out) {
+   let items;
+   try {
+     items = await fs.readdir(absDir, { withFileTypes: true });
+   } catch {
+     return;
+   }
+   for (const item of items) {
+     const abs = path.join(absDir, item.name);
+     if (item.isDirectory()) {
+       await walkMarkdown(absRoot, abs, out);
+     } else if (item.isFile() && item.name.endsWith(".md")) {
+       const content = await fs.readFile(abs, "utf8");
+       const { data } = parseFrontmatter(content);
+       const rel = path.relative(absRoot, abs).replace(/\\/g, "/");
+       out.push({ path: rel, data });
+     }
+   }
+ }
+
+ function normalizeList(value) {
+   if (value == null) return [];
+   if (Array.isArray(value)) return value.map(String);
+   if (typeof value === "string") return [value];
+   return [];
+ }
+
+ function isResolvable(ref, canonicalIds, knownPaths) {
+   if (canonicalIds.has(ref)) return true;
+   const hashIdx = ref.indexOf("#");
+   if (hashIdx >= 0) {
+     const anchorId = ref.slice(hashIdx + 1);
+     if (canonicalIds.has(anchorId)) return true;
+     const pathPart = ref.slice(0, hashIdx);
+     if (pathPart.length > 0 && knownPaths.has(pathPart)) return true;
+   }
+   if (knownPaths.has(ref)) return true;
+   return false;
+ }
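The resolution rules above can be exercised against a toy registry. This is a standalone copy of the `isResolvable` logic with hypothetical sample data, kept separate so it runs without the module's imports:

```javascript
// Standalone copy of isResolvable's three rules: canonical ID match,
// known-path match, or an anchor link whose fragment or path part resolves.
function isResolvable(ref, canonicalIds, knownPaths) {
  if (canonicalIds.has(ref)) return true;
  const hashIdx = ref.indexOf("#");
  if (hashIdx >= 0) {
    const anchorId = ref.slice(hashIdx + 1);
    if (canonicalIds.has(anchorId)) return true;
    const pathPart = ref.slice(0, hashIdx);
    if (pathPart.length > 0 && knownPaths.has(pathPart)) return true;
  }
  if (knownPaths.has(ref)) return true;
  return false;
}

// Toy registry: one canonical ID, one known page path.
const ids = new Set(["INV-PATHS-FORWARD"]);
const paths = new Set(["wiki/invariants.md"]);
```

One consequence worth noticing: an anchor link resolves if *either* side is known, so `other.md#INV-PATHS-FORWARD` passes on its fragment even though `other.md` is not a known path.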