wicked-brain 0.1.0

package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Mike Parcewski

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,208 @@
# wicked-brain

**Your AI agent's memory. No vector DB. No embeddings. No infrastructure.**

wicked-brain gives your AI coding CLI a persistent, searchable knowledge base built on markdown and SQLite. Drop in files, let the agent organize them, and query your accumulated knowledge across sessions — all without leaving your terminal.

Works with **Claude Code**, **Gemini CLI**, **Copilot CLI**, **Cursor**, and **Codex**.

---

## The Problem

Every time you start a new AI session, your agent wakes up blank. The context you spent tokens building yesterday? Gone. The architecture decisions, the research, the tribal knowledge — all lost to the session boundary.

The industry's answer has been RAG pipelines: chunk your docs, generate embeddings, spin up a vector database, build a retrieval layer, tune your similarity thresholds, and pray the cosine distance actually surfaces what you need.

**That's a lot of infrastructure for "remember what I told you."**

## A Different Approach

wicked-brain follows the [Karpathy pattern](https://x.com/karpathy): treat the LLM as a research librarian that actively maintains a structured markdown knowledge base. No embeddings. No vector math. Just files, full-text search, and an agent that reasons about connections.

```
Your documents ──> Structured chunks ──> Synthesized wiki ──> Answers
                   (evidence layer)      (knowledge layer)
```

- **Chunks** are source-faithful extractions with rich metadata (entities, themes, tags)
- **Wiki articles** are LLM-synthesized concepts with `[[backlinks]]` to source chunks
- **Every claim traces back** to a specific file you can read, edit, or delete

The brain is plain markdown on your filesystem. Open it in Obsidian, VS Code, or `cat`. No black box.
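To make the two layers concrete, here is a hypothetical extracted chunk. The frontmatter keys echo the metadata listed above (entities, themes, tags); the `source` key and the body text are purely illustrative, and the real schema is defined by the skills:

```markdown
---
source: raw/sla-report.pdf
entities: [SLA, uptime]
themes: [reliability]
tags: [ops]
---

# SLA enforcement thresholds

The report commits to a 99.9% monthly uptime target, measured per service.
```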
## How It Works

wicked-brain is a set of **skills** — markdown instruction files that teach your AI agent how to manage a knowledge base using its native tools (read, write, search, grep).

A lightweight background server handles the one thing that needs a database: full-text search via SQLite FTS5.

```
┌─────────────────────────────────────────┐
│  Your AI CLI (Claude / Gemini / ...)    │
│                                         │
│  Skills:                                │
│    wicked-brain:ingest   → add sources  │
│    wicked-brain:search   → find content │
│    wicked-brain:query    → ask questions│
│    wicked-brain:compile  → build wiki   │
│    wicked-brain:lint     → check quality│
│                                         │
│  Your agent uses its own tools:         │
│    Read, Write, Grep — no special APIs  │
└────────────┬────────────────────────────┘
             │ curl localhost (search only)

┌─────────────────────────────────────────┐
│  SQLite FTS5 server (auto-starts)       │
│  ~300 lines, one dependency             │
└─────────────────────────────────────────┘
```

**The server is invisible.** It auto-starts when needed and auto-reindexes when files change. You never think about it.
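Under the hood, every search call is a single HTTP POST. A minimal sketch of the request envelope, matching the server's `{action, params}` dispatch; the per-action `params` shapes (such as a `query` field for `search`) are assumptions here, not a documented schema:

```javascript
// Build a fetch() options object for the wicked-brain-server API.
// The {action, params} envelope mirrors the server's dispatch table;
// the per-action params shapes are assumptions.
function apiRequest(action, params = {}) {
  return {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ action, params }),
  };
}

// Against a running server (hypothetical "query" field):
//   const res = await fetch("http://localhost:4242/api",
//     apiRequest("search", { query: "knowledge graph construction" }));
//   const hits = await res.json();
```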
## Install

```bash
npx wicked-brain
```

That's it. The installer detects your AI CLIs and drops in the skills. The first time you use any skill, it walks you through setup.

Or install via [agent-skills-cli](https://github.com/Karanjot786/agent-skills-cli):

```bash
skills install wicked-brain
```

## Usage

Once installed, just talk to your agent:

**Ingest a document:**
> "Ingest this research paper" (works with PDF, DOCX, PPTX, XLSX, images — the LLM reads them natively)

**Search your brain:**
> "Search my brain for knowledge graph construction methods"

**Ask a question:**
> "What does my brain say about our SLA enforcement approach?"

**Build wiki articles:**
> "Compile wiki articles from the chunks we've ingested"

**Check health:**
> "What's in my brain?"

Every operation uses **progressive loading** — the agent never pulls more than it needs. Search returns one-line summaries first. You drill down only when something looks relevant.
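The depth levels behind progressive loading are spelled out in the `wicked-brain:read` skill (0 = stats, 1 = summary, 2 = full content). A toy sketch of the idea, assuming a chunk layout where the first non-heading paragraph serves as the summary; the real skill defines its own layout:

```javascript
// Toy progressive reader: return more of a markdown chunk as depth grows.
// Depth semantics follow wicked-brain:read (0 stats, 1 summary, 2 full);
// treating the first non-heading paragraph as the summary is an assumption.
function readAtDepth(markdown, depth) {
  if (depth === 0) {
    return { lines: markdown.split("\n").length, bytes: markdown.length };
  }
  if (depth === 1) {
    const firstPara = markdown
      .split(/\n\s*\n/)
      .find((p) => p.trim() && !p.trim().startsWith("#"));
    return { summary: (firstPara ?? "").trim() };
  }
  return { content: markdown };
}
```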
## What Makes It Different

| | Vector DB / RAG | wicked-brain |
|---|---|---|
| **Data format** | Opaque embeddings | Human-readable markdown |
| **Search** | Cosine similarity (nearest neighbor) | Full-text search + LLM reasoning |
| **Connections** | None (just similarity scores) | Explicit `[[backlinks]]` between concepts |
| **Auditability** | Low (why did it retrieve this?) | High (every claim links to a source file) |
| **Infrastructure** | Vector DB + embedding pipeline + retrieval service | One SQLite file + markdown |
| **Maintenance** | Re-embed on changes, tune thresholds | Agent self-heals via lint and enhance |
| **Cost to start** | Embedding API calls for entire corpus | Zero (deterministic chunking is free) |
| **Ideal scale** | Millions of documents | 100-10,000 high-signal documents |

## The Skills

| Skill | What it does |
|---|---|
| `wicked-brain:init` | Set up a new brain (auto-triggers on first use) |
| `wicked-brain:ingest` | Add source files — text extracted deterministically, binary docs read via LLM vision |
| `wicked-brain:search` | Parallel search across your brain and linked brains |
| `wicked-brain:read` | Progressive loading: depth 0 (stats), depth 1 (summary), depth 2 (full content) |
| `wicked-brain:query` | Answer questions with source citations |
| `wicked-brain:compile` | Synthesize wiki articles from chunks |
| `wicked-brain:lint` | Find broken links, orphan chunks, inconsistencies |
| `wicked-brain:enhance` | Identify and fill knowledge gaps |
| `wicked-brain:status` | Brain health, stats, orientation |
| `wicked-brain:server` | Manage the background search server (auto-triggered) |

## Multi-Brain Federation

Brains can link to other brains. A personal research brain can reference a team standards brain. A client brain can inherit from a company knowledge base.

```json
{
  "id": "client-x",
  "parents": ["../company-standards"],
  "links": ["../shared-research"]
}
```

When you search, wicked-brain dispatches parallel search agents across all accessible brains and merges the results. Access control is filesystem permissions — if you can read the directory, you can search it.
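Resolving which brains are reachable is just path arithmetic over `brain.json`. A sketch, assuming the `parents` and `links` fields shown above always hold relative paths:

```javascript
import { readFileSync } from "node:fs";
import { dirname, resolve } from "node:path";

// Collect every brain directory reachable from one brain.json:
// parents plus links, resolved relative to the config file's directory.
function linkedBrains(configPath) {
  const cfg = JSON.parse(readFileSync(configPath, "utf-8"));
  const base = dirname(resolve(configPath));
  return [...(cfg.parents ?? []), ...(cfg.links ?? [])].map((p) => resolve(base, p));
}
```

Because resolution happens on your filesystem, a directory you cannot read simply fails at search time; no extra ACL layer is involved.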
## What's on Disk

```
~/.wicked-brain/
  brain.json       # Identity and brain links
  raw/             # Your source files
  chunks/
    extracted/     # Source-faithful extractions with metadata
    inferred/      # LLM-generated content (clearly separated)
  wiki/            # Synthesized articles with [[backlinks]]
  _meta/
    log.jsonl      # Append-only event log
  .brain.db        # SQLite search index (auto-managed)
```

Everything is markdown. Everything is git-committable. Everything is human-readable. The SQLite file is a rebuildable cache — delete it and the server recreates it from your markdown files.

## The Agent is the Parser

No `pdf-parse`. No `mammoth`. No `pptx-parser`.

Modern LLMs read PDF, DOCX, PPTX, and XLSX natively. When you ingest a binary document, the agent reads it with its vision capabilities and writes structured markdown chunks. Better extraction than any library, with semantic understanding built in.

## Architecture

**~500 lines of server JavaScript** (SQLite FTS5 + file watcher) + **~900 lines of skill markdown** (agent instructions).

That's the entire system. Compare that to a typical RAG stack:

```
Typical RAG:                             wicked-brain:
- Embedding model API                    - SQLite (one file)
- Vector database (Pinecone/Weaviate)    - Markdown files
- Chunking pipeline                      - Agent's native tools
- Retrieval service                      - curl localhost
- Re-ranking model                       - LLM reasoning
- Orchestration layer                    - Skills (markdown)
─────────────────                        ─────────────────
~5,000+ lines, 10+ deps                  ~1,400 lines, 1 dep
```

## Supported CLIs

| CLI | Status |
|---|---|
| Claude Code | Supported |
| Gemini CLI | Supported |
| GitHub Copilot CLI | Supported |
| Cursor | Supported |
| Codex | Supported |

Skills use only universally available operations (read files, write files, run shell commands, grep). No CLI-specific features.

## Philosophy

> "You rarely ever write or edit the wiki manually; it's the domain of the LLM." — Andrej Karpathy

wicked-brain is built on three beliefs:

1. **Files over databases.** Markdown is the most LLM-friendly, human-readable, future-proof format. Your knowledge shouldn't be locked in embeddings you can't read.

2. **Reasoning over retrieval.** An LLM that reads summaries, follows links, and thinks about connections beats a nearest-neighbor lookup every time — at least for the scale most teams actually operate at.

3. **Skills over infrastructure.** The agent already knows how to read, write, and search files. Teach it a workflow and it becomes a knowledge manager. No new services to deploy.

## License

MIT
package/install.mjs ADDED
@@ -0,0 +1,64 @@
#!/usr/bin/env node
// wicked-brain installer — detects CLIs and installs skills

import { existsSync, mkdirSync, cpSync, readdirSync } from "node:fs";
import { join, resolve } from "node:path";
import { homedir } from "node:os";
import { argv } from "node:process";
import { fileURLToPath } from "node:url";

const __dirname = fileURLToPath(new URL(".", import.meta.url));
const skillsSource = join(__dirname, "skills");
const home = homedir();

const CLI_TARGETS = [
  { name: "claude", dir: join(home, ".claude", "skills") },
  { name: "gemini", dir: join(home, ".gemini", "skills") },
  { name: "copilot", dir: join(home, ".github", "skills") },
  { name: "codex", dir: join(home, ".codex", "skills") },
  { name: "cursor", dir: join(home, ".cursor", "skills") },
];

// Detect which CLIs are installed by checking if parent dir exists
const detected = CLI_TARGETS.filter((t) => {
  const parentDir = resolve(t.dir, "..");
  return existsSync(parentDir);
});

console.log("wicked-brain installer\n");

if (detected.length === 0) {
  console.log("No supported AI CLIs detected. Supported: claude, gemini, copilot, codex, cursor");
  console.log("Install skills manually by copying the skills/ directory.");
  process.exit(1);
}

console.log(`Detected CLIs: ${detected.map((d) => d.name).join(", ")}\n`);

// Allow filtering via --cli flag
const cliArg = argv.find((a) => a.startsWith("--cli="));
const cliFilter = cliArg ? cliArg.split("=")[1].split(",") : null;
const targets = cliFilter
  ? detected.filter((d) => cliFilter.includes(d.name))
  : detected;

// Copy skills to each target CLI
const skillDirs = readdirSync(skillsSource).filter((d) => !d.startsWith("."));

for (const target of targets) {
  console.log(`Installing to ${target.name} (${target.dir})...`);
  mkdirSync(target.dir, { recursive: true });

  for (const skill of skillDirs) {
    const src = join(skillsSource, skill);
    const dest = join(target.dir, skill);
    cpSync(src, dest, { recursive: true });
  }

  console.log(`  ${skillDirs.length} skills installed`);
}

// Server binary is bundled — npx wicked-brain-server works automatically
// Skills reference it as: npx wicked-brain-server --brain {path} --port {port}
console.log("\nServer: bundled (use 'npx wicked-brain-server' to start)");
console.log(`\nwicked-brain installed! Open your AI CLI and say "wicked-brain:init" to get started.`);
package/package.json ADDED
@@ -0,0 +1,53 @@
{
  "name": "wicked-brain",
  "version": "0.1.0",
  "type": "module",
  "description": "Digital brain as skills for AI coding CLIs — no vector DB, no embeddings, no infrastructure",
  "keywords": [
    "ai",
    "brain",
    "knowledge-base",
    "skills",
    "claude-code",
    "gemini-cli",
    "copilot",
    "cursor",
    "codex",
    "markdown",
    "sqlite",
    "fts5",
    "rag-alternative"
  ],
  "author": "Mike Parcewski",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/mikeparcewski/wicked-brain.git"
  },
  "homepage": "https://github.com/mikeparcewski/wicked-brain",
  "bugs": "https://github.com/mikeparcewski/wicked-brain/issues",
  "bin": {
    "wicked-brain": "install.mjs",
    "wicked-brain-server": "server/bin/wicked-brain-server.mjs"
  },
  "dependencies": {
    "better-sqlite3": "^12.0.0"
  },
  "files": [
    "install.mjs",
    "skills/",
    "server/bin/",
    "server/lib/",
    "server/package.json",
    "README.md",
    "LICENSE"
  ],
  "scripts": {
    "test": "cd server && node --test",
    "prepublishOnly": "npm test",
    "release": "npm version patch && git push && git push --tags"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}
package/server/bin/wicked-brain-server.mjs ADDED
@@ -0,0 +1,95 @@
#!/usr/bin/env node
import { createServer } from "node:http";
import { readFileSync, writeFileSync, unlinkSync, mkdirSync } from "node:fs";
import { join, resolve } from "node:path";
import { argv, pid, exit } from "node:process";
import { FileWatcher } from "../lib/file-watcher.mjs";
import { SqliteSearch } from "../lib/sqlite-search.mjs";

// Parse args
const args = argv.slice(2);
function getArg(name) {
  const idx = args.indexOf(`--${name}`);
  return idx !== -1 && args[idx + 1] ? args[idx + 1] : null;
}

const brainPath = resolve(getArg("brain") || ".");
const port = parseInt(getArg("port") || "4242", 10);
const configPath = join(brainPath, "brain.json");

// Read brain config
let brainId = "unknown";
try {
  const config = JSON.parse(readFileSync(configPath, "utf-8"));
  brainId = config.id || "unknown";
} catch {
  console.error(`Warning: Could not read brain.json at ${configPath}`);
}

// Initialize SQLite
const dbPath = join(brainPath, ".brain.db");
mkdirSync(join(brainPath, "_meta"), { recursive: true });
const db = new SqliteSearch(dbPath, brainId);

// PID file
const pidPath = join(brainPath, "_meta", "server.pid");
writeFileSync(pidPath, String(pid));

// Graceful shutdown
function shutdown() {
  console.log("Shutting down...");
  try { unlinkSync(pidPath); } catch {}
  watcher.stop();
  db.close();
  exit(0);
}
process.on("SIGTERM", shutdown);
process.on("SIGINT", shutdown);

// Action dispatch
const actions = {
  health: () => db.health(),
  search: (p) => db.search(p),
  federated_search: (p) => db.federatedSearch(p),
  index: (p) => db.index(p),
  remove: (p) => db.remove(p.id),
  reindex: (p) => db.reindex(p.docs),
  backlinks: (p) => ({ links: db.backlinks(p.id) }),
  forward_links: (p) => ({ links: db.forwardLinks(p.id) }),
  stats: () => db.stats(),
};

// HTTP server
const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/api") {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "Not found. Use POST /api" }));
    return;
  }
  let body = "";
  req.on("data", (chunk) => { body += chunk; });
  req.on("end", () => {
    try {
      const { action, params = {} } = JSON.parse(body);
      const handler = actions[action];
      if (!handler) {
        res.writeHead(400, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ error: `Unknown action: ${action}` }));
        return;
      }
      const result = handler(params);
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(result ?? { ok: true }));
    } catch (err) {
      res.writeHead(500, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: err.message }));
    }
  });
});

const watcher = new FileWatcher(brainPath, db, brainId);

server.listen(port, () => {
  console.log(`wicked-brain-server running on port ${port} (brain: ${brainId}, pid: ${pid})`);
  watcher.start();
});
package/server/lib/file-watcher.mjs ADDED
@@ -0,0 +1,159 @@
import { watch, readFileSync, existsSync, readdirSync } from "node:fs";
import { join, relative } from "node:path";
import { createHash } from "node:crypto";

function normalizePath(p) {
  return p.replace(/\\/g, "/");
}

export class FileWatcher {
  #brainPath;
  #db;
  #brainId;
  #hashes = new Map(); // path -> content hash
  #watchers = [];
  #debounceTimers = new Map();
  #pollInterval = null;

  constructor(brainPath, db, brainId) {
    this.#brainPath = brainPath;
    this.#db = db;
    this.#brainId = brainId;
  }

  start() {
    // Build initial hash map
    this.#scanAndHash("chunks");
    this.#scanAndHash("wiki");

    // Watch directories
    for (const dir of ["chunks", "wiki"]) {
      const absDir = join(this.#brainPath, dir);
      if (!existsSync(absDir)) continue;

      try {
        const watcher = watch(absDir, { recursive: true }, (eventType, filename) => {
          if (!filename || !filename.endsWith(".md")) return;
          const relPath = normalizePath(`${dir}/${filename}`);
          this.#debounce(relPath, () => this.#handleChange(relPath));
        });
        this.#watchers.push(watcher);
      } catch {
        // recursive watch not supported on this platform (Linux) — fall back to polling
      }
    }

    // If no watchers were set up (Linux), use polling fallback
    if (this.#watchers.length === 0) {
      this.#startPolling();
    } else {
      console.log(`File watcher active on chunks/ and wiki/`);
    }
  }

  stop() {
    for (const w of this.#watchers) w.close();
    this.#watchers = [];
    for (const t of this.#debounceTimers.values()) clearTimeout(t);
    this.#debounceTimers.clear();
    if (this.#pollInterval) { clearInterval(this.#pollInterval); this.#pollInterval = null; }
  }

  // Scan a directory and hash every .md file (baseline for change detection)
  #scanAndHash(dir) {
    const absDir = join(this.#brainPath, dir);
    if (!existsSync(absDir)) return;
    this.#walkDir(absDir, (absPath) => {
      if (!absPath.endsWith(".md")) return;
      const relPath = normalizePath(relative(this.#brainPath, absPath));
      const content = readFileSync(absPath, "utf-8");
      const hash = this.#hash(content);
      this.#hashes.set(relPath, hash);
    });
  }

  #walkDir(dir, callback) {
    for (const entry of readdirSync(dir, { withFileTypes: true })) {
      const full = join(dir, entry.name);
      if (entry.isDirectory()) this.#walkDir(full, callback);
      else if (entry.isFile()) callback(full);
    }
  }

  #handleChange(relPath) {
    const absPath = join(this.#brainPath, relPath);

    if (!existsSync(absPath)) {
      // File deleted
      if (this.#hashes.has(relPath)) {
        this.#hashes.delete(relPath);
        this.#db.remove(relPath);
        console.log(`[watcher] Removed from index: ${relPath}`);
      }
      return;
    }

    try {
      const content = readFileSync(absPath, "utf-8");
      const newHash = this.#hash(content);
      const oldHash = this.#hashes.get(relPath);

      if (newHash === oldHash) return; // No change

      this.#hashes.set(relPath, newHash);
      this.#db.index({
        id: relPath,
        path: relPath,
        content: content,
        brain_id: this.#brainId,
      });
      console.log(`[watcher] Reindexed: ${relPath}`);
    } catch {
      // File might be mid-write, ignore
    }
  }

  #startPolling() {
    console.log("File watcher using polling mode (recursive watch not available)");
    this.#pollInterval = setInterval(() => {
      for (const dir of ["chunks", "wiki"]) {
        const absDir = join(this.#brainPath, dir);
        if (!existsSync(absDir)) continue;
        this.#walkDir(absDir, (absPath) => {
          if (!absPath.endsWith(".md")) return;
          const relPath = normalizePath(relative(this.#brainPath, absPath));
          try {
            const content = readFileSync(absPath, "utf-8");
            const newHash = this.#hash(content);
            const oldHash = this.#hashes.get(relPath);
            if (newHash !== oldHash) {
              this.#hashes.set(relPath, newHash);
              this.#db.index({ id: relPath, path: relPath, content, brain_id: this.#brainId });
              console.log(`[watcher] Reindexed: ${relPath}`);
            }
          } catch {}
        });
      }
      // Check for deletions
      for (const [relPath] of this.#hashes) {
        if (!existsSync(join(this.#brainPath, relPath))) {
          this.#hashes.delete(relPath);
          this.#db.remove(relPath);
          console.log(`[watcher] Removed from index: ${relPath}`);
        }
      }
    }, 3000);
  }

  #debounce(key, fn) {
    if (this.#debounceTimers.has(key)) clearTimeout(this.#debounceTimers.get(key));
    this.#debounceTimers.set(key, setTimeout(() => {
      this.#debounceTimers.delete(key);
      fn();
    }, 500)); // 500ms debounce
  }

  #hash(content) {
    return createHash("sha256").update(content).digest("hex").slice(0, 16);
  }
}