claude-second-brain 0.3.0 → 0.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,7 +3,7 @@

  **Your notes don't compound. This wiki does.**

- [![npm](https://img.shields.io/npm/v/claude-second-brain)](https://www.npmjs.com/package/claude-second-brain) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/jessepinkman9900/claude-second-brain)
+ [![npm](https://img.shields.io/npm/v/claude-second-brain)](https://www.npmjs.com/package/claude-second-brain) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/jessepinkman9900/claude-second-brain)


  The fastest way to start your personal knowledge base powered by Obsidian, Claude Code, qmd, and GitHub.
@@ -19,7 +19,7 @@ One command gives you a fully wired knowledge system:
  - The scaffolded folder **is your Obsidian vault** — open it directly in [Obsidian](https://obsidian.md), with [obsidian-git](https://github.com/Vinzent03/obsidian-git) pre-configured for seamless sync
  - GitHub is the source of truth — version history, anywhere access, and a backup you control

- You've been reading papers, articles, and books for years. Drop a source in, run `/ingest`, and Claude reads it — extracts what matters, cross-links it to everything you already know, and files it. Ask a question six months later and get cited answers, not a list of files to re-read.
+ You've been reading papers, articles, and books for years. Drop a source in, run `/brain-ingest`, and Claude reads it — extracts what matters, cross-links it to everything you already know, and files it. Ask a question six months later and get cited answers, not a list of files to re-read.

  > **Inspired by [Andrej Karpathy's approach to LLM-powered knowledge management](https://x.com/karpathy/status/2040470801506541998?s=20)** — share an "idea file" with an LLM agent and let it build and maintain your knowledge base.

@@ -58,12 +58,17 @@ Everything is pre-configured. You bring the sources.
  npx claude-second-brain
  ```

- Creates your vault (the folder itself is the Obsidian vault), installs `mise` + `bun`, and runs `git init` with an initial commit.
+ The CLI will ask:
+ - **Folder name** — where to create your vault (default: `my-brain`)
+ - **qmd index path** — where to store the local search index (default: `~/.cache/qmd/index.sqlite`)
+ - **GitHub repo** — optionally create a private repo and push automatically (requires `gh` CLI)
+
+ Then scaffolds the vault, installs `mise` + `bun`, and runs `git init`.

  **Step 2 — Initialize inside Claude Code**

  ```bash
- cd my-wiki && claude
+ cd my-brain && claude
  ```

  Then run:
@@ -74,27 +79,43 @@ Then run:

  Registers the qmd collections and generates local vector embeddings. First run downloads ~2GB of GGUF models — once.

- **Step 3 — Push to GitHub and open in Obsidian**
+ **Step 3 — Open in Obsidian (and push to GitHub if not done)**
+
+ **If you created a GitHub repo during setup**, it's already pushed — skip straight to opening in Obsidian.
+
+ **If you skipped GitHub during setup**, connect it now:

  ```bash
- git remote add origin https://github.com/you/my-wiki.git
+ git remote add origin https://github.com/you/my-brain.git
  git push -u origin main
  ```

- Open `my-wiki/` as a vault in Obsidian — the folder is already a valid Obsidian vault. The Git plugin is pre-configured — enable it and sync is automatic.
+ Open `my-brain/` as a vault in Obsidian — the folder is already a valid Obsidian vault. The Git plugin is pre-configured — enable it and sync is automatic.

  ---

  ## Claude Code skills included

- The wiki ships with three slash commands that cover the full workflow. No manual prompting, no copy-pasting.
+ The wiki ships with a set of slash commands that cover the full workflow. No manual prompting, no copy-pasting.
+
+ ### Daily workflow

- **`/ingest`** — Drop a file into `sources/articles/`, `sources/pdfs/`, or `sources/personal/`. Run `/ingest`. Claude summarizes the source, asks what matters most to you, creates a `wiki/sources/` page, updates or creates related topic pages, flags any contradictions with existing knowledge, and logs everything.
+ **`/brain-ingest`** — Drop a file into `sources/articles/`, `sources/pdfs/`, or `sources/personal/`. Run `/brain-ingest`. Claude summarizes the source, asks what matters most to you, creates a `wiki/sources/` page, updates or creates related topic pages, flags any contradictions with existing knowledge, and logs everything.

- **`/query`** — Ask anything about what you know. Claude runs hybrid semantic search across the wiki, reads the most relevant pages, and writes an answer with inline `[[wiki/page]]` citations. If the answer synthesizes multiple pages in a novel way, it offers to file it as a permanent `wiki/qa/` entry.
+ **`/brain-search`** — Ask anything about what you know. Claude runs hybrid semantic search across the wiki, reads the most relevant pages, and writes an answer with inline `[[wiki/page]]` citations. If the answer synthesizes multiple pages in a novel way, it offers to file it as a permanent `wiki/qa/` entry.

  **`/lint`** — Health-check the wiki. Surfaces orphan pages, broken links, unresolved contradictions, and data gaps. Reports findings and applies fixes where possible.

+ ### Maintenance
+
+ **`/brain-refresh`** — Re-scan the vault for new or changed files and regenerate vector embeddings. Run after a bulk ingest session or manual edits. Pass `force` to re-embed every chunk (e.g. after changing the embedding model).
+
+ **`/brain-rebuild`** — **Destructive.** Redesigns the qmd schema: analyzes the wiki, proposes new collections and contexts, waits for your approval, then patches `scripts/qmd/setup.ts`, drops the old index, and rebuilds embeddings from scratch. Use only when the current structure no longer fits how you search.
+
+ ### Setup
+
+ **`/setup`** — First-time initialization. Registers the qmd collections and generates local vector embeddings. Run once after scaffolding.
+
  ---

  ## Access from anywhere
@@ -112,9 +133,64 @@ It's a plain GitHub repo. View and edit files directly in the browser at any tim

  ## How it works

+ Sources flow in on the left, Claude synthesizes them into the wiki, qmd indexes every page into a local hybrid search index, and GitHub makes the whole vault editable from Obsidian and Claude Code on any device.
+
+ ```mermaid
+ flowchart TB
+ subgraph Sources["sources/ — raw, immutable"]
+ direction LR
+ S1[articles/]
+ S2[pdfs/]
+ S3[personal/]
+ end
+
+ subgraph Skills["Claude Code skills"]
+ direction LR
+ I["/brain-ingest"]
+ Q["/brain-search"]
+ R["/brain-refresh"]
+ end
+
+ subgraph Wiki["wiki/ — cross-linked synthesis"]
+ direction LR
+ W1[overview.md]
+ W2[topic / entity pages]
+ W3[sources/]
+ W4[qa/]
+ end
+
+ QMD[("qmd.sqlite<br/>vector + BM25<br/>hybrid index")]
+
+ subgraph Remote["GitHub — source of truth"]
+ GH[private repo]
+ end
+
+ subgraph Read["Read / edit anywhere"]
+ direction LR
+ OB["Obsidian<br/>graph · backlinks · mobile"]
+ CC["Claude Code<br/>desktop + mobile"]
+ end
+
+ User((you))
+
+ Sources -->|read| I
+ I -->|write pages, cross-link,<br/>flag contradictions| Wiki
+ R -.->|chunk + embed<br/>changed files| QMD
+ Wiki -.->|indexed by| QMD
+ User -->|ask a question| Q
+ Q -->|hybrid search| QMD
+ QMD -->|top-k pages| Q
+ Q -->|cited answer| User
+ Wiki <-->|"obsidian-git<br/>auto commit / pull"| GH
+ GH --> OB
+ GH --> CC
+ ```
+
+ ### The ingest loop, zoomed in
+
  ```
  ┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
- │ Drop in a source │ │ /ingest in Claude │ │ Wiki grows │
+ │ Drop in a source │ │ /brain-ingest │ │ Wiki grows │
  │ │ │ Code │ │ │
  │ · article │──────▶ │ │──────▶ │ · cross-linked │
  │ · PDF │ │ reads + extracts │ │ pages │
@@ -125,7 +201,7 @@ It's a plain GitHub repo. View and edit files directly in the browser at any tim
  └─────────────────────┘
  ```

- Query it anytime with `/query`. Get answers with inline `[[wiki/page]]` citations, not a list of files.
+ Query it anytime with `/brain-search`. Get answers with inline `[[wiki/page]]` citations, not a list of files.

  ---
@@ -148,7 +224,7 @@ All pages cross-link with Obsidian `[[wikilinks]]`. Contradictions are flagged w
  ## Directory layout

  ```
- my-wiki/
+ my-brain/
  ├── CLAUDE.md ← The schema. Claude reads this every session.
  ├── sources/ ← Your raw inputs. Claude never modifies these.
  │ ├── articles/
@@ -165,9 +241,33 @@ my-wiki/

  ---

+ ## Installing and updating skills
+
+ Skills are slash commands Claude Code loads from `.claude/skills/[name]/SKILL.md`. `/brain-ingest`, `/brain-search`, and `/brain-refresh` install globally to `~/.claude/skills/` during setup so they work in any Claude Code session. `/brain-rebuild`, `/lint`, `/setup`, and `/qmd-cli` live inside the vault at `.claude/skills/`.
+
+ ### Update the built-in wiki skills
+
+ The wiki's own skills are scaffolded at creation time. To pull in improvements, use `npx skills` pointing to the template skills directory in this repo:
+
+ ```bash
+ # Install or update all 7 wiki skills from the latest template
+ npx skills add https://github.com/jessepinkman9900/claude-second-brain/tree/main/template/.claude/skills -a claude-code -y
+
+ # Or update a specific skill
+ npx skills add https://github.com/jessepinkman9900/claude-second-brain/tree/main/template/.claude/skills --skill brain-ingest -a claude-code -y
+ ```
+
+ Once installed via `npx skills`, future updates are a single command:
+
+ ```bash
+ npx skills update -a claude-code
+ ```
+
+ ---
+
  ## Roadmap

- - [ ] GitHub Actions for scheduled re-indexing
+ - [ ] GitHub Actions to invoke `/brain-refresh` on a schedule
  - [ ] GitHub agentic workflows — auto-ingest on push, scheduled lint, auto-summary on new sources
  - [ ] More Claude Code skills (source discovery, topic clustering)

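The README above describes qmd's search as hybrid (BM25 + vector). As an illustration of how two ranked result lists can be combined — a sketch only, not qmd's actual implementation — reciprocal rank fusion is one common approach; the page names below are hypothetical:

```javascript
// Combine several ranked lists (e.g. one from BM25, one from vector search)
// with reciprocal rank fusion: each document earns 1 / (k + rank) per list,
// so documents ranked well by BOTH retrievers rise to the top.
function reciprocalRankFusion(rankings, k = 60) {
  const scores = new Map()
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1))
    })
  }
  // Sort by fused score, descending, and return just the document ids
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc)
}

// Hypothetical result lists for the same question
const bm25 = ["wiki/transformers.md", "wiki/attention.md", "wiki/rnn.md"]
const vector = ["wiki/attention.md", "wiki/transformers.md", "wiki/gpu.md"]
const fused = reciprocalRankFusion([bm25, vector])
```

Documents that appear near the top of both lists (`transformers`, `attention`) outrank documents that appear in only one.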
package/bin/create.js CHANGED
@@ -1,14 +1,25 @@
  #!/usr/bin/env node
- import { cp, rename, access } from "fs/promises"
+ import { cp, rename, access, readFile, writeFile, mkdir } from "fs/promises"
+ import { readdirSync, readFileSync } from "fs"
  import { join, dirname } from "path"
  import { fileURLToPath } from "url"
  import { spawnSync } from "child_process"
- import { createInterface } from "readline/promises"
+ import { homedir } from "os"
+ import * as p from "@clack/prompts"
+ import pc from "picocolors"

  const __dirname = dirname(fileURLToPath(import.meta.url))
  const TEMPLATE = join(__dirname, "../template")
+ const { version } = JSON.parse(readFileSync(join(__dirname, "../package.json"), "utf8"))

+ // Non-interactive commands — output piped (won't corrupt spinner)
  function run(cmd, cwd) {
+ const result = spawnSync(cmd[0], cmd.slice(1), { cwd, stdio: "pipe" })
+ return result.status === 0
+ }
+
+ // Interactive commands (e.g. gh auth login) — must inherit stdio; call outside spinner
+ function runInteractive(cmd, cwd) {
  const result = spawnSync(cmd[0], cmd.slice(1), { cwd, stdio: "inherit" })
  return result.status === 0
  }
@@ -18,14 +29,109 @@ function commandExists(cmd) {
  return result.status === 0
  }

+ async function patchVault(targetDir, qmdPath, brainName) {
+ const filesToPatch = [
+ join(targetDir, "scripts/qmd/setup.ts"),
+ join(targetDir, "scripts/qmd/reindex.ts"),
+ join(targetDir, "CLAUDE.md"),
+ ]
+
+ const skillsDir = join(targetDir, ".claude/skills")
+ try {
+ for (const skill of readdirSync(skillsDir)) {
+ filesToPatch.push(join(skillsDir, skill, "SKILL.md"))
+ }
+ } catch (err) {
+ if (err.code !== "ENOENT") throw err
+ }
+
+ await Promise.all(filesToPatch.map(async file => {
+ try {
+ let content = await readFile(file, "utf8")
+ content = content.replaceAll("__QMD_PATH__", qmdPath)
+ await writeFile(file, content, "utf8")
+ } catch (err) {
+ if (err.code !== "ENOENT") throw err
+ }
+ }))
+
+ const readmePath = join(targetDir, "README.md")
+ try {
+ let readme = await readFile(readmePath, "utf8")
+ readme = readme.replaceAll("__BRAIN_NAME__", brainName)
+ await writeFile(readmePath, readme, "utf8")
+ } catch (err) {
+ if (err.code !== "ENOENT") throw err
+ }
+ }
+
+ async function installGlobalSkills(qmdPath) {
+ const globalSkillsDir = join(homedir(), ".claude", "skills")
+
+ await Promise.all(["brain-ingest", "brain-search", "brain-refresh"].map(async skillName => {
+ const srcFile = join(TEMPLATE, ".claude/skills", skillName, "SKILL.md")
+ const destDir = join(globalSkillsDir, skillName)
+
+ let content = await readFile(srcFile, "utf8")
+ content = content.replaceAll("__QMD_PATH__", qmdPath)
+
+ await mkdir(destDir, { recursive: true })
+ await writeFile(join(destDir, "SKILL.md"), content, "utf8")
+ }))
+ }
+
  async function main() {
- let targetName = process.argv[2]
+ const isInteractive = Boolean(process.stdin.isTTY)

+ p.intro(`${pc.bgCyan(pc.black(" claude-second-brain "))} v${version}`)
+
+ let targetName = process.argv[2]
  if (!targetName) {
- const rl = createInterface({ input: process.stdin, output: process.stdout })
- const answer = await rl.question("Where to create your wiki? (my-wiki) › ")
- rl.close()
- targetName = answer.trim() || "my-wiki"
+ const answer = await p.text({
+ message: "Where to create your brain?",
+ placeholder: "my-brain",
+ defaultValue: "my-brain",
+ })
+ if (p.isCancel(answer)) { p.cancel("Setup cancelled."); process.exit(0) }
+ targetName = answer
+ }
+
+ const defaultQmdPath = join(
+ process.env.XDG_CACHE_HOME || join(homedir(), ".cache"),
+ "qmd", "index.sqlite"
+ )
+ let qmdPath
+ if (isInteractive) {
+ const answer = await p.text({
+ message: "Where to store the qmd index?",
+ placeholder: defaultQmdPath,
+ defaultValue: defaultQmdPath,
+ })
+ if (p.isCancel(answer)) { p.cancel("Setup cancelled."); process.exit(0) }
+ qmdPath = answer
+ } else {
+ qmdPath = defaultQmdPath
+ }
+
+ let createGhRepo = false
+ let ghRepoName = null
+ if (isInteractive) {
+ const confirm = await p.confirm({
+ message: "Create a private GitHub repo?",
+ initialValue: false,
+ })
+ if (p.isCancel(confirm)) { p.cancel("Setup cancelled."); process.exit(0) }
+ createGhRepo = confirm
+
+ if (createGhRepo) {
+ const answer = await p.text({
+ message: "GitHub repo name?",
+ placeholder: targetName,
+ defaultValue: targetName,
+ })
+ if (p.isCancel(answer)) { p.cancel("Setup cancelled."); process.exit(0) }
+ ghRepoName = answer
+ }
  }

  const targetDir = join(process.cwd(), targetName)
@@ -33,17 +139,17 @@ async function main() {
  // Fail fast if target already exists
  try {
  await access(targetDir)
- console.error(`\nError: "${targetName}" already exists. Choose a different name or delete it first.`)
+ p.cancel(`"${targetName}" already exists. Choose a different name or delete it first.`)
  process.exit(1)
  } catch {
  // Directory doesn't exist — good to go
  }

- // 1. Scaffold
- console.log(`\nCreating ${targetName}...`)
- await cp(TEMPLATE, targetDir, { recursive: true })
+ const spin = p.spinner()

- // npm strips .gitignore from published packages — rename the template copy back
+ // Scaffold
+ spin.start("Scaffolding project")
+ await cp(TEMPLATE, targetDir, { recursive: true })
  try {
  await rename(
  join(targetDir, ".gitignore.template"),
@@ -52,49 +158,87 @@ async function main() {
  } catch {
  // .gitignore.template not present (e.g. running locally where npm didn't strip it)
  }
+ spin.stop(`${targetName}/ created`)

- console.log(`✓ Created ${targetName}/`)
+ // Patch vault files with chosen qmd path
+ spin.start("Configuring qmd index path")
+ await patchVault(targetDir, qmdPath, targetName)
+ spin.stop(`qmd index → ${pc.dim(qmdPath)}`)

- // 2. Install mise if not present
+ // Install mise if not present
  if (!commandExists("mise")) {
- console.log("\nInstalling mise...")
+ spin.start("Installing mise")
  const ok = run(["npm", "install", "-g", "@jdxcode/mise"])
- if (ok) {
- console.log(" Installed mise")
- } else {
- console.error(" Failed to install mise — install manually: npm install -g @jdxcode/mise")
- }
+ if (ok) spin.stop("mise installed")
+ else spin.stop("Failed to install mise — run: npm install -g @jdxcode/mise", 1)
  }

- // 3. Run mise install inside the new vault to install bun
- console.log("\nInstalling bun via mise...")
+ // Run mise install inside the new vault to install bun
+ spin.start("Installing bun via mise")
+ run(["mise", "trust"], targetDir)
  const miseOk = run(["mise", "install"], targetDir)
- if (miseOk) {
- console.log(" Installed bun")
- } else {
- console.error(" mise install failed — run it manually inside your vault")
- }
+ if (miseOk) spin.stop("bun installed")
+ else spin.stop("mise install failed — run it manually inside your vault", 1)

- // 4. Git init
- console.log("\nInitializing git repo...")
+ // Git init
+ spin.start("Initializing git repo")
  const gitOk = run(["git", "init"], targetDir)
  if (gitOk) {
  run(["git", "add", "."], targetDir)
  run(["git", "commit", "-m", "initial commit"], targetDir)
- console.log(" Git repo initialized")
+ spin.stop("git repo initialized")
  } else {
- console.error(" git init failed — run it manually inside your vault")
+ spin.stop("git init failed — run it manually inside your vault", 1)
+ }
+
+ // GitHub repo (optional)
+ if (createGhRepo) {
+ if (!commandExists("gh")) {
+ p.log.warn(`gh CLI not found — install from https://cli.github.com, then run:\n gh repo create ${ghRepoName} --private --source=. --remote=origin --push`)
+ } else {
+ const authCheck = spawnSync("gh", ["auth", "status"], { stdio: "pipe" })
+ let loggedIn = authCheck.status === 0
+
+ if (!loggedIn) {
+ p.log.info("Not logged in to GitHub. Starting login...")
+ loggedIn = runInteractive(["gh", "auth", "login"], targetDir)
+ }
+
+ if (loggedIn) {
+ spin.start(`Creating GitHub repo ${pc.dim(ghRepoName)}`)
+ const ghOk = run(
+ ["gh", "repo", "create", ghRepoName, "--private", "--source=.", "--remote=origin", "--push"],
+ targetDir
+ )
+ if (ghOk) {
+ spin.stop(`GitHub repo created (private): ${pc.cyan(ghRepoName)}`)
+ } else {
+ spin.stop(`gh repo create failed — run: gh repo create ${ghRepoName} --private --source=. --remote=origin --push`, 1)
+ }
+ }
+ }
  }

- console.log(`\n✓ Done! Your wiki is ready.\n`)
- console.log("Next steps:")
- console.log(` cd ${targetName}`)
- console.log(" claude # open Claude Code, then run /setup")
- console.log(" git remote add origin <url> # connect to GitHub for sync + Obsidian Mobile")
- console.log(" git push -u origin main")
+ // Install global skills
+ spin.start("Installing global Claude skills")
+ await installGlobalSkills(qmdPath)
+ spin.stop("Global skills installed")
+
+ // Next steps
+ const nextSteps = [
+ `${pc.cyan(`cd ${targetName}`)}`,
+ `${pc.cyan("claude")} open Claude Code, then run ${pc.bold("/setup")}`,
+ ...(!createGhRepo ? [
+ `${pc.cyan("git remote add origin <url>")} connect to GitHub for sync`,
+ `${pc.cyan("git push -u origin main")}`,
+ ] : []),
+ ].join("\n")
+
+ p.note(nextSteps, "Next steps")
+ p.outro("Happy knowledge building!")
  }

  main().catch(err => {
- console.error(err.message)
+ p.cancel(err.message)
  process.exit(1)
  })
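The new `patchVault` function in this diff works by literal token substitution: scaffolded files ship with `__QMD_PATH__` (and `__BRAIN_NAME__`) placeholders that are replaced once, at scaffold time, with the user's choices. A minimal standalone sketch of the same `replaceAll` pattern — the `substitute` helper is hypothetical, not part of the package:

```javascript
// Replace every occurrence of each placeholder token with its chosen value.
// Mirrors the replaceAll("__QMD_PATH__", qmdPath) calls in patchVault.
function substitute(content, values) {
  let out = content
  for (const [token, value] of Object.entries(values)) {
    out = out.replaceAll(token, value)
  }
  return out
}

// A line as it appears in a scaffolded SKILL.md, before patching
const template = "Run: INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd status"
const patched = substitute(template, {
  __QMD_PATH__: "/home/me/.cache/qmd/index.sqlite",
})
```

Because the substitution happens once at scaffold time, the patched files (including the globally installed skills) carry an absolute index path and work from any working directory.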
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-second-brain",
- "version": "0.3.0",
+ "version": "0.4.1",
  "description": "Scaffold an LLM-maintained Obsidian knowledge wiki",
  "type": "module",
  "bin": {
@@ -14,6 +14,10 @@
  "engines": {
  "node": ">=18.0.0"
  },
+ "dependencies": {
+ "@clack/prompts": "^1.2.0",
+ "picocolors": "^1.1.1"
+ },
  "license": "MIT",
  "repository": {
  "type": "git",
@@ -1,10 +1,10 @@
  ---
- name: ingest
- description: "Ingest a new source into the wiki. Reads the source, summarizes it, creates a wiki/sources/ page, updates affected topic and entity pages, flags contradictions, and logs the activity. Trigger phrases: /ingest, ingest [file or URL], add this source, read and file this, process this article/paper/note."
+ name: brain-ingest
+ description: "Ingest a new source into the wiki. Reads the source, summarizes it, creates a wiki/sources/ page, updates affected topic and entity pages, flags contradictions, and logs the activity. Trigger phrases: /brain-ingest, ingest [file or URL], add this source, read and file this, process this article/paper/note."
  argument-hint: "File path (e.g. sources/articles/my-article.md), URL, or leave blank if source is pasted in chat"
  ---

- # Ingest
+ # Brain Ingest

  Runs the full 9-step ingest workflow defined in CLAUDE.md. Do not skip steps.

@@ -35,7 +35,7 @@ Runs the full 9-step ingest workflow defined in CLAUDE.md. Do not skip steps.
  - Add the new source to the Sources Ingested section: one-line description + `[[wiki/sources/slug]]` link

  **Step 5 — Identify affected wiki pages**
- Run: `INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query -c wiki "<source topic and key claims>"`
+ Run: `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "<source topic and key claims>"`
  - Also Glob `wiki/*.md` and `wiki/sources/*.md` to catch anything qmd missed
  - List all pages to create or update before proceeding
@@ -0,0 +1,119 @@
+ ---
+ name: brain-rebuild
+ description: "Redesign the qmd schema for this vault — analyze the current wiki, recommend a new set of collections and contexts, update scripts/qmd/setup.ts, then tear down and rebuild the index from scratch. Destructive. Trigger phrases: /brain-rebuild, rebuild the brain, restructure qmd, redesign collections, redo contexts, recompute schema."
+ argument-hint: "Optional: a hint about the desired structure (e.g. 'split wiki by domain', 'add a separate collection for qa pages')"
+ ---
+
+ # Brain Rebuild
+
+ Redesigns the qmd schema based on what the wiki actually contains today, then rebuilds the index from scratch. **Destructive** — wipes existing collections/contexts and their embeddings. Always get explicit user approval before applying changes.
+
+ ## When to Use
+
+ - The wiki has grown and the current single `wiki` collection no longer matches how the user searches
+ - The user wants finer-grained contexts (e.g. distinct context descriptions per sub-folder)
+ - After significant reorganization of `wiki/` or `sources/` folder structure
+ - Never as a routine refresh — for that, use `/brain-refresh`
+
+ ## Procedure
+
+ All commands run from the vault root.
+
+ ### Step 1 — Survey the current state
+
+ - Read `scripts/qmd/setup.ts` to see the current collections and contexts
+ - Read `CLAUDE.md` to understand the documented schema and any references to collection names
+ - List current state from qmd:
+ ```bash
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection list
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context list
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd status
+ ```
+
+ ### Step 2 — Analyze the wiki
+
+ - Glob `wiki/**/*.md` and `sources/**/*.md`
+ - Read enough pages (especially `wiki/index.md` and `wiki/overview.md`) to understand actual topic clusters, page-type distribution, and folder structure
+ - Identify natural groupings: by domain (e.g. ml, distributed-systems, finance), by page type (topics vs entities vs qa), by source provenance, etc.
+
+ ### Step 3 — Propose a new schema
+
+ Produce a recommendation that includes:
+
+ - **Collections** — name, root path (relative to vault), file glob pattern. Justify each split.
+ - **Contexts** — global context, plus per-collection and per-subpath context descriptions. Each context should be a one-sentence description that helps a future search disambiguate between similar pages.
+ - **Migration impact** — what changes for `/brain-ingest`, `/brain-search`, `/lint`, and `CLAUDE.md` (e.g. if collection name `wiki` becomes `wiki-topics`, all `-c wiki` invocations break)
+
+ Present the recommendation as a clear diff against the current schema. Do not modify any files yet.
+
+ ### Step 4 — Get explicit approval
+
+ Ask the user: "Apply this schema? This will drop existing collections and their embeddings, then rebuild from scratch. Re-embedding the full wiki may take significant time and download GGUF models if not cached."
+
+ Do not proceed without an explicit yes.
+
+ ### Step 5 — Update `scripts/qmd/setup.ts`
+
+ Edit `scripts/qmd/setup.ts` to reflect the new schema:
+
+ - Update the `ensureCollection` calls (add, remove, or rename)
+ - Update `setGlobalContext` and the `addContext` calls
+ - Leave the `DB` constant alone — its value was substituted at scaffold time and must not change
+ - Keep the script idempotent
+
+ ### Step 6 — Update related references
+
+ If collection names changed:
+
+ - Update every occurrence in `CLAUDE.md` (e.g. `qmd query -c wiki` → new name)
+ - Update `.claude/skills/brain-ingest/SKILL.md`, `.claude/skills/brain-search/SKILL.md`, `.claude/skills/lint/SKILL.md`, `.claude/skills/qmd-cli/SKILL.md` — anywhere collection names are hard-coded
+ - Use Grep to find any remaining stale references before moving on
+
+ ### Step 7 — Tear down the old schema
+
+ For each existing collection that no longer fits the new schema:
+
+ ```bash
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection remove <old-name>
+ ```
+
+ For each obsolete context:
+
+ ```bash
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context rm <path>
+ ```
+
+ ### Step 8 — Register the new schema
+
+ Run:
+ ```bash
+ bun scripts/qmd/setup.ts
+ ```
+
+ This is idempotent — collections that already exist (e.g. ones you kept) are skipped; new ones are added; contexts are upserted.
+
+ ### Step 9 — Rebuild embeddings from scratch
+
+ Run:
+ ```bash
+ bun scripts/qmd/reindex.ts
+ ```
+
+ This indexes all files under the new collections and generates fresh embeddings.
+
+ ### Step 10 — Verify and report
+
+ ```bash
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection list
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context list
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd status
+ ```
+
+ Confirm the new collections appear, contexts match the plan, and document/embedding counts are non-zero. Report a summary of what changed (collections added/removed/renamed, contexts updated, files re-indexed, references updated in CLAUDE.md and skills).
+
+ ## Hard Rules
+
+ - Never apply schema changes without explicit user approval at Step 4
+ - Never delete a collection without first showing the user what will be dropped
+ - Always update `CLAUDE.md` and skill files in lockstep with collection renames — stale `-c <name>` references will silently break `/brain-search`
+ - Do not touch `wiki/`, `sources/`, or any user content — this skill changes the index schema, not the data
@@ -0,0 +1,63 @@
+ ---
+ name: brain-refresh
+ description: "Refresh the qmd index for this vault — re-scans all collections for new/changed files and regenerates vector embeddings. Use after a bulk ingest session, after manual file edits, or whenever search feels stale. Trigger phrases: /brain-refresh, refresh the brain, refresh embeddings, re-embed wiki, update qmd index, refresh search index."
+ argument-hint: "Optional: 'force' to force re-embedding of every chunk (slow, useful after embedding model changes)"
+ ---
+
+ # Brain Refresh
+
+ Refreshes the qmd index so search reflects the current state of the vault. Wraps `bun scripts/qmd/reindex.ts` (incremental) and the qmd CLI's force-embed flag.
+
+ ## When to Use
+
+ - After a `/brain-ingest` session (or several) — batch the refresh, don't run after every file edit
+ - After manual edits to `wiki/` or `sources/` files
+ - When `/brain-search` results feel stale or miss recently added content
+ - After upgrading `@tobilu/qmd` or changing the embedding model — use `force` mode
+
+ ## Procedure
+
+ All commands run from the vault root.
+
+ ### Default — Incremental Refresh
+
+ Run:
+ ```bash
+ bun scripts/qmd/reindex.ts
+ ```
+
+ This script does two things:
+ 1. **Update** — scans collections for new, changed, or deleted files and updates the index
+ 2. **Embed** — generates vector embeddings for any chunks that need them
+
+ Only chunks that changed get re-embedded, so this is fast on subsequent runs.
+
+ ### Force Mode (`force` argument)
+
+ When the user passes `force`, re-embed **every** chunk — not just the changed ones. Use after embedding-model changes or if embeddings appear corrupted.
+
+ ```bash
+ # Step 1 — update the file index first
+ bun scripts/qmd/reindex.ts
+
+ # Step 2 — force re-embed everything
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd embed -f
+ ```
+
+ Force mode is slow — confirm with the user before proceeding if the wiki is large.
+
+ ## Verify
+
+ After refresh, confirm with:
+
+ ```bash
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd status
+ ```
+
+ Document and embedding counts should be non-zero and reflect recent activity. If embeddings show as `0` or far below document count, re-run the refresh.
+
+ ## Notes
+
+ - First run downloads ~2GB of GGUF models — expected, one-time
+ - Do not run this after every single file edit — batch it after a session
+ - This skill does **not** change the qmd schema (collections, contexts). For that, use `/brain-rebuild`
@@ -1,17 +1,17 @@
1
1
  ---
2
- name: query
3
- description: "Query the wiki to answer a question. Searches semantically, synthesizes an answer with inline citations, and optionally files the answer as a permanent Q&A page. Trigger phrases: /query, what do I know about, search the wiki, find everything about, what does the wiki say about."
2
+ name: brain-search
3
+ description: "Query the wiki to answer a question. Searches semantically, synthesizes an answer with inline citations, and optionally files the answer as a permanent Q&A page. Trigger phrases: /brain-search, what do I know about, search the wiki, find everything about, what does the wiki say about."
4
4
  argument-hint: "The question or topic to query (e.g. 'what do I know about transformer architecture?')"
5
5
  ---
6
6
 
7
- # Query
7
+ # Brain Search
8
8
 
9
9
  Runs the 4-step query workflow defined in CLAUDE.md. Answers come with inline `[[wiki/page]]` citations — not a list of files.
10
10
 
11
11
  ## Workflow
12
12
 
13
13
  **Step 1 — Search the wiki**
14
- - Run hybrid search: `INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query -c wiki "<question>"`
14
+ - Run hybrid search: `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "<question>"`
15
15
  - Read `wiki/index.md` to confirm coverage and catch any pages qmd didn't surface
16
16
  - Read the 2–5 most relevant pages in full before synthesizing
17
17
 
@@ -10,10 +10,10 @@ Local semantic document index with hybrid search (BM25 + vector), collection man
10
10
 
11
11
  ## Vault DB
12
12
 
13
- This vault keeps its index at `qmd.sqlite` in the vault root (gitignored). All CLI commands **must** prefix with `INDEX_PATH=qmd.sqlite` (relative path works when running from the vault root, which is always the case here):
13
+ This vault keeps its index at `__QMD_PATH__`. All CLI commands **must** prefix with `INDEX_PATH=__QMD_PATH__`:
14
14
 
15
15
  ```bash
16
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query -c wiki "..."
16
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "..."
17
17
  ```
18
18
 
19
19
  The scripts `bun scripts/qmd/setup.ts` and `bun scripts/qmd/reindex.ts` write to this same file. The CLI must match.
@@ -24,40 +24,40 @@ The scripts `bun scripts/qmd/setup.ts` and `bun scripts/qmd/reindex.ts` write to
24
24
 
25
25
  ```bash
26
26
  # Create/index a collection
27
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd collection add <path> --name <name> --mask <glob-pattern>
27
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection add <path> --name <name> --mask <glob-pattern>
28
28
  # e.g.:
29
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd collection add wiki --name wiki --mask "**/*.md"
29
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection add wiki --name wiki --mask "**/*.md"
30
30
 
31
31
  # List all collections with details
32
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd collection list
32
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection list
33
33
 
34
34
  # Remove a collection
35
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd collection remove <name>
35
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection remove <name>
36
36
 
37
37
  # Rename a collection
38
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd collection rename <old-name> <new-name>
38
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection rename <old-name> <new-name>
39
39
  ```
40
40
 
41
41
  ## Listing Files
42
42
 
43
43
  ```bash
44
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd ls
45
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd ls [collection[/path]]
44
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd ls
45
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd ls [collection[/path]]
46
46
  ```
47
47
 
48
48
  ## Context
49
49
 
50
50
  ```bash
51
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd context add [path] "description text"
52
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd context list
53
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd context rm <path>
51
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context add [path] "description text"
52
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context list
53
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context rm <path>
54
54
  ```
55
55
 
56
56
  ## Retrieving Documents
57
57
 
58
58
  ```bash
59
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd get <file>[:line] [-l N] [--from N]
60
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd multi-get <pattern> [-l N] [--max-bytes N]
59
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd get <file>[:line] [-l N] [--from N]
60
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd multi-get <pattern> [-l N] [--max-bytes N]
61
61
  ```
62
62
 
63
63
  **Multi-get format flags:** `--json`, `--csv`, `--md`, `--xml`, `--files`
@@ -66,16 +66,16 @@ INDEX_PATH=qmd.sqlite bunx @tobilu/qmd multi-get <pattern> [-l N] [--max-bytes N
66
66
 
67
67
  ```bash
68
68
  # Show index status and collections
69
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd status
69
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd status
70
70
 
71
71
  # Re-index all collections
72
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd update [--pull]
72
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd update [--pull]
73
73
 
74
74
  # Create vector embeddings (900 tokens/chunk, 15% overlap)
75
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd embed [-f]
75
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd embed [-f]
76
76
 
77
77
  # Remove cache and orphaned data, vacuum DB
78
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd cleanup
78
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd cleanup
79
79
  ```
80
80
 
81
81
  > After bulk ingest sessions, run `update` then `embed` to keep the index fresh.
@@ -108,28 +108,28 @@ INDEX_PATH=qmd.sqlite bunx @tobilu/qmd cleanup
108
108
 
109
109
  ```bash
110
110
  # Hybrid search across all collections
111
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query "distributed systems consensus"
111
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query "distributed systems consensus"
112
112
 
113
113
  # Search only the wiki collection, JSON output
114
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query -c wiki "transformer architecture" --json
114
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "transformer architecture" --json
115
115
 
116
116
  # Keyword-only search, top 10, markdown output
117
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd search -c wiki "kafka" -n 10 --md
117
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd search -c wiki "kafka" -n 10 --md
118
118
 
119
119
  # Vector search with full documents
120
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd vsearch "attention mechanism" --full
120
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd vsearch "attention mechanism" --full
121
121
 
122
122
  # Filter by score threshold
123
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query "machine learning" --all --min-score 0.7
123
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query "machine learning" --all --min-score 0.7
124
124
  ```
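
Hybrid search merges two rankings: BM25 keyword matches and vector similarity. qmd's exact fusion formula isn't documented here; reciprocal rank fusion (RRF) is shown below as one common, score-scale-free way such a merge can work:

```typescript
// Sketch: reciprocal rank fusion of a BM25 ranking and a vector ranking.
// Each list contributes 1 / (k + rank) per document; documents ranked
// well by both lists surface to the top. k=60 is the conventional constant.
function rrfFuse(
  bm25Ranked: string[],   // doc ids, best first
  vectorRanked: string[], // doc ids, best first
  k = 60,
): [string, number][] {
  const scores = new Map<string, number>();
  for (const ranked of [bm25Ranked, vectorRanked]) {
    ranked.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}
```

A document found by both searches ("b" below) outranks one found by only a single search, even a top-ranked one.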
125
125
 
126
126
  ## MCP Server
127
127
 
128
128
  ```bash
129
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd mcp
130
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd mcp --http [--port N]
131
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd mcp --http --daemon
132
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd mcp stop
129
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd mcp
130
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd mcp --http [--port N]
131
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd mcp --http --daemon
132
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd mcp stop
133
133
  ```
134
134
 
135
135
  ## Global Options
@@ -151,18 +151,18 @@ bun scripts/qmd/reindex.ts
151
151
 
152
152
  ### After a bulk ingest session
153
153
  ```bash
154
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd update
155
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd embed
154
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd update
155
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd embed
156
156
  ```
157
157
 
158
158
  ### Research workflow
159
159
  ```bash
160
160
  # Discover relevant pages
161
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query -c wiki "<topic>"
161
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "<topic>"
162
162
 
163
163
  # Get a specific file
164
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd get wiki/distributed-systems.md
164
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd get wiki/distributed-systems.md
165
165
 
166
166
  # Get multiple related files
167
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd multi-get "wiki/kafka.md,wiki/distributed-systems.md" --md
167
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd multi-get "wiki/kafka.md,wiki/distributed-systems.md" --md
168
168
  ```
@@ -1,6 +1,6 @@
1
1
  ---
2
2
  name: setup
3
- description: "Initialize this obsidian-agent-memory vault. Registers qmd collections and generates vector embeddings. Use when: setting up for the first time, re-registering collections, or re-indexing after a bulk ingest session. Trigger phrases: /setup, setup vault, initialize collections, reindex, register qmd."
3
+ description: "Initialize this claude-second-brain vault. Registers qmd collections and generates vector embeddings. Use when: setting up for the first time, re-registering collections, or re-indexing after a bulk ingest session. Trigger phrases: /setup, setup vault, initialize collections, reindex, register qmd."
4
4
  argument-hint: "Optional: 'reindex' to skip registration and only re-index files"
5
5
  ---
6
6
 
@@ -24,7 +24,7 @@ Run:
24
24
  bun scripts/qmd/setup.ts
25
25
  ```
26
26
 
27
- Registers the four qmd collections (`wiki`, `raw-sources`, `human`, `daily-notes`) and their path-level context descriptions. Idempotent — safe to re-run.
27
+ Registers the two core qmd collections (`wiki`, `raw-sources`) and their path-level context descriptions. Idempotent — safe to re-run.
28
28
 
29
29
  **Step 2 — Index files and generate embeddings**
30
30
 
@@ -49,17 +49,17 @@ Do **not** re-run after every single file edit — batch it after a session.
49
49
  Run these three commands to confirm everything is working:
50
50
 
51
51
  ```bash
52
- # List registered collections (expect: wiki, raw-sources, human, daily-notes)
53
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd collection list
52
+ # List registered collections (expect: wiki, raw-sources)
53
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd collection list
54
54
 
55
55
  # List registered contexts
56
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd context list
56
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd context list
57
57
 
58
58
  # Show index status and embedding counts
59
- INDEX_PATH=qmd.sqlite bunx @tobilu/qmd status
59
+ INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd status
60
60
  ```
61
61
 
62
- All four collections should appear in `collection list`. `status` should show non-zero document and embedding counts — if embeddings are 0, re-run step 2.
62
+ Both collections should appear in `collection list`. `status` should show non-zero document and embedding counts — if embeddings are 0, re-run step 2.
63
63
 
64
64
  ## Notes
65
65
  - `bun` is managed via `mise` — ensure `mise install` has been run before setup
@@ -21,8 +21,8 @@ coverage
21
21
 
22
22
  # logs
23
23
  logs
24
- _.log
25
- report.[0-9]_.[0-9]_.[0-9]_.[0-9]_.json
24
+ *.log
25
+ report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
26
26
 
27
27
  # dotenv environment variable files
28
28
  .env
@@ -51,6 +51,9 @@ Summary of one ingested source. Slug is kebab-case derived from the title. One p
51
51
  ### `qa` — `wiki/qa/[slug].md`
52
52
  A filed answer to a notable query. Created when an answer synthesizes multiple pages in a way worth preserving.
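
The kebab-case slug derivation used for page filenames can be sketched as below; the exact normalization (e.g. unicode handling) is an assumption:

```typescript
// Sketch: derive a kebab-case slug from a source or page title.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse punctuation/whitespace runs to "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}
```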
53
53
 
54
+ ### `admin` — `wiki/index.md`, `wiki/log.md`
55
+ Special system/administrative pages. Not synthesis content. There are exactly two: `index.md` (master index) and `log.md` (activity log).
56
+
54
57
  ---
55
58
 
56
59
  ## Frontmatter Format
@@ -59,7 +62,7 @@ All wiki pages use this YAML frontmatter:
59
62
 
60
63
  ```yaml
61
64
  ---
62
- type: overview | topic | entity | source-summary | qa
65
+ type: overview | topic | entity | source-summary | qa | admin
63
66
  tags: [tag1, tag2]
64
67
  sources: ["[[wiki/sources/source-slug]]"]
65
68
  related: ["[[wiki/related-page]]"]
@@ -67,7 +70,7 @@ updated: YYYY-MM-DD
67
70
  ---
68
71
  ```
69
72
 
70
- - `type`: Required. One of the five page types above.
73
+ - `type`: Required. One of the six page types above.
71
74
  - `tags`: Optional. Lowercase, hyphenated. Topics for the Obsidian tag pane.
72
75
  - `sources`: Links to the `wiki/sources/` pages that informed this page.
73
76
  - `related`: Links to other wiki pages directly relevant to this one.
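
A lint pass can enforce the required `type` key mechanically. The sketch below is an assumed implementation (a real one would use a YAML parser rather than regexes):

```typescript
// Sketch: extract and validate the required `type` field from a wiki
// page's YAML frontmatter block.
const PAGE_TYPES = [
  "overview", "topic", "entity", "source-summary", "qa", "admin",
] as const;

// Returns the value of `type:` inside the leading frontmatter, or null
// if the page has no frontmatter or no type key.
function frontmatterType(markdown: string): string | null {
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return null;
  const typeLine = fm[1].match(/^type:\s*(\S+)/m);
  return typeLine ? typeLine[1] : null;
}

function isValidType(t: string | null): boolean {
  return t !== null && (PAGE_TYPES as readonly string[]).includes(t);
}
```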
@@ -108,7 +111,7 @@ Run this workflow whenever the user adds a new source. Do not skip steps.
108
111
  - Add the new source to the Sources section with a one-line description and link.
109
112
 
110
113
  **Step 5 — Identify affected wiki pages**
111
- - Run `bunx @tobilu/qmd query -c wiki "<source topic and key claims>"` to surface related existing wiki pages.
114
+ - Run `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "<source topic and key claims>"` to surface related existing wiki pages.
112
115
  - Also Glob `wiki/*.md` and `wiki/sources/*.md` to ensure completeness.
113
116
  - List all pages to create or update.
114
117
 
@@ -139,7 +142,7 @@ Run this workflow whenever the user adds a new source. Do not skip steps.
139
142
  Run this workflow when the user asks a question against the wiki.
140
143
 
141
144
  **Step 1 — Search the wiki**
142
- - Run `bunx @tobilu/qmd query -c wiki "<question>"` to surface semantically relevant pages.
145
+ - Run `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "<question>"` to surface semantically relevant pages.
143
146
  - Read `wiki/index.md` to confirm coverage and catch any pages qmd didn't surface.
144
147
  - Read the 2-5 most relevant pages fully.
145
148
 
@@ -209,15 +212,15 @@ N issues found, N fixed. [Brief summary of notable findings.]
209
212
  ## Search Tool
210
213
 
211
214
  This vault uses **qmd** (`bunx @tobilu/qmd`) for local semantic and full-text search.
212
- Collections and contexts are registered via `bun scripts/qmd/setup.ts` and stored in `qmd.sqlite` at the vault root (gitignored).
215
+ Collections and contexts are registered via `bun scripts/qmd/setup.ts` and stored at `__QMD_PATH__` (gitignored).
213
216
 
214
217
  Key commands:
215
218
 
216
219
  | Command | Use |
217
220
  |---------|-----|
218
- | `INDEX_PATH=qmd.sqlite bunx @tobilu/qmd query -c wiki "<question>"` | Hybrid search — best for topic discovery |
219
- | `INDEX_PATH=qmd.sqlite bunx @tobilu/qmd search -c wiki "<terms>"` | Fast keyword search (BM25, no LLM) |
220
- | `INDEX_PATH=qmd.sqlite bunx @tobilu/qmd vsearch -c wiki "<question>"` | Pure vector/semantic search |
221
+ | `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd query -c wiki "<question>"` | Hybrid search — best for topic discovery |
222
+ | `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd search -c wiki "<terms>"` | Fast keyword search (BM25, no LLM) |
223
+ | `INDEX_PATH=__QMD_PATH__ bunx @tobilu/qmd vsearch -c wiki "<question>"` | Pure vector/semantic search |
221
224
 
222
225
 Add `--json` for structured output. Omit `-c wiki` to search all registered collections (wiki, raw-sources).
223
226
 
@@ -1,4 +1,6 @@
1
- # claude-second-brain
1
+ # __BRAIN_NAME__
2
+
3
+ Built using [claude-second-brain](https://github.com/jessepinkman9900/claude-second-brain)
2
4
 
3
5
  **Your notes don't compound. This wiki does.**
4
6
 
@@ -32,12 +34,81 @@ Registers the qmd collections and generates local vector embeddings. First run d
32
34
 
33
35
  ## Your Claude Code skills
34
36
 
35
- **`/ingest`** Add a file to `sources/articles/`, `sources/pdfs/`, or `sources/personal/`, then run `/ingest`. Claude summarizes the source, asks what aspects matter most, updates related wiki pages, flags contradictions, and logs everything.
37
+ ### Daily workflow
38
+
39
+ **`/brain-ingest`** — Add a file to `sources/articles/`, `sources/pdfs/`, or `sources/personal/`, then run `/brain-ingest`. Claude summarizes the source, asks what aspects matter most, updates related wiki pages, flags contradictions, and logs everything.
36
40
 
37
- **`/query`** — Ask anything: `what do I know about [topic]?` Claude searches the wiki semantically and returns a cited answer. If it synthesizes multiple pages in a useful way, it offers to file it as a permanent `wiki/qa/` entry.
41
+ **`/brain-search`** — Ask anything: `what do I know about [topic]?` Claude searches the wiki semantically and returns a cited answer. If it synthesizes multiple pages in a useful way, it offers to file it as a permanent `wiki/qa/` entry.
38
42
 
39
43
  **`/lint`** — Health-check the wiki. Finds orphan pages, broken links, unresolved contradictions, and data gaps. Reports findings and fixes what it can.
40
44
 
45
+ ### Maintenance
46
+
47
+ **`/brain-refresh`** — Re-scan the vault for new or changed files and regenerate vector embeddings. Run after a bulk ingest session or manual edits. Pass `force` to re-embed every chunk.
48
+
49
+ **`/brain-rebuild`** — **Destructive.** Redesigns the qmd schema: analyzes the wiki, proposes new collections and contexts, waits for your approval, then patches `scripts/qmd/setup.ts`, drops the old index, and rebuilds embeddings from scratch.
50
+
51
+ ### Setup
52
+
53
+ **`/setup`** — First-time initialization. Registers the qmd collections and generates local vector embeddings. Run once after scaffolding.
54
+
55
+ ---
56
+
57
+ ## How it works
58
+
59
+ Sources flow in at the top, Claude synthesizes them into the wiki, qmd indexes every page into a local hybrid search index, and GitHub makes the whole vault editable from Obsidian and Claude Code on any device.
60
+
61
+ ```mermaid
62
+ flowchart TB
63
+ subgraph Sources["sources/ — raw, immutable"]
64
+ direction LR
65
+ S1[articles/]
66
+ S2[pdfs/]
67
+ S3[personal/]
68
+ end
69
+
70
+ subgraph Skills["Claude Code skills"]
71
+ direction LR
72
+ I["/brain-ingest"]
73
+ Q["/brain-search"]
74
+ R["/brain-refresh"]
75
+ end
76
+
77
+ subgraph Wiki["wiki/ — cross-linked synthesis"]
78
+ direction LR
79
+ W1[overview.md]
80
+ W2[topic / entity pages]
81
+ W3[sources/]
82
+ W4[qa/]
83
+ end
84
+
85
+ QMD[("qmd.sqlite<br/>vector + BM25<br/>hybrid index")]
86
+
87
+ subgraph Remote["GitHub — source of truth"]
88
+ GH[private repo]
89
+ end
90
+
91
+ subgraph Read["Read / edit anywhere"]
92
+ direction LR
93
+ OB["Obsidian<br/>graph · backlinks · mobile"]
94
+ CC["Claude Code<br/>desktop + mobile"]
95
+ end
96
+
97
+ User((you))
98
+
99
+ Sources -->|read| I
100
+ I -->|write pages, cross-link,<br/>flag contradictions| Wiki
101
+ R -.->|chunk + embed<br/>changed files| QMD
102
+ Wiki -.->|indexed by| QMD
103
+ User -->|ask a question| Q
104
+ Q -->|hybrid search| QMD
105
+ QMD -->|top-k pages| Q
106
+ Q -->|cited answer| User
107
+ Wiki <-->|"obsidian-git<br/>auto commit / pull"| GH
108
+ GH --> OB
109
+ GH --> CC
110
+ ```
111
+
41
112
  ---
42
113
 
43
114
  ## Obsidian Mobile
@@ -82,15 +153,39 @@ claude-second-brain/
82
153
 
83
154
  ---
84
155
 
156
+ ## Installing and updating skills
157
+
158
+ Skills are slash commands Claude Code loads from `.claude/skills/[name]/SKILL.md` in this vault. The wiki ships with `/brain-ingest`, `/brain-search`, `/brain-refresh`, `/brain-rebuild`, `/lint`, `/setup`, and `/qmd-cli` pre-installed.
159
+
160
+ ### Update built-in wiki skills
161
+
162
+ Pull the latest skills from the upstream template:
163
+
164
+ ```bash
165
+ # Install or update all 7 wiki skills
166
+ npx skills add https://github.com/jessepinkman9900/claude-second-brain/tree/main/template/.claude/skills -a claude-code -y
167
+
168
+ # Or update a specific skill
169
+ npx skills add https://github.com/jessepinkman9900/claude-second-brain/tree/main/template/.claude/skills --skill brain-ingest -a claude-code -y
170
+ ```
171
+
172
+ Once installed via `npx skills`, future updates are a single command:
173
+
174
+ ```bash
175
+ npx skills update -a claude-code
176
+ ```
177
+
178
+ ---
179
+
85
180
  ## Re-indexing
86
181
 
87
182
  After a bulk ingest session, re-index to keep search current:
88
183
 
89
- ```bash
90
- bun scripts/qmd/reindex.ts
184
+ ```
185
+ /brain-refresh
91
186
  ```
92
187
 
93
- Or run `/setup` again inside Claude Code.
188
+ This wraps `bun scripts/qmd/reindex.ts` — you can also run that command directly if you're not inside Claude Code. Pass `force` to `/brain-refresh` to re-embed every chunk (e.g. after changing the embedding model).
94
189
 
95
190
  ---
96
191
 
@@ -7,14 +7,12 @@
7
7
  * bun scripts/qmd/reindex.ts
8
8
  *
9
9
  * Suitable as a cronjob:
10
- * 0 * * * * cd /home/user/obsidian-agent-memory && bun scripts/qmd/reindex.ts
10
+ * 0 * * * * cd /home/user/my-brain && bun scripts/qmd/reindex.ts
11
11
  */
12
12
 
13
13
  import { createStore } from "@tobilu/qmd"
14
- import { join } from "path"
15
14
 
16
- const VAULT = join(import.meta.dir, "../..")
17
- const DB = join(VAULT, "qmd.sqlite")
15
+ const DB = "__QMD_PATH__"
18
16
 
19
17
  const store = await createStore({ dbPath: DB })
20
18
 
@@ -13,7 +13,7 @@ import { createStore } from "@tobilu/qmd"
13
13
  import { join } from "path"
14
14
 
15
15
  const VAULT = join(import.meta.dir, "../..")
16
- const DB = join(VAULT, "qmd.sqlite")
16
+ const DB = "__QMD_PATH__"
17
17
 
18
18
  const store = await createStore({ dbPath: DB })
19
19
 
@@ -33,8 +33,6 @@ async function ensureCollection(name: string, relPath: string, pattern: string)
33
33
  console.log("Collections:")
34
34
  await ensureCollection("wiki", "wiki", "**/*.md")
35
35
  await ensureCollection("raw-sources", "sources", "**/*.md")
36
- await ensureCollection("human", "human", "**/*.md")
37
- await ensureCollection("daily-notes", "daily-notes", "**/*.md")
38
36
 
39
37
  // --- Global context ---
40
38
  console.log("\nContexts:")
@@ -55,13 +53,4 @@ await store.addContext("raw-sources", "/articles", "Web articles saved as markdo
55
53
  await store.addContext("raw-sources", "/pdfs", "PDF files or their extracted text")
56
54
  await store.addContext("raw-sources", "/personal", "Personal notes flagged for wiki ingestion")
57
55
 
58
- // human/
59
- await store.addContext("human", "", "User's personal notes — not Claude-maintained; covers work, social life, investing, hobbies, and more")
60
- await store.addContext("human", "/ideas", "Brainstorming and exploratory ideas")
61
- await store.addContext("human", "/job-hunt", "Job search notes and preparation")
62
- await store.addContext("human", "/misc", "Miscellaneous personal notes")
63
-
64
- // daily-notes/
65
- await store.addContext("daily-notes", "", "User's daily and weekly journal entries, dated by year and week number")
66
-
67
56
  console.log("\nSetup complete. Run: bun scripts/qmd/reindex.ts")
@@ -1,5 +1,5 @@
1
1
  ---
2
- type: overview
2
+ type: admin
3
3
  tags: []
4
4
  updated: 2026-01-01
5
5
  ---
@@ -8,7 +8,7 @@ updated: 2026-01-01
8
8
 
9
9
  ## Sources Ingested
10
10
 
11
- _No sources ingested yet. Run `/ingest` to add your first source._
11
+ _No sources ingested yet. Run `/brain-ingest` to add your first source._
12
12
 
13
13
  ## Topic Pages
14
14
 
@@ -1,5 +1,5 @@
1
1
  ---
2
- type: overview
2
+ type: admin
3
3
  tags: []
4
4
  updated: 2026-01-01
5
5
  ---