claude-launchpad 0.6.1 → 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -15,6 +15,8 @@ npx claude-launchpad
 
  That's it. Run it in any project with Claude Code. You'll see a score out of 100 and a list of exactly what's wrong. Run `--fix` to auto-repair.
 
+ Claude Launchpad has 4 CLI commands (`init`, `doctor`, `eval`, `memory`) plus 1 in-session skill (`/lp-enhance`). Memory is optional.
+
  ## Two Paths, One Tool
 
  ### Starting a new project?
@@ -25,7 +27,7 @@ claude-launchpad init
 
  Detects your stack, generates `CLAUDE.md` with your commands and conventions, creates `TASKS.md` for tracking work across sessions, sets up hooks that auto-format code and block dangerous operations, and adds a `.claudeignore` so Claude skips `node_modules` and build artifacts.
 
- Then run `enhance` to have Claude read your actual codebase and fill in the architecture and guardrails — not boilerplate, real project-specific content. Requires Claude Code CLI.
+ Then use `/lp-enhance` inside Claude Code to have Claude read your actual codebase and fill in the architecture and guardrails — not boilerplate, real project-specific content.
 
  ### Already have a project?
 
@@ -33,18 +35,20 @@ Then run `enhance` to have Claude read your actual codebase and fill in the arch
  claude-launchpad
  ```
 
- Scans your Claude Code config, gives you a score out of 100, and tells you exactly what's wrong. Run `--fix` to auto-apply fixes. Run `--watch` to see the score update live as you edit. Run `eval` to prove your config actually makes Claude behave.
+ Scans your Claude Code config, gives you a score out of 100, and tells you exactly what's wrong. Run `doctor --fix` to auto-apply deterministic fixes, use `/lp-enhance` inside Claude Code to rewrite CLAUDE.md with project-specific guidance, then run `eval` to prove the resulting behavior.
 
- ## All Commands
+ ## Command Model
 
  | Command | What it does | Runs |
  |---|---|---|
  | `claude-launchpad init` | Detect stack, generate config, hooks, permissions | Locally |
- | `claude-launchpad` | Score your config 0-100, list issues | Locally |
+ | `claude-launchpad` | Default entrypoint (routes to doctor when config exists) | Locally |
  | `claude-launchpad doctor --fix` | Auto-fix issues: hooks, rules, sections, .claudeignore | Locally |
  | `claude-launchpad doctor --watch` | Live score that updates when you save config files | Locally |
- | `claude-launchpad enhance` | Claude reads your code and completes CLAUDE.md | Via Claude CLI |
+ | `/lp-enhance` (skill) | Claude reads your code and completes CLAUDE.md | Inside Claude Code |
  | `claude-launchpad eval` | Run Claude against test scenarios, prove config works | Via Claude CLI |
+ | `claude-launchpad memory` | Optional memory setup (or stats) + installs `/lp-migrate-memory` skill | Locally |
+ | `claude-launchpad memory --dashboard` | TUI dashboard for memory visualization | Locally |
 
  ## Quick Start
 
@@ -58,7 +62,7 @@ A typical unconfigured project scores ~42%. After `--fix`, it jumps to ~86%. Run
 
  ## The Doctor
 
- The core of the tool. Runs 7 analyzers against your `.claude/` directory and `CLAUDE.md`:
+ The core of the tool. Runs 7 core analyzers against your `.claude/` directory and `CLAUDE.md` (plus an optional Memory analyzer when agentic memory is detected):
 
  | Analyzer | What it catches |
  |---|---|
@@ -116,25 +120,53 @@ Detects your project and generates Claude Code config that fits. No templates, n
 
  **Works with:** TypeScript, JavaScript, Python, Go, Ruby, Rust, Dart, PHP, Java, Kotlin, Swift, Elixir, C# — and detects frameworks (Next.js, FastAPI, Django, Rails, Laravel, Express, SvelteKit, Angular, NestJS, and 15+ more).
 
- **What you get (6 files):**
- - `CLAUDE.md` — your stack, commands, conventions, guardrails, memory management instructions
- - `TASKS.md` — sprint tracking, session continuity, deferred issues parking
- - `.claude/settings.json` — `$schema` for IDE autocomplete, `permissions.deny` for credential + secret protection, sandbox enabled, bypass mode disabled, hooks for .env protection + destructive command blocking + auto-format + sprint review + PostCompact context re-injection
- - `.claude/.gitignore` — prevents local settings and plans from being committed
- - `.claudeignore` — language-specific ignore patterns
- - `.claude/rules/conventions.md` — language-specific starter rules
+ **What init writes:**
+ - Always writes: `CLAUDE.md`, `TASKS.md`, `.claude/settings.json`
+ - Creates when missing: `.claude/.gitignore`, `.claudeignore`, `.claude/rules/conventions.md`
+ - Offers `/lp-enhance` install (project/global/skip) only when no project/global/legacy install already exists
 
  ## Enhance
 
- Init detects your stack but can't understand your architecture. Enhance opens Claude to read your actual code and fill in the details.
+ Init detects your stack but can't understand your architecture. The `/lp-enhance` skill runs inside your Claude Code session to read your actual code and fill in the details.
+
+ ```
+ /lp-enhance
+ ```
+
+ Installed as a skill during `init` (you pick global or project scope). Claude reads your codebase and updates CLAUDE.md with real content - actual architecture, actual conventions, actual guardrails, and memory management instructions. Not boilerplate. It also suggests project-specific hooks (including PostCompact for session continuity) and MCP servers based on what it finds.
+
+ Stays under the 120-instruction budget. Overflows detailed content to `.claude/rules/` files. If the skill is missing, `doctor --fix` will create it.
+
+ ## Memory
+
+ Optional persistent memory system that replaces Claude Code's built-in flat-file memory with intelligent, decay-based retrieval.
 
  ```bash
- claude-launchpad enhance
+ claude-launchpad memory
  ```
 
- Claude reads your codebase and updates CLAUDE.md with real content — actual architecture, actual conventions, actual guardrails, and memory management instructions. Not boilerplate. It also suggests project-specific hooks (including PostCompact for session continuity) and MCP servers based on what it finds.
+ Interactive setup - asks before changing anything. Installs a SQLite database, hooks for automatic context injection, and 7 MCP tools.
+ It also installs the `/lp-migrate-memory` skill to help migrate legacy built-in memory files.
+
+ **What it does:**
+ - **SessionStart hook** automatically injects relevant memories at the start of each session
+ - **Stop hook** extracts facts from the conversation transcript when you're done
+ - **Decay model** - memories fade naturally based on type (episodic: 60 days, semantic: 1 year, procedural: 2 years)
+ - **Multi-signal scoring** - retrieval ranked by text match, importance, recency, access frequency, and git context
+ - **Project-scoped** - memories are partitioned per project, no cross-contamination
+ - **TUI dashboard** (`--dashboard`) for visualization with vim navigation, filtering, and search
+
+ No cloud. No sync. Everything stays in `~/.agentic-memory/memory.db`.
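
The decay model described above can be sketched with simple exponential decay. This is an illustrative sketch only, not the shipped implementation: the tau values (episodic 60 days, semantic 365, procedural 730) come from the list above, while the `exp(-age/tau)` form and the function name are assumptions.

```javascript
// Illustrative sketch of type-based memory decay (NOT the shipped implementation).
// Tau values mirror the decay model above; the exponential form is an assumption.
const TAU_DAYS = { episodic: 60, semantic: 365, procedural: 730, pattern: 180 };

function decayedImportance(importance, ageDays, type) {
  // Importance fades toward zero with a time constant chosen per memory type.
  return importance * Math.exp(-ageDays / TAU_DAYS[type]);
}

// At 60 days, an episodic memory keeps ~37% of its weight;
// a semantic memory of the same age keeps ~85%.
console.log(decayedImportance(1.0, 60, "episodic").toFixed(2));
console.log(decayedImportance(1.0, 60, "semantic").toFixed(2));
```

Under a model like this, a rarely touched episodic note drops below a pruning threshold within a few months, while procedural knowledge stays retrievable for years, matching the retention windows listed above.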
+
+ **Memory flags:**
+
+ | Flag | What it does |
+ |---|---|
+ | `--dashboard` | Opens the interactive TUI dashboard |
 
- Stays under the 120-instruction budget. Overflows detailed content to `.claude/rules/` files.
+ Default behavior of `claude-launchpad memory`:
+ - If memory is not installed, it runs the interactive install
+ - If memory is installed, it shows memory stats
 
  ## Eval
 
@@ -176,6 +208,8 @@ Results are saved to `.claude/eval/` as structured markdown — you can feed the
  | Flag | What it does |
  |---|---|
  | `--suite <name>` | Run one suite: `security`, `conventions`, or `workflow` |
+ | `-p, --path <dir>` | Project root to evaluate (defaults to current directory) |
+ | `--scenarios <path>` | Use a custom scenarios directory |
  | `--model <model>` | Model to use: `haiku`, `sonnet`, `opus` |
  | `--runs <n>` | Runs per scenario (default 3, median score used) |
  | `--debug` | Keep sandbox directories so you can inspect what Claude wrote |
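
Because `--runs` defaults to 3 and the median score is reported, a single flaky run cannot drag a scenario's result down. A minimal sketch of median selection (illustrative only, not the tool's code):

```javascript
// Illustrative sketch of median-of-runs scoring (not the shipped code).
function medianScore(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  // Even count: average the two middle values; odd count: take the middle one.
  return sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
}

console.log(medianScore([86, 42, 90])); // 86: one bad run doesn't tank the score
```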
@@ -208,9 +242,9 @@ Score below threshold = exit code 1 = PR blocked.
 
  **Doctor** reads your files and runs static analysis. No API calls. No network. No cost.
 
- **Init** scans manifest files (package.json, go.mod, pyproject.toml, etc.), detects your stack, and generates 6 files: CLAUDE.md (with sprint reviews and memory management), TASKS.md (with deferred issues section), settings.json (with credential deny rules, sandbox enabled, bypass mode disabled, hooks including sprint review and PostCompact), .claude/.gitignore, .claudeignore, and language-specific rules. Formatter hooks use hardcoded safe commands only.
+ **Init** scans manifest files (package.json, go.mod, pyproject.toml, etc.), detects your stack, and generates 7 files: CLAUDE.md (with sprint reviews and memory management), TASKS.md (with deferred issues section), settings.json (with credential deny rules, sandbox enabled, bypass mode disabled, hooks including sprint review and PostCompact), .claude/.gitignore, .claudeignore, the /lp-enhance skill, and language-specific rules. Formatter hooks use hardcoded safe commands only.
 
- **Enhance** spawns `claude "prompt"` as an interactive child process. You see Claude's full UI. No data passes through the tool — it just launches Claude with a task.
+ **Enhance** is a `/lp-enhance` skill installed during `init`. It runs inside your active Claude Code session - no separate process, no overhead. Claude already has your codebase context, so it produces better results than an external command.
 
  **Eval** creates a temp directory, copies your full `.claude/` config (settings.json, rules, hooks, permissions) and `.claudeignore` into it, writes seed files from the scenario YAML, initializes a git repo, runs Claude via the Agent SDK (or falls back to CLI), then checks the output with grep/file assertions. Your code is never copied — only your Claude Code configuration. Sandbox is cleaned up after (or preserved with `--debug`).
 
@@ -234,7 +268,7 @@ New to Claude Code? Here's what the terms mean:
 
  ## Privacy
 
- No telemetry. No analytics. No data sent anywhere. Doctor, init, and fix are fully offline. Enhance and eval run through your local Claude CLI — no data passes through this tool. [Full privacy policy](https://mboss37.github.io/claude-launchpad/privacy.html).
+ No telemetry. No analytics. No data sent anywhere. Doctor, init, and fix are fully offline. Memory stores data locally at `~/.agentic-memory/` - no cloud sync. Enhance and eval run through your local Claude CLI - no data passes through this tool. [Full privacy policy](https://mboss37.github.io/claude-launchpad/privacy.html).
 
  ## License
 
@@ -0,0 +1,11 @@
+ #!/usr/bin/env node
+ var __defProp = Object.defineProperty;
+ var __export = (target, all) => {
+   for (var name in all)
+     __defProp(target, name, { get: all[name], enumerable: true });
+ };
+
+ export {
+   __export
+ };
+ //# sourceMappingURL=chunk-2H7UOFLK.js.map
@@ -0,0 +1 @@
+ {"version":3,"sources":[],"sourcesContent":[],"mappings":"","names":[]}
@@ -0,0 +1,101 @@
+ #!/usr/bin/env node
+
+ // src/lib/output.ts
+ import chalk from "chalk";
+ var colors = {
+   success: chalk.green,
+   error: chalk.red,
+   warn: chalk.yellow,
+   info: chalk.cyan,
+   dim: chalk.dim,
+   bold: chalk.bold,
+   score: (score) => {
+     if (score >= 80) return chalk.green.bold(`${score}%`);
+     if (score >= 60) return chalk.yellow.bold(`${score}%`);
+     return chalk.red.bold(`${score}%`);
+   },
+   severity: (sev) => {
+     const map = {
+       critical: chalk.bgRed.white.bold,
+       high: chalk.red.bold,
+       medium: chalk.yellow,
+       low: chalk.cyan,
+       info: chalk.dim
+     };
+     return map[sev](` ${sev.toUpperCase()} `);
+   }
+ };
+ var log = {
+   success: (msg) => console.log(` ${chalk.green("\u2713")} ${msg}`),
+   error: (msg) => console.log(` ${chalk.red("\u2717")} ${msg}`),
+   warn: (msg) => console.log(` ${chalk.yellow("!")} ${msg}`),
+   step: (msg) => console.log(` ${chalk.cyan("\u2192")} ${msg}`),
+   info: (msg) => console.log(` ${chalk.dim("\xB7")} ${msg}`),
+   blank: () => console.log()
+ };
+ function printBanner() {
+   log.blank();
+   console.log(chalk.cyan.bold(" Claude Launchpad"));
+   console.log(chalk.dim(" Scaffold \xB7 Diagnose \xB7 Evaluate \xB7 Remember"));
+   log.blank();
+ }
+ function printScoreCard(label, score, max = 100) {
+   const pct = Math.round(score / max * 100);
+   const bar = renderBar(pct, 20);
+   console.log(` ${chalk.bold(label.padEnd(22))} ${bar} ${colors.score(pct).padStart(12)}`);
+ }
+ function renderBar(pct, width) {
+   const filled = Math.round(pct / 100 * width);
+   const empty = width - filled;
+   const color = pct >= 80 ? chalk.green : pct >= 60 ? chalk.yellow : chalk.red;
+   return color("\u2501".repeat(filled)) + chalk.dim("\u2500".repeat(empty));
+ }
+ function printIssue(severity, _analyzer, message) {
+   const sevLabel = {
+     critical: chalk.bgRed.white.bold(" CRIT "),
+     high: chalk.red.bold("HIGH"),
+     medium: chalk.yellow("MED "),
+     low: chalk.dim("LOW "),
+     info: chalk.dim("INFO")
+   };
+   console.log(` ${sevLabel[severity]} ${message}`);
+ }
+ function renderDoctorReport(results, options) {
+   const overallScore = Math.round(
+     results.reduce((sum, r) => sum + r.score, 0) / results.length
+   );
+   for (const result of results) {
+     printScoreCard(result.name, result.score);
+   }
+   log.blank();
+   printScoreCard("Overall", overallScore);
+   log.blank();
+   const allIssues = results.flatMap((r) => r.issues);
+   const actionable = allIssues.filter((i) => i.severity !== "info");
+   if (actionable.length === 0) {
+     log.success("No issues found. Your configuration looks solid.");
+     return { overallScore, actionableCount: 0 };
+   }
+   const sorted = [...actionable].sort((a, b) => {
+     const order = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };
+     return (order[a.severity] ?? 4) - (order[b.severity] ?? 4);
+   });
+   for (const issue of sorted) {
+     printIssue(issue.severity, issue.analyzer, issue.message);
+   }
+   log.blank();
+   if (options?.afterFix) {
+     log.info(`${actionable.length} remaining issue(s) require manual intervention.`);
+   } else {
+     log.info(`${actionable.length} issue(s). Run ${chalk.bold("--fix")} to auto-repair or ${chalk.bold("--fix --dry-run")} to preview.`);
+   }
+   return { overallScore, actionableCount: actionable.length };
+ }
+
+ export {
+   log,
+   printBanner,
+   printScoreCard,
+   renderDoctorReport
+ };
+ //# sourceMappingURL=chunk-6ZVXZ4EF.js.map
@@ -0,0 +1 @@
+ {"version":3,"sources":["../src/lib/output.ts"],"sourcesContent":["import chalk from \"chalk\";\nimport type { Severity, AnalyzerResult } from \"../types/index.js\";\n\n// ─── Colors ───\n\nexport const colors = {\n success: chalk.green,\n error: chalk.red,\n warn: chalk.yellow,\n info: chalk.cyan,\n dim: chalk.dim,\n bold: chalk.bold,\n score: (score: number): string => {\n if (score >= 80) return chalk.green.bold(`${score}%`);\n if (score >= 60) return chalk.yellow.bold(`${score}%`);\n return chalk.red.bold(`${score}%`);\n },\n severity: (sev: Severity): string => {\n const map: Record<Severity, (s: string) => string> = {\n critical: chalk.bgRed.white.bold,\n high: chalk.red.bold,\n medium: chalk.yellow,\n low: chalk.cyan,\n info: chalk.dim,\n };\n return map[sev](` ${sev.toUpperCase()} `);\n },\n} as const;\n\n// ─── Prefixed Output ───\n\nexport const log = {\n success: (msg: string): void => console.log(` ${chalk.green(\"✓\")} ${msg}`),\n error: (msg: string): void => console.log(` ${chalk.red(\"✗\")} ${msg}`),\n warn: (msg: string): void => console.log(` ${chalk.yellow(\"!\")} ${msg}`),\n step: (msg: string): void => console.log(` ${chalk.cyan(\"→\")} ${msg}`),\n info: (msg: string): void => console.log(` ${chalk.dim(\"·\")} ${msg}`),\n blank: (): void => console.log(),\n} as const;\n\n// ─── Banner ───\n\nexport function printBanner(): void {\n log.blank();\n console.log(chalk.cyan.bold(\" Claude Launchpad\"));\n console.log(chalk.dim(\" Scaffold · Diagnose · Evaluate · Remember\"));\n log.blank();\n}\n\n// ─── Score Display ───\n\nexport function printScoreCard(label: string, score: number, max: number = 100): void {\n const pct = Math.round((score / max) * 100);\n const bar = renderBar(pct, 20);\n console.log(` ${chalk.bold(label.padEnd(22))} ${bar} ${colors.score(pct).padStart(12)}`);\n}\n\nfunction renderBar(pct: number, width: number): string {\n const filled = Math.round((pct / 100) * width);\n const empty = width - filled;\n const color = pct >= 80 
? chalk.green : pct >= 60 ? chalk.yellow : chalk.red;\n return color(\"━\".repeat(filled)) + chalk.dim(\"─\".repeat(empty));\n}\n\n// ─── Issues List (replaces table) ───\n\nexport function printIssue(severity: Severity, _analyzer: string, message: string): void {\n const sevLabel: Record<Severity, string> = {\n critical: chalk.bgRed.white.bold(\" CRIT \"),\n high: chalk.red.bold(\"HIGH\"),\n medium: chalk.yellow(\"MED \"),\n low: chalk.dim(\"LOW \"),\n info: chalk.dim(\"INFO\"),\n };\n console.log(` ${sevLabel[severity]} ${message}`);\n}\n\n// ─── Report Rendering (shared by doctor + watcher) ───\n\nexport function renderDoctorReport(results: ReadonlyArray<AnalyzerResult>, options?: { afterFix?: boolean }): {\n overallScore: number;\n actionableCount: number;\n} {\n const overallScore = Math.round(\n results.reduce((sum, r) => sum + r.score, 0) / results.length,\n );\n\n for (const result of results) {\n printScoreCard(result.name, result.score);\n }\n log.blank();\n printScoreCard(\"Overall\", overallScore);\n log.blank();\n\n const allIssues = results.flatMap((r) => r.issues);\n const actionable = allIssues.filter((i) => i.severity !== \"info\");\n\n if (actionable.length === 0) {\n log.success(\"No issues found. Your configuration looks solid.\");\n return { overallScore, actionableCount: 0 };\n }\n\n const sorted = [...actionable].sort((a, b) => {\n const order: Record<string, number> = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };\n return (order[a.severity] ?? 4) - (order[b.severity] ?? 4);\n });\n\n for (const issue of sorted) {\n printIssue(issue.severity, issue.analyzer, issue.message);\n }\n\n log.blank();\n if (options?.afterFix) {\n log.info(`${actionable.length} remaining issue(s) require manual intervention.`);\n } else {\n log.info(`${actionable.length} issue(s). 
Run ${chalk.bold(\"--fix\")} to auto-repair or ${chalk.bold(\"--fix --dry-run\")} to preview.`);\n }\n return { overallScore, actionableCount: actionable.length };\n}\n"],"mappings":";;;AAAA,OAAO,WAAW;AAKX,IAAM,SAAS;AAAA,EACpB,SAAS,MAAM;AAAA,EACf,OAAO,MAAM;AAAA,EACb,MAAM,MAAM;AAAA,EACZ,MAAM,MAAM;AAAA,EACZ,KAAK,MAAM;AAAA,EACX,MAAM,MAAM;AAAA,EACZ,OAAO,CAAC,UAA0B;AAChC,QAAI,SAAS,GAAI,QAAO,MAAM,MAAM,KAAK,GAAG,KAAK,GAAG;AACpD,QAAI,SAAS,GAAI,QAAO,MAAM,OAAO,KAAK,GAAG,KAAK,GAAG;AACrD,WAAO,MAAM,IAAI,KAAK,GAAG,KAAK,GAAG;AAAA,EACnC;AAAA,EACA,UAAU,CAAC,QAA0B;AACnC,UAAM,MAA+C;AAAA,MACnD,UAAU,MAAM,MAAM,MAAM;AAAA,MAC5B,MAAM,MAAM,IAAI;AAAA,MAChB,QAAQ,MAAM;AAAA,MACd,KAAK,MAAM;AAAA,MACX,MAAM,MAAM;AAAA,IACd;AACA,WAAO,IAAI,GAAG,EAAE,IAAI,IAAI,YAAY,CAAC,GAAG;AAAA,EAC1C;AACF;AAIO,IAAM,MAAM;AAAA,EACjB,SAAS,CAAC,QAAsB,QAAQ,IAAI,KAAK,MAAM,MAAM,QAAG,CAAC,IAAI,GAAG,EAAE;AAAA,EAC1E,OAAO,CAAC,QAAsB,QAAQ,IAAI,KAAK,MAAM,IAAI,QAAG,CAAC,IAAI,GAAG,EAAE;AAAA,EACtE,MAAM,CAAC,QAAsB,QAAQ,IAAI,KAAK,MAAM,OAAO,GAAG,CAAC,IAAI,GAAG,EAAE;AAAA,EACxE,MAAM,CAAC,QAAsB,QAAQ,IAAI,KAAK,MAAM,KAAK,QAAG,CAAC,IAAI,GAAG,EAAE;AAAA,EACtE,MAAM,CAAC,QAAsB,QAAQ,IAAI,KAAK,MAAM,IAAI,MAAG,CAAC,IAAI,GAAG,EAAE;AAAA,EACrE,OAAO,MAAY,QAAQ,IAAI;AACjC;AAIO,SAAS,cAAoB;AAClC,MAAI,MAAM;AACV,UAAQ,IAAI,MAAM,KAAK,KAAK,oBAAoB,CAAC;AACjD,UAAQ,IAAI,MAAM,IAAI,sDAA6C,CAAC;AACpE,MAAI,MAAM;AACZ;AAIO,SAAS,eAAe,OAAe,OAAe,MAAc,KAAW;AACpF,QAAM,MAAM,KAAK,MAAO,QAAQ,MAAO,GAAG;AAC1C,QAAM,MAAM,UAAU,KAAK,EAAE;AAC7B,UAAQ,IAAI,KAAK,MAAM,KAAK,MAAM,OAAO,EAAE,CAAC,CAAC,IAAI,GAAG,IAAI,OAAO,MAAM,GAAG,EAAE,SAAS,EAAE,CAAC,EAAE;AAC1F;AAEA,SAAS,UAAU,KAAa,OAAuB;AACrD,QAAM,SAAS,KAAK,MAAO,MAAM,MAAO,KAAK;AAC7C,QAAM,QAAQ,QAAQ;AACtB,QAAM,QAAQ,OAAO,KAAK,MAAM,QAAQ,OAAO,KAAK,MAAM,SAAS,MAAM;AACzE,SAAO,MAAM,SAAI,OAAO,MAAM,CAAC,IAAI,MAAM,IAAI,SAAI,OAAO,KAAK,CAAC;AAChE;AAIO,SAAS,WAAW,UAAoB,WAAmB,SAAuB;AACvF,QAAM,WAAqC;AAAA,IACzC,UAAU,MAAM,MAAM,MAAM,KAAK,QAAQ;AAAA,IACzC,MAAM,MAAM,IAAI,KAAK,MAAM;AAAA,IAC3B,QAAQ,MAAM,OAAO,MAAM;AAAA,IAC3B,KAAK,MAAM,IAAI,MAAM;AAAA,IACrB,MAAM,MAAM,IAAI,M
AAM;AAAA,EACxB;AACA,UAAQ,IAAI,MAAM,SAAS,QAAQ,CAAC,KAAK,OAAO,EAAE;AACpD;AAIO,SAAS,mBAAmB,SAAwC,SAGzE;AACA,QAAM,eAAe,KAAK;AAAA,IACxB,QAAQ,OAAO,CAAC,KAAK,MAAM,MAAM,EAAE,OAAO,CAAC,IAAI,QAAQ;AAAA,EACzD;AAEA,aAAW,UAAU,SAAS;AAC5B,mBAAe,OAAO,MAAM,OAAO,KAAK;AAAA,EAC1C;AACA,MAAI,MAAM;AACV,iBAAe,WAAW,YAAY;AACtC,MAAI,MAAM;AAEV,QAAM,YAAY,QAAQ,QAAQ,CAAC,MAAM,EAAE,MAAM;AACjD,QAAM,aAAa,UAAU,OAAO,CAAC,MAAM,EAAE,aAAa,MAAM;AAEhE,MAAI,WAAW,WAAW,GAAG;AAC3B,QAAI,QAAQ,kDAAkD;AAC9D,WAAO,EAAE,cAAc,iBAAiB,EAAE;AAAA,EAC5C;AAEA,QAAM,SAAS,CAAC,GAAG,UAAU,EAAE,KAAK,CAAC,GAAG,MAAM;AAC5C,UAAM,QAAgC,EAAE,UAAU,GAAG,MAAM,GAAG,QAAQ,GAAG,KAAK,GAAG,MAAM,EAAE;AACzF,YAAQ,MAAM,EAAE,QAAQ,KAAK,MAAM,MAAM,EAAE,QAAQ,KAAK;AAAA,EAC1D,CAAC;AAED,aAAW,SAAS,QAAQ;AAC1B,eAAW,MAAM,UAAU,MAAM,UAAU,MAAM,OAAO;AAAA,EAC1D;AAEA,MAAI,MAAM;AACV,MAAI,SAAS,UAAU;AACrB,QAAI,KAAK,GAAG,WAAW,MAAM,kDAAkD;AAAA,EACjF,OAAO;AACL,QAAI,KAAK,GAAG,WAAW,MAAM,kBAAkB,MAAM,KAAK,OAAO,CAAC,sBAAsB,MAAM,KAAK,iBAAiB,CAAC,cAAc;AAAA,EACrI;AACA,SAAO,EAAE,cAAc,iBAAiB,WAAW,OAAO;AAC5D;","names":[]}
@@ -0,0 +1,25 @@
+ #!/usr/bin/env node
+
+ // src/lib/settings.ts
+ import { readFile, writeFile, mkdir } from "fs/promises";
+ import { join } from "path";
+ async function readSettingsJson(root) {
+   const path = join(root, ".claude", "settings.json");
+   try {
+     const content = await readFile(path, "utf-8");
+     return JSON.parse(content);
+   } catch {
+     return {};
+   }
+ }
+ async function writeSettingsJson(root, settings) {
+   const dir = join(root, ".claude");
+   await mkdir(dir, { recursive: true });
+   await writeFile(join(dir, "settings.json"), JSON.stringify(settings, null, 2) + "\n");
+ }
+
+ export {
+   readSettingsJson,
+   writeSettingsJson
+ };
+ //# sourceMappingURL=chunk-CSLWJEGD.js.map
@@ -0,0 +1 @@
+ {"version":3,"sources":["../src/lib/settings.ts"],"sourcesContent":["import { readFile, writeFile, mkdir } from \"node:fs/promises\";\nimport { join } from \"node:path\";\n\nexport async function readSettingsJson(root: string): Promise<Record<string, unknown>> {\n const path = join(root, \".claude\", \"settings.json\");\n try {\n const content = await readFile(path, \"utf-8\");\n return JSON.parse(content) as Record<string, unknown>;\n } catch {\n return {};\n }\n}\n\nexport async function writeSettingsJson(root: string, settings: Record<string, unknown>): Promise<void> {\n const dir = join(root, \".claude\");\n await mkdir(dir, { recursive: true });\n await writeFile(join(dir, \"settings.json\"), JSON.stringify(settings, null, 2) + \"\\n\");\n}\n"],"mappings":";;;AAAA,SAAS,UAAU,WAAW,aAAa;AAC3C,SAAS,YAAY;AAErB,eAAsB,iBAAiB,MAAgD;AACrF,QAAM,OAAO,KAAK,MAAM,WAAW,eAAe;AAClD,MAAI;AACF,UAAM,UAAU,MAAM,SAAS,MAAM,OAAO;AAC5C,WAAO,KAAK,MAAM,OAAO;AAAA,EAC3B,QAAQ;AACN,WAAO,CAAC;AAAA,EACV;AACF;AAEA,eAAsB,kBAAkB,MAAc,UAAkD;AACtG,QAAM,MAAM,KAAK,MAAM,SAAS;AAChC,QAAM,MAAM,KAAK,EAAE,WAAW,KAAK,CAAC;AACpC,QAAM,UAAU,KAAK,KAAK,eAAe,GAAG,KAAK,UAAU,UAAU,MAAM,CAAC,IAAI,IAAI;AACtF;","names":[]}
@@ -0,0 +1,35 @@
+ #!/usr/bin/env node
+ import {
+   MemoryRepo,
+   RelationRepo,
+   SearchRepo
+ } from "./chunk-TALTTAMW.js";
+ import {
+   closeDatabase,
+   createDatabase,
+   loadConfig,
+   migrate,
+   resolveDataDir
+ } from "./chunk-IILH26C7.js";
+
+ // src/commands/memory/subcommands/init-storage.ts
+ function initStorage(dbPath) {
+   const config = loadConfig(dbPath ? { dataDir: dbPath } : void 0);
+   const dataDir = resolveDataDir(config.dataDir);
+   const db = createDatabase({ dataDir });
+   migrate(db);
+   return {
+     config,
+     dataDir,
+     db,
+     memoryRepo: new MemoryRepo(db),
+     relationRepo: new RelationRepo(db),
+     searchRepo: new SearchRepo(db),
+     close: () => closeDatabase(db)
+   };
+ }
+
+ export {
+   initStorage
+ };
+ //# sourceMappingURL=chunk-EBM7RBPB.js.map
@@ -0,0 +1 @@
+ {"version":3,"sources":["../src/commands/memory/subcommands/init-storage.ts"],"sourcesContent":["import { createDatabase, closeDatabase } from '../storage/database.js';\nimport { migrate } from '../storage/migrator.js';\nimport { MemoryRepo } from '../storage/memory-repo.js';\nimport { RelationRepo } from '../storage/relation-repo.js';\nimport { SearchRepo } from '../storage/search-repo.js';\nimport { loadConfig, resolveDataDir } from '../config.js';\nimport type { Config } from '../config.js';\nimport type Database from 'better-sqlite3';\n\nexport interface StorageContext {\n readonly config: Config;\n readonly dataDir: string;\n readonly db: Database.Database;\n readonly memoryRepo: MemoryRepo;\n readonly relationRepo: RelationRepo;\n readonly searchRepo: SearchRepo;\n readonly close: () => void;\n}\n\nexport function initStorage(dbPath?: string): StorageContext {\n const config = loadConfig(dbPath ? { dataDir: dbPath } : undefined);\n const dataDir = resolveDataDir(config.dataDir);\n const db = createDatabase({ dataDir });\n migrate(db);\n\n return {\n config,\n dataDir,\n db,\n memoryRepo: new MemoryRepo(db),\n relationRepo: new RelationRepo(db),\n searchRepo: new SearchRepo(db),\n close: () => closeDatabase(db),\n };\n}\n"],"mappings":";;;;;;;;;;;;;;;AAmBO,SAAS,YAAY,QAAiC;AAC3D,QAAM,SAAS,WAAW,SAAS,EAAE,SAAS,OAAO,IAAI,MAAS;AAClE,QAAM,UAAU,eAAe,OAAO,OAAO;AAC7C,QAAM,KAAK,eAAe,EAAE,QAAQ,CAAC;AACrC,UAAQ,EAAE;AAEV,SAAO;AAAA,IACL;AAAA,IACA;AAAA,IACA;AAAA,IACA,YAAY,IAAI,WAAW,EAAE;AAAA,IAC7B,cAAc,IAAI,aAAa,EAAE;AAAA,IACjC,YAAY,IAAI,WAAW,EAAE;AAAA,IAC7B,OAAO,MAAM,cAAc,EAAE;AAAA,EAC/B;AACF;","names":[]}
@@ -0,0 +1,258 @@
1
+ #!/usr/bin/env node
2
+ import {
3
+ __export
4
+ } from "./chunk-2H7UOFLK.js";
5
+
6
+ // src/commands/memory/config.ts
7
+ import { z } from "zod";
8
+ import { readFileSync } from "fs";
9
+ import { join } from "path";
10
+ import { homedir } from "os";
11
+ var ConfigSchema = z.object({
12
+ dataDir: z.string().default("~/.agentic-memory"),
13
+ injectionBudget: z.number().int().min(100).max(2e4).default(2e3),
14
+ consolidationInterval: z.number().int().min(1).default(10),
15
+ enableReranker: z.boolean().default(true),
16
+ logLevel: z.enum(["debug", "info", "warn", "error"]).default("warn")
17
+ });
18
+ var DEFAULT_CONFIG = {
19
+ dataDir: "~/.agentic-memory",
20
+ injectionBudget: 2e3,
21
+ consolidationInterval: 10,
22
+ enableReranker: true,
23
+ logLevel: "warn"
24
+ };
25
+ var DEFAULT_DECAY_PARAMS = {
26
+ tauByType: {
27
+ working: 0,
28
+ // cleared each session, tau irrelevant
29
+ episodic: 60,
30
+ // fast decay
31
+ semantic: 365,
32
+ // slow decay
33
+ procedural: 730,
34
+ // near-permanent
35
+ pattern: 180
36
+ // medium decay
37
+ },
38
+ accessModifiers: [
39
+ { maxCount: 3, multiplier: 1 },
40
+ { maxCount: 10, multiplier: 2 },
41
+ { maxCount: Infinity, multiplier: 4 }
42
+ ],
43
+ relationModifier: {
44
+ connectedThreshold: 3,
45
+ connectedMultiplier: 0.7,
46
+ isolatedMultiplier: 1.3
47
+ },
48
+ importanceFloor: 0.05,
49
+ pruneThreshold: 0.1,
50
+ pruneMinAgeDays: 90
51
+ };
52
+ var SCORING_WEIGHTS = {
53
+ text: 0.35,
54
+ importance: 0.2,
55
+ recency: 0.2,
56
+ access: 0.1,
57
+ context: 0.15
58
+ };
59
+ function resolveDataDir(dataDir) {
60
+ if (dataDir.startsWith("~")) {
61
+ return join(homedir(), dataDir.slice(1));
62
+ }
63
+ return dataDir;
64
+ }
65
+ function loadConfig(overrides) {
66
+ const envOverrides = {};
67
+ const envBudget = process.env["AGENTIC_MEMORY_INJECTION_BUDGET"];
68
+ if (envBudget !== void 0) {
69
+ envOverrides["injectionBudget"] = parseInt(envBudget, 10);
70
+ }
71
+ const envLogLevel = process.env["AGENTIC_MEMORY_LOG_LEVEL"];
72
+ if (envLogLevel !== void 0) {
73
+ envOverrides["logLevel"] = envLogLevel;
74
+ }
75
+ const envDataDir = process.env["AGENTIC_MEMORY_DATA_DIR"];
76
+ if (envDataDir !== void 0) {
77
+ envOverrides["dataDir"] = envDataDir;
78
+ }
79
+ let fileConfig = {};
80
+ const baseDir = resolveDataDir(overrides?.dataDir ?? envOverrides["dataDir"] ?? DEFAULT_CONFIG.dataDir);
81
+ try {
82
+ const raw = readFileSync(join(baseDir, "config.json"), "utf-8");
83
+ fileConfig = JSON.parse(raw);
84
+ } catch (err) {
85
+ const isNotFound = err instanceof Error && "code" in err && err.code === "ENOENT";
86
+ if (!isNotFound) {
87
+ console.error("[agentic-memory] Failed to load config.json:", err instanceof Error ? err.message : err);
88
+ }
89
+ }
90
+ const merged = { ...DEFAULT_CONFIG, ...fileConfig, ...envOverrides, ...overrides };
91
+ return ConfigSchema.parse(merged);
92
+ }
93
+
94
+ // src/commands/memory/storage/database.ts
95
+ import Database from "better-sqlite3";
96
+ import * as sqliteVec from "sqlite-vec";
97
+ import { mkdirSync } from "fs";
98
+ import { dirname, join as join2 } from "path";
99
+ function createDatabase(options = {}) {
100
+ const dbPath = options.dbPath ?? resolveDbPath(options.dataDir);
101
+ if (dbPath !== ":memory:") {
102
+ mkdirSync(dirname(dbPath), { recursive: true });
103
+ }
104
+ const db = new Database(dbPath);
105
+ sqliteVec.load(db);
106
+ db.pragma("journal_mode = WAL");
107
+ db.pragma("busy_timeout = 5000");
108
+ db.pragma("foreign_keys = ON");
109
+ db.pragma("cache_size = -64000");
110
+ db.pragma("mmap_size = 268435456");
111
+ db.pragma("synchronous = NORMAL");
112
+ db.pragma("temp_store = MEMORY");
113
+ db.pragma("journal_size_limit = 33554432");
114
+ return db;
115
+ }
116
+ function closeDatabase(db) {
117
+ try {
118
+ db.pragma("wal_checkpoint(TRUNCATE)");
119
+ } catch {
120
+ }
121
+ db.close();
122
+ }
123
+ function resolveDbPath(dataDir) {
124
+ const dir = resolveDataDir(dataDir ?? "~/.agentic-memory");
125
+ return join2(dir, "memory.db");
126
+ }
127
+
128
+ // src/commands/memory/storage/migrations/001-initial.ts
+ var initial_exports = {};
+ __export(initial_exports, {
+   up: () => up,
+   version: () => version
+ });
+ var version = 1;
+ function up(db) {
+   db.exec(`
+     CREATE TABLE IF NOT EXISTS meta (
+       key TEXT PRIMARY KEY,
+       value TEXT
+     );
+
+     CREATE TABLE IF NOT EXISTS memories (
+       id TEXT PRIMARY KEY,
+       type TEXT NOT NULL CHECK(type IN ('episodic','semantic','procedural','working','pattern')),
+       title TEXT,
+       content TEXT NOT NULL,
+       context TEXT,
+       source TEXT CHECK(source IN ('manual','session_end','consolidation','hook','import')),
+       tags TEXT NOT NULL DEFAULT '[]',
+       importance REAL NOT NULL DEFAULT 0.5 CHECK(importance >= 0.0 AND importance <= 1.0),
+       created_at TEXT NOT NULL DEFAULT (datetime('now')),
+       updated_at TEXT NOT NULL DEFAULT (datetime('now')),
+       access_count INTEGER NOT NULL DEFAULT 0 CHECK(access_count >= 0),
+       last_accessed TEXT,
+       injection_count INTEGER NOT NULL DEFAULT 0 CHECK(injection_count >= 0),
+       embedding BLOB
+     );
+
+     CREATE TABLE IF NOT EXISTS relations (
+       source_id TEXT NOT NULL REFERENCES memories(id) ON DELETE CASCADE,
+       target_id TEXT NOT NULL REFERENCES memories(id) ON DELETE CASCADE,
+       relation_type TEXT NOT NULL CHECK(relation_type IN (
+         'relates_to','depends_on','contradicts','extends','implements','derived_from'
+       )),
+       created_at TEXT NOT NULL DEFAULT (datetime('now')),
+       PRIMARY KEY (source_id, target_id, relation_type)
+     );
+
+     -- FTS5 external content (no data duplication)
+     CREATE VIRTUAL TABLE IF NOT EXISTS memories_fts USING fts5(
+       title, content, tags,
+       content=memories,
+       content_rowid=rowid,
+       tokenize='porter unicode61'
+     );
+
+     -- FTS5 sync triggers
+     CREATE TRIGGER IF NOT EXISTS memories_ai AFTER INSERT ON memories BEGIN
+       INSERT INTO memories_fts(rowid, title, content, tags)
+       VALUES (new.rowid, new.title, new.content, new.tags);
+     END;
+
+     CREATE TRIGGER IF NOT EXISTS memories_ad AFTER DELETE ON memories BEGIN
+       INSERT INTO memories_fts(memories_fts, rowid, title, content, tags)
+       VALUES ('delete', old.rowid, old.title, old.content, old.tags);
+     END;
+
+     CREATE TRIGGER IF NOT EXISTS memories_au AFTER UPDATE ON memories BEGIN
+       INSERT INTO memories_fts(memories_fts, rowid, title, content, tags)
+       VALUES ('delete', old.rowid, old.title, old.content, old.tags);
+       INSERT INTO memories_fts(rowid, title, content, tags)
+       VALUES (new.rowid, new.title, new.content, new.tags);
+     END;
+
+     -- Vector search (synced manually in application code)
+     CREATE VIRTUAL TABLE IF NOT EXISTS memories_vec USING vec0(
+       memory_id TEXT PRIMARY KEY,
+       embedding float[384] distance_metric=cosine
+     );
+
+     -- Indexes
+     CREATE INDEX IF NOT EXISTS idx_memories_type ON memories(type);
+     CREATE INDEX IF NOT EXISTS idx_memories_importance ON memories(importance);
+     CREATE INDEX IF NOT EXISTS idx_memories_created_at ON memories(created_at);
+     CREATE INDEX IF NOT EXISTS idx_relations_target ON relations(target_id);
+   `);
+ }
+
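The `memories` table enforces its invariants at the database layer via `CHECK` constraints (allowed `type` and `source` values, `importance` in [0, 1], `content NOT NULL`). The same rules can be pre-validated in application code to produce friendlier errors than a raw SQLite constraint failure. A hedged sketch mirroring those constraints (`validateMemory` is a hypothetical helper, not part of this package):

```javascript
// Hypothetical pre-insert validator mirroring the CHECK constraints
// of the memories table above (not part of the package).
const MEMORY_TYPES = ["episodic", "semantic", "procedural", "working", "pattern"];
const SOURCES = ["manual", "session_end", "consolidation", "hook", "import"];

function validateMemory(row) {
  const errors = [];
  if (!MEMORY_TYPES.includes(row.type)) errors.push(`invalid type: ${row.type}`);
  // source is nullable in the schema, so only validate when present
  if (row.source != null && !SOURCES.includes(row.source)) errors.push(`invalid source: ${row.source}`);
  const importance = row.importance ?? 0.5; // matches the column DEFAULT
  if (importance < 0 || importance > 1) errors.push(`importance out of range: ${importance}`);
  if (!row.content) errors.push("content is required (NOT NULL)");
  return errors;
}
```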
+ // src/commands/memory/storage/migrations/002-add-project.ts
+ var add_project_exports = {};
+ __export(add_project_exports, {
+   up: () => up2,
+   version: () => version2
+ });
+ var version2 = 2;
+ function up2(db) {
+   db.exec(`
+     ALTER TABLE memories ADD COLUMN project TEXT;
+     CREATE INDEX IF NOT EXISTS idx_memories_project ON memories(project);
+   `);
+ }
+
+ // src/commands/memory/storage/migrator.ts
+ var migrations = [
+   initial_exports,
+   add_project_exports
+ ];
+ function getSchemaVersion(db) {
+   try {
+     const row = db.prepare("SELECT value FROM meta WHERE key = 'schema_version'").get();
+     return row ? parseInt(row.value, 10) : 0;
+   } catch {
+     // meta table doesn't exist yet (fresh database) - treat as version 0
+     return 0;
+   }
+ }
+ function migrate(db) {
+   const current = getSchemaVersion(db);
+   const pending = migrations.filter((m) => m.version > current);
+   if (pending.length === 0) return;
+   const runMigrations = db.transaction(() => {
+     for (const m of pending) {
+       m.up(db);
+       db.prepare("INSERT OR REPLACE INTO meta (key, value) VALUES ('schema_version', ?)").run(String(m.version));
+     }
+   });
+   runMigrations();
+ }
+
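`migrate` applies only migrations whose `version` exceeds the stored `schema_version`, in array order, inside a single transaction, and records progress after each step. This matters for migration 002: `ALTER TABLE ... ADD COLUMN` is not idempotent in SQLite, so the version gate is what prevents a re-run. The gating logic can be sketched against a plain in-memory stand-in (the stand-in `state` object and `runPending` are hypothetical, not the better-sqlite3 API):

```javascript
// Sketch of the version-gated migration loop using an in-memory
// stand-in for the database handle (hypothetical names).
function runPending(migrations, state) {
  const current = state.schemaVersion ?? 0;
  const pending = migrations.filter((m) => m.version > current);
  for (const m of pending) {
    m.up(state);                     // apply the migration
    state.schemaVersion = m.version; // record progress, like the meta upsert
  }
  return pending.map((m) => m.version);
}

const applied = [];
const migrations = [
  { version: 1, up: () => applied.push("initial") },
  { version: 2, up: () => applied.push("add-project") },
];
```

Once `schemaVersion` is up to date, a second call finds no pending migrations and is a no-op, which is exactly the property that keeps the non-idempotent `ALTER TABLE` safe.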
+ export {
+   DEFAULT_DECAY_PARAMS,
+   SCORING_WEIGHTS,
+   resolveDataDir,
+   loadConfig,
+   createDatabase,
+   closeDatabase,
+   migrate
+ };
+ //# sourceMappingURL=chunk-IILH26C7.js.map