hmem-mcp 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
 MIT License

 Copyright (c) 2026 Bumblebiber

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
 in the Software without restriction, including without limitation the rights
 to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 copies of the Software, and to permit persons to whom the Software is
 furnished to do so, subject to the following conditions:

 The above copyright notice and this permission notice shall be included in all
 copies or substantial portions of the Software.

 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,384 @@
 # hmem — Humanlike Memory for AI Agents

 > AI agents forget everything when a session ends. hmem changes that.

 > **Beta:** hmem is functional and actively used in production, but APIs and file formats
 > may still change. Feedback and bug reports welcome.

 **hmem** is a Model Context Protocol (MCP) server that gives AI agents persistent, humanlike memory — modeled after how human memory actually works.

 Born as a side project of a multi-agent AI system, hmem solves a real problem: when you work across multiple machines or sessions, your AI instances start from zero every time. They duplicate work, contradict previous decisions, and lose hard-won context.

 **hmem fixes this.**

 ---

 ## The Problem

 When working across multiple PCs with AI coding agents, every new session was a fresh start. Agents had no knowledge of previous decisions, duplicated work, produced inconsistencies, and wasted tokens catching up.

 Existing RAG solutions are flat — every memory fragment has the same abstraction level. The agent either gets too much detail and wastes tokens, or too little and loses nuance.

 ---

 ## The Solution: 5-Level Humanlike Memory

 hmem stores and retrieves memory in five nested levels of detail — mirroring how human memory works.

 ```
 Level 1 ── Coarse summary (always loaded on spawn)
 Level 2 ── More detail
 Level 3 ── Deep context
 Level 4 ── Fine-grained specifics
 Level 5 ── Full verbatim detail
 ```

 A freshly spawned agent receives only Level 1 — the broadest strokes. When it needs more detail on a specific topic, it makes a tool call to retrieve Level 2 for that entry. And so on, down to full detail.

 **Result: Agents load exactly as much context as they need — no more, no less.**

 ---

 ## How It Works

 ### Saving Memory

 After completing a task, an agent calls `write_memory` with tab-indented content. The indentation depth maps to memory levels — multiple entries at the same depth become siblings.

 ```
 write_memory(prefix="L", content="Always restart MCP server after recompiling TypeScript
 	Running process holds old dist — tool calls return stale results
 	Fix: kill $(pgrep -f mcp-server)")
 ```
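 To make the indentation rule concrete, here is a sketch of how tab-indented content could be parsed into a level tree. It is an illustrative parser written for this README, not hmem's actual implementation; the `MemoryNode` shape and the four-digit ID padding are assumptions.

 ```typescript
 // Illustrative parser: tab depth maps to memory level, and consecutive
 // lines at the same depth become siblings. Hypothetical helper only;
 // hmem's real parser may differ.
 interface MemoryNode {
   id: string;            // compound ID, e.g. "L0003.2.1"
   level: number;         // 1..5, derived from tab depth
   text: string;
   children: MemoryNode[];
 }

 function parseContent(prefix: string, seq: number, content: string): MemoryNode {
   const lines = content.split("\n").filter(l => l.trim().length > 0);
   const root: MemoryNode = {
     id: `${prefix}${String(seq).padStart(4, "0")}`,  // e.g. "L0003"
     level: 1,
     text: lines[0].trim(),
     children: [],
   };
   const stack: MemoryNode[] = [root];                // stack[d] = last node at depth d
   for (const line of lines.slice(1)) {
     const tabs = line.match(/^\t*/)![0].length;
     const depth = Math.max(1, Math.min(tabs, stack.length)); // clamp malformed jumps
     const parent = stack[depth - 1];
     const node: MemoryNode = {
       id: `${parent.id}.${parent.children.length + 1}`,
       level: parent.level + 1,
       text: line.trim(),
       children: [],
     };
     parent.children.push(node);
     stack.length = depth;
     stack.push(node);
   }
   return root;
 }
 ```

 For content like the example above, this yields one root entry at Level 1 with a Level 2 child and a Level 3 grandchild.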
 
 ### Loading Memory

 On spawn, the agent receives all Level 1 summaries. Deeper levels are fetched on demand — by ID, one branch at a time.

 ```
 read_memory()             # → all L1 summaries (~20 tokens)
 read_memory(id="L0003")   # → L1 + direct L2 children for this entry
 read_memory(id="L0003.2") # → that L2 node + its L3 children
 ```

 Each node gets a compound ID (`L0003.2.1`) so any branch is individually addressable.
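 Resolution of such an ID can be pictured as walking 1-based child indices down from the root, along these lines (a sketch for illustration, not hmem's code):

 ```typescript
 // Minimal node shape for the sketch.
 type MemNode = { id: string; children: MemNode[] };

 // "L0003.2.1" → start at root "L0003", take child 2, then its child 1.
 function resolveId(root: MemNode, id: string): MemNode | undefined {
   const [rootId, ...steps] = id.split(".");
   if (rootId !== root.id) return undefined;
   let node: MemNode | undefined = root;
   for (const step of steps.map(Number)) {
     node = node?.children[step - 1];   // 1-based indices
   }
   return node;
 }
 ```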
 
 ### Memory Curation

 A dedicated curator agent runs periodically to maintain memory health. It tracks retrieval counts per entry, promotes frequently accessed memories, and prunes rarely accessed ones — in the spirit of the Ebbinghaus forgetting curve.
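 One way to picture such a retention policy is a score that decays with time since last access but decays more slowly for frequently retrieved entries. The formula and constants below are illustrative assumptions, not hmem's actual scoring:

 ```typescript
 // Illustrative retention score in [0, 1]; halfLifeDays is an assumed constant.
 function retentionScore(retrievals: number, daysSinceAccess: number, halfLifeDays = 30): number {
   // Each retrieval stretches the half-life, so well-used memories fade slower.
   const effectiveHalfLife = halfLifeDays * (1 + Math.log1p(retrievals));
   return Math.pow(0.5, daysSinceAccess / effectiveHalfLife);
 }

 // A curator could prune entries below some threshold and promote high scorers.
 const shouldPrune = (score: number): boolean => score < 0.1;
 ```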
 
 ---

 ## Key Features

 - **Hierarchical retrieval** — lazy loading of detail levels saves tokens
 - **True tree structure** — multiple siblings at the same depth (not just one chain)
 - **Persistent across sessions** — agents remember previous work even after restart
 - **Per-agent memory** — each agent has its own `.hmem` file (SQLite)
 - **Shared company knowledge** — `FIRMENWISSEN` store with role-based access control
 - **Retrieval counting** — built-in importance scoring based on access frequency
 - **Skill-file driven** — agents are instructed via skill files, no hardcoded logic
 - **MCP-native** — works with Claude Code, Gemini CLI, OpenCode, and any MCP-compatible tool

 ---
 
 ## Quick Start

 ### Option A: Install from npm (Recommended)

 ```bash
 npx hmem init
 ```

 That's it. The interactive installer will:
 - Detect your installed AI coding tools (Claude Code, OpenCode, Cursor, Windsurf, Cline)
 - Ask whether to install **system-wide** (memories in `~/.hmem/`) or **project-local** (memories in current directory)
 - Configure each tool's MCP settings automatically
 - Create the memory directory and `hmem.config.json`

 After the installer finishes, restart your AI tool and call `read_memory()` to verify.

 ### Option B: Install from source

 ```bash
 git clone https://github.com/Bumblebiber/hmem.git
 cd hmem
 npm install && npm run build
 node dist/cli.js init
 ```

 ### Option C: Manual Setup (no installer)

 If you prefer to configure everything yourself:

 #### 1. Install

 ```bash
 npm install -g hmem
 ```

 Or from source: `git clone https://github.com/Bumblebiber/hmem.git && cd hmem && npm install && npm run build`

 #### 2. Register the MCP server

 **Claude Code** — global registration:

 ```bash
 claude mcp add hmem -s user --env HMEM_PROJECT_DIR="$HOME/.hmem" -- npx hmem serve
 ```

 **OpenCode** — add to `~/.config/opencode/opencode.json` (or project-level `opencode.json`):

 ```json
 {
   "mcp": {
     "hmem": {
       "type": "local",
       "command": ["npx", "hmem", "serve"],
       "environment": {
         "HMEM_PROJECT_DIR": "~/.hmem"
       },
       "enabled": true
     }
   }
 }
 ```

 **Cursor / Windsurf / Cline** — add to `~/.cursor/mcp.json` (or equivalent):

 ```json
 {
   "mcpServers": {
     "hmem": {
       "command": "npx",
       "args": ["hmem", "serve"],
       "env": {
         "HMEM_PROJECT_DIR": "~/.hmem"
       }
     }
   }
 }
 ```

 > **Windows note:** Use forward slashes or double backslashes in JSON paths.

 #### 3. Verify the connection

 Fully restart your AI tool, then call `read_memory()`. You should see a memory listing (empty on first run is fine).

 In Claude Code, run `/mcp` to check the server status.

 ---
 
 ## Skill Files

 Skill files teach your AI tool how to use hmem correctly. Copy them to your tool's global skills directory.

 > **Note:** `hmem init` copies skills automatically. Only use the manual steps below if you skipped the installer.

 > **After copying skills, fully restart your terminal and AI tool** — skills are loaded at startup and won't appear in a running session.

 If you installed via npm, find the skills in the package directory:

 ```bash
 HMEM_DIR="$(npm root -g)/hmem"   # global install
 # or: HMEM_DIR="$(dirname $(realpath $(which hmem)))"
 ```

 If you cloned from source, the skills are in the `skills/` directory.

 **Claude Code:**
 ```bash
 for skill in hmem-read hmem-write save memory-curate; do
   mkdir -p ~/.claude/skills/$skill
   cp "$HMEM_DIR/skills/$skill/SKILL.md" ~/.claude/skills/$skill/SKILL.md
 done
 ```

 **Gemini CLI:**
 ```bash
 for skill in hmem-read hmem-write save memory-curate; do
   mkdir -p ~/.gemini/skills/$skill
   cp "$HMEM_DIR/skills/$skill/SKILL.md" ~/.gemini/skills/$skill/SKILL.md
 done
 ```

 **OpenCode:**
 ```bash
 for skill in hmem-read hmem-write save memory-curate; do
   mkdir -p ~/.config/opencode/skills/$skill
   cp "$HMEM_DIR/skills/$skill/SKILL.md" ~/.config/opencode/skills/$skill/SKILL.md
 done
 ```

 ---
 
 ## MCP Tools

 ### Memory Tools

 | Tool | Description |
 |------|-------------|
 | `read_memory` | Read hierarchical memories — L1 summaries or drill into any node by ID |
 | `write_memory` | Save new memory entries with tab-indented hierarchy |
 | `search_memory` | Full-text search across all agent `.hmem` databases |

 ### Curator Tools (role: ceo)

 | Tool | Description |
 |------|-------------|
 | `get_audit_queue` | List agents whose memory has changed since last audit |
 | `read_agent_memory` | Read any agent's full memory (for curation) |
 | `fix_agent_memory` | Correct a specific memory entry |
 | `delete_agent_memory` | Delete a memory entry (use sparingly) |
 | `mark_audited` | Mark an agent as audited |

 ---
 
 ## Memory Directory

 hmem stores all memory files (`.hmem` SQLite databases) and its configuration (`hmem.config.json`) in a single directory. The location depends on how you install:

 | Install mode | Memory directory | Example |
 |---|---|---|
 | **System-wide** | `~/.hmem/` | `/home/alice/.hmem/` or `C:\Users\Alice\.hmem\` |
 | **Project-local** | Project root (cwd) | `/home/alice/my-project/` |

 The `hmem init` installer asks which mode you prefer and creates the directory automatically.

 ### Directory structure

 ```
 ~/.hmem/                # System-wide memory directory
   memory.hmem           # Default agent memory (when no HMEM_AGENT_ID is set)
   SIGURD.hmem           # Named agent memory (HMEM_AGENT_ID=SIGURD)
   FIRMENWISSEN.hmem     # Shared company knowledge (optional)
   hmem.config.json      # Configuration file
   audit_state.json      # Curator state (optional)
 ```

 The MCP configuration files are written to each tool's own config directory — not into `~/.hmem/`:

 | Tool | Global MCP config path |
 |---|---|
 | Claude Code | `~/.claude/.mcp.json` |
 | OpenCode | `~/.config/opencode/opencode.json` |
 | Cursor | `~/.cursor/mcp.json` |
 | Windsurf | `~/.codeium/windsurf/mcp_config.json` |
 | Cline / Roo Code | `.vscode/mcp.json` (project-only) |

 ---

 ## Environment Variables

 | Variable | Description | Default |
 |----------|-------------|---------|
 | `HMEM_PROJECT_DIR` | Root directory where `.hmem` files are stored | *(required)* |
 | `HMEM_AGENT_ID` | Agent identifier — used as filename and directory name | `""` → `memory.hmem` |
 | `HMEM_AGENT_ROLE` | Permission level: `worker` · `al` · `pl` · `ceo` | `worker` |
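 Read together, the table implies a simple file-resolution rule. The sketch below is an inferred reading of it (an empty `HMEM_AGENT_ID` falls back to `memory.hmem`), not the server's actual code:

 ```typescript
 import path from "node:path";

 // Inferred rule: HMEM_PROJECT_DIR is required; an empty or missing
 // HMEM_AGENT_ID selects the default memory.hmem database.
 function memoryFilePath(env: Record<string, string | undefined>): string {
   const projectDir = env.HMEM_PROJECT_DIR;
   if (!projectDir) throw new Error("HMEM_PROJECT_DIR is required");
   const agentId = env.HMEM_AGENT_ID ?? "";
   return path.join(projectDir, `${agentId || "memory"}.hmem`);
 }
 ```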
 
 ---

 ## Configuration (hmem.config.json)

 Place an optional `hmem.config.json` in your `HMEM_PROJECT_DIR` to tune behavior. All keys are optional — missing keys fall back to defaults.

 ```json
 {
   "maxL1Chars": 120,
   "maxLnChars": 50000,
   "maxDepth": 5,
   "defaultReadLimit": 100,
   "recentDepthTiers": [
     { "count": 10, "depth": 2 },
     { "count": 3, "depth": 3 }
   ],
   "prefixes": {
     "P": "Project",
     "L": "Lesson",
     "T": "Task",
     "E": "Error",
     "D": "Decision",
     "M": "Milestone",
     "S": "Skill",
     "F": "Favorite"
   }
 }
 ```

 ### Custom prefixes

 The default prefixes (P, L, T, E, D, M, S, F) cover most use cases. To add your own, add entries to the `"prefixes"` key:

 ```json
 {
   "prefixes": {
     "R": "Research",
     "B": "Bookmark",
     "Q": "Question"
   }
 }
 ```

 Custom prefixes are **merged** with the defaults — you don't need to repeat the built-in ones. After adding prefixes, restart your AI tool so the MCP server picks up the new config.

 **Note:** Favorites (F) are special — they are always loaded with L2 detail, regardless of recency position.
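 The merge behaves like an object spread, with custom entries layered over the defaults. A sketch of the semantics described above (not hmem's code):

 ```typescript
 const DEFAULT_PREFIXES: Record<string, string> = {
   P: "Project", L: "Lesson", T: "Task", E: "Error",
   D: "Decision", M: "Milestone", S: "Skill", F: "Favorite",
 };

 // Custom prefixes extend the defaults; a duplicate key would win over the default.
 function mergedPrefixes(custom: Record<string, string> = {}): Record<string, string> {
   return { ...DEFAULT_PREFIXES, ...custom };
 }
 ```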
 
 ### Character limits

 Two ways to set per-level character limits:

 **Option A — linear interpolation** (recommended): set only the endpoints; all levels in between are computed automatically.

 ```json
 { "maxL1Chars": 120, "maxLnChars": 50000 }
 ```
 With 5 depth levels this yields: `[120, 12590, 25060, 37530, 50000]`
 
 **Option B — explicit per-level array**: set each level individually. If fewer entries than `maxDepth`, the last value is repeated.

 ```json
 { "maxCharsPerLevel": [120, 2500, 10000, 25000, 50000] }
 ```
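 As a sketch, Option A's interpolation can be read as evenly spaced steps between the two endpoints; hmem's exact rounding may differ from the `Math.round` assumed here:

 ```typescript
 // Evenly spaced per-level limits from L1 to Ln (illustrative rounding).
 function charLimits(maxL1: number, maxLn: number, maxDepth: number): number[] {
   const step = (maxLn - maxL1) / (maxDepth - 1);
   return Array.from({ length: maxDepth }, (_, i) => Math.round(maxL1 + i * step));
 }
 ```

 Under these assumptions, `charLimits(120, 50000, 5)` gives `[120, 12590, 25060, 37530, 50000]`.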
 
 ### Recency gradient (`recentDepthTiers`)

 Controls how deep children are inlined for the most recent entries in a default `read_memory()` call. Each tier is `{ count, depth }`: the *count* most recent entries get children inlined up to *depth*.

 Tiers are cumulative — the **highest applicable depth wins** for each entry position.

 ```json
 "recentDepthTiers": [
   { "count": 3, "depth": 3 },   // last 3 entries → L1 + L2 + L3
   { "count": 10, "depth": 2 }   // last 10 entries → L1 + L2
 ]
 ```

 Result:

 | Entry position | Depth inlined |
 |---|---|
 | 0–2 (most recent) | L1 + L2 + L3 |
 | 3–9 | L1 + L2 |
 | 10+ | L1 only |
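 The tier rule can be sketched as: scan all tiers and, for a given entry position, keep the highest depth whose tier still covers it (illustrative code, not hmem's implementation):

 ```typescript
 type Tier = { count: number; depth: number };

 // Highest applicable depth wins; entries outside all tiers stay at L1.
 function inlineDepth(position: number, tiers: Tier[]): number {
   let depth = 1;
   for (const tier of tiers) {
     if (position < tier.count) depth = Math.max(depth, tier.depth);
   }
   return depth;
 }
 ```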
 
 This mirrors how human memory works: you remember today's events in full detail, last week's in outline, older ones only as headlines.

 Set to `[]` to disable recency inlining (L1-only for all entries, same as before v1.1).

 **Backward compat:** The old `"recentChildrenCount": N` key is still accepted and treated as `[{ "count": N, "depth": 2 }]`.
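 That translation can be sketched as a small normalization step (a hypothetical helper; the field names follow the config keys above, and the fallback defaults mirror the earlier example config):

 ```typescript
 type DepthTier = { count: number; depth: number };

 // Prefer the new key; fall back to the legacy one; [] disables inlining.
 function normalizeTiers(config: {
   recentDepthTiers?: DepthTier[];
   recentChildrenCount?: number;
 }): DepthTier[] {
   if (config.recentDepthTiers !== undefined) return config.recentDepthTiers;
   if (config.recentChildrenCount !== undefined) {
     return [{ count: config.recentChildrenCount, depth: 2 }];
   }
   return [{ count: 10, depth: 2 }, { count: 3, depth: 3 }]; // defaults shown earlier
 }
 ```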
 
 ---

 ## Origin

 hmem was developed out of necessity: working on a large AI project across multiple machines meant every new Claude Code session started blind. Agents redid work, lost decisions, and contradicted each other.

 The solution was a memory protocol that works the way humans remember — broad strokes first, details on demand.

 ---

 ## License

 MIT
@@ -0,0 +1,7 @@
 /**
  * Script: cli-init.ts
  * Purpose: Interactive installer for hmem MCP — configures AI coding tools
  * Author: DEVELOPER
  * Created: 2026-02-21
  */
 export declare function runInit(): Promise<void>;
@@ -0,0 +1,291 @@
 /**
  * Script: cli-init.ts
  * Purpose: Interactive installer for hmem MCP — configures AI coding tools
  * Author: DEVELOPER
  * Created: 2026-02-21
  */
 import fs from "node:fs";
 import path from "node:path";
 import os from "node:os";
 import readline from "node:readline";
 const HOME = os.homedir();
 const TOOLS = {
     "claude-code": {
         name: "Claude Code",
         globalDir: path.join(HOME, ".claude"),
         globalFile: ".mcp.json",
         projectDir: ".",
         projectFile: ".mcp.json",
         format: "standard",
         detect: () => fs.existsSync(path.join(HOME, ".claude")),
     },
     "opencode": {
         name: "OpenCode",
         globalDir: path.join(HOME, ".config", "opencode"),
         globalFile: "opencode.json",
         projectDir: ".",
         projectFile: "opencode.json",
         format: "opencode",
         detect: () => fs.existsSync(path.join(HOME, ".config", "opencode")),
     },
     "cursor": {
         name: "Cursor",
         globalDir: path.join(HOME, ".cursor"),
         globalFile: "mcp.json",
         projectDir: ".cursor",
         projectFile: "mcp.json",
         format: "standard",
         detect: () => fs.existsSync(path.join(HOME, ".cursor")),
     },
     "windsurf": {
         name: "Windsurf",
         globalDir: path.join(HOME, ".codeium", "windsurf"),
         globalFile: "mcp_config.json",
         projectDir: ".windsurf",
         projectFile: "mcp.json",
         format: "standard",
         detect: () => fs.existsSync(path.join(HOME, ".codeium", "windsurf"))
             || fs.existsSync(path.join(HOME, ".windsurf")),
     },
     "cline": {
         name: "Cline / Roo Code (VS Code)",
         globalDir: null,
         globalFile: null,
         projectDir: ".vscode",
         projectFile: "mcp.json",
         format: "standard",
         detect: () => fs.existsSync(path.join(HOME, ".vscode")),
     },
 };
 // ---- Readline helpers ----
 let rl;
 function ask(question) {
     return new Promise(resolve => {
         rl.question(question, answer => resolve(answer.trim()));
     });
 }
 async function askChoice(question, choices) {
     console.log(`\n${question}`);
     for (let i = 0; i < choices.length; i++) {
         console.log(`  ${i + 1}) ${choices[i]}`);
     }
     while (true) {
         const answer = await ask(`Choice [1-${choices.length}]: `);
         const num = parseInt(answer, 10);
         if (num >= 1 && num <= choices.length)
             return num - 1;
         console.log(`  Please enter a number between 1 and ${choices.length}.`);
     }
 }
 async function askMultiChoice(question, choices) {
     console.log(`\n${question}`);
     for (let i = 0; i < choices.length; i++) {
         console.log(`  ${i + 1}) ${choices[i]}`);
     }
     console.log(`  a) All`);
     while (true) {
         const answer = await ask(`Selection (e.g. 1,3 or a for all): `);
         if (answer.toLowerCase() === "a")
             return choices.map((_, i) => i);
         const nums = answer.split(/[,\s]+/).map(s => parseInt(s.trim(), 10));
         if (nums.every(n => n >= 1 && n <= choices.length))
             return nums.map(n => n - 1);
         console.log(`  Invalid selection. Enter numbers separated by commas (e.g. 1,3) or 'a' for all.`);
     }
 }
 // ---- Config generation ----
 /**
  * Generates the MCP config entry for standard tools (Claude Code, Cursor, Windsurf, Cline).
  */
 function standardMcpEntry(projectDir) {
     return {
         mcpServers: {
             hmem: {
                 command: "npx",
                 args: ["-y", "hmem", "serve"],
                 env: {
                     HMEM_PROJECT_DIR: projectDir,
                 },
             },
         },
     };
 }
 /**
  * Generates the MCP config entry for OpenCode (different schema).
  */
 function opencodeMcpEntry(projectDir) {
     return {
         mcp: {
             hmem: {
                 type: "local",
                 command: ["npx", "-y", "hmem", "serve"],
                 environment: {
                     HMEM_PROJECT_DIR: projectDir,
                 },
                 enabled: true,
                 timeout: 30000,
             },
         },
     };
 }
 /**
  * Deep-merges an MCP entry into an existing config object.
  * Never overwrites non-hmem keys.
  */
 function mergeConfig(existing, entry) {
     const result = { ...existing };
     for (const [key, value] of Object.entries(entry)) {
         if (typeof value === "object" && value !== null && !Array.isArray(value)) {
             const existingVal = result[key];
             if (typeof existingVal === "object" && existingVal !== null && !Array.isArray(existingVal)) {
                 result[key] = mergeConfig(existingVal, value);
             }
             else {
                 result[key] = value;
             }
         }
         else {
             result[key] = value;
         }
     }
     return result;
 }
 /**
  * Writes a config file, creating parent directories if needed.
  */
 function writeConfigFile(filePath, config) {
     const dir = path.dirname(filePath);
     if (!fs.existsSync(dir)) {
         fs.mkdirSync(dir, { recursive: true });
     }
     fs.writeFileSync(filePath, JSON.stringify(config, null, 2) + "\n", "utf-8");
 }
 // ---- Main ----
 export async function runInit() {
     rl = readline.createInterface({ input: process.stdin, output: process.stdout });
     try {
         console.log("\n hmem — Humanlike Memory for AI Agents\n");
         console.log(" This installer configures your AI coding tools to use hmem.\n");
         // Step 1: Detect installed tools
         const detected = [];
         const notDetected = [];
         for (const [id, tool] of Object.entries(TOOLS)) {
             if (tool.detect()) {
                 detected.push(id);
             }
             else {
                 notDetected.push(id);
             }
         }
         if (detected.length > 0) {
             console.log(" Detected tools:");
             for (const id of detected) {
                 console.log(`  [x] ${TOOLS[id].name}`);
             }
         }
         if (notDetected.length > 0) {
             for (const id of notDetected) {
                 console.log(`  [ ] ${TOOLS[id].name} (not found)`);
             }
         }
         // Step 2: System-wide or project-local?
         const scopeIdx = await askChoice("Installation scope:", [
             "System-wide (global — works in any directory)",
             "Project-local (only in current directory)",
         ]);
         const isGlobal = scopeIdx === 0;
         // Step 3: Which tools?
         const allToolIds = isGlobal
             ? detected.filter(id => TOOLS[id].globalDir !== null)
             : detected;
         if (allToolIds.length === 0) {
             console.log("\n No supported tools detected for this scope.");
             console.log(" Install Claude Code, OpenCode, Cursor, or Windsurf first.\n");
             return;
         }
         const toolChoices = allToolIds.map(id => TOOLS[id].name);
         const selectedIndices = await askMultiChoice("Configure hmem for which tools?", toolChoices);
         const selectedTools = selectedIndices.map(i => allToolIds[i]);
         // Step 4: Memory directory
         const defaultDir = isGlobal ? path.join(HOME, ".hmem") : process.cwd();
         const memDirAnswer = await ask(`\nMemory directory [${defaultDir}]: `);
         const memDir = memDirAnswer || defaultDir;
         const absMemDir = path.resolve(memDir);
         // Create memory directory if it doesn't exist
         if (!fs.existsSync(absMemDir)) {
             fs.mkdirSync(absMemDir, { recursive: true });
             console.log(` Created: ${absMemDir}`);
         }
         // Step 5: Agent ID (optional)
         const agentId = await ask(`Agent ID (optional, press Enter to skip): `);
         // Step 6: Write configs
         console.log("\n Writing configuration...\n");
         for (const toolId of selectedTools) {
             const tool = TOOLS[toolId];
             // Determine file path
             let configPath;
             if (isGlobal) {
                 configPath = path.join(tool.globalDir, tool.globalFile);
             }
             else {
                 const projDir = path.join(process.cwd(), tool.projectDir);
                 configPath = path.join(projDir, tool.projectFile);
             }
             // Build project dir for env var
             const envProjectDir = absMemDir;
             // Generate MCP entry
             const entry = tool.format === "opencode"
                 ? opencodeMcpEntry(envProjectDir)
                 : standardMcpEntry(envProjectDir);
             // Add agent ID if provided
             if (agentId) {
                 if (tool.format === "opencode") {
                     const mcp = entry.mcp;
                     mcp.hmem.environment.HMEM_AGENT_ID = agentId;
                 }
                 else {
                     const servers = entry.mcpServers;
                     servers.hmem.env.HMEM_AGENT_ID = agentId;
                 }
             }
             // Read existing config (if any) and merge
             let existing = {};
             if (fs.existsSync(configPath)) {
                 try {
                     existing = JSON.parse(fs.readFileSync(configPath, "utf-8"));
                 }
                 catch {
                     console.log(` WARNING: Could not parse ${configPath} — creating new file.`);
                 }
             }
             const merged = mergeConfig(existing, entry);
             writeConfigFile(configPath, merged);
             console.log(` [ok] ${tool.name}: ${configPath}`);
         }
         // Step 7: Create default hmem.config.json if not exists
         const hmemConfigPath = path.join(absMemDir, "hmem.config.json");
         if (!fs.existsSync(hmemConfigPath)) {
             const defaultConfig = {
                 maxL1Chars: 120,
                 maxLnChars: 50000,
                 maxDepth: 5,
                 defaultReadLimit: 100,
                 recentDepthTiers: [
                     { count: 10, depth: 2 },
                     { count: 3, depth: 3 },
                 ],
             };
             writeConfigFile(hmemConfigPath, defaultConfig);
             console.log(` [ok] Config: ${hmemConfigPath}`);
         }
         console.log(`\n Done! Restart your AI tool(s) to activate hmem.\n`);
         console.log(` Memory directory: ${absMemDir}`);
         if (agentId)
             console.log(` Agent ID: ${agentId}`);
         console.log(`\n Test: Open your AI tool and call read_memory() — it should respond.\n`);
     }
     finally {
         rl.close();
     }
 }
 //# sourceMappingURL=cli-init.js.map