hmem-mcp 5.0.0 → 5.1.22

Files changed (43)
  1. package/README.md +161 -214
  2. package/dist/cli-checkpoint.js +102 -40
  3. package/dist/cli-checkpoint.js.map +1 -1
  4. package/dist/cli-context-inject.d.ts +7 -6
  5. package/dist/cli-context-inject.js +27 -130
  6. package/dist/cli-context-inject.js.map +1 -1
  7. package/dist/cli-env.d.ts +16 -0
  8. package/dist/cli-env.js +40 -0
  9. package/dist/cli-env.js.map +1 -0
  10. package/dist/cli-hook-startup.d.ts +20 -0
  11. package/dist/cli-hook-startup.js +101 -0
  12. package/dist/cli-hook-startup.js.map +1 -0
  13. package/dist/cli-init.js +100 -184
  14. package/dist/cli-init.js.map +1 -1
  15. package/dist/cli-log-exchange.js +67 -4
  16. package/dist/cli-log-exchange.js.map +1 -1
  17. package/dist/cli-statusline.d.ts +14 -0
  18. package/dist/cli-statusline.js +172 -0
  19. package/dist/cli-statusline.js.map +1 -0
  20. package/dist/cli.js +18 -2
  21. package/dist/cli.js.map +1 -1
  22. package/dist/hmem-config.d.ts +10 -0
  23. package/dist/hmem-config.js +63 -13
  24. package/dist/hmem-config.js.map +1 -1
  25. package/dist/hmem-store.d.ts +30 -1
  26. package/dist/hmem-store.js +219 -48
  27. package/dist/hmem-store.js.map +1 -1
  28. package/dist/mcp-server.js +204 -75
  29. package/dist/mcp-server.js.map +1 -1
  30. package/package.json +1 -1
  31. package/scripts/autoresearch-nightly.sh +84 -0
  32. package/scripts/hmem-statusline.sh +4 -0
  33. package/skills/hmem-config/SKILL.md +112 -147
  34. package/skills/hmem-curate/SKILL.md +56 -6
  35. package/skills/hmem-new-project/SKILL.md +164 -0
  36. package/skills/hmem-read/SKILL.md +174 -146
  37. package/skills/hmem-release/SKILL.md +141 -0
  38. package/skills/hmem-self-curate/SKILL.md +49 -7
  39. package/skills/hmem-setup/SKILL.md +169 -87
  40. package/skills/hmem-sync-setup/SKILL.md +16 -3
  41. package/skills/hmem-update/SKILL.md +254 -0
  42. package/skills/hmem-wipe/SKILL.md +47 -21
  43. package/skills/hmem-write/SKILL.md +38 -14
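
The README diff below describes a 5-level hierarchy whose per-level sizes come from the `maxCharsPerLevel` config. As a rough illustration of that idea — a hypothetical helper, not hmem's actual implementation — the cap lookup might look like:

```typescript
// Hypothetical sketch of the per-level character caps described by the
// README's `maxCharsPerLevel` config: Level 1 holds one-line summaries,
// deeper levels hold progressively fuller text. Not hmem's real code.
const maxCharsPerLevel = [200, 2500, 10000, 25000, 50000];

// Cap a memory entry's text at the limit for its hierarchy level (1-based).
function capForLevel(content: string, level: number): string {
  const cap =
    maxCharsPerLevel[level - 1] ?? maxCharsPerLevel[maxCharsPerLevel.length - 1];
  return content.length <= cap ? content : content.slice(0, cap - 1) + "…";
}

console.log(capForLevel("x".repeat(500), 1).length); // 200
```

The default limits above are the ones shown in the `hmem.config.json` example in this diff; the helper itself is illustrative only.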
package/README.md CHANGED
@@ -1,137 +1,151 @@
  # hmem — Humanlike Memory for AI Agents

- > **700 tokens to load a project. 5k for everything.** That's hmem: persistent, hierarchical memory that works across sessions, devices, and AI tools. Zero tokens wasted.
+ > Your AI forgets everything between sessions. **hmem fixes that.**

- **hmem** is an MCP server that gives AI agents human-like long-term memory. Instead of dumping everything into context, it stores knowledge in a 5-level hierarchy, the way you remember: broad strokes first, details on demand.
-
- The result? An AI that starts a new session and *already knows* your projects, your decisions, your past mistakes, your preferences — across your laptop, your PC, and your server. Simultaneously.
+ One `read_memory()` call. 5k tokens. Your agent knows every project, every past mistake, every decision you ever made together — across sessions, devices, and AI providers. No setup per conversation. No "let me re-read the codebase." It just *remembers*.

  ---

- ## Why hmem?
+ ## The Problem
+
+ Every AI session starts from zero. Your agent asks the same questions, makes the same mistakes, contradicts last week's decisions, and wastes 50k tokens loading context it already processed yesterday.
+
+ You've tried workarounds — CLAUDE.md files, custom prompts, manually pasting context. They don't scale. You have 10 projects. You switch between 3 devices. You use different AI tools.
+
+ ## The Solution

- **Without hmem:** Every session starts from zero. Your AI asks the same questions, makes the same mistakes, contradicts last week's decisions, and wastes tokens loading context it already processed.
+ ```
+ You: "Load project hmem"
+ Agent: [calls load_project("P0048") — 700 tokens]
+ Agent: "Got it. v5.0.0, TypeScript/SQLite/npm, 10 source files,
+         3 open tasks, 9 ideas. Last session you implemented
+         auto-checkpoints via Haiku. What's next?"
+ ```

- **With hmem:**
- - **700 tokens** to load a project — `load_project("P0048")` returns the full briefing: tech stack, architecture, codebase structure, open tasks, ideas, and related errors/lessons. Ready to work immediately
- - **5k tokens** for the full picture — `read_memory()` loads a complete overview across ALL projects, decisions, errors, lessons, and rules spanning months of work
- - **Gets more efficient over time** — as your memory grows, the bulk read algorithm gets *better*, not worse. New entries push older, less relevant ones into title-only mode. 1,000 entries cost barely more tokens than 100.
- - **Original context preserved** — nothing is summarized away or compressed. Every detail you stored is still there at full fidelity, accessible on demand. Level 1 is a summary, but Levels 2-5 hold the complete original text, word for word.
- - **Drill on demand** — the AI only fetches details when it actually needs them
- - **Cross-device** — encrypted sync means your laptop, PC, and server share the same brain
- - **Cross-provider** — Claude, Gemini, GPT, DeepSeek, local models — all read and write the same memory. Switch providers without losing context. Your Gemini session picks up where Claude left off.
- - **Cross-tool** — works with Claude Code, Gemini CLI, Cursor, Windsurf, OpenCode, Cline
- - **Auto-logging** — via Claude Code's Stop hook, every conversation is automatically preserved
- - **No token waste** — hierarchical lazy loading means the AI never loads more than it needs
+ That's it. 700 tokens for a complete project briefing. The agent knows the stack, the architecture, the open bugs, the recent decisions, and exactly where you left off — even if "you" was a different AI on a different machine yesterday.

  ---

  ## How It Works

  ```
- Level 1 ── One-line summary (always loaded — ~5k tokens for 300 entries)
+ Level 1 ── One-line summary (always loaded — ~5k tokens for 300+ entries)
  Level 2 ── Paragraph detail (loaded on demand)
  Level 3 ── Full context (loaded on demand)
- Level 4 ── Extended detail (loaded on demand)
- Level 5 ── Raw/verbatim data (loaded on demand)
+ Level 4 ── Extended detail (loaded on demand)
+ Level 5 ── Raw/verbatim data (loaded on demand)
  ```

- At session start, the agent loads Level 1 summaries — one line per memory. When it needs more detail on a specific topic, it drills down: `read_memory(id="L0042")` loads that entry's Level 2 children. And so on.
+ At session start, the agent loads Level 1 summaries — one line per memory. When it needs detail, it drills down. Your 300-entry memory costs 5k tokens to overview. A single project costs 700.

- **Categories keep things organized:**
+ **Nothing is summarized away.** Level 1 is a summary, but Levels 2-5 hold the complete original text, word for word, accessible on demand.

- | Prefix | Category | Example |
- |--------|----------|---------|
- | P | Project | `hmem-mcp \| Active \| TS/SQLite/npm \| Persistent hierarchical AI memory` |
- | L | Lesson | `Always restart MCP server after recompiling TypeScript` |
- | E | Error | `hmem-sync Schema-Drift: access_count missing after pull` |
- | D | Decision | `Per-node tag scoring instead of union-set for related discovery` |
- | H | Human | `User Skill: IT TypeScript: 3, Architecture: 9, AHK: 9` |
- | R | Rule | `Max one npm publish per day — batch changes` |
- | I | Infrastructure | `Strato Server \| Active \| Linux \| 4 cores, 8GB RAM` |
- | T | Task | `Config consolidation: merge 6 files into 1` |
- | O | Original | Auto-recorded raw conversation history (via Stop hook) |
+ ---
+
+ ## What Makes v5 Different
+
+ ### Automatic Session Memory
+
+ Every conversation is recorded automatically. No "save your work" prompts. No manual checkpoints.
+
+ ```
+ You type → Agent responds → Stop hook fires → Exchange saved to O-entry
+                                             → Linked to active project
+                                             → Haiku auto-titles the session
+ ```
+
+ Switch projects mid-session? The O-entry switches too. Start a new session on a different PC? The next agent sees every exchange from every device — **the conversation never dies**.
+
+ ### Haiku Background Checkpoints
+
+ Every 20 exchanges, a Haiku subagent wakes up in the background. It reads the recent conversation, extracts lessons learned, errors encountered, and decisions made, then writes them to long-term memory — with full MCP tool access. Your main agent is never interrupted.
+
+ The checkpoint also writes a **handoff note** to the project: "Here's what was done, here's what's in progress, here's the next step." The next agent — on any device, any provider — picks up exactly where you left off.
+
+ ### Project-Based, Not Session-Based
+
+ Sessions are meaningless. Projects are everything.
+
+ - O-entries are linked to the active project, not the session
+ - Checkpoint counters count project exchanges, not session messages
+ - 10 messages on your laptop + 10 on your server = the checkpoint fires on message 20
+ - `load_project` shows recent conversations with full context — across all devices

  ---

  ## Key Features

- - **5-level lazy loading** — tokens scale with need, not with total memory size
- - **Smart bulk reads** — V2 algorithm expands newest, most-accessed, and favorites; suppresses the rest to titles
- - **Project-aware filtering** — activate a project, and only relevant memories are expanded; others show title-only
- - **`#universal` tag** — cross-project knowledge (MCP patterns, deployment rules) always shown regardless of active project
- - **Duplicate detection** — `write_memory` warns if similar entries exist (tag overlap + FTS5 title similarity)
- - **Encrypted sync** — AES-256-GCM client-side encryption, zero-knowledge server, multi-server redundancy
- - **Auto-logging** — Claude Code Stop hook records every conversation automatically (O-prefix)
- - **Announcements** — broadcast urgent messages to all synced devices (server migration, config changes)
- - **User skill assessment** — agents silently track your expertise per topic (1-10 scale) and adapt communication
- - **Hashtags** — cross-cutting tags for filtering and related-entry discovery
- - **Obsolete chains** — mark entries wrong with `[✓ID]` correction reference; auto-follows to current version
- - **Import/Export** — share memories between agents or back up as Markdown
- - **Multi-agent routing** — `route_task` scores all agent memory stores to find the best agent for a task
-
- ### New in v4
-
- - **`load_project` tool** — one call to activate a project and get a complete briefing (~500 tokens). The recommended way to start working on a project
- - **P-Entry Standard Schema** — validated project structure with 10 L2 categories. The MCP server enforces consistency across all agents
- - **Context Injection `[⚡]`** — activate a task, and related errors + lessons appear automatically in bulk reads. No manual searching for past mistakes
- - **Multi-server sync** — push to multiple servers for redundancy. `"sync": [{ ... }, { ... }]` in config
+ | Feature | What it does |
+ |---------|-------------|
+ | **5-level lazy loading** | Tokens scale with need, not memory size |
+ | **Smart bulk reads** | Expands newest + most-accessed; compresses the rest to titles |
+ | **Project gate** | Activate a project — only relevant memories are expanded |
+ | **Duplicate detection** | Warns before creating entries that already exist |
+ | **Encrypted sync** | AES-256-GCM, zero-knowledge server, multi-server redundancy |
+ | **Auto-logging** | Every exchange recorded via Stop hook (O-prefix) |
+ | **Auto-checkpoint** | Haiku extracts L/D/E entries every N exchanges |
+ | **Project handoff** | Background agent maintains "current state" in Protocol section |
+ | **User skill tracking** | Agents track your expertise (1-10) and adapt communication |
+ | **Hashtags** | Cross-cutting tags for discovery across all categories |
+ | **Obsolete chains** | Mark entries wrong with a correction reference; auto-follows |
+ | **Cross-provider** | Claude, Gemini, GPT, DeepSeek, local models — same memory |
+ | **Cross-tool** | Claude Code, Gemini CLI, Cursor, Windsurf, OpenCode, Cline |
+ | **Import/Export** | Share memories between agents or back up as Markdown |
+
+ ### Categories
+
+ | Prefix | Category | Example |
+ |--------|----------|---------|
+ | **P** | Project | `hmem-mcp \| Active \| TS/SQLite/npm \| Persistent AI memory` |
+ | **L** | Lesson | `HMEM_AGENT_ID must be set in hooks — resolveHmemPath falls back to wrong DB` |
+ | **E** | Error | `158 spurious O-entries created when Haiku MCP lacked HMEM_NO_SESSION guard` |
+ | **D** | Decision | `Project-based O-entries over session-based — sessions are meaningless` |
+ | **H** | Human | `User Skill: TypeScript 9, Architecture 9, React 3` |
+ | **R** | Rule | `Max one npm publish per day — batch changes` |
+ | **O** | Original | Auto-recorded conversation history (every exchange, every device) |
+ | **I** | Infra | `Strato Server \| Active \| Linux \| 87.106.22.11` |

  ---

- ## Installation
+ ## Quick Start

- ### Step 1: Install the package
+ ### 1. Install

  ```bash
  npm install -g hmem-mcp
  ```

- Skills are **automatically copied** to detected AI tools (Claude Code, OpenCode, Gemini CLI) via a postinstall hook.
+ ### 2. Run the interactive installer

- ### Step 2: Configure your MCP client
+ ```bash
+ npx hmem init
+ ```

- **IMPORTANT:** Do NOT use `claude mcp add` — it misplaces environment variables. Configure manually:
+ This detects your AI tools, creates the memory directory, configures MCP, and installs all 4 hooks:

- #### Claude Code
+ | Hook | When | What |
+ |------|------|------|
+ | `UserPromptSubmit` | Every message | First message: load memory. Every Nth: checkpoint reminder |
+ | `Stop` (sync) | Every response | Log exchange to active O-entry |
+ | `Stop` (async) | Every response | Haiku auto-titles untitled sessions |
+ | `SessionStart[clear]` | After `/clear` | Re-inject project context |

- Edit `~/.claude/.mcp.json` (create if it doesn't exist):
+ ### 3. Verify

- ```json
- {
-   "mcpServers": {
-     "hmem": {
-       "command": "node",
-       "args": ["/path/to/hmem-mcp/dist/mcp-server.js"],
-       "env": {
-         "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
-       }
-     }
-   }
- }
- ```
+ Restart your AI tool, then:

- **Find the path** to `mcp-server.js`:
- ```bash
- echo "$(npm root -g)/hmem-mcp/dist/mcp-server.js"
  ```
-
- **nvm users:** Use the absolute path to `node` instead of just `"node"`:
- ```bash
- echo "$(which node)"
- # e.g. /home/yourname/.nvm/versions/node/v24.14.0/bin/node
+ read_memory()
  ```

- Then use that as the `"command"` value.
+ Empty response = working (first run). Error = check the [troubleshooting section](#troubleshooting).

- #### With agent ID (multi-agent setups)
+ ### Manual setup

- If you use `HMEM_AGENT_ID`, the database path changes:
+ If you prefer manual configuration over `hmem init`:

- ```
- Without HMEM_AGENT_ID:  {HMEM_PROJECT_DIR}/memory.hmem
- With HMEM_AGENT_ID=X:   {HMEM_PROJECT_DIR}/Agents/X/X.hmem
- ```
+ <details>
+ <summary>Claude Code — edit ~/.claude/.mcp.json</summary>

  ```json
  {
@@ -148,9 +162,15 @@ With HMEM_AGENT_ID=X: {HMEM_PROJECT_DIR}/Agents/X/X.hmem
  }
  ```

- #### OpenCode
+ Find the paths:
+ ```bash
+ echo "Node: $(which node)"
+ echo "Server: $(npm root -g)/hmem-mcp/dist/mcp-server.js"
+ ```
+ </details>

- Edit `~/.config/opencode/opencode.json`:
+ <details>
+ <summary>OpenCode — edit ~/.config/opencode/opencode.json</summary>

  ```json
  {
@@ -158,18 +178,18 @@ Edit `~/.config/opencode/opencode.json`:
      "hmem": {
        "type": "local",
        "command": ["/absolute/path/to/node", "/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
-       "environment": {
-         "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
-       },
+       "environment": { "HMEM_PROJECT_DIR": "/home/yourname/.hmem" },
        "enabled": true
      }
    }
  }
  ```
+ </details>

- #### Cursor / Windsurf / Cline
+ <details>
+ <summary>Cursor / Windsurf / Cline</summary>

- Edit the respective MCP config file (`~/.cursor/mcp.json`, `~/.codeium/windsurf/mcp_config.json`, or `.vscode/mcp.json`):
+ Edit `~/.cursor/mcp.json`, `~/.codeium/windsurf/mcp_config.json`, or `.vscode/mcp.json`:

  ```json
  {
@@ -177,174 +197,101 @@ Edit the respective MCP config file (`~/.cursor/mcp.json`, `~/.codeium/windsurf/
      "hmem": {
        "command": "/absolute/path/to/node",
        "args": ["/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
-       "env": {
-         "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
-       }
+       "env": { "HMEM_PROJECT_DIR": "/home/yourname/.hmem" }
      }
    }
  }
  ```
+ </details>

- ### Step 3: Create the memory directory
-
- ```bash
- mkdir -p ~/.hmem
- # Or with agent ID:
- mkdir -p ~/.hmem/Agents/DEVELOPER
- ```
+ ---

- ### Step 4: Restart and verify
+ ## Configuration

- Restart your AI tool completely, then:
+ `hmem.config.json` in your `HMEM_PROJECT_DIR` (or `Agents/NAME/`):

- ```
- read_memory()
+ ```json
+ {
+   "memory": {
+     "maxCharsPerLevel": [200, 2500, 10000, 25000, 50000],
+     "maxDepth": 5,
+     "checkpointMode": "auto",
+     "checkpointInterval": 20,
+     "recentOEntries": 10,
+     "maxTitleChars": 50,
+     "prefixes": { "X": "Custom" }
+   },
+   "sync": {
+     "serverUrl": "https://your-server/hmem-sync",
+     "userId": "yourname",
+     "salt": "...",
+     "token": "..."
+   }
+ }
  ```

- You should see a response. If empty, that's fine — first run. If you get an error, check:
- - Is `HMEM_PROJECT_DIR` an absolute path?
- - Does the directory exist?
- - Is the `node` path correct? (nvm users: use absolute path)
+ | Key | Default | What it does |
+ |-----|---------|-------------|
+ | `checkpointMode` | `"remind"` | `"auto"` = Haiku writes L/D/E in the background. `"remind"` = asks the main agent |
+ | `checkpointInterval` | `20` | Exchanges between checkpoints. Set `0` to disable |
+ | `recentOEntries` | `10` | How many recent sessions to show in `load_project` |

- The server logs its configuration on startup:
- ```
- [hmem:DEVELOPER] MCP Server running on stdio | Agent: DEVELOPER | DB: /home/you/.hmem/Agents/DEVELOPER/DEVELOPER.hmem (0 entries)
- ```
+ All keys are optional. Missing keys use defaults.

  ---

- ## Cross-Device Sync (hmem-sync)
+ ## Cross-Device Sync

- Sync your memories across all devices with zero-knowledge encryption.
+ Sync memories across all devices with zero-knowledge encryption.

  ```bash
  npm install -g hmem-sync
+ npx hmem-sync connect   # Interactive wizard — first device creates, others join
  ```

- ### First device
-
- ```bash
- npx hmem-sync connect
- ```
-
- Interactive wizard: creates an account, generates encryption keys, pushes your data.
-
- ### Additional devices
-
- ```bash
- npx hmem-sync connect
- ```
-
- Same wizard — choose "existing account" and enter your credentials from the first device.
-
- ### Enable auto-sync
-
- Add `HMEM_SYNC_PASSPHRASE` to your MCP config:
-
- ```json
- {
-   "env": {
-     "HMEM_PROJECT_DIR": "/home/you/.hmem",
-     "HMEM_AGENT_ID": "DEVELOPER",
-     "HMEM_SYNC_PASSPHRASE": "your-passphrase"
-   }
- }
- ```
-
- With this set, every `read_memory` automatically pulls and every `write_memory` automatically pushes. A 30-second cooldown prevents spam.
+ Add `HMEM_SYNC_PASSPHRASE` to your MCP config for automatic sync on every read/write.

  ### Multi-server redundancy

- In `hmem.config.json`, configure multiple servers:
-
  ```json
  {
    "sync": [
      { "name": "primary", "serverUrl": "https://server1/hmem-sync", "userId": "me", "salt": "...", "token": "..." },
-     { "name": "backup", "serverUrl": "https://server2/hmem-sync", "userId": "me", "salt": "...", "token": "..." }
+     { "name": "backup",  "serverUrl": "https://server2/hmem-sync", "userId": "me", "salt": "...", "token": "..." }
    ]
  }
  ```

- Push/pull goes to all servers. Use during migration or for redundant backup.
-
  ### Announcements

- Broadcast urgent messages to all synced AI agents across all devices:
+ Broadcast to all synced agents across all devices:

  ```bash
  npx hmem-sync announce --message "Server URL changing — update your config!"
  ```

- Every agent on every device sees the announcement on its next sync pull. Use for config changes, server migrations, or coordination across your fleet of AI instances.
-
- ---
-
- ## Auto-Logging (O-prefix)
-
- With Claude Code's Stop hook, every conversation exchange (your message + agent response) is automatically recorded in O-prefix entries. Zero token cost — runs in the background.
-
- ### Setup the hook
-
- Add to `~/.claude/settings.json`:
-
- ```json
- {
-   "hooks": {
-     "Stop": [
-       {
-         "hooks": [
-           {
-             "type": "command",
-             "command": "HMEM_PROJECT_DIR=/home/you/.hmem HMEM_AGENT_ID=DEVELOPER node /path/to/hmem-mcp/dist/cli.js log-exchange",
-             "timeout": 10
-           }
-         ]
-       }
-     ]
-   }
- }
- ```
-
- O-entries are hidden from bulk reads (no noise) but searchable and linked to your active project.
-
  ---

- ## Configuration
-
- `hmem.config.json` in your `HMEM_PROJECT_DIR`:
-
- ```json
- {
-   "memory": {
-     "maxCharsPerLevel": [200, 2500, 10000, 25000, 50000],
-     "maxDepth": 5,
-     "maxTitleChars": 50,
-     "prefixes": { "X": "Custom" }
-   },
-   "sync": {
-     "serverUrl": "https://your-server/hmem-sync",
-     "userId": "yourname",
-     "salt": "...",
-     "token": "..."
-   }
- }
- ```
+ ## Troubleshooting

- All keys are optional. Missing keys use defaults.
+ | Problem | Fix |
+ |---------|-----|
+ | `read_memory()` fails | Check that `HMEM_PROJECT_DIR` is an absolute path and the directory exists |
+ | nvm: `node` not found | Use the absolute path: `which node` → use as `"command"` |
+ | Hooks not firing | Restart Claude Code. Check that `~/.claude/settings.json` has all 4 hooks |
+ | Exchanges not logged | Check that `HMEM_AGENT_ID` matches your `Agents/` directory name |
+ | Sync fails | Run `npx hmem-sync connect` to re-authenticate |

  ---

  ## Updating

  ```bash
- # Always global — NOT inside a project directory
- npm update -g hmem-mcp
- npm update -g hmem-sync
+ npm update -g hmem-mcp     # MCP server
+ npm update -g hmem-sync    # Sync (if installed)
+ npx hmem update-skills     # Refresh skill files
  ```

- Skills are automatically updated via the postinstall hook. No manual copy needed.
-
  ---

  ## License
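
For reference, the manual-setup fragments scattered through this diff can be assembled into one Claude Code `.mcp.json` entry. This is a sketch under assumptions: the absolute paths are placeholders to fill in via `which node` and `npm root -g`, and `HMEM_AGENT_ID` / `HMEM_SYNC_PASSPHRASE` are optional keys taken from the removed v5.0.0 sections, not requirements:

```json
{
  "mcpServers": {
    "hmem": {
      "command": "/absolute/path/to/node",
      "args": ["/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
      "env": {
        "HMEM_PROJECT_DIR": "/home/yourname/.hmem",
        "HMEM_AGENT_ID": "DEVELOPER",
        "HMEM_SYNC_PASSPHRASE": "your-passphrase"
      }
    }
  }
}
```

With `HMEM_AGENT_ID` set, the database lives at `{HMEM_PROJECT_DIR}/Agents/DEVELOPER/DEVELOPER.hmem` rather than `{HMEM_PROJECT_DIR}/memory.hmem`, per the removed "With agent ID" section.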