hmem-mcp 4.0.0 → 5.1.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. package/README.md +161 -205
  2. package/dist/cli-checkpoint.d.ts +16 -0
  3. package/dist/cli-checkpoint.js +233 -0
  4. package/dist/cli-checkpoint.js.map +1 -0
  5. package/dist/cli-context-inject.d.ts +19 -0
  6. package/dist/cli-context-inject.js +77 -0
  7. package/dist/cli-context-inject.js.map +1 -0
  8. package/dist/cli-env.d.ts +16 -0
  9. package/dist/cli-env.js +40 -0
  10. package/dist/cli-env.js.map +1 -0
  11. package/dist/cli-hook-startup.d.ts +20 -0
  12. package/dist/cli-hook-startup.js +101 -0
  13. package/dist/cli-hook-startup.js.map +1 -0
  14. package/dist/cli-init.js +148 -10
  15. package/dist/cli-init.js.map +1 -1
  16. package/dist/cli-log-exchange.js +87 -23
  17. package/dist/cli-log-exchange.js.map +1 -1
  18. package/dist/cli-statusline.d.ts +14 -0
  19. package/dist/cli-statusline.js +172 -0
  20. package/dist/cli-statusline.js.map +1 -0
  21. package/dist/cli.js +30 -2
  22. package/dist/cli.js.map +1 -1
  23. package/dist/hmem-config.d.ts +31 -0
  24. package/dist/hmem-config.js +76 -12
  25. package/dist/hmem-config.js.map +1 -1
  26. package/dist/hmem-store.d.ts +62 -1
  27. package/dist/hmem-store.js +364 -46
  28. package/dist/hmem-store.js.map +1 -1
  29. package/dist/mcp-server.js +405 -99
  30. package/dist/mcp-server.js.map +1 -1
  31. package/dist/session-cache.d.ts +11 -0
  32. package/dist/session-cache.js +25 -0
  33. package/dist/session-cache.js.map +1 -1
  34. package/package.json +1 -1
  35. package/scripts/autoresearch-nightly.sh +84 -0
  36. package/scripts/hmem-statusline.sh +4 -0
  37. package/skills/hmem-config/SKILL.md +112 -147
  38. package/skills/hmem-curate/SKILL.md +56 -6
  39. package/skills/hmem-new-project/SKILL.md +164 -0
  40. package/skills/hmem-read/SKILL.md +174 -146
  41. package/skills/hmem-release/SKILL.md +141 -0
  42. package/skills/hmem-self-curate/SKILL.md +49 -7
  43. package/skills/hmem-setup/SKILL.md +169 -87
  44. package/skills/hmem-sync-setup/SKILL.md +16 -3
  45. package/skills/hmem-update/SKILL.md +254 -0
  46. package/skills/hmem-wipe/SKILL.md +75 -0
  47. package/skills/hmem-write/SKILL.md +113 -61
package/README.md CHANGED
@@ -1,128 +1,151 @@
  # hmem — Humanlike Memory for AI Agents
 
- > **Your AI loads 5k tokens and has full context of 400k+.** That's hmem persistent, hierarchical memory that works across sessions, devices, and AI tools. Zero tokens wasted.
+ > Your AI forgets everything between sessions. **hmem fixes that.**
 
- **hmem** is an MCP server that gives AI agents human-like long-term memory. Instead of dumping everything into context, it stores knowledge in a 5-level hierarchy like how you remember: broad strokes first, details on demand.
-
- The result? An AI that starts a new session and *already knows* your projects, your decisions, your past mistakes, your preferences — across your laptop, your PC, and your server. Simultaneously.
+ One `read_memory()` call. 5k tokens. Your agent knows every project, every past mistake, every decision you ever made together across sessions, devices, and AI providers. No setup per conversation. No "let me re-read the codebase." It just *remembers*.
 
  ---
 
- ## Why hmem?
+ ## The Problem
+
+ Every AI session starts from zero. Your agent asks the same questions, makes the same mistakes, contradicts last week's decisions, and wastes 50k tokens loading context it already processed yesterday.
+
+ You've tried workarounds — CLAUDE.md files, custom prompts, manually pasting context. They don't scale. You have 10 projects. You switch between 3 devices. You use different AI tools.
+
+ ## The Solution
 
- **Without hmem:** Every session starts from zero. Your AI asks the same questions, makes the same mistakes, contradicts last week's decisions, and wastes tokens loading context it already processed.
+ ```
+ You: "Load project hmem"
+ Agent: [calls load_project("P0048") — 700 tokens]
+ Agent: "Got it. v5.0.0, TypeScript/SQLite/npm, 10 source files,
+        3 open tasks, 9 ideas. Last session you implemented
+        auto-checkpoints via Haiku. What's next?"
+ ```
 
- **With hmem:**
- - **5k tokens** loads a complete overview of 300+ memories spanning months of work
- - **Gets more efficient over time** — as your memory grows, the bulk read algorithm gets *better*, not worse. New entries push older, less relevant ones into title-only mode. 1,000 entries cost barely more tokens than 100.
- - **Original context preserved** — nothing is summarized away or compressed. Every detail you stored is still there at full fidelity, accessible on demand. Level 1 is a summary, but Levels 2-5 hold the complete original text, word for word.
- - **Drill on demand** — the AI only fetches details when it actually needs them
- - **Cross-device** — encrypted sync means your laptop, PC, and server share the same brain
- - **Cross-tool** — works with Claude Code, Gemini CLI, Cursor, Windsurf, OpenCode, Cline
- - **Auto-logging** — via Claude Code's Stop hook, every conversation is automatically preserved
- - **No token waste** — hierarchical lazy loading means the AI never loads more than it needs
+ That's it. 700 tokens for a complete project briefing. The agent knows the stack, the architecture, the open bugs, the recent decisions, and exactly where you left off — even if "you" was a different AI on a different machine yesterday.
 
  ---
 
  ## How It Works
 
  ```
- Level 1 ── One-line summary (always loaded — ~5k tokens for 300 entries)
+ Level 1 ── One-line summary (always loaded — ~5k tokens for 300+ entries)
  Level 2 ── Paragraph detail (loaded on demand)
  Level 3 ── Full context (loaded on demand)
- Level 4 ── Extended detail (loaded on demand)
- Level 5 ── Raw/verbatim data (loaded on demand)
+ Level 4 ── Extended detail (loaded on demand)
+ Level 5 ── Raw/verbatim data (loaded on demand)
  ```
 
- At session start, the agent loads Level 1 summaries — one line per memory. When it needs more detail on a specific topic, it drills down: `read_memory(id="L0042")` loads that entry's Level 2 children. And so on.
+ At session start, the agent loads Level 1 summaries — one line per memory. When it needs detail, it drills down. Your 300-entry memory costs 5k tokens to overview. A single project costs 700.
 
- **Categories keep things organized:**
+ **Nothing is summarized away.** Level 1 is a summary, but Levels 2-5 hold the complete original text, word for word, accessible on demand.
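
The drill-down pattern can be sketched as two tool calls; `L0042` is an illustrative entry ID taken from the v4 text of this README:

```
read_memory()              # bulk read — Level 1 titles for every entry (~5k tokens)
read_memory(id="L0042")    # drill down — loads that entry's Level 2 children
```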
 
- | Prefix | Category | Example |
- |--------|----------|---------|
- | P | Project | `hmem-mcp \| Active \| TS/SQLite/npm \| Persistent hierarchical AI memory` |
- | L | Lesson | `Always restart MCP server after recompiling TypeScript` |
- | E | Error | `hmem-sync Schema-Drift: access_count missing after pull` |
- | D | Decision | `Per-node tag scoring instead of union-set for related discovery` |
- | H | Human | `User Skill: IT TypeScript: 3, Architecture: 9, AHK: 9` |
- | R | Rule | `Max one npm publish per day — batch changes` |
- | I | Infrastructure | `Strato Server \| Active \| Linux \| 4 cores, 8GB RAM` |
- | T | Task | `Config consolidation: merge 6 files into 1` |
- | O | Original | Auto-recorded raw conversation history (via Stop hook) |
+ ---
+
+ ## What Makes v5 Different
+
+ ### Automatic Session Memory
+
+ Every conversation is recorded automatically. No "save your work" prompts. No manual checkpoints.
+
+ ```
+ You type → Agent responds → Stop hook fires → Exchange saved to O-entry
+                                             → Linked to active project
+                                             → Haiku auto-titles the session
+ ```
+
+ Switch projects mid-session? The O-entry switches too. Start a new session on a different PC? The next agent sees every exchange from every device — **the conversation never dies**.
+
+ ### Haiku Background Checkpoints
+
+ Every 20 exchanges, a Haiku subagent wakes up in the background. It reads the recent conversation, extracts lessons learned, errors encountered, and decisions made, then writes them to long-term memory — with full MCP tool access. Your main agent is never interrupted.
+
+ The checkpoint also writes a **handoff note** to the project: "Here's what was done, here's what's in progress, here's the next step." The next agent — on any device, any provider — picks up exactly where you left off.
+
+ ### Project-Based, Not Session-Based
+
+ Sessions are meaningless. Projects are everything.
+
+ - O-entries are linked to the active project, not the session
+ - Checkpoint counters count project exchanges, not session messages
+ - 10 messages on your laptop + 10 on your server = checkpoint fires on message 20
+ - `load_project` shows recent conversations with full context — across all devices
 
  ---
 
  ## Key Features
 
- - **5-level lazy loading** tokens scale with need, not with total memory size
- - **Smart bulk reads** — V2 algorithm expands newest, most-accessed, and favorites; suppresses the rest to titles
- - **Project-aware filtering** activate a project, and only relevant memories are expanded; others show title-only
- - **`#universal` tag** cross-project knowledge (MCP patterns, deployment rules) always shown regardless of active project
- - **Duplicate detection** `write_memory` warns if similar entries exist (tag overlap + FTS5 title similarity)
- - **Encrypted sync** AES-256-GCM client-side encryption, zero-knowledge server, multi-server redundancy
- - **Auto-logging** Claude Code Stop hook records every conversation automatically (O-prefix)
- - **Announcements** broadcast urgent messages to all synced devices (server migration, config changes)
- - **User skill assessment** agents silently track your expertise per topic (1-10 scale) and adapt communication
- - **Hashtags** cross-cutting tags for filtering and related-entry discovery
- - **Obsolete chains** mark entries wrong with `[✓ID]` correction reference; auto-follows to current version
- - **Import/Export** share memories between agents or back up as Markdown
- - **Multi-agent routing** `route_task` scores all agent memory stores to find the best agent for a task
+ | Feature | What it does |
+ |---------|-------------|
+ | **5-level lazy loading** | Tokens scale with need, not memory size |
+ | **Smart bulk reads** | Expands newest + most-accessed; compresses the rest to titles |
+ | **Project gate** | Activate a project; only relevant memories are expanded |
+ | **Duplicate detection** | Warns before creating entries that already exist |
+ | **Encrypted sync** | AES-256-GCM, zero-knowledge server, multi-server redundancy |
+ | **Auto-logging** | Every exchange recorded via Stop hook (O-prefix) |
+ | **Auto-checkpoint** | Haiku extracts L/D/E entries every N exchanges |
+ | **Project handoff** | Background agent maintains "current state" in Protocol section |
+ | **User skill tracking** | Agents track your expertise (1-10) and adapt communication |
+ | **Hashtags** | Cross-cutting tags for discovery across all categories |
+ | **Obsolete chains** | Mark entries wrong with a `[✓ID]` correction reference; auto-follows to the current version |
+ | **Cross-provider** | Claude, Gemini, GPT, DeepSeek, local models — same memory |
+ | **Cross-tool** | Claude Code, Gemini CLI, Cursor, Windsurf, OpenCode, Cline |
+ | **Import/Export** | Share memories between agents or back up as Markdown |
+
+ ### Categories
+
+ | Prefix | Category | Example |
+ |--------|----------|---------|
+ | **P** | Project | `hmem-mcp \| Active \| TS/SQLite/npm \| Persistent AI memory` |
+ | **L** | Lesson | `HMEM_AGENT_ID must be set in hooks — resolveHmemPath falls back to wrong DB` |
+ | **E** | Error | `158 spurious O-entries created when Haiku MCP lacked HMEM_NO_SESSION guard` |
+ | **D** | Decision | `Project-based O-entries over session-based — sessions are meaningless` |
+ | **H** | Human | `User Skill: TypeScript 9, Architecture 9, React 3` |
+ | **R** | Rule | `Max one npm publish per day — batch changes` |
+ | **O** | Original | Auto-recorded conversation history (every exchange, every device) |
+ | **I** | Infra | `Strato Server \| Active \| Linux \| 87.106.22.11` |
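
How an entry lands in one of these categories can be sketched as a tool call. The parameter names here are hypothetical — this README confirms only the tool name `write_memory` and its duplicate check (tag overlap + FTS5 title similarity):

```
# hypothetical parameter names (prefix, title) — illustration only
write_memory(prefix="R", title="Max one npm publish per day — batch changes")
# before writing, hmem runs its duplicate check and warns if similar entries exist
```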
 
  ---
 
- ## Installation
+ ## Quick Start
 
- ### Step 1: Install the package
+ ### 1. Install
 
  ```bash
  npm install -g hmem-mcp
  ```
 
- Skills are **automatically copied** to detected AI tools (Claude Code, OpenCode, Gemini CLI) via postinstall hook.
+ ### 2. Run the interactive installer
 
- ### Step 2: Configure your MCP client
+ ```bash
+ npx hmem init
+ ```
 
- **IMPORTANT:** Do NOT use `claude mcp add` it misplaces environment variables. Configure manually:
+ This detects your AI tools, creates the memory directory, configures MCP, and installs all 4 hooks:
 
- #### Claude Code
+ | Hook | When | What |
+ |------|------|------|
+ | `UserPromptSubmit` | Every message | First message: load memory. Every Nth: checkpoint reminder |
+ | `Stop` (sync) | Every response | Log exchange to active O-entry |
+ | `Stop` (async) | Every response | Haiku auto-titles untitled sessions |
+ | `SessionStart[clear]` | After /clear | Re-inject project context |
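
For reference, the manual Stop hook entry from the v4 text of this README shows the shape of what the installer writes into `~/.claude/settings.json` (paths and agent ID are placeholders; v5 installs four hooks, not just this one):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "HMEM_PROJECT_DIR=/home/you/.hmem HMEM_AGENT_ID=DEVELOPER node /path/to/hmem-mcp/dist/cli.js log-exchange",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```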
 
- Edit `~/.claude/.mcp.json` (create if it doesn't exist):
+ ### 3. Verify
 
- ```json
- {
-   "mcpServers": {
-     "hmem": {
-       "command": "node",
-       "args": ["/path/to/hmem-mcp/dist/mcp-server.js"],
-       "env": {
-         "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
-       }
-     }
-   }
- }
- ```
+ Restart your AI tool, then:
 
- **Find the path** to `mcp-server.js`:
- ```bash
- echo "$(npm root -g)/hmem-mcp/dist/mcp-server.js"
  ```
-
- **nvm users:** Use the absolute path to `node` instead of just `"node"`:
- ```bash
- echo "$(which node)"
- # e.g. /home/yourname/.nvm/versions/node/v24.14.0/bin/node
+ read_memory()
  ```
 
- Then use that as the `"command"` value.
+ Empty response = working (first run). Error = check the [troubleshooting section](#troubleshooting).
 
- #### With agent ID (multi-agent setups)
+ ### Manual setup
 
- If you use `HMEM_AGENT_ID`, the database path changes:
+ If you prefer manual configuration over `hmem init`:
 
- ```
- Without HMEM_AGENT_ID: {HMEM_PROJECT_DIR}/memory.hmem
- With HMEM_AGENT_ID=X:  {HMEM_PROJECT_DIR}/Agents/X/X.hmem
- ```
+ <details>
+ <summary>Claude Code — edit ~/.claude/.mcp.json</summary>
 
  ```json
  {
@@ -139,9 +162,15 @@ With HMEM_AGENT_ID=X: {HMEM_PROJECT_DIR}/Agents/X/X.hmem
  }
  ```
 
- #### OpenCode
+ Find the paths:
+ ```bash
+ echo "Node: $(which node)"
+ echo "Server: $(npm root -g)/hmem-mcp/dist/mcp-server.js"
+ ```
+ </details>
 
- Edit `~/.config/opencode/opencode.json`:
+ <details>
+ <summary>OpenCode — edit ~/.config/opencode/opencode.json</summary>
 
  ```json
  {
@@ -149,18 +178,18 @@ Edit `~/.config/opencode/opencode.json`:
      "hmem": {
        "type": "local",
        "command": ["/absolute/path/to/node", "/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
-       "environment": {
-         "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
-       },
+       "environment": { "HMEM_PROJECT_DIR": "/home/yourname/.hmem" },
        "enabled": true
      }
    }
  }
  ```
+ </details>
 
- #### Cursor / Windsurf / Cline
+ <details>
+ <summary>Cursor / Windsurf / Cline</summary>
 
- Edit the respective MCP config file (`~/.cursor/mcp.json`, `~/.codeium/windsurf/mcp_config.json`, or `.vscode/mcp.json`):
+ Edit `~/.cursor/mcp.json`, `~/.codeium/windsurf/mcp_config.json`, or `.vscode/mcp.json`:
 
  ```json
  {
@@ -168,174 +197,101 @@ Edit the respective MCP config file (`~/.cursor/mcp.json`, `~/.codeium/windsurf/
      "hmem": {
        "command": "/absolute/path/to/node",
        "args": ["/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
-       "env": {
-         "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
-       }
+       "env": { "HMEM_PROJECT_DIR": "/home/yourname/.hmem" }
      }
    }
  }
  ```
+ </details>
 
- ### Step 3: Create the memory directory
-
- ```bash
- mkdir -p ~/.hmem
- # Or with agent ID:
- mkdir -p ~/.hmem/Agents/DEVELOPER
- ```
+ ---
 
- ### Step 4: Restart and verify
+ ## Configuration
 
- Restart your AI tool completely, then:
+ `hmem.config.json` in your `HMEM_PROJECT_DIR` (or `Agents/NAME/`):
 
- ```
- read_memory()
+ ```json
+ {
+   "memory": {
+     "maxCharsPerLevel": [200, 2500, 10000, 25000, 50000],
+     "maxDepth": 5,
+     "checkpointMode": "auto",
+     "checkpointInterval": 20,
+     "recentOEntries": 10,
+     "maxTitleChars": 50,
+     "prefixes": { "X": "Custom" }
+   },
+   "sync": {
+     "serverUrl": "https://your-server/hmem-sync",
+     "userId": "yourname",
+     "salt": "...",
+     "token": "..."
+   }
+ }
  ```
 
- You should see a response. If empty, that's fine — first run. If you get an error, check:
- - Is `HMEM_PROJECT_DIR` an absolute path?
- - Does the directory exist?
- - Is `node` path correct? (nvm users: use absolute path)
+ | Key | Default | What it does |
+ |-----|---------|-------------|
+ | `checkpointMode` | `"remind"` | `"auto"` = Haiku writes L/D/E in background. `"remind"` = asks the main agent |
+ | `checkpointInterval` | `20` | Exchanges between checkpoints. Set `0` to disable |
+ | `recentOEntries` | `10` | How many recent sessions to show in `load_project` |
 
- The server logs its configuration on startup:
- ```
- [hmem:DEVELOPER] MCP Server running on stdio | Agent: DEVELOPER | DB: /home/you/.hmem/Agents/DEVELOPER/DEVELOPER.hmem (0 entries)
- ```
+ All keys are optional. Missing keys use defaults.
240
 
205
241
  ---
206
242
 
207
- ## Cross-Device Sync (hmem-sync)
243
+ ## Cross-Device Sync
208
244
 
209
- Sync your memories across all devices with zero-knowledge encryption.
245
+ Sync memories across all devices with zero-knowledge encryption.
210
246
 
211
247
  ```bash
212
248
  npm install -g hmem-sync
249
+ npx hmem-sync connect # Interactive wizard — first device creates, others join
213
250
  ```
214
251
 
215
- ### First device
216
-
217
- ```bash
218
- npx hmem-sync connect
219
- ```
220
-
221
- Interactive wizard: creates account, generates encryption keys, pushes your data.
222
-
223
- ### Additional devices
224
-
225
- ```bash
226
- npx hmem-sync connect
227
- ```
228
-
229
- Same wizard — choose "existing account", enter your credentials from the first device.
230
-
231
- ### Enable auto-sync
232
-
233
- Add `HMEM_SYNC_PASSPHRASE` to your MCP config:
234
-
235
- ```json
236
- {
237
- "env": {
238
- "HMEM_PROJECT_DIR": "/home/you/.hmem",
239
- "HMEM_AGENT_ID": "DEVELOPER",
240
- "HMEM_SYNC_PASSPHRASE": "your-passphrase"
241
- }
242
- }
243
- ```
244
-
245
- With this set, every `read_memory` automatically pulls and every `write_memory` automatically pushes. 30-second cooldown prevents spam.
252
+ Add `HMEM_SYNC_PASSPHRASE` to your MCP config for automatic sync on every read/write.
246
253
 
247
254
  ### Multi-server redundancy
248
255
 
249
- In `hmem.config.json`, configure multiple servers:
250
-
251
256
  ```json
252
257
  {
253
258
  "sync": [
254
259
  { "name": "primary", "serverUrl": "https://server1/hmem-sync", "userId": "me", "salt": "...", "token": "..." },
255
- { "name": "backup", "serverUrl": "https://server2/hmem-sync", "userId": "me", "salt": "...", "token": "..." }
260
+ { "name": "backup", "serverUrl": "https://server2/hmem-sync", "userId": "me", "salt": "...", "token": "..." }
256
261
  ]
257
262
  }
258
263
  ```
259
264
 
260
- Push/pull goes to all servers. Use during migration or for redundant backup.
261
-
262
265
  ### Announcements
263
266
 
264
- Broadcast urgent messages to all synced AI agents across all devices:
267
+ Broadcast to all synced agents across all devices:
265
268
 
266
269
  ```bash
267
270
  npx hmem-sync announce --message "Server URL changing — update your config!"
268
271
  ```
269
272
 
270
- Every agent on every device sees the announcement on its next sync pull. Use for config changes, server migrations, or coordination across your fleet of AI instances.
271
-
272
- ---
273
-
274
- ## Auto-Logging (O-prefix)
275
-
276
- With Claude Code's Stop hook, every conversation exchange (your message + agent response) is automatically recorded in O-prefix entries. Zero token cost — runs in the background.
277
-
278
- ### Setup the hook
279
-
280
- Add to `~/.claude/settings.json`:
281
-
282
- ```json
283
- {
284
- "hooks": {
285
- "Stop": [
286
- {
287
- "hooks": [
288
- {
289
- "type": "command",
290
- "command": "HMEM_PROJECT_DIR=/home/you/.hmem HMEM_AGENT_ID=DEVELOPER node /path/to/hmem-mcp/dist/cli.js log-exchange",
291
- "timeout": 10
292
- }
293
- ]
294
- }
295
- ]
296
- }
297
- }
298
- ```
299
-
300
- O-entries are hidden from bulk reads (no noise) but searchable and linked to your active project.
301
-
302
273
  ---
 
- ## Configuration
-
- `hmem.config.json` in your `HMEM_PROJECT_DIR`:
-
- ```json
- {
-   "memory": {
-     "maxCharsPerLevel": [200, 2500, 10000, 25000, 50000],
-     "maxDepth": 5,
-     "maxTitleChars": 50,
-     "prefixes": { "X": "Custom" }
-   },
-   "sync": {
-     "serverUrl": "https://your-server/hmem-sync",
-     "userId": "yourname",
-     "salt": "...",
-     "token": "..."
-   }
- }
- ```
+ ## Troubleshooting
 
- All keys are optional. Missing keys use defaults.
+ | Problem | Fix |
+ |---------|-----|
+ | `read_memory()` fails | Check that `HMEM_PROJECT_DIR` is an absolute path and the directory exists |
+ | nvm: `node not found` | Use the absolute path: `which node` → use as `"command"` |
+ | Hooks not firing | Restart Claude Code. Check `~/.claude/settings.json` has all 4 hooks |
+ | Exchanges not logged | Check `HMEM_AGENT_ID` matches your `Agents/` directory name |
+ | Sync fails | Run `npx hmem-sync connect` to re-authenticate |
 
  ---
 
  ## Updating
 
  ```bash
- # Always global NOT inside a project directory
- npm update -g hmem-mcp
- npm update -g hmem-sync
+ npm update -g hmem-mcp      # MCP server
+ npm update -g hmem-sync     # Sync (if installed)
+ npx hmem update-skills      # Refresh skill files
  ```
 
- Skills are automatically updated via postinstall hook. No manual copy needed.
-
  ---
 
  ## License
package/dist/cli-checkpoint.d.ts
@@ -0,0 +1,16 @@
+ /**
+  * cli-checkpoint.ts
+  *
+  * Automatic checkpoint: reads recent exchanges from the active O-entry,
+  * then spawns a Haiku subagent WITH MCP tool access that writes L/D/E entries
+  * and updates the project handoff. The subagent follows the hmem-write skill rules.
+  *
+  * Designed to run in the background (spawned by the Stop hook when checkpointMode is "auto").
+  *
+  * Usage: hmem checkpoint
+  *
+  * Requires env:
+  *   HMEM_PROJECT_DIR — root directory for .hmem files
+  *   HMEM_AGENT_ID    — agent identifier (optional, auto-detected)
+  */
+ export declare function checkpoint(): Promise<void>;