hmem-mcp 3.9.0 → 4.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,204 +1,156 @@
1
1
  # hmem — Humanlike Memory for AI Agents
2
2
 
3
- > AI agents forget everything when a session ends. hmem changes that.
3
+ > **Your AI loads 5k tokens and has full context of 400k+.** That's hmem: persistent, hierarchical memory that works across sessions, devices, and AI tools. Zero tokens wasted.
4
4
 
5
- > hmem is actively used in production. APIs are stable since v2.0. Feedback and bug reports welcome.
5
+ **hmem** is an MCP server that gives AI agents human-like long-term memory. Instead of dumping everything into context, it stores knowledge in a 5-level hierarchy — like how you remember: broad strokes first, details on demand.
6
6
 
7
- **hmem** is a Model Context Protocol (MCP) server that gives AI agents persistent, humanlike memory — modeled after how human memory actually works.
8
-
9
- Born as a side project of a multi-agent AI system, hmem solves a real problem: when you work across multiple machines or sessions, your AI instances start from zero every time. They duplicate work, contradict previous decisions, and lose hard-won context.
10
-
11
- **hmem fixes this.**
7
+ The result? An AI that starts a new session and *already knows* your projects, your decisions, your past mistakes, your preferences — across your laptop, your PC, and your server. Simultaneously.
12
8
 
13
9
  ---
14
10
 
11
+ ## Why hmem?
15
12
 
16
- ## Examples
17
-
18
- <img width="1110" height="665" alt="image" src="https://github.com/user-attachments/assets/af7688d2-73e3-44f8-b414-f6afa8904e6c" />
19
- Well, it claims that it can't pinpoint timestamps. But that's not true. It just can't see them (due to token efficiency) :)
20
-
21
-
22
- <img width="1096" height="941" alt="image" src="https://github.com/user-attachments/assets/a751c8f3-41fc-46b6-916a-bcd3862008ad" />
13
+ **Without hmem:** Every session starts from zero. Your AI asks the same questions, makes the same mistakes, contradicts last week's decisions, and wastes tokens loading context it already processed.
23
14
 
24
-
25
- ---
26
-
27
- ## The Problem
28
-
29
- When working across multiple PCs with AI coding agents, every new session was a fresh start. Agents had no knowledge of previous decisions, duplicated work, produced inconsistencies, and wasted tokens catching up.
30
-
31
- Existing RAG solutions are flat — every memory fragment has the same abstraction level. The agent either gets too much detail and wastes tokens, or too little and loses nuance.
32
-
33
- ---
34
-
35
- ## The Solution: 5-Level Humanlike Memory
36
-
37
- hmem stores and retrieves memory in five nested levels of detail — mirroring how human memory works.
38
-
39
- ```
40
- Level 1 ── Coarse summary (always loaded on spawn)
41
- Level 2 ── More detail
42
- Level 3 ── Deep context
43
- Level 4 ── Fine-grained specifics
44
- Level 5 ── Full verbatim detail
45
- ```
46
-
47
- A freshly spawned agent receives only Level 1 — the broadest strokes. When it needs more detail on a specific topic, it makes a tool call to retrieve Level 2 for that entry. And so on, down to full detail.
48
-
49
- **Result: Agents load exactly as much context as they need — no more, no less.**
15
+ **With hmem:**
16
+ - **5k tokens** loads a complete overview of 300+ memories spanning months of work
17
+ - **Gets more efficient over time** — as your memory grows, the bulk read algorithm gets *better*, not worse. New entries push older, less relevant ones into title-only mode. 1,000 entries cost barely more tokens than 100.
18
+ - **Original context preserved** — nothing is summarized away or compressed. Every detail you stored is still there at full fidelity, accessible on demand. Level 1 is a summary, but Levels 2-5 hold the complete original text, word for word.
19
+ - **Drill on demand** — the AI only fetches details when it actually needs them
20
+ - **Cross-device** encrypted sync means your laptop, PC, and server share the same brain
21
+ - **Cross-tool** — works with Claude Code, Gemini CLI, Cursor, Windsurf, OpenCode, Cline
22
+ - **Auto-logging** — via Claude Code's Stop hook, every conversation is automatically preserved
23
+ - **No token waste** — hierarchical lazy loading means the AI never loads more than it needs
50
24
 
51
25
  ---
52
26
 
53
27
  ## How It Works
54
28
 
55
- <img width="693" height="715" alt="image" src="https://github.com/user-attachments/assets/9dcb382a-6567-4040-99d2-61916a6d7531" />
56
-
57
-
58
- ### Saving Memory
59
-
60
- After completing a task, an agent calls `write_memory` with tab-indented content. The indentation depth maps to memory levels — multiple entries at the same depth become siblings.
61
-
62
- ```
63
- write_memory(prefix="L", content="Always restart MCP server after recompiling TypeScript
64
- Running process holds old dist — tool calls return stale results
65
- Fix: kill $(pgrep -f mcp-server)")
66
- ```
67
-
68
- ### Loading Memory
69
-
70
- On spawn, the agent receives all Level 1 summaries. Deeper levels are fetched on demand — by ID, one branch at a time.
71
-
72
- ```
73
- read_memory() # → all L1 summaries
74
- read_memory(id="L0003") # → L1 + direct L2 children for this entry
75
- read_memory(id="L0003.2") # → that L2 node + its L3 children
76
- ```
77
-
78
- Each node gets a compound ID (`L0003.2.1`) so any branch is individually addressable.
79
-
80
- ### Updating Memory
81
-
82
- Entries can be updated without deleting and recreating them:
83
-
84
- ```
85
- update_memory(id="L0003", content="Corrected L1 summary") # update text
86
- update_memory(id="L0003", favorite=true) # toggle flag only
87
- update_memory(id="L0003.2", content="Fixed sub-node text") # fix a sub-node
88
- append_memory(id="L0003", content="New finding\n\tSub-detail")
89
- ```
90
-
91
- `update_memory` replaces the text of a single node (children preserved). Content is optional — pass only flags to toggle them without repeating the text. `append_memory` adds new child nodes to an existing entry.
92
-
93
- ### Obsolete Entries
94
-
95
- When an entry is outdated, mark it as obsolete — never delete it:
96
-
97
- ```
98
- update_memory(id="E0023", content="...", obsolete=true)
99
- ```
100
-
101
- Obsolete entries are **hidden from bulk reads** and replaced by a summary line at the bottom:
102
-
103
29
  ```
104
- --- 3 obsolete entries hidden (E0023, D0007, L0012) — use read_memory(id=X) to view ---
30
+ Level 1 ── One-line summary (always loaded — ~5k tokens for 300 entries)
31
+ Level 2 ── Paragraph detail (loaded on demand)
32
+ Level 3 ── Full context (loaded on demand)
33
+ Level 4 ── Extended detail (loaded on demand)
34
+ Level 5 ── Raw/verbatim data (loaded on demand)
105
35
  ```
106
36
 
107
- They remain fully searchable and accessible by ID. Past errors still teach future agents what not to do — knowledge is never destroyed, only archived.
37
+ At session start, the agent loads Level 1 summaries — one line per memory. When it needs more detail on a specific topic, it drills down: `read_memory(id="L0042")` loads that entry's Level 2 children. And so on.
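A typical session drills down one branch at a time. These calls are illustrative (the same shape appeared in earlier versions of this README; the IDs are examples):

```
read_memory()              # → all Level 1 summaries
read_memory(id="L0042")    # → that entry + its direct Level 2 children
read_memory(id="L0042.2")  # → that L2 node + its Level 3 children
```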
108
38
 
109
- ### Memory Curation
39
+ **Categories keep things organized:**
110
40
 
111
- A dedicated curator agent runs periodically to maintain memory health. It detects duplicates, merges fragmented entries, marks stale pointers, and prunes low-value content — a form of the Ebbinghaus Forgetting Curve.
41
+ | Prefix | Category | Example |
42
+ |--------|----------|---------|
43
+ | P | Project | `hmem-mcp \| Active \| TS/SQLite/npm \| Persistent hierarchical AI memory` |
44
+ | L | Lesson | `Always restart MCP server after recompiling TypeScript` |
45
+ | E | Error | `hmem-sync Schema-Drift: access_count missing after pull` |
46
+ | D | Decision | `Per-node tag scoring instead of union-set for related discovery` |
47
+ | H | Human | `User Skill: IT — TypeScript: 3, Architecture: 9, AHK: 9` |
48
+ | R | Rule | `Max one npm publish per day — batch changes` |
49
+ | I | Infrastructure | `Strato Server \| Active \| Linux \| 4 cores, 8GB RAM` |
50
+ | T | Task | `Config consolidation: merge 6 files into 1` |
51
+ | O | Original | Auto-recorded raw conversation history (via Stop hook) |
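Writing into a category is one tool call with tab-indented content; indentation depth maps to memory levels. This example is carried over from an earlier version of this README:

```
write_memory(prefix="L", content="Always restart MCP server after recompiling TypeScript
	Running process holds old dist — tool calls return stale results
	Fix: kill $(pgrep -f mcp-server)")
```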
112
52
 
113
53
  ---
114
54
 
115
55
  ## Key Features
116
56
 
117
- - **Hierarchical retrieval** — lazy loading of detail levels saves tokens
118
- - **True tree structure** — multiple siblings at the same depth (not just one chain)
119
- - **Compact output** — child IDs render as `.7` instead of `P0029.7`; dates shown only when differing from parent
120
- - **Persistent across sessions** — agents remember previous work even after restart
121
- - **Editable without deletion** — `update_memory` and `append_memory` modify entries in place; content is optional when toggling flags
122
- - **Markers** — `[♥]` favorite, `[P]` pinned, `[!]` obsolete, `[-]` irrelevant, `[*]` active, `[s]` secret — on root entries and sub-nodes
123
- - **Pinned entries** — super-favorites that show all children titles in bulk reads (not just the latest); use for reference entries you need in full every session
124
- - **Hashtags** — cross-cutting tags (`#hmem`, `#security`) for filtering and discovering related entries across prefixes
125
- - **Import/Export** — `export_memory` as Markdown or `.hmem` SQLite clone (excluding secrets); `import_memory` with L1 deduplication, sub-node merge, and automatic ID remapping on conflict
126
- - **Obsolete chain resolution** — mark entries/sub-nodes obsolete with `[✓ID]` reference; `read_memory` auto-follows the chain to the current version
127
- - **Access-count promotion** — most-accessed entries get expanded automatically (`[★]`); most-referenced sub-nodes shown as "Hot Nodes"
128
- - **Session cache** — bulk reads suppress already-seen entries with Fibonacci decay; two modes: `discover` (newest-heavy) and `essentials` (importance-heavy)
129
- - **Active-prefix filtering** — mark entries as `[*]` active to focus bulk reads on what matters now; non-active entries still show as compact titles
130
- - **Secret entries** — `[s]` entries/nodes excluded from `export_memory`
131
- - **Titles & compact views** — auto-extracted titles; `titles_only` mode for table-of-contents view
132
- - **Effective-date sorting** — entries with recent appends surface to the top
133
- - **Per-agent memory** — each agent has its own `.hmem` file (SQLite)
134
- - **Skill-file driven** — agents are instructed via skill files, no hardcoded logic
135
- - **MCP-native** — works with Claude Code, Gemini CLI, OpenCode, and any MCP-compatible tool
57
+ - **5-level lazy loading** — tokens scale with need, not with total memory size
58
+ - **Smart bulk reads** — V2 algorithm expands newest, most-accessed, and favorites; suppresses the rest to titles
59
+ - **Project-aware filtering** — activate a project, and only relevant memories are expanded; others show title-only
60
+ - **`#universal` tag** — cross-project knowledge (MCP patterns, deployment rules) always shown regardless of active project
61
+ - **Duplicate detection** — `write_memory` warns if similar entries exist (tag overlap + FTS5 title similarity)
62
+ - **Encrypted sync** — AES-256-GCM client-side encryption, zero-knowledge server, multi-server redundancy
63
+ - **Auto-logging** — Claude Code Stop hook records every conversation automatically (O-prefix)
64
+ - **Announcements** — broadcast urgent messages to all synced devices (server migration, config changes)
65
+ - **User skill assessment** — agents silently track your expertise per topic (1-10 scale) and adapt communication
66
+ - **Hashtags** — cross-cutting tags for filtering and related-entry discovery
67
+ - **Obsolete chains** — mark entries wrong with `[✓ID]` correction reference; auto-follows to current version
68
+ - **Import/Export** — share memories between agents or back up as Markdown
69
+ - **Multi-agent routing** — `route_task` scores all agent memory stores to find the best agent for a task
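A few of these features as tool calls, with shapes taken from earlier versions of this README and illustrative IDs:

```
update_memory(id="E0023", content="...", obsolete=true)      # archive, never delete
write_memory(prefix="L", content="...", tags=["#security"])  # tag at creation
read_memory(tag="#security")                                 # filter bulk reads by tag
```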
136
70
 
137
71
  ---
138
72
 
139
- ## Quick Start
73
+ ## Installation
140
74
 
141
- ### Option A: Install from npm (Recommended)
75
+ ### Step 1: Install the package
142
76
 
143
77
  ```bash
144
- npx hmem-mcp init
78
+ npm install -g hmem-mcp
145
79
  ```
146
80
 
147
- That's it. The interactive installer will:
148
- - Detect your installed AI coding tools (Claude Code, OpenCode, Cursor, Windsurf, Cline)
149
- - Ask whether to install **system-wide** (memories in `~/.hmem/`) or **project-local** (memories in current directory)
150
- - **Offer an example memory** with 67 real entries from hmem development — or start fresh
151
- - Configure each tool's MCP settings automatically
152
- - Create the memory directory and `hmem.config.json`
81
+ Skills are **automatically copied** to detected AI tools (Claude Code, OpenCode, Gemini CLI) via postinstall hook.
153
82
 
154
- After the installer finishes, restart your AI tool and call `read_memory()` to verify.
83
+ ### Step 2: Configure your MCP client
155
84
 
156
- > **Example memory:** The installer includes `hmem_developer.hmem` — a real `.hmem` database with 67 entries and 287 nodes from actual hmem development. It contains lessons learned, architecture decisions, error fixes, and project milestones — a great way to see how hmem works in practice before writing your own entries. You can explore it immediately with `read_memory()` after install, or browse it in the TUI viewer with `python3 hmem-reader.py path/to/memory.hmem`.
85
+ **IMPORTANT:** Do NOT use `claude mcp add` — it misplaces environment variables. Configure manually:
157
86
 
158
- > **Don't forget the skill files!** The MCP server provides the tools (read_memory, write_memory, etc.), but the slash commands (`/hmem-save`, `/hmem-read`) require skill files to be copied to your tool's skills directory. See the [Skill Files](#skill-files) section below — it's a one-time copy-paste.
159
- >
160
- > **Coming from the MCP Registry?** Run `npx hmem-mcp init` first — it configures your tools and creates the memory directory. Then copy the skill files as described below.
87
+ #### Claude Code
161
88
 
162
- ### Option B: Install from source
89
+ Edit `~/.claude/.mcp.json` (create if it doesn't exist):
163
90
 
164
- ```bash
165
- git clone https://github.com/Bumblebiber/hmem.git
166
- cd hmem
167
- npm install && npm run build
168
- node dist/cli.js init
91
+ ```json
92
+ {
93
+ "mcpServers": {
94
+ "hmem": {
95
+ "command": "node",
96
+ "args": ["/path/to/hmem-mcp/dist/mcp-server.js"],
97
+ "env": {
98
+ "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
99
+ }
100
+ }
101
+ }
102
+ }
169
103
  ```
170
104
 
171
- ### Option C: Manual Setup (no installer)
172
-
173
- If you prefer to configure everything yourself:
174
-
175
- #### 1. Install
105
+ **Find the path** to `mcp-server.js`:
106
+ ```bash
107
+ echo "$(npm root -g)/hmem-mcp/dist/mcp-server.js"
108
+ ```
176
109
 
110
+ **nvm users:** Use the absolute path to `node` instead of just `"node"`:
177
111
  ```bash
178
- npm install -g hmem-mcp
112
+ echo "$(which node)"
113
+ # e.g. /home/yourname/.nvm/versions/node/v24.14.0/bin/node
179
114
  ```
180
115
 
181
- Or from source: `git clone https://github.com/Bumblebiber/hmem.git && cd hmem && npm install && npm run build`
116
+ Then use that as the `"command"` value.
182
117
 
183
- #### 2. Register the MCP server
118
+ #### With agent ID (multi-agent setups)
184
119
 
185
- **Claude Code** global registration:
120
+ If you use `HMEM_AGENT_ID`, the database path changes:
186
121
 
187
- ```bash
188
- claude mcp add hmem -s user -- npx hmem-mcp serve \
189
- --env HMEM_PROJECT_DIR="$HOME/.hmem"
122
+ ```
123
+ Without HMEM_AGENT_ID: {HMEM_PROJECT_DIR}/memory.hmem
124
+ With HMEM_AGENT_ID=X: {HMEM_PROJECT_DIR}/Agents/X/X.hmem
190
125
  ```
191
126
 
192
- **OpenCode** — add to `~/.config/opencode/opencode.json` (or project-level `opencode.json`):
127
+ ```json
128
+ {
129
+ "mcpServers": {
130
+ "hmem": {
131
+ "command": "/absolute/path/to/node",
132
+ "args": ["/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
133
+ "env": {
134
+ "HMEM_PROJECT_DIR": "/home/yourname/.hmem",
135
+ "HMEM_AGENT_ID": "DEVELOPER"
136
+ }
137
+ }
138
+ }
139
+ }
140
+ ```
141
+
142
+ #### OpenCode
143
+
144
+ Edit `~/.config/opencode/opencode.json`:
193
145
 
194
146
  ```json
195
147
  {
196
148
  "mcp": {
197
149
  "hmem": {
198
150
  "type": "local",
199
- "command": ["npx", "hmem", "serve"],
151
+ "command": ["/absolute/path/to/node", "/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
200
152
  "environment": {
201
- "HMEM_PROJECT_DIR": "~/.hmem"
153
+ "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
202
154
  },
203
155
  "enabled": true
204
156
  }
@@ -206,352 +158,183 @@ claude mcp add hmem -s user -- npx hmem-mcp serve \
206
158
  }
207
159
  ```
208
160
 
209
- **Cursor / Windsurf / Cline** — add to `~/.cursor/mcp.json` (or equivalent):
161
+ #### Cursor / Windsurf / Cline
162
+
163
+ Edit the respective MCP config file (`~/.cursor/mcp.json`, `~/.codeium/windsurf/mcp_config.json`, or `.vscode/mcp.json`):
210
164
 
211
165
  ```json
212
166
  {
213
167
  "mcpServers": {
214
168
  "hmem": {
215
- "command": "npx",
216
- "args": ["hmem", "serve"],
169
+ "command": "/absolute/path/to/node",
170
+ "args": ["/absolute/path/to/hmem-mcp/dist/mcp-server.js"],
217
171
  "env": {
218
- "HMEM_PROJECT_DIR": "~/.hmem"
172
+ "HMEM_PROJECT_DIR": "/home/yourname/.hmem"
219
173
  }
220
174
  }
221
175
  }
222
176
  }
223
177
  ```
224
178
 
225
- > **Windows note:** Use forward slashes or double backslashes in JSON paths.
226
-
227
- #### 3. Verify the connection
228
-
229
- Fully restart your AI tool, then call `read_memory()`. You should see a memory listing (empty on first run is fine).
230
-
231
- In Claude Code, run `/mcp` to check the server status.
232
-
233
- ---
234
-
235
- ## Skill Files
236
-
237
- Skill files teach your AI tool how to use hmem correctly. Copy them to your tool's global skills directory, then restart your AI tool.
238
-
239
- > **After copying skills, fully restart your terminal and AI tool** — skills are loaded at startup and won't appear in a running session.
240
-
241
- ### Available skills
242
-
243
- | Slash command | What it does |
244
- |---|---|
245
- | `/hmem-read` | Load your memory at session start — call at the beginning of every session |
246
- | `/hmem-write` | Protocol for writing memories correctly (prefixes, hierarchy, anti-patterns) |
247
- | `/hmem-save` | Save session learnings to memory, then commit + push |
248
- | `/hmem-config` | View and adjust memory settings (`hmem.config.json`) interactively |
249
- | `/hmem-curate` | Audit and clean up memory entries (curator role required) |
250
-
251
- ### Copy skills to your tool
252
-
253
- Find the skills directory in the installed package:
179
+ ### Step 3: Create the memory directory
254
180
 
255
181
  ```bash
256
- HMEM_DIR="$(npm root -g)/hmem-mcp"
182
+ mkdir -p ~/.hmem
183
+ # Or with agent ID:
184
+ mkdir -p ~/.hmem/Agents/DEVELOPER
257
185
  ```
258
186
 
259
- If you cloned from source, the skills are in the `skills/` directory.
187
+ ### Step 4: Restart and verify
260
188
 
261
- **Claude Code:**
262
- ```bash
263
- for skill in hmem-read hmem-write hmem-save hmem-config hmem-curate; do
264
- mkdir -p ~/.claude/skills/$skill
265
- cp "$HMEM_DIR/skills/$skill/SKILL.md" ~/.claude/skills/$skill/SKILL.md
266
- done
267
- ```
189
+ Restart your AI tool completely, then:
268
190
 
269
- **Gemini CLI:**
270
- ```bash
271
- for skill in hmem-read hmem-write hmem-save hmem-config hmem-curate; do
272
- mkdir -p ~/.gemini/skills/$skill
273
- cp "$HMEM_DIR/skills/$skill/SKILL.md" ~/.gemini/skills/$skill/SKILL.md
274
- done
275
191
  ```
276
-
277
- **OpenCode:**
278
- ```bash
279
- for skill in hmem-read hmem-write hmem-save hmem-config hmem-curate; do
280
- mkdir -p ~/.config/opencode/skills/$skill
281
- cp "$HMEM_DIR/skills/$skill/SKILL.md" ~/.config/opencode/skills/$skill/SKILL.md
282
- done
192
+ read_memory()
283
193
  ```
284
194
 
285
- ---
286
-
287
- ## MCP Tools
288
-
289
- ### Memory Tools
290
-
291
- | Tool | Description |
292
- |------|-------------|
293
- | `read_memory` | Read memories — L1 summaries, drill by ID, filter by prefix, search by time |
294
- | `write_memory` | Save new memory entries with tab-indented hierarchy |
295
- | `update_memory` | Update text and/or flags of an entry or sub-node (content optional) |
296
- | `append_memory` | Append new child nodes to an existing entry or sub-node |
297
- | `export_memory` | Export non-secret entries as Markdown text or `.hmem` SQLite file |
298
- | `import_memory` | Import entries from a `.hmem` file with deduplication and ID remapping |
299
- | `reset_memory_cache` | Clear session cache so all entries are treated as unseen |
300
- | `search_memory` | Full-text search across all agent `.hmem` databases |
301
- | `memory_stats` | Overview: total entries by prefix, nodes, favorites, pinned, stale count, most-accessed |
302
- | `find_related` | FTS5-based similarity search — find entries with overlapping keywords |
303
- | `memory_health` | Audit report: broken links, orphaned entries, stale favorites, broken obsolete chains |
304
- | `tag_bulk` | Apply tag changes (add/remove) to all entries matching a filter |
305
- | `tag_rename` | Rename a hashtag across all entries and nodes |
306
- | `move_memory` | Move a sub-node (+ entire subtree) to a different parent — updates all IDs and references |
307
-
308
- ### Curator Tools (role: ceo)
309
-
310
- | Tool | Description |
311
- |------|-------------|
312
- | `get_audit_queue` | List agents whose memory has changed since last audit |
313
- | `read_agent_memory` | Read any agent's full memory (for curation) |
314
- | `fix_agent_memory` | Correct a specific entry or sub-node in any agent's memory |
315
- | `append_agent_memory` | Add content to an existing entry in any agent's memory (for merging duplicates) |
316
- | `delete_agent_memory` | Delete a memory entry (prefer `fix_agent_memory(obsolete=true)` — deletion is permanent) |
317
- | `move_agent_memory` | Move a sub-node in any agent's memory to a different parent — updates all IDs and references |
318
- | `mark_audited` | Mark an agent as audited |
319
-
320
- ---
321
-
322
- ## Memory Directory
323
-
324
- hmem stores all memory files (`.hmem` SQLite databases) and its configuration (`hmem.config.json`) in a single directory. The location depends on how you install:
325
-
326
- | Install mode | Memory directory | Example |
327
- |---|---|---|
328
- | **System-wide** | `~/.hmem/` | `/home/alice/.hmem/` or `C:\Users\Alice\.hmem\` |
329
- | **Project-local** | Project root (cwd) | `/home/alice/my-project/` |
330
-
331
- The `hmem init` installer asks which mode you prefer and creates the directory automatically.
332
-
333
- ### Directory structure
195
+ You should see a response. If empty, that's fine — first run. If you get an error, check:
196
+ - Is `HMEM_PROJECT_DIR` an absolute path?
197
+ - Does the directory exist?
198
+ - Is the `node` path correct? (nvm users: use the absolute path)
334
199
 
200
+ The server logs its configuration on startup:
335
201
  ```
336
- ~/.hmem/ # System-wide memory directory
337
- memory.hmem # Default agent memory (when no HMEM_AGENT_ID is set)
338
- SIGURD.hmem # Named agent memory (HMEM_AGENT_ID=SIGURD)
339
- hmem.config.json # Configuration file
340
- audit_state.json # Curator state (optional)
202
+ [hmem:DEVELOPER] MCP Server running on stdio | Agent: DEVELOPER | DB: /home/you/.hmem/Agents/DEVELOPER/DEVELOPER.hmem (0 entries)
341
203
  ```
342
204
 
343
- The MCP configuration files are written to each tool's own config directory — not into `~/.hmem/`:
344
-
345
- | Tool | Global MCP config path |
346
- |---|---|
347
- | Claude Code | `~/.claude/.mcp.json` |
348
- | OpenCode | `~/.config/opencode/opencode.json` |
349
- | Cursor | `~/.cursor/mcp.json` |
350
- | Windsurf | `~/.codeium/windsurf/mcp_config.json` |
351
- | Cline / Roo Code | `.vscode/mcp.json` (project-only) |
352
-
353
205
  ---
354
206
 
355
- ## Environment Variables
207
+ ## Cross-Device Sync (hmem-sync)
356
208
 
357
- | Variable | Description | Default |
358
- |----------|-------------|---------|
359
- | `HMEM_PROJECT_DIR` | Root directory where `.hmem` files are stored | *(required)* |
360
- | `HMEM_AGENT_ID` | Agent identifier — used as filename and directory name | `""` → `memory.hmem` |
361
- | `HMEM_AGENT_ROLE` | Permission level: `worker` · `al` · `pl` · `ceo` | `worker` |
209
+ Sync your memories across all devices with zero-knowledge encryption.
362
210
 
363
- ---
364
-
365
- ## Configuration (hmem.config.json)
366
-
367
- Place an optional `hmem.config.json` in your `HMEM_PROJECT_DIR` to tune behavior. All keys are optional — missing keys fall back to defaults.
368
-
369
- ```json
370
- {
371
- "maxL1Chars": 120,
372
- "maxLnChars": 50000,
373
- "maxDepth": 5,
374
- "maxTitleChars": 50,
375
- "accessCountTopN": 5,
376
- "bulkReadV2": {
377
- "topNewestCount": 5,
378
- "topAccessCount": 3,
379
- "topObsoleteCount": 3
380
- },
381
- "prefixes": {
382
- "R": "Research"
383
- }
384
- }
211
+ ```bash
212
+ npm install -g hmem-sync
385
213
  ```
386
214
 
387
- ### Memory prefixes
215
+ ### First device
388
216
 
389
- The default prefixes cover most use cases:
390
-
391
- | Prefix | Category | When to use |
392
- |--------|----------|-------------|
393
- | `P` | Project | Project experiences, summaries |
394
- | `L` | Lesson | Lessons learned, best practices |
395
- | `E` | Error | Bugs, errors + their fix |
396
- | `D` | Decision | Architecture decisions with reasoning |
397
- | `T` | Task | Task notes, work progress |
398
- | `M` | Milestone | Key milestones, releases |
399
- | `S` | Skill | Skills, processes, how-to guides |
400
- | `N` | Navigator | Code pointers — where something lives in the codebase |
401
-
402
- To add your own, add entries to the `"prefixes"` key in `hmem.config.json`. Custom prefixes are **merged** with the defaults — you don't need to repeat the built-in ones.
403
-
404
- ### Favorites
405
-
406
- Any entry can be marked as a **favorite** — regardless of its prefix category. Favorites always appear with their L2 detail in bulk reads, marked with `[♥]`.
407
-
408
- ```
409
- write_memory(prefix="D", content="...", favorite=true) # set at creation
410
- update_memory(id="D0010", favorite=true) # set on existing entry
411
- update_memory(id="D0010", favorite=false) # clear the flag
217
+ ```bash
218
+ npx hmem-sync connect
412
219
  ```
413
220
 
414
- Use favorites for reference info you need to see every session — key decisions, API endpoints, frequently consulted patterns. Use sparingly: if everything is a favorite, nothing is.
415
-
416
- ### Pinned entries
221
+ The interactive wizard creates an account, generates encryption keys, and pushes your data.
417
222
 
418
- Pinned entries are "super-favorites" — they show **all** children titles in bulk reads, not just the latest one. While favorites show the newest child + `[+N more →]`, pinned entries give you the full table of contents at a glance.
223
+ ### Additional devices
419
224
 
420
- ```
421
- write_memory(prefix="S", content="...", pinned=true) # set at creation
422
- update_memory(id="S0005", pinned=true) # set on existing entry
225
+ ```bash
226
+ npx hmem-sync connect
423
227
  ```
424
228
 
425
- | Display | Normal | Favorite `[♥]` | Pinned `[P]` | `expand=true` |
426
- |---|---|---|---|---|
427
- | Children shown | Latest only | All titles | All titles | All with full content |
428
- | `[+N more →]` hint | Yes | No | No | No |
229
+ Same wizard — choose "existing account" and enter your credentials from the first device.
429
230
 
430
- Use pinned for entries with many structured sub-entries (handbooks, reference lists, project summaries) where you always want to see the full outline.
231
+ ### Enable auto-sync
431
232
 
432
- ### Hashtags
233
+ Add `HMEM_SYNC_PASSPHRASE` to your MCP config:
433
234
 
434
- Tag entries for cross-cutting search across prefix categories:
435
-
436
- ```
437
- write_memory(prefix="L", content="...", tags=["#security", "#hmem"])
438
- read_memory(tag="#security") # filter bulk reads by tag
439
- read_memory(id="L0042") # shows related entries (2+ shared tags)
440
- tag_bulk(filter={prefix: "E"}, add_tags=["#bugfix"]) # batch-tag all E-entries
441
- tag_rename(old_tag="#old", new_tag="#new") # rename everywhere
235
+ ```json
236
+ {
237
+ "env": {
238
+ "HMEM_PROJECT_DIR": "/home/you/.hmem",
239
+ "HMEM_AGENT_ID": "DEVELOPER",
240
+ "HMEM_SYNC_PASSPHRASE": "your-passphrase"
241
+ }
242
+ }
442
243
  ```
443
244
 
444
- Tags are lowercase, must start with `#`, max 10 per entry. They work on root entries and sub-nodes.
445
-
446
- ### Stale Detection and Memory Health
447
-
448
- ```
449
- read_memory(stale_days=30) # entries not accessed in 30 days, sorted oldest-first
450
- memory_stats() # count by prefix, stale count, favorites, most-accessed
451
- memory_health() # audit: broken links, orphans, stale favorites
452
- find_related(id="P0029") # FTS5 keyword similarity — find thematically related entries
453
- ```
245
+ With this set, every `read_memory` automatically pulls and every `write_memory` automatically pushes. A 30-second cooldown prevents spam.
454
246
 
455
- ### Access-count auto-promotion (`accessCountTopN`)
247
+ ### Multi-server redundancy
456
248
 
457
- The top-N most-accessed entries are automatically promoted to L2 depth in bulk reads, marked with `[★]`. This creates "organic favorites" — entries that proved important in practice rise to the surface automatically.
249
+ In `hmem.config.json`, configure multiple servers:
458
250
 
459
251
  ```json
460
- { "accessCountTopN": 5 }
461
- ```
462
-
463
- Set to `0` to disable. Default: `5`.
464
-
465
- **Time-weighted scoring:** Raw access counts would systematically favor older entries (more time to accumulate accesses). hmem uses a logarithmic age decay instead:
466
-
467
- ```
468
- score = access_count / log2(age_in_days + 2)
252
+ {
253
+ "sync": [
254
+ { "name": "primary", "serverUrl": "https://server1/hmem-sync", "userId": "me", "salt": "...", "token": "..." },
255
+ { "name": "backup", "serverUrl": "https://server2/hmem-sync", "userId": "me", "salt": "...", "token": "..." }
256
+ ]
257
+ }
469
258
  ```
470
259
 
471
- This means a 1-day-old entry with 5 accesses (score 3.16) outranks a 1-year-old entry with 6 accesses (score 0.70) — but a genuinely important old entry with 50 accesses (score 5.87) still stays at the top. The decay is gentle enough that consistently useful entries are never buried.
-
- | Mechanism | When useful |
- |---|---|
- | **favorite flag** | Entries you know are important from day 1 — even with zero access history |
- | **accessCountTopN** | Entries that proved important over time — emerges from actual usage |
+ Push/pull goes to all servers. Use during migration or for redundant backup.

- ### Bulk reads (V2 algorithm)
+ ### Announcements

- `read_memory()` groups entries by prefix category. Each category expands a limited number of entries (newest + most-accessed + favorites) with their L2 children; the rest show only the L1 title. Two modes:
+ Broadcast urgent messages to all synced AI agents across all devices:

- - **`discover`** (default on first read): newest-heavy — good for getting an overview after session start.
- - **`essentials`** (auto-selected after context compression): importance-heavy — more favorites + most-accessed, fewer newest.
-
- A **session cache** tracks which entries were already shown. Subsequent bulk reads suppress seen entries with Fibonacci decay `[5,3,2,1,0]`, keeping output fresh without repetition. Use `reset_memory_cache` to clear the cache.
+ ```bash
+ npx hmem-sync announce --message "Server URL changing — update your config!"
+ ```

- To see all children of an entry, use `read_memory(id="P0005")`. For a deep dive with full content, use `read_memory(id="P0005", expand=true)`. For a compact table of contents, use `read_memory(titles_only=true)`.
+ Every agent on every device sees the announcement on its next sync pull. Use for config changes, server migrations, or coordination across your fleet of AI instances.

- ### Effective-date sorting
+ ---

- Entries are sorted by `effective_date` — the most recent timestamp across the entry and all its nodes. This means a project entry (`P0005`) that was first written months ago but had a new session note appended today will appear near the top of the listing, alongside truly recent entries.
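The effective-date rule described above amounts to taking the maximum timestamp over an entry and all its nodes. A minimal sketch (the field names `date` and `nodes` are assumptions for illustration, not hmem's actual schema):

```python
def effective_date(entry: dict) -> str:
    # ISO-8601 date strings compare correctly as plain strings, so max() works.
    dates = [entry["date"]] + [node["date"] for node in entry.get("nodes", [])]
    return max(dates)

entries = [
    {"id": "P0005", "date": "2024-03-01", "nodes": [{"date": "2025-06-10"}]},
    {"id": "P0009", "date": "2025-05-01", "nodes": []},
]
# Sort newest-first by effective date: P0005 surfaces despite its old root date,
# because a node was appended to it more recently.
entries.sort(key=effective_date, reverse=True)
```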
+ ## Auto-Logging (O-prefix)

- ### Character limits
+ With Claude Code's Stop hook, every conversation exchange (your message + agent response) is automatically recorded in O-prefix entries. Zero token cost — runs in the background.

- Two ways to set per-level character limits:
+ ### Set up the hook

- **Option A — linear interpolation** (recommended): set only the endpoints; all levels in between are computed automatically.
+ Add to `~/.claude/settings.json`:

  ```json
- { "maxL1Chars": 120, "maxLnChars": 50000 }
+ {
+   "hooks": {
+     "Stop": [
+       {
+         "hooks": [
+           {
+             "type": "command",
+             "command": "HMEM_PROJECT_DIR=/home/you/.hmem HMEM_AGENT_ID=DEVELOPER node /path/to/hmem-mcp/dist/cli.js log-exchange",
+             "timeout": 10
+           }
+         ]
+       }
+     ]
+   }
+ }
  ```

- With 5 depth levels this yields: `[120, 12780, 25440, 38120, 50000]`
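Plain linear interpolation between the two endpoints can be sketched as below; note that hmem's actual rounding produces slightly different interior values than this naive version, so treat it as an approximation of the idea rather than the implementation:

```python
def level_limits(max_l1: int, max_ln: int, depth: int = 5) -> list[int]:
    # Interpolate per-level character limits linearly from level 1 to level `depth`.
    step = (max_ln - max_l1) / (depth - 1)
    return [round(max_l1 + i * step) for i in range(depth)]

limits = level_limits(120, 50000)
# Endpoints are exact; interior levels are evenly spaced between them.
assert limits[0] == 120 and limits[-1] == 50000
```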
-
- **Option B — explicit per-level array**: set each level individually. If fewer entries than `maxDepth`, the last value is repeated.
+ O-entries are hidden from bulk reads (no noise) but searchable and linked to your active project.

- ```json
- { "maxCharsPerLevel": [120, 2500, 10000, 25000, 50000] }
- ```
+ ---

- ### Bulk read tuning (`bulkReadV2`)
+ ## Configuration

- Controls how many entries are expanded in each category during a bulk read:
+ `hmem.config.json` in your `HMEM_PROJECT_DIR`:

  ```json
- "bulkReadV2": {
-   "topNewestCount": 5,    // expand the 5 newest entries per prefix
-   "topAccessCount": 3,    // expand the 3 most-accessed per prefix
-   "topObsoleteCount": 3   // show up to 3 obsolete entries (by access count)
+ {
+   "memory": {
+     "maxCharsPerLevel": [200, 2500, 10000, 25000, 50000],
+     "maxDepth": 5,
+     "maxTitleChars": 50,
+     "prefixes": { "X": "Custom" }
+   },
+   "sync": {
+     "serverUrl": "https://your-server/hmem-sync",
+     "userId": "yourname",
+     "salt": "...",
+     "token": "..."
+   }
  }
  ```

- Favorites are always expanded regardless of these limits. Entries expanded by one slot (e.g. newest) don't count against another (e.g. access).
+ All keys are optional. Missing keys use defaults.

  ---

- ## TUI Viewer (hmem-reader.py)
-
- A terminal-based interactive viewer for browsing `.hmem` memory files. Built with [Textual](https://textual.textualize.io/).
+ ## Updating

  ```bash
- pip install textual                        # one-time dependency
- python3 hmem-reader.py                     # agent selection screen (scans Agents/ directory)
- python3 hmem-reader.py THOR                # open a specific agent's memory
- python3 hmem-reader.py ~/path/to/file.hmem # open any .hmem file directly
+ # Always global — NOT inside a project directory
+ npm update -g hmem-mcp
+ npm update -g hmem-sync
  ```

- **Keys:**
- | Key | Action |
- |-----|--------|
- | `r` | Toggle V2 bulk-read view (what agents see on `read_memory()`) |
- | `e` / `c` | Expand / collapse all nodes |
- | `q` | Quit |
- | `Escape` | Back to agent list |
-
- The V2 view mirrors the MCP server's bulk-read algorithm — including time-weighted access scoring, per-prefix selection, active-prefix filtering, and all markers (`[♥]`, `[!]`, `[*]`, `[s]`, `[-]`) — so you can see exactly what an agent sees at session start.
-
- ---
-
- ## Origin
-
- hmem was developed out of necessity: working on a large AI project across multiple machines meant every new Claude Code session started blind. Agents redid work, lost decisions, and contradicted each other.
-
- The solution was a memory protocol that works the way humans remember — broad strokes first, details on demand.
+ Skills are automatically updated via the postinstall hook. No manual copy needed.

  ---