@psiclawops/hypermem 0.8.1 → 0.8.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +101 -0
- package/INSTALL.md +887 -0
- package/README.md +89 -6
- package/bin/hypermem-status.mjs +10 -0
- package/dist/version.d.ts +4 -4
- package/dist/version.js +4 -4
- package/docs/API_STABILITY.md +33 -0
- package/docs/KNOWN_LIMITATIONS.md +35 -0
- package/docs/MEMORY_MD_AUTHORING.md +243 -0
- package/docs/MIGRATION.md +56 -0
- package/docs/MIGRATION_GUIDE.md +1083 -0
- package/docs/PHASE1-VALIDATION.md +132 -0
- package/docs/RELEASE_0.8.0_VALIDATION.md +70 -0
- package/docs/RELEASE_PROCESS.md +10 -0
- package/docs/ROADMAP.md +39 -0
- package/docs/SLASH_COMMANDS.md +93 -0
- package/docs/TUNING.md +866 -0
- package/install.sh +516 -0
- package/memory-plugin/dist/index.d.ts +24 -0
- package/memory-plugin/dist/index.d.ts.map +1 -0
- package/memory-plugin/dist/index.js +300 -0
- package/memory-plugin/dist/index.js.map +1 -0
- package/memory-plugin/openclaw.plugin.json +13 -0
- package/memory-plugin/package.json +64 -0
- package/package.json +15 -2
- package/plugin/dist/index.d.ts +153 -0
- package/plugin/dist/index.d.ts.map +1 -0
- package/plugin/dist/index.js +3127 -0
- package/plugin/dist/index.js.map +1 -0
- package/plugin/openclaw.plugin.json +13 -0
- package/plugin/package.json +65 -0
- package/scripts/install-runtime.mjs +81 -0
@@ -0,0 +1,1083 @@

# hypermem Migration Guide

_One guide for all migration paths. Find your current system in the table below and jump to that section._

---

## Quick lookup

| Your current setup | Jump to |
|---|---|
| No memory system (starting fresh) | [Fresh install](#fresh-install) |
| OpenClaw built-in memory (`memory.db`) | [From OpenClaw memory.db](#from-openclaw-memorydb) |
| OpenClaw QMD backend | [From QMD](#from-qmd) |
| ClawText context engine | [From ClawText](#from-clawtext) |
| Cognee (ECL pipeline) | [From Cognee](#from-cognee) |
| Mem0 (cloud or OSS) | [From Mem0](#from-mem0) |
| Zep (cloud or self-hosted) | [From Zep](#from-zep) |
| Honcho OpenClaw plugin | [From Honcho](#from-honcho) |
| OpenClaw memory-lancedb plugin | [From memory-lancedb](#from-memory-lancedb) |
| Markdown MEMORY.md + daily files only | [From MEMORY.md files](#from-memorymd-files) |
| Something else / custom engine | [From a custom system](#from-a-custom-system) |

---

## What hypermem stores

Understanding the data model sets expectations for what migrates and what doesn't.

**Migrates cleanly from most systems:**
- Conversation history (messages, roles, timestamps)
- Facts and preferences extracted from past conversations
- Structured knowledge entries

**Does not migrate (by design):**
- The hot cache — ephemeral, SQLite-backed in current releases, and rebuilt automatically on first use
- Embeddings — hypermem regenerates these from imported text on the next indexer run
- Tool call payloads from sessions where only text content was stored (tool results preserved as prose where available)
- Graph structure from graph databases (edges, weights, triplets) — these are flattened to facts on import

After any migration, the background indexer picks up imported content and builds vector search, topic maps, and knowledge synthesis automatically. A gateway restart is sufficient to trigger it.

---

## Universal pre-flight checklist

Run this before any migration path.

**1. Confirm hypermem is installed and has initialized:**
```bash
openclaw plugins list | grep hypermem
ls ~/.openclaw/hypermem/library.db   # must exist — send one message first if not
```

If `library.db` doesn't exist yet, start the gateway with hypermem enabled, send one message to any agent, then come back. See [INSTALL.md](../INSTALL.md) for the full installation steps (git clone, build, wire both plugins), then restart and continue.

**2. Back up your existing data:**
```bash
# OpenClaw built-in memory
cp ~/.openclaw/memory.db ~/.openclaw/memory.db.pre-hypermem 2>/dev/null || true

# ClawText
cp ~/.openclaw/workspace/.clawtext/session-intelligence.db \
   ~/.openclaw/workspace/.clawtext/session-intelligence.db.pre-hypermem 2>/dev/null || true

# Cognee
cp -r ~/.cognee ~/.cognee.pre-hypermem 2>/dev/null || true
```

**3. Every migration script defaults to dry-run.** Nothing is written until you add `--apply`. Read dry-run output before proceeding.

**4. You do not need to stop the gateway** for import-only migrations. hypermem uses WAL mode — live sessions and imports coexist safely.
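
The WAL claim is easy to check yourself. A minimal sanity check, assuming the `sqlite3` CLI is on your PATH:

```shell
# A store in WAL mode reports "wal" here; that journal mode is what lets
# live sessions and imports write concurrently without blocking each other.
sqlite3 ~/.openclaw/hypermem/library.db 'PRAGMA journal_mode;'
```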

---

## Fresh install

Nothing to migrate. Follow the [INSTALL.md](../INSTALL.md) guide (clone, build, wire plugins, restart). hypermem begins building context from your first conversation. The background indexer starts automatically.

---

## From OpenClaw memory.db

OpenClaw's built-in memory system stores facts, preferences, and context entries in `~/.openclaw/memory.db`. This script imports those entries as facts into hypermem's knowledge store.

**What maps to what:**

| memory.db | hypermem |
|---|---|
| Facts | `facts` table in `library.db` |
| Preferences | `facts` table with `domain: preference` |
| Context entries | `facts` table with `domain: general` |

**Step 1: Dry run**
```bash
node scripts/migrate-memory-db.mjs --agent main
```

Review output — it will show fact counts by type.

**Step 2: Import**
```bash
node scripts/migrate-memory-db.mjs --agent main --apply
```

**Step 3: Restart**
```bash
openclaw gateway restart
```

**Options:**
```
--agent <id>           Agent to import facts for (default: main)
--memory-db <path>     Path to memory.db (default: ~/.openclaw/memory.db)
--hypermem-dir <path>  hypermem data directory (default: ~/.openclaw/hypermem)
--limit <n>            Import only first N facts (useful for testing)
--apply                Actually write data (default is dry-run)
```

> **Note:** The built-in memory.db is not agent-scoped. All entries go to the agent you specify with `--agent`. If multiple agents share the same memory.db, run the script once per agent.
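
The per-agent runs are easy to script. A sketch, with placeholder agent ids rather than real ones from your install; it prints the commands first so you can review them before anything executes:

```shell
# Placeholder agent ids -- replace with your own.
for agent in main research coder; do
  echo node scripts/migrate-memory-db.mjs --agent "$agent" --apply
done
# Review the printed commands, then re-run without `echo` (or pipe to sh).
```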

---

## From QMD

QMD is the OpenClaw local-first memory sidecar — it runs behind `plugins.slots.memory = "memory-core"` with `memory.backend = "qmd"`. hypermem replaces the entire context engine, so this is a two-slot change: hypermem's memory plugin takes over `slots.memory` from memory-core/QMD, and hypercompositor takes over `slots.contextEngine`.

**Key difference:** QMD is additive — it augments what the agent receives. hypermem is authoritative — it owns the entire context assembly pipeline. The scopes are different.

**What maps to what:**

| QMD | hypermem |
|---|---|
| Workspace memory files (`MEMORY.md`, `memory/*.md`) | Still used — hypermem reads these directly during bootstrap |
| Per-agent collections | Per-agent message stores + `library.db` facts |
| BM25 + vector hybrid search | FTS5 + nomic-embed-text hybrid search |
| Extra indexed paths (`memory.qmd.paths`) | Not yet supported — see capability gaps below |
| Session transcript indexing | Covered natively — all message history is indexed |
| Reranking (QMD cross-encoder) | Not implemented — hypermem uses MMR diversification |

**Pre-flight:**

Check what QMD has indexed, and note any extra paths:
```bash
ls ~/.openclaw/agents/<agentId>/qmd/
```

If you used `memory.qmd.paths` to index extra directories, those paths are not picked up automatically. Copy key content into `MEMORY.md` or daily files before switching, or use the manual import script below.

If you used QMD session indexing (`memory.qmd.sessions.enabled: true`), hypermem handles sessions natively going forward. Historical transcripts are not auto-imported — add summaries of important historical sessions to a daily memory file if needed.

**Switch:**
```bash
# Disable memory-core + QMD, replace with hypermem memory plugin
openclaw config set plugins.slots.memory hypermem

# Enable hypercompositor as the context engine
openclaw config set plugins.slots.contextEngine hypercompositor

# Remove QMD backend config if set
openclaw config unset agents.defaults.memory

openclaw gateway restart
```

QMD collections at `~/.openclaw/agents/<agentId>/qmd/` are untouched. Delete them manually when satisfied the migration is complete.

**After restart:** hypermem bootstraps on first use and reads your workspace memory files. The background indexer builds facts and embeddings from `MEMORY.md` and daily files — no separate import step for file-based memory.

**If you had content in extra QMD paths**, import it manually:
```js
// import-qmd-extras.mjs
import { HyperMem } from '@psiclawops/hypermem';
import { readFileSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';
import { glob } from 'node:fs/promises'; // requires Node 22+ (fs.promises.glob)

const agentId = process.argv[2] ?? 'main';
const extraDir = process.argv[3];
const dryRun = !process.argv.includes('--apply');

if (!extraDir) {
  console.error('Usage: node import-qmd-extras.mjs <agentId> <directory> [--apply]');
  process.exit(1);
}

const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });
let imported = 0;
const files = [];
for await (const f of glob('**/*.md', { cwd: extraDir })) files.push(f);

for (const file of files) {
  const content = readFileSync(join(extraDir, file), 'utf-8');
  const chunks = content.split(/\n{2,}/).filter(c => c.trim().length > 60);
  for (const chunk of chunks) {
    if (dryRun) {
      console.log(`[dry-run] ${file}: ${chunk.slice(0, 80).trim()}...`);
    } else {
      await hm.addFact(agentId, chunk.trim(), {
        domain: 'general',
        source: `qmd-extra-migration:${file}`,
        confidence: 0.8,
      });
    }
    imported++;
  }
}

console.log(`\n${dryRun ? '[dry-run] ' : ''}${imported} chunks from ${files.length} files`);
if (dryRun) console.log('Run with --apply to write data.');
```

**Capability gaps vs QMD:**

| QMD feature | Status in hypermem |
|---|---|
| Reranking (cross-encoder) | Not implemented. Tracked for future release. |
| Extra path indexing | Not implemented. Use manual fact import as workaround. |
| Session transcript search | Covered natively. |
| BM25 hybrid search | Covered — FTS5 + vector hybrid with configurable weights. |
| Automatic fallback to builtin | Not applicable — hypermem does not fall back. |
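
On the hybrid-search row: fusion of this kind is typically a weighted sum of a normalized lexical score and a vector-similarity score. A minimal sketch; the function name and default weights are illustrative, not hypermem's actual internals:

```javascript
// Illustrative weighted fusion of lexical (FTS) and vector scores.
// Both inputs are assumed normalized to [0, 1] before fusion.
function hybridScore(ftsScore, vecScore, { lexicalWeight = 0.4, vectorWeight = 0.6 } = {}) {
  return lexicalWeight * ftsScore + vectorWeight * vecScore;
}

// A lexically strong hit vs a semantically strong hit:
const ranked = [
  { id: 'a', fts: 0.9, vec: 0.2 },
  { id: 'b', fts: 0.3, vec: 0.95 },
].sort((x, y) => hybridScore(y.fts, y.vec) - hybridScore(x.fts, x.vec));

console.log(ranked[0].id); // prints: b (vector similarity wins at these weights)
```

Raising `lexicalWeight` would flip the ordering, which is the knob such systems expose for tuning exact-term vs semantic recall.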

---

## From ClawText

ClawText stores full conversation history in `session-intelligence.db`. This script imports all conversations with automatic agent identification from identity anchors.

**What maps to what:**

| ClawText | hypermem |
|---|---|
| Conversation history | Per-agent `messages.db` |
| Identity anchors | Used to route messages to correct agent DB |
| Optimization `.jsonl` logs | Not imported (operational data, not conversation history) |

**Step 1: Dry run**
```bash
node scripts/migrate-clawtext.mjs
```

**Step 2: Import**
```bash
node scripts/migrate-clawtext.mjs --apply
```

**Step 3: Restart**
```bash
openclaw gateway restart
```

**Options:**
```
--apply                Actually write data (default is dry-run)
--limit <n>            Import only first N conversations
--clawtext-db <path>   Path to session-intelligence.db
                       (default: ~/.openclaw/workspace/.clawtext/session-intelligence.db)
--hypermem-dir <path>  hypermem data directory (default: ~/.openclaw/hypermem)
```

Conversations without a detectable agent identity are routed to `main`.

---

## From Cognee

Cognee is a Python-based ECL (Extract, Cognify, Load) memory engine that stores knowledge in a graph database + vector store. hypermem is a Node.js context engine native to OpenClaw. This is a data migration, not a drop-in swap — the architectures are parallel approaches to the same problem.

**What maps to what:**

| Cognee | hypermem |
|---|---|
| Knowledge graph nodes (entities) | Facts (`facts` table in `library.db`) |
| Graph relationships / triplets | Knowledge entries (`knowledge` table) |
| Vector embeddings | Regenerated automatically by the hypermem indexer |
| Session memory | Per-agent message history (`messages.db` per agent) |
| Permanent memory | Facts + knowledge in `library.db` |
| User/tenant scoping | Agent scoping (`agent_id` on all records) |

**What does not migrate:** raw graph structure (edges, weights), Cognee's memify feedback loop state, embeddings (regenerated automatically after import).

**Step 1: Export your Cognee data**

Cognee stores data in whichever backend you configured. Export to a flat JSON file.

_Default SQLite backend:_
```python
# export_cognee.py
import asyncio
import cognee
import json

async def main():
    results = await cognee.search("*", query_type="CHUNKS")
    with open("cognee_export.json", "w") as f:
        json.dump(
            [r.__dict__ if hasattr(r, '__dict__') else str(r) for r in results],
            f, indent=2, default=str
        )
    print(f"Exported {len(results)} entries")

asyncio.run(main())
```

For graph backends (Neo4j, Memgraph), export via their native query interfaces and produce a flat JSON list of `{ text, source, type }` objects.
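
One way to do that flattening is a pure helper like the sketch below. The sample rows stand in for the output of a native graph query (for Neo4j, something like `MATCH (a)-[r]->(b) RETURN a.name, type(r), b.name`); the helper name and text format are assumptions for illustration, not part of hypermem:

```python
import json

def triplet_to_fact(src: str, rel: str, dst: str) -> dict:
    """Flatten one graph triplet into the {text, source, type} shape
    the import script below expects."""
    # Relationship types like WORKS_AT read better as lowercase prose.
    return {"text": f"{src} {rel.replace('_', ' ').lower()} {dst}",
            "source": "cognee-graph",
            "type": "general"}

# Sample rows standing in for your graph query results:
rows = [("Alice", "WORKS_AT", "Acme"), ("Acme", "LOCATED_IN", "Berlin")]
entries = [triplet_to_fact(*row) for row in rows]
print(json.dumps(entries, indent=2))
```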

**Step 2: Dry run**

Save this as `import-from-cognee.mjs` in the hypermem repo root:

```js
// import-from-cognee.mjs
import { HyperMem } from '@psiclawops/hypermem';
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const agentId = process.argv[2] ?? 'main';
const exportPath = process.argv[3] ?? 'cognee_export.json';
const dryRun = !process.argv.includes('--apply');

const entries = JSON.parse(readFileSync(exportPath, 'utf-8'));
const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });

let imported = 0;
let skipped = 0;

for (const entry of entries) {
  // Adapt this to match your export format
  const text = entry.text ?? entry.content ?? entry.payload ?? JSON.stringify(entry);
  if (!text || text.length < 40) { skipped++; continue; }

  if (dryRun) {
    console.log(`[dry-run] would import: ${text.slice(0, 80)}...`);
    imported++;
    continue;
  }

  await hm.addFact(agentId, text, {
    domain: entry.type ?? 'general',
    source: 'cognee-migration',
    confidence: 0.85,
  });
  imported++;
}

console.log(`\n${dryRun ? '[dry-run] ' : ''}Imported: ${imported}, Skipped: ${skipped}`);
if (dryRun) console.log('Run with --apply to write data.');
```

```bash
node import-from-cognee.mjs main cognee_export.json
```

**Step 3: Import**
```bash
node import-from-cognee.mjs main cognee_export.json --apply
```

**Step 4: Disable Cognee and switch to hypermem**

Stop the Cognee process / MCP server if running, then:
```bash
openclaw config set plugins.slots.contextEngine hypercompositor
openclaw config set plugins.slots.memory hypermem
openclaw gateway restart
```

Cognee and hypermem do not conflict at runtime (Cognee is a separate process), but there is no reason to run both.

**Verify imported facts:**
```bash
node -e "
const { DatabaseSync } = require('node:sqlite');
const os = require('node:os'), path = require('node:path');
const db = new DatabaseSync(path.join(os.homedir(), '.openclaw/hypermem/library.db'), { readOnly: true });
const r = db.prepare(\"SELECT agent_id, COUNT(*) as cnt FROM facts WHERE source='cognee-migration' GROUP BY agent_id\").all();
r.forEach(x => console.log(x.agent_id + ': ' + x.cnt + ' imported facts'));
db.close();
"
```

---

## From Mem0

Mem0 is a managed memory service (with an OSS self-hosted variant) that stores distilled facts per user or agent. It has a clean export API — this is one of the easier migrations.

**What maps to what:**

| Mem0 | hypermem |
|---|---|
| Memory entries (`memory` field) | Facts in `library.db` |
| `user_id` scoping | `agent_id` scoping |
| `agent_id` / `app_id` filters | `agent_id` in hypermem |
| `metadata` | Stored as fact metadata / domain |
| `created_at` | Preserved as fact timestamp |

**What does not migrate:** Mem0's internal vector embeddings (regenerated automatically), category labels beyond what fits in hypermem's domain field.

**Step 1: Export from Mem0**

_Cloud (managed API):_
```python
# export_mem0.py
from mem0 import MemoryClient
import json, time

client = MemoryClient(api_key="your_mem0_api_key")

# Option A: export job (structured)
schema = {
    "type": "object",
    "properties": {
        "memories": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "content": {"type": "string"},
                    "metadata": {"type": "object"},
                    "created_at": {"type": "string"}
                }
            }
        }
    }
}
response = client.create_memory_export(schema=schema, filters={})
export_id = response["id"]
time.sleep(5)  # wait for the export job; poll longer for large stores
export_data = client.get_memory_export(memory_export_id=export_id)
with open("mem0_export.json", "w") as f:
    json.dump(export_data, f, indent=2)
print(f"Exported {len(export_data['memories'])} memories")
```

_Or use `get_all()` for a raw list (OSS or cloud):_
```python
from mem0 import MemoryClient
import json

client = MemoryClient(api_key="your_mem0_api_key")
result = client.get_all(filters={"user_id": "your_user_id"}, page_size=500)
with open("mem0_export.json", "w") as f:
    # get_all returns {count, results: [{memory, id, metadata, ...}]}
    json.dump(result, f, indent=2)
print(f"Exported {result['count']} memories")
```

**Step 2: Dry run**
```bash
node scripts/migrate-mem0.mjs main mem0_export.json
```

Save this as `scripts/migrate-mem0.mjs`:
```js
// scripts/migrate-mem0.mjs
import { HyperMem } from '@psiclawops/hypermem';
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const agentId = process.argv[2] ?? 'main';
const exportPath = process.argv[3] ?? 'mem0_export.json';
const dryRun = !process.argv.includes('--apply');

const raw = JSON.parse(readFileSync(exportPath, 'utf-8'));
// handle both export job format {memories: [...]} and get_all format {results: [...]}
const entries = raw.memories ?? raw.results ?? raw;

const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });
let imported = 0, skipped = 0;

for (const entry of entries) {
  // export job uses 'content', get_all uses 'memory'
  const text = entry.content ?? entry.memory ?? '';
  if (!text || text.length < 20) { skipped++; continue; }

  const domain = entry.metadata?.category ?? entry.metadata?.type ?? 'general';

  if (dryRun) {
    console.log(`[dry-run] ${domain}: ${text.slice(0, 80)}`);
    imported++;
    continue;
  }

  await hm.addFact(agentId, text, {
    domain,
    source: 'mem0-migration',
    confidence: 0.9,
    createdAt: entry.created_at,
  });
  imported++;
}

console.log(`\n${dryRun ? '[dry-run] ' : ''}Imported: ${imported}, Skipped: ${skipped}`);
if (dryRun) console.log('Run with --apply to write data.');
```

**Step 3: Import**
```bash
node scripts/migrate-mem0.mjs main mem0_export.json --apply
```

**Step 4: Restart**
```bash
openclaw gateway restart
```

**Verify:**
```bash
node -e "
const { DatabaseSync } = require('node:sqlite');
const os = require('node:os'), path = require('node:path');
const db = new DatabaseSync(path.join(os.homedir(), '.openclaw/hypermem/library.db'), { readOnly: true });
const r = db.prepare(\"SELECT COUNT(*) as cnt FROM facts WHERE source='mem0-migration'\").get();
console.log('Imported from Mem0:', r.cnt, 'facts');
db.close();
"
```

---

## From Zep

Zep stores conversation history per session and builds a per-user knowledge graph on top. It runs either self-hosted or as a managed cloud service. The migration extracts session messages and any queryable facts.

**What maps to what:**

| Zep | hypermem |
|---|---|
| Session messages (`role`, `role_type`, `content`) | Per-agent `messages.db` |
| User-level knowledge graph facts | Facts in `library.db` |
| Group data (shared org context) | Facts in `library.db` with `domain: group` |
| `session_id` | hypermem session key |
| `user_id` | `agent_id` |

**What does not migrate:** Zep's internal graph edges/weights, fact ratings, ingestion-derived entity relationships (flattened to text facts on import).

**Step 1: Export from Zep**

```python
# export_zep.py
from zep_cloud.client import Zep
import json

client = Zep(api_key="your_zep_api_key")
# For self-hosted: client = Zep(api_key="unused", base_url="http://localhost:8000")
# Note: api_key is required by the Pydantic validator even for self-hosted; any non-empty string works.

export = {"sessions": [], "facts": []}

# Export all users and their sessions
users = client.user.list()  # paginate if needed
for user in users:
    sessions = client.user.get_sessions(user.user_id)
    for session in sessions:
        messages = client.memory.get_session_messages(session.session_id)
        export["sessions"].append({
            "session_id": session.session_id,
            "user_id": user.user_id,
            "messages": [
                {"role": m.role_type, "content": m.content, "created_at": str(m.created_at)}
                for m in (messages.messages or [])
            ]
        })

    # Export graph facts for this user
    try:
        graph_results = client.graph.search(query="", user_id=user.user_id, limit=500)
        for edge in (graph_results.edges or []):
            export["facts"].append({
                "user_id": user.user_id,
                "text": edge.fact,
                "created_at": str(edge.created_at) if hasattr(edge, 'created_at') else None
            })
    except Exception:
        pass  # graph search requires at least one prior message

with open("zep_export.json", "w") as f:
    json.dump(export, f, indent=2)
print(f"Exported {len(export['sessions'])} sessions, {len(export['facts'])} facts")
```

**Step 2: Import**

Save as `scripts/migrate-zep.mjs`:
```js
// scripts/migrate-zep.mjs
import { HyperMem } from '@psiclawops/hypermem';
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const agentId = process.argv[2] ?? 'main';
const exportPath = process.argv[3] ?? 'zep_export.json';
const dryRun = !process.argv.includes('--apply');

const { sessions = [], facts = [] } = JSON.parse(readFileSync(exportPath, 'utf-8'));
const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });

let msgCount = 0, factCount = 0;

// Import session messages
for (const session of sessions) {
  const sessionKey = `zep-migration:${session.session_id}`;
  for (const msg of session.messages ?? []) {
    if (!msg.content) continue;
    if (dryRun) { console.log(`[dry-run] msg [${msg.role}]: ${msg.content.slice(0, 60)}`); msgCount++; continue; }
    if (msg.role === 'user' || msg.role === 'human') {
      await hm.recordUserMessage(agentId, sessionKey, msg.content);
    } else {
      await hm.recordAssistantMessage(agentId, sessionKey, {
        role: 'assistant', textContent: msg.content, toolCalls: [],
        createdAt: msg.created_at ?? new Date().toISOString(),
      });
    }
    msgCount++;
  }
}

// Import knowledge graph facts
for (const fact of facts) {
  if (!fact.text || fact.text.length < 20) continue;
  if (dryRun) { console.log(`[dry-run] fact: ${fact.text.slice(0, 80)}`); factCount++; continue; }
  await hm.addFact(agentId, fact.text, {
    domain: 'general',
    source: 'zep-migration',
    confidence: 0.85,
    createdAt: fact.created_at,
  });
  factCount++;
}

console.log(`\n${dryRun ? '[dry-run] ' : ''}Messages: ${msgCount}, Facts: ${factCount}`);
if (dryRun) console.log('Run with --apply to write data.');
```

```bash
# Dry run
node scripts/migrate-zep.mjs main zep_export.json

# Apply
node scripts/migrate-zep.mjs main zep_export.json --apply

openclaw gateway restart
```

> **Self-hosted Zep:** if you are running the open-source Zep server, you can also export directly from the underlying Postgres database. The `zep.messages` table has `session_id`, `role`, `content`, `created_at` — the import script above accepts the same JSON shape either way.
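
If you export flat rows (one row per message) rather than pre-grouped JSON, a small regrouping step gets you to the shape `migrate-zep.mjs` reads. A sketch; the column names mirror the table described above, and the sample rows are made up:

```javascript
// Regroup flat Postgres rows (one per message) into the
// { sessions: [...], facts: [] } export shape.
function rowsToExport(rows) {
  const bySession = new Map();
  for (const { session_id, user_id, role, content, created_at } of rows) {
    if (!bySession.has(session_id)) {
      bySession.set(session_id, { session_id, user_id, messages: [] });
    }
    bySession.get(session_id).messages.push({ role, content, created_at });
  }
  return { sessions: [...bySession.values()], facts: [] };
}

// Made-up sample rows:
const sample = [
  { session_id: 's1', user_id: 'u1', role: 'user', content: 'hi', created_at: '2024-01-01T00:00:00Z' },
  { session_id: 's1', user_id: 'u1', role: 'assistant', content: 'hello', created_at: '2024-01-01T00:00:05Z' },
];
console.log(rowsToExport(sample).sessions[0].messages.length); // prints: 2
```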

---

## From Honcho

Honcho is an OpenClaw plugin (`@honcho-ai/openclaw-honcho`) that persists conversations to the Honcho service (hosted or self-hosted) and builds user and agent models over time. Because it runs alongside OpenClaw as a plugin, migrating to hypermem is mostly a slot swap — the data that matters most is already in your workspace files, and Honcho's conversation history lives in the Honcho service.

**What maps to what:**

| Honcho | hypermem |
|---|---|
| Workspace memory files (migrated on setup) | Still used — hypermem reads these directly |
| Honcho session messages | Per-agent `messages.db` (going forward) |
| Honcho user model / conclusions | Facts in `library.db` |
| `honcho_context` / `honcho_ask` tools | `memory_search` + hypermem context assembly |

**What does not migrate automatically:** Honcho's cross-session conclusions and user model live in the Honcho service — exporting them requires the API step below.

**Step 1: Export conclusions from Honcho**

```python
# export_honcho.py
import requests, json, os

BASE_URL = os.getenv("HONCHO_BASE_URL", "https://api.honcho.dev")
API_KEY = os.getenv("HONCHO_API_KEY", "")
WORKSPACE = os.getenv("HONCHO_WORKSPACE", "openclaw")

headers = {"Authorization": f"Bearer {API_KEY}"} if API_KEY else {}

export = {"conclusions": [], "sessions": []}

# List apps (workspaces) and users
apps = requests.get(f"{BASE_URL}/v1/apps", headers=headers).json()
for app in apps.get("items", []):
    app_id = app["id"]
    users = requests.get(f"{BASE_URL}/v1/apps/{app_id}/users", headers=headers).json()
    for user in users.get("items", []):
        user_id = user["id"]
        # Get conclusions (derived memory)
        conclusions = requests.get(
            f"{BASE_URL}/v1/apps/{app_id}/users/{user_id}/conclusions",
            headers=headers,
        ).json()
        for c in conclusions.get("items", []):
            export["conclusions"].append({
                "user_id": user_id,
                "text": c.get("content", ""),
                "created_at": c.get("created_at"),
            })

with open("honcho_export.json", "w") as f:
    json.dump(export, f, indent=2)
print(f"Exported {len(export['conclusions'])} conclusions")
```

**Step 2: Import conclusions as facts**

Save as `scripts/migrate-honcho.mjs`:

```js
// scripts/migrate-honcho.mjs
import { HyperMem } from '@psiclawops/hypermem';
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

const agentId = process.argv[2] ?? 'main';
const exportPath = process.argv[3] ?? 'honcho_export.json';
const dryRun = !process.argv.includes('--apply');

const { conclusions = [] } = JSON.parse(readFileSync(exportPath, 'utf-8'));
const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });
let imported = 0, skipped = 0;

for (const c of conclusions) {
  const text = c.text ?? '';
  if (!text || text.length < 20) { skipped++; continue; }
  if (dryRun) { console.log(`[dry-run] conclusion: ${text.slice(0, 80)}`); imported++; continue; }
  await hm.addFact(agentId, text, {
    domain: 'general',
    source: 'honcho-migration',
    confidence: 0.9,
    createdAt: c.created_at,
  });
  imported++;
}

console.log(`\n${dryRun ? '[dry-run] ' : ''}Imported: ${imported}, Skipped: ${skipped}`);
if (dryRun) console.log('Run with --apply to write data.');
```

```bash
# Dry run
node scripts/migrate-honcho.mjs main honcho_export.json

# Apply
node scripts/migrate-honcho.mjs main honcho_export.json --apply
```

**Step 3: Uninstall Honcho plugin and enable hypermem**

```bash
openclaw plugins uninstall @honcho-ai/openclaw-honcho
openclaw config set plugins.slots.contextEngine hypercompositor
openclaw config set plugins.slots.memory hypermem
openclaw gateway restart
```

> **Note:** Your workspace memory files (`MEMORY.md`, `memory/*.md`) that Honcho migrated on setup are still in place and will be picked up by hypermem automatically — no re-import is needed for those.

---

## From memory-lancedb

`memory-lancedb` is an OpenClaw install-on-demand plugin (`plugins.slots.memory = "memory-lancedb"`) that provides long-term memory with auto-recall and auto-capture, using LanceDB as the backing store. Like memory-core, it occupies the `slots.memory` slot — separate from the context engine slot that hypermem owns.

**What maps to what:**

| memory-lancedb | hypermem |
|---|---|
| LanceDB memory vectors | Facts in `library.db` (re-embedded automatically) |
| Captured memory entries | Facts with domain inferred from content |
| Auto-recall injection | hypermem context assembly (built-in) |
| Per-agent tables | Per-agent `library.db` facts |

**What does not migrate:** LanceDB vector embeddings (regenerated automatically on import) and auto-capture triggers (hypermem handles recall natively).

**Step 1: Locate the LanceDB data directory**

```bash
# Default location — check your config if you changed it
ls ~/.openclaw/memory-lancedb/
# or
openclaw config get plugins.entries.memory-lancedb.config
```

**Step 2: Export entries from LanceDB**

```python
# export_lancedb.py
import lancedb, json, os

db_path = os.path.expanduser("~/.openclaw/memory-lancedb")
db = lancedb.connect(db_path)

export = []
for table_name in db.table_names():
    table = db.open_table(table_name)
    rows = table.to_pandas()
    for _, row in rows.iterrows():
        text = row.get("text") or row.get("content") or row.get("memory") or ""
        if text and len(str(text)) > 20:
            export.append({
                "agent_id": table_name,  # table name is typically the agent id
                "text": str(text),
                "metadata": {k: str(v) for k, v in row.items() if k not in ("text", "content", "memory", "vector")},
            })

with open("lancedb_export.json", "w") as f:
    json.dump(export, f, indent=2)
print(f"Exported {len(export)} entries from {len(db.table_names())} tables")
```

Install lancedb if needed: `pip install lancedb`

**Step 3: Import**

Save as `scripts/migrate-lancedb.mjs`:

```js
// scripts/migrate-lancedb.mjs
import { HyperMem } from '@psiclawops/hypermem';
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Agent override — if set, all entries go to this agent regardless of the
// export's agent_id. Any bare argument that is not a flag or a .json path
// is treated as the agent id.
const args = process.argv.slice(2);
const agentOverride = args.find(a => !a.startsWith('--') && !a.endsWith('.json')) ?? null;
const exportPath = args.find(a => a.endsWith('.json')) ?? 'lancedb_export.json';
const dryRun = !args.includes('--apply');

const entries = JSON.parse(readFileSync(exportPath, 'utf-8'));
const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });
let imported = 0, skipped = 0;

for (const entry of entries) {
  const text = entry.text ?? '';
  if (!text || text.length < 20) { skipped++; continue; }
  const agentId = agentOverride ?? entry.agent_id ?? 'main';
  if (dryRun) { console.log(`[dry-run] [${agentId}] ${text.slice(0, 80)}`); imported++; continue; }
  await hm.addFact(agentId, text, {
    domain: 'general',
    source: 'lancedb-migration',
    confidence: 0.85,
  });
  imported++;
}

console.log(`\n${dryRun ? '[dry-run] ' : ''}Imported: ${imported}, Skipped: ${skipped}`);
if (dryRun) console.log('Run with --apply to write data.');
```

```bash
# Dry run (all agents from export)
node scripts/migrate-lancedb.mjs lancedb_export.json

# Or target a specific agent
node scripts/migrate-lancedb.mjs main lancedb_export.json

# Apply
node scripts/migrate-lancedb.mjs lancedb_export.json --apply
```

**Step 4: Disable memory-lancedb and enable hypermem**

```bash
# Replace memory-lancedb with the hypermem memory plugin
openclaw config set plugins.slots.memory hypermem
openclaw config set plugins.slots.contextEngine hypercompositor
openclaw gateway restart
```

The LanceDB files at `~/.openclaw/memory-lancedb/` are left untouched; delete them once you are satisfied with the migration.

---

## From MEMORY.md files

If your agents use the standard OpenClaw MEMORY.md + daily checkpoint pattern (`memory/YYYY-MM-DD.md`) without any other memory backend, this script scans workspace directories and imports substantive entries as facts.

> **If you are coming from QMD**, use the [QMD path](#from-qmd) instead — it covers MEMORY.md files and handles the slot change correctly.

**Step 1: Dry run (all agents)**

```bash
node scripts/migrate-memory-md.mjs
```

Review the output — it shows the workspaces found, fact counts per agent, and a sample of what would be imported.

**Step 2: Import**

```bash
node scripts/migrate-memory-md.mjs --apply
```

Or for a single agent:

```bash
node scripts/migrate-memory-md.mjs --agent my-agent --apply
```

**Step 3: Restart**

```bash
openclaw gateway restart
```

**Options:**

```
--agent <id>              Only import for this agent (default: all detected)
--workspace-root <path>   Scan workspace directories under this path
                          (default: ~/.openclaw)
--hypermem-dir <path>     hypermem data directory (default: ~/.openclaw/hypermem)
--limit <n>               Import only the first N facts
--apply                   Actually write data (default is dry-run)
```

**Parsing rules:** Only bullet list items from daily files (`memory/YYYY-MM-DD.md`) are imported. `MEMORY.md` index files are intentionally skipped — they are pointers, not content. Lines under 40 characters, `→ memory_search(...)` pointers, and code-like lines are also skipped.
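
The rules above roughly correspond to a predicate like this (a sketch of the described behavior, not the script's actual code; the exact heuristics may differ):

```javascript
// Sketch of the import filter described above. The patterns and the 40-char
// threshold restate the documented rules; the real implementation may differ.
function shouldImport(line) {
  const text = line.trim();
  if (!text.startsWith('- ')) return false;                        // bullet items only
  const body = text.slice(2).trim();
  if (body.length < 40) return false;                              // too short to be a fact
  if (body.includes('memory_search(')) return false;               // pointer, not content
  if (/[{};]\s*$/.test(body) || body.startsWith('`')) return false; // code-like line
  return true;
}

console.log(shouldImport('- User prefers dark mode across all editors and terminal themes')); // → true
console.log(shouldImport('- short note'));                                                    // → false
```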

---

## From a custom system

Use hypermem's programmatic API to import directly.

**Import facts:**

```js
import { HyperMem } from '@psiclawops/hypermem';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Note: '~' is not expanded in Node paths — build the path explicitly.
const hm = await HyperMem.create({ dir: join(homedir(), '.openclaw/hypermem') });

await hm.addFact('your-agent-id', 'User prefers dark mode in all UIs', {
  domain: 'preference',
  source: 'migration',
  confidence: 0.9,
});
```

**Import conversation history:**

```js
await hm.recordUserMessage('your-agent-id', 'session-key:your-session', 'Hello, how are you?');

await hm.recordAssistantMessage('your-agent-id', 'session-key:your-session', {
  role: 'assistant',
  textContent: 'I am doing well, thank you.',
  toolCalls: [],
  createdAt: new Date().toISOString(),
});
```

For bulk imports, write a script modeled on `scripts/migrate-clawtext.mjs` — direct SQLite writes are faster than the API for large datasets.

After import:

```bash
openclaw gateway restart
```

---

## Enabling hypermem

Once your data is imported, enable both plugins:

```bash
openclaw config set plugins.slots.contextEngine hypercompositor
openclaw config set plugins.slots.memory hypermem
openclaw gateway restart
```

If you were on memory-core or QMD, the `plugins.slots.memory hypermem` line above already replaces the old memory provider — no separate disable step is needed.

---

## Verifying the migration

**Check fact counts by agent:**

```bash
node -e "
const { DatabaseSync } = require('node:sqlite');
const os = require('node:os'), path = require('node:path');
const db = new DatabaseSync(path.join(os.homedir(), '.openclaw/hypermem/library.db'), { readOnly: true });
const rows = db.prepare('SELECT agent_id, COUNT(*) as cnt FROM facts GROUP BY agent_id ORDER BY cnt DESC').all();
rows.forEach(r => console.log(r.agent_id + ': ' + r.cnt + ' facts'));
db.close();
"
```

**Check message history by agent:**

```bash
node -e "
const { DatabaseSync } = require('node:sqlite');
const os = require('node:os'), path = require('node:path'), fs = require('node:fs');
const agentsDir = path.join(os.homedir(), '.openclaw/hypermem/agents');
for (const agent of fs.readdirSync(agentsDir)) {
  const dbPath = path.join(agentsDir, agent, 'messages.db');
  if (!fs.existsSync(dbPath)) continue;
  const db = new DatabaseSync(dbPath, { readOnly: true });
  const r = db.prepare('SELECT COUNT(*) as cnt FROM messages').get();
  console.log(agent + ': ' + r.cnt + ' messages');
  db.close();
}
"
```

**Ask your agent to recall something** from the imported history. If recall seems patchy in the first session, the background indexer is still building embeddings — send one message and wait one turn; it runs automatically after ingest.

---

## Rollback

hypermem does not modify or delete any source data. To roll back from any path:

```bash
# Remove the hypercompositor context engine
openclaw config unset plugins.slots.contextEngine

# Restore the memory slot to your previous provider
openclaw config set plugins.slots.memory memory-core

# Restore the QMD backend if that's where you came from
# openclaw config set agents.defaults.memory.backend qmd

# Re-enable your previous context engine if you had one, then restart
openclaw gateway restart
```

Original data (memory.db, the ClawText database, QMD collections, the Cognee data directory, MEMORY.md files) is untouched throughout.

---

## Troubleshooting

**"library.db not found" during migration**

hypermem hasn't initialized yet. Start the gateway with the plugin enabled, send one message, then re-run:

```bash
openclaw gateway restart
# wait a few seconds, send one message
node scripts/migrate-memory-db.mjs --agent main
```

**Facts imported but agent doesn't recall them**

The background indexer builds embeddings on a schedule. Force a rebuild:

```bash
openclaw gateway restart
```

Send one message and wait one turn — the indexer runs after the first ingest.

**Duplicate facts after re-running a script**

All scripts check for duplicates before inserting, so re-running is safe. If you see unexpected duplicates, check whether the same data exists under a different `original_id` in the migration metadata — this can happen if source IDs changed between runs.

**Agent routed to wrong database**

The ClawText and MEMORY.md scripts infer agent identity from workspace paths and content. If an agent was misidentified, re-run with `--agent <correct-id>` — the idempotency check skips already-imported entries.

**Cognee export is empty or has unexpected format**

Cognee's search API behavior varies by backend and version. Try querying with a specific term instead of `"*"`, or export directly from the underlying database (SQLite at `~/.cognee/` by default). Adapt the `text` field extraction in the import script to match your export structure.

**Mem0 export job returns incomplete data**

The export job is async. If `get_memory_export()` returns partial results, the job hadn't finished. Add a poll loop:

```python
import time

while True:
    data = client.get_memory_export(memory_export_id=export_id)
    if data.get('status') == 'completed':
        break
    time.sleep(3)
```

Alternatively, use `get_all()`, which is synchronous.

**Zep self-hosted: graph search returns 404**

Graph search requires the Zep server to have processed at least one session. If the graph hasn't been built yet, skip the graph export step — session messages are the higher-value data anyway.

**Honcho conclusions endpoint returns 404**

The conclusions API path varies by Honcho version. Check `GET /v1/apps/{app_id}/users/{user_id}/metamessages` as an alternative — Honcho's user model is sometimes stored there. If neither works, export directly from the Honcho Postgres database (`honcho.metamessages` table).

**memory-lancedb export: `lancedb` not installed**

```bash
pip install lancedb
```

If you don't have Python, the LanceDB data files use the Arrow-based Lance columnar format, readable with Arrow-compatible tooling. The table names map directly to agent IDs.

**QMD extra paths not appearing after migration**

hypermem does not pick up `memory.qmd.paths` automatically. Use the `import-qmd-extras.mjs` script from the [QMD section](#from-qmd) to import those directories manually.