@psiclawops/hypermem 0.9.2 → 0.9.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/INSTALL.md CHANGED
@@ -51,7 +51,7 @@ No gateway, plugin load path, or OpenClaw config is required in library mode. Op
51
51
 
52
52
  This guide is deliberately declarative. Follow the steps in order and verify each install state before moving on.
53
53
 
54
- > **Release note:** if the npm package you installed does not contain `hypermem-install`, `install:runtime`, and `hypermem-model-audit`, you are on an older public release. Use the source-clone path in this guide or wait for the next npm release.
54
+ > **Release note:** current releases ship `hypermem-install`, `install:runtime`, and `hypermem-model-audit`. If your installed package does not contain them, upgrade to the latest `@psiclawops/hypermem` before following this guide.
55
55
 
56
56
  ```bash
57
57
  npm install @psiclawops/hypermem
@@ -529,7 +529,7 @@ npm install
529
529
  npm run build
530
530
  ```
531
531
 
532
- Build both plugins, then install the runtime payload into 's durable plugin directory:
532
+ Build both plugins, then install the runtime payload into OpenClaw's durable plugin directory:
533
533
 
534
534
  ```bash
535
535
  npm --prefix plugin install && npm --prefix plugin run build
package/README.md CHANGED
@@ -20,13 +20,13 @@ Or via the shell installer:
20
20
  curl -fsSL https://raw.githubusercontent.com/PsiClawOps/hypermem/main/install.sh | bash
21
21
  ```
22
22
 
23
- Or install manually via `npm install @psiclawops/hypermem` - see [Installation](#installation) for the full declarative plugin path, verification checkpoints, and setup variants.
23
+ Or install manually via `npm install @psiclawops/hypermem`: see [Installation](#installation) for the full declarative plugin path, verification checkpoints, and setup variants.
24
24
 
25
25
  Release operators should also read:
26
26
 
27
- - [INSTALL.md](./INSTALL.md) - canonical fresh install and upgrade guide
28
- - [docs/INTEGRATION_VALIDATION.md](./docs/INTEGRATION_VALIDATION.md) - end-to-end integration validation contract
29
- - [docs/DIAGNOSTICS.md](./docs/DIAGNOSTICS.md) - status, model audit, compose, trim, and release diagnostics
27
+ - [INSTALL.md](./INSTALL.md): canonical fresh install and upgrade guide
28
+ - [docs/INTEGRATION_VALIDATION.md](./docs/INTEGRATION_VALIDATION.md): end-to-end integration validation contract
29
+ - [docs/DIAGNOSTICS.md](./docs/DIAGNOSTICS.md): status, model audit, compose, trim, and release diagnostics
30
30
 
31
31
  A successful `hypermem-install` only stages the runtime. HyperMem is active only after OpenClaw config is wired, the gateway restarts, and logs show compose activity.
32
32
 
@@ -57,7 +57,7 @@ The difference is not intelligence. It is prompt access. Three failure modes fol
57
57
 
58
58
  ## What OpenClaw provides today
59
59
 
60
- OpenClaw already gives agents a stronger baseline than most stacks. It injects structured guidance into every session:
60
+ OpenClaw already gives agents a strong baseline. It injects structured guidance into every session:
61
61
 
62
62
  | File | What it contributes | Survives session restart? |
63
63
  |---|---|---|
@@ -78,16 +78,16 @@ OpenClaw gives agents a strong starting shape: identity files, user guidance, ta
78
78
 
79
79
  hypermem closes that gap with four SQLite-backed memory layers that stay local, run in-process, and remain queryable across sessions. No external database service. No retrieval stack to babysit.
80
80
 
81
- | Layer | What it holds | Speed |
81
+ | Layer | What it holds | Representative local read |
82
82
  |---|---|---|
83
- | **L1 SQLite `:memory:`** | What the agent needs right now. Identity, recent history, active state. | 0.08ms |
84
- | **L2 History** | Every conversation, queryable and concurrent-safe. Per-agent. | 0.13ms |
85
- | **L3 Semantic** | Finds related content even when the words don't match. | 0.29ms |
86
- | **L4 Knowledge** | Facts, wiki pages, episodes, preferences. Shared across agents. | 0.09ms |
83
+ | **L1 SQLite `:memory:`** | What the agent needs right now. Identity, recent history, active state. | L1 slot GET: 0.08ms avg |
84
+ | **L2 History** | Every conversation, queryable and concurrent-safe. Per-agent. | L2 history window: 0.13ms avg |
85
+ | **L3 Semantic** | Finds related content even when the words don't match. | async/cached; provider-dependent. See [Speed](#speed) and [Diagnostics](./docs/DIAGNOSTICS.md#memory-access-benchmark). |
86
+ | **L4 Knowledge** | Facts, wiki pages, episodes, preferences. Shared across agents. | L4 knowledge query: 0.09ms avg |
87
87
 
88
88
  Durable context stays in SQLite and remains queryable across session boundaries. The retry logic decision from last week, the deployment preferences from last month, and the architecture choices from day one can be pulled back in when they matter.
89
89
 
90
- That changes OpenClaw in a few concrete ways. Starts are warm instead of blank because recent history, ranked facts, active topics, and cached semantic state are loaded before the first turn. Recall survives wording drift because FTS5, sqlite-vec, RRF fusion, and an optional reranker can recover the same idea through different phrasing. Time-aware facts can answer last week and before the release as retrieval problems instead of vague prompt guessing. Shared knowledge stops living in one agent’s scratchpad because `library.db` holds facts, docs, episodes, preferences, fleet state, and output standards with visibility controls.
90
+ That changes OpenClaw in a few concrete ways. Starts are warm instead of blank because recent history, ranked facts, active topics, and cached semantic state are loaded before the first turn. Recall survives wording drift because FTS5, sqlite-vec, RRF fusion, and an optional reranker can recover the same idea through different phrasing. Time-aware facts can answer "last week" and "before the release" as retrieval problems instead of vague prompt guessing. Shared knowledge stops living in one agent’s scratchpad because `library.db` holds facts, docs, episodes, preferences, fleet state, and output standards with visibility controls.
91
91
 
92
92
  ---
93
93
 
@@ -189,11 +189,11 @@ Behavior standards define how your agents write. Anti-sycophancy rules prevent f
189
189
 
190
190
  ### Model adaptation
191
191
 
192
- Different models have different default behaviors. GPT-5.4 tends toward 2x verbosity and long lists. Claude Opus defaults to hedging and preambles. Gemini produces bulleted summaries where prose would be more direct. Model adaptation corrects for these tendencies per model.
192
+ Different providers and model families have different default answer shapes. Model adaptation applies operator-defined output standards per model so those defaults do not leak into every response.
193
193
 
194
194
  Adaptation entries are stored in the `model_output_directives` table and matched by model ID using exact match, then glob pattern (longest wins), then wildcard fallback. Each entry contains:
195
195
 
196
- - **Calibration:** known model tendencies and specific adjustments (e.g., "2x verbosity: cut first drafts in half")
196
+ - **Calibration:** known model tendencies and specific adjustments (e.g., "prefer concise first drafts")
197
197
  - **Corrections:** hard/medium/soft severity rules applied in order (e.g., "No preamble before the answer")
198
198
  - **Task overrides:** per-task-type adjustments
199
199
 
@@ -208,7 +208,7 @@ The example below shows the intended effect of `hyperformProfile: "light"`. hype
208
208
  ```
209
209
  Prompt: "How should I size my context window budget for a long-running agent session?"
210
210
 
211
- WITHOUT hyperform shaping (GPT-5.4 default):
211
+ WITHOUT hyperform shaping (generic verbose default):
212
212
  Here are the key factors to consider when sizing your context window budget:
213
213
 
214
214
  **1. Session depth**
@@ -276,7 +276,7 @@ Reference run, production database: 5,104 facts, 28,441 episodes, 847 knowledge
276
276
  | Operation | avg | p50 | p95 |
277
277
  |---|---|---|---|
278
278
  | L1 slot GET (SQLite in-memory) | 0.08ms | 0.07ms | 0.13ms |
279
- | L1 history window (100 messages) | 0.13ms | 0.11ms | 0.19ms |
279
+ | L2 history window (100 messages) | 0.13ms | 0.11ms | 0.19ms |
280
280
  | L4 facts (top-28, confidence × decay) | 0.28ms | 0.26ms | 0.36ms |
281
281
  | L4 facts + agentId filter | 0.31ms | 0.29ms | 0.40ms |
282
282
  | L4 FTS5 keyword search | 0.06ms | 0.05ms | 0.08ms |
@@ -357,13 +357,13 @@ Facts are ranked by `confidence × recencyDecay`, where decay is exponential wit
357
357
 
358
358
  topic detection ──► scope retrieval to active thread
359
359
 
360
- ┌────┴───────────────────────────────────────────────┐
361
- query 4 layers (parallel)
362
-
363
- │ L1 in-memory     L2 History     L3 Vectors     L4 Library
364
- │ hot state        durable        semantic       facts/wiki
365
- │ 0.1ms            0.16ms         0.29ms         0.08ms
366
- └────┬───────────────────────────────────────────────┘
360
+ ┌────┴────────────────────────────────────────────────────────────────┐
361
+ query 4 layers (parallel)
362
+
363
+ │ L1 in-memory     L2 History          L3 Vectors          L4 Library
364
+ │ hot state        durable history     semantic recall     facts/wiki
365
+ │ 0.08ms avg       0.13ms avg          async/cached        0.09ms avg
366
+ └────┬────────────────────────────────────────────────────────────────┘
367
367
 
368
368
  budget allocator ──► 10 slots, fixed token cap
369
369
 
@@ -386,7 +386,7 @@ Slot-level budget allocation is shown in the [hypercompositor diagram](#what-the
386
386
 
387
387
  ## Requirements
388
388
 
389
- **Current release: hypermem 0.9.0.** Changelog: [CHANGELOG.md](./CHANGELOG.md)
389
+ **Current release: hypermem 0.9.2.** Changelog: [CHANGELOG.md](./CHANGELOG.md)
390
390
 
391
391
  | Requirement | Version | Notes |
392
392
  |---|---|---|
@@ -398,7 +398,7 @@ SQLite is a library, not a service. All four layers run in-process with no exter
398
398
  **Runtime version constants** (importable from the package):
399
399
  ```typescript
400
400
  import {
401
- ENGINE_VERSION, // '0.9.0'
401
+ ENGINE_VERSION, // '0.9.2'
402
402
  MIN_NODE_VERSION, // '22.0.0'
403
403
  SQLITE_VEC_VERSION, // '0.1.9'
404
404
  MAIN_SCHEMA_VERSION, // 10 (messages.db)
@@ -0,0 +1,24 @@
1
+ /**
2
+ * HyperMem Memory Plugin
3
+ *
4
+ * Thin adapter that bridges HyperMem's retrieval capabilities into
5
+ * OpenClaw's memory slot contract (`kind: "memory"`).
6
+ *
7
+ * The context engine plugin (hypercompositor) owns the full lifecycle:
8
+ * ingest, assemble, compact, afterTurn, bootstrap, dispose.
9
+ *
10
+ * This plugin owns the memory slot contract:
11
+ * - registerMemoryCapability() with runtime + publicArtifacts
12
+ * - memory_search tool backing via MemorySearchManager
13
+ * - Public artifacts for memory-wiki bridge
14
+ *
15
+ * Both plugins share the same HyperMem singleton (loaded from repo dist).
16
+ */
17
+ declare const _default: {
18
+ id: string;
19
+ name: string;
20
+ description: string;
21
+ configSchema: import("openclaw/plugin-sdk").OpenClawPluginConfigSchema;
22
+ register: NonNullable<import("openclaw/plugin-sdk/plugin-entry").OpenClawPluginDefinition["register"]>;
23
+ } & Pick<import("openclaw/plugin-sdk/plugin-entry").OpenClawPluginDefinition, "kind" | "reload" | "nodeHostCommands" | "securityAuditCollectors">;
24
+ export default _default;
@@ -0,0 +1,378 @@
1
+ /**
2
+ * HyperMem Memory Plugin
3
+ *
4
+ * Thin adapter that bridges HyperMem's retrieval capabilities into
5
+ * OpenClaw's memory slot contract (`kind: "memory"`).
6
+ *
7
+ * The context engine plugin (hypercompositor) owns the full lifecycle:
8
+ * ingest, assemble, compact, afterTurn, bootstrap, dispose.
9
+ *
10
+ * This plugin owns the memory slot contract:
11
+ * - registerMemoryCapability() with runtime + publicArtifacts
12
+ * - memory_search tool backing via MemorySearchManager
13
+ * - Public artifacts for memory-wiki bridge
14
+ *
15
+ * Both plugins share the same HyperMem singleton (loaded from repo dist).
16
+ */
17
+ import { definePluginEntry, emptyPluginConfigSchema } from 'openclaw/plugin-sdk/plugin-entry';
18
+ import { matchTriggers, TRIGGER_REGISTRY } from '@psiclawops/hypermem';
19
+ import path from 'path';
20
+ import fs from 'fs/promises';
21
+ import os from 'os';
22
+ import { fileURLToPath } from 'url';
23
+ // ─── HyperMem singleton ────────────────────────────────────────
24
+ // HyperMem.create() in the core package now dedupes per absolute dataDir, so
25
+ // whichever of the two plugins (context-engine, memory) calls create() first
26
+ // owns the instance. To avoid a race where this plugin would otherwise win
27
+ // boot with no embedding config and force defaults onto the shared instance,
28
+ // we load the same user config file the context-engine plugin loads and pass
29
+ // the full embedding/reranker config through to create().
30
+ const __pluginDir = path.dirname(fileURLToPath(import.meta.url));
31
+ async function resolveHyperMemPath() {
32
+ try {
33
+ const resolvedUrl = await import.meta.resolve('@psiclawops/hypermem');
34
+ return resolvedUrl.startsWith('file:') ? fileURLToPath(resolvedUrl) : resolvedUrl;
35
+ }
36
+ catch {
37
+ return path.resolve(__pluginDir, '../../dist/index.js');
38
+ }
39
+ }
40
+ async function loadFileConfig(dataDir) {
41
+ const configPath = path.join(dataDir, 'config.json');
42
+ try {
43
+ const raw = await fs.readFile(configPath, 'utf-8');
44
+ return JSON.parse(raw);
45
+ }
46
+ catch (err) {
47
+ if (err.code !== 'ENOENT') {
48
+ console.warn(`[hypermem-memory] Failed to parse config.json (using defaults):`, err.message);
49
+ }
50
+ return {};
51
+ }
52
+ }
53
+ let _hm = null;
54
+ let _hmInitPromise = null;
55
+ async function getHyperMem() {
56
+ if (_hm)
57
+ return _hm;
58
+ if (_hmInitPromise)
59
+ return _hmInitPromise;
60
+ _hmInitPromise = (async () => {
61
+ const hypermemPath = await resolveHyperMemPath();
62
+ const mod = await import(hypermemPath);
63
+ const HyperMem = mod.HyperMem;
64
+ const dataDir = path.join(os.homedir(), '.openclaw/hypermem');
65
+ const fileConfig = await loadFileConfig(dataDir);
66
+ const createConfig = {
67
+ dataDir,
68
+ cache: {
69
+ keyPrefix: 'hm:',
70
+ sessionTTL: 14400,
71
+ historyTTL: 86400,
72
+ },
73
+ };
74
+ // Forward embedding + reranker so this plugin's create() call produces
75
+ // an equivalent instance to the context-engine plugin's. Other config
76
+ // sections (compositor, indexer, dreaming, etc.) are owned by the
77
+ // context-engine plugin and only matter when it wins the singleton race.
78
+ if (fileConfig.embedding)
79
+ createConfig.embedding = fileConfig.embedding;
80
+ if (fileConfig.reranker)
81
+ createConfig.reranker = fileConfig.reranker;
82
+ const instance = await HyperMem.create(createConfig);
83
+ _hm = instance;
84
+ return instance;
85
+ })();
86
+ return _hmInitPromise;
87
+ }
88
+ const DOCTRINE_COLLECTIONS = new Set([
89
+ 'governance/policy',
90
+ 'governance/charter',
91
+ 'governance/comms',
92
+ 'operations/agents',
93
+ ]);
94
+ function doctrineScore(chunk, rank) {
95
+ const collectionBoost = chunk.collection.startsWith('governance/') ? 1.25 : 1.1;
96
+ return collectionBoost - Math.min(rank, 9) * 0.03;
97
+ }
98
+ function docChunkToMemoryResult(chunk, rank) {
99
+ return {
100
+ path: chunk.sourcePath,
101
+ startLine: 0,
102
+ endLine: 0,
103
+ score: doctrineScore(chunk, rank),
104
+ snippet: chunk.content.slice(0, 500),
105
+ source: 'memory',
106
+ citation: `[doc:${chunk.collection}:${chunk.sectionPath}]`,
107
+ };
108
+ }
109
+ /**
110
+ * Create a MemorySearchManager backed by HyperMem's retrieval pipeline.
111
+ *
112
+ * Uses HyperMem's:
113
+ * - library.db fact search (FTS5 + BM25)
114
+ * - vector store semantic search (when available)
115
+ * - message search (full-text across conversations)
116
+ */
117
+ function createMemorySearchManager(hm, agentId, workspaceDir) {
118
+ return {
119
+ async search(query, opts) {
120
+ const maxResults = opts?.maxResults ?? 10;
121
+ const minScore = opts?.minScore ?? 0;
122
+ const results = [];
123
+ const seenDocChunks = new Set();
124
+ // 0. Canonical doctrine search. Explicit governance queries should surface
125
+ // policy, charter, comms, and AGENTS chunks before stale daily-memory folklore.
126
+ try {
127
+ const triggers = matchTriggers(query, TRIGGER_REGISTRY)
128
+ .filter(trigger => DOCTRINE_COLLECTIONS.has(trigger.collection))
129
+ .slice(0, 4);
130
+ for (const trigger of triggers) {
131
+ const chunks = hm.queryDocChunks({
132
+ collection: trigger.collection,
133
+ agentId,
134
+ keyword: query,
135
+ limit: Math.max(3, Math.ceil(maxResults / Math.max(1, triggers.length))),
136
+ });
137
+ chunks.forEach((chunk, rank) => {
138
+ const key = `${chunk.sourcePath}:${chunk.sectionPath}:${chunk.sourceHash}`;
139
+ if (seenDocChunks.has(key))
140
+ return;
141
+ seenDocChunks.add(key);
142
+ const result = docChunkToMemoryResult(chunk, rank);
143
+ if (result.score >= minScore)
144
+ results.push(result);
145
+ });
146
+ }
147
+ }
148
+ catch {
149
+ // Doctrine search is a precision boost, not a hard dependency.
150
+ }
151
+ // 1. Fact search (FTS5 + BM25 from library.db)
152
+ try {
153
+ const facts = hm.getActiveFacts(agentId, { limit: maxResults * 2 });
154
+ // In-memory keyword overlap scoring over active facts (full FTS5 matching lives in the DB layer)
155
+ const queryLower = query.toLowerCase();
156
+ const queryTerms = queryLower.split(/\s+/).filter(t => t.length > 2);
157
+ for (const fact of facts) {
158
+ const contentLower = fact.content.toLowerCase();
159
+ const matchCount = queryTerms.filter(t => contentLower.includes(t)).length;
160
+ if (matchCount === 0)
161
+ continue;
162
+ const score = matchCount / queryTerms.length;
163
+ if (score < minScore)
164
+ continue;
165
+ results.push({
166
+ path: `library://facts/${fact.id}`,
167
+ startLine: 0,
168
+ endLine: 0,
169
+ score,
170
+ snippet: fact.content.slice(0, 300),
171
+ source: 'memory',
172
+ citation: fact.domain ? `[fact:${fact.domain}]` : '[fact]',
173
+ });
174
+ }
175
+ }
176
+ catch {
177
+ // Fact search non-fatal
178
+ }
179
+ // 2. Vector/semantic search (when available)
180
+ try {
181
+ const vectorStore = hm.getVectorStore();
182
+ if (vectorStore) {
183
+ const vectorResults = await hm.semanticSearch(agentId, query, {
184
+ limit: maxResults,
185
+ maxDistance: 1.5,
186
+ });
187
+ for (const vr of vectorResults) {
188
+ const score = 1.0 - (vr.distance / 2.0); // normalize distance to 0-1 score
189
+ if (score < minScore)
190
+ continue;
191
+ results.push({
192
+ path: `vector://${vr.sourceTable}/${vr.sourceId}`,
193
+ startLine: 0,
194
+ endLine: 0,
195
+ score,
196
+ snippet: vr.content.slice(0, 300),
197
+ source: 'memory',
198
+ citation: `[${vr.sourceTable}:${vr.sourceId}]`,
199
+ });
200
+ }
201
+ }
202
+ }
203
+ catch {
204
+ // Vector search non-fatal
205
+ }
206
+ // 3. Message search (FTS5 across conversations)
207
+ try {
208
+ const messageResults = hm.search(agentId, query, maxResults);
209
+ for (const msg of messageResults) {
210
+ const content = msg.textContent ?? '';
211
+ results.push({
212
+ path: `messages://${msg.conversationId ?? 'unknown'}/${msg.id}`,
213
+ startLine: 0,
214
+ endLine: 0,
215
+ score: 0.5, // message search doesn't return scores, use mid-range
216
+ snippet: content.slice(0, 300),
217
+ source: 'sessions',
218
+ citation: `[message:${msg.id}]`,
219
+ });
220
+ }
221
+ }
222
+ catch {
223
+ // Message search non-fatal
224
+ }
225
+ // Sort by score descending, then cap at maxResults (doc chunks were already deduped via seenDocChunks)
226
+ results.sort((a, b) => b.score - a.score);
227
+ return results.slice(0, maxResults);
228
+ },
229
+ async readFile(params) {
230
+ const absPath = path.resolve(workspaceDir, params.relPath);
231
+ try {
232
+ const content = await fs.readFile(absPath, 'utf-8');
233
+ const lines = content.split('\n');
234
+ const from = params.from ?? 0;
235
+ const count = params.lines ?? lines.length;
236
+ const slice = lines.slice(from, from + count);
237
+ return { text: slice.join('\n'), path: absPath };
238
+ }
239
+ catch (err) {
240
+ return { text: `Error reading ${absPath}: ${err.message}`, path: absPath };
241
+ }
242
+ },
243
+ status() {
244
+ const vectorStore = hm.getVectorStore();
245
+ const vectorStats = vectorStore ? hm.getVectorStats(agentId) : null;
246
+ return {
247
+ backend: 'builtin',
248
+ provider: 'hypermem',
249
+ model: 'hypermem-fts5+vector',
250
+ workspaceDir,
251
+ dbPath: path.join(os.homedir(), '.openclaw/hypermem'),
252
+ sources: ['memory', 'sessions'],
253
+ fts: {
254
+ enabled: true,
255
+ available: true,
256
+ },
257
+ vector: {
258
+ enabled: !!vectorStore,
259
+ available: !!vectorStore,
260
+ dims: vectorStats?.dimensions
261
+ ?? vectorStats?.dims
262
+ ?? undefined,
263
+ },
264
+ custom: {
265
+ vectorStats: vectorStats ?? undefined,
266
+ factCount: hm.getActiveFacts(agentId, { limit: 1 }).length > 0 ? 'available' : 'empty',
267
+ },
268
+ };
269
+ },
270
+ async probeEmbeddingAvailability() {
271
+ try {
272
+ const vectorStore = hm.getVectorStore();
273
+ if (!vectorStore)
274
+ return { ok: false, error: 'Vector store not initialized' };
275
+ return { ok: true };
276
+ }
277
+ catch (err) {
278
+ return { ok: false, error: err.message };
279
+ }
280
+ },
281
+ async probeVectorAvailability() {
282
+ return !!hm.getVectorStore();
283
+ },
284
+ };
285
+ }
286
+ // ─── Manager cache ──────────────────────────────────────────────
287
+ // One manager per agentId; closed on plugin dispose.
288
+ const _managers = new Map();
289
+ // ─── Plugin Entry ───────────────────────────────────────────────
290
+ export default definePluginEntry({
291
+ id: 'hypermem',
292
+ name: 'HyperMem Memory',
293
+ description: 'Bridges HyperMem retrieval (facts, vectors, messages) into the OpenClaw memory slot for memory_search and memory-wiki.',
294
+ kind: 'memory',
295
+ configSchema: emptyPluginConfigSchema(),
296
+ register(api) {
297
+ api.registerMemoryCapability({
298
+ runtime: {
299
+ async getMemorySearchManager(params) {
300
+ try {
301
+ const hm = await getHyperMem();
302
+ const agentId = params.agentId || 'main';
303
+ // Cache managers per agent
304
+ if (!_managers.has(agentId)) {
305
+ // Resolve workspace dir from agent config
306
+ const agents = params.cfg?.agents?.list ?? [];
307
+ const agentCfg = agents.find((a) => a.id === agentId);
308
+ const workspaceDir = agentCfg?.workspace
309
+ ?? path.join(os.homedir(), '.openclaw/workspace');
310
+ _managers.set(agentId, createMemorySearchManager(hm, agentId, workspaceDir));
311
+ }
312
+ return { manager: _managers.get(agentId) };
313
+ }
314
+ catch (err) {
315
+ return { manager: null, error: err.message };
316
+ }
317
+ },
318
+ resolveMemoryBackendConfig(_params) {
319
+ return { backend: 'builtin' };
320
+ },
321
+ async closeAllMemorySearchManagers() {
322
+ _managers.clear();
323
+ },
324
+ },
325
+ publicArtifacts: {
326
+ async listArtifacts(params) {
327
+ const artifacts = [];
328
+ // List memory files for each agent
329
+ const agents = params.cfg?.agents?.list ?? [];
330
+ for (const agent of agents) {
331
+ const agentId = agent.id;
332
+ if (!agentId)
333
+ continue;
334
+ const workspace = agent.workspace;
335
+ if (!workspace)
336
+ continue;
337
+ const memoryDir = path.join(workspace, 'memory');
338
+ try {
339
+ const files = await fs.readdir(memoryDir);
340
+ for (const file of files) {
341
+ if (!file.endsWith('.md'))
342
+ continue;
343
+ artifacts.push({
344
+ kind: 'memory-daily',
345
+ workspaceDir: workspace,
346
+ relativePath: `memory/${file}`,
347
+ absolutePath: path.join(memoryDir, file),
348
+ agentIds: [agentId],
349
+ contentType: 'markdown',
350
+ });
351
+ }
352
+ }
353
+ catch {
354
+ // No memory dir for this agent — skip
355
+ }
356
+ // Also expose MEMORY.md index
357
+ const memoryIndex = path.join(workspace, 'MEMORY.md');
358
+ try {
359
+ await fs.access(memoryIndex);
360
+ artifacts.push({
361
+ kind: 'memory-index',
362
+ workspaceDir: workspace,
363
+ relativePath: 'MEMORY.md',
364
+ absolutePath: memoryIndex,
365
+ agentIds: [agentId],
366
+ contentType: 'markdown',
367
+ });
368
+ }
369
+ catch {
370
+ // No MEMORY.md — skip
371
+ }
372
+ }
373
+ return artifacts;
374
+ },
375
+ },
376
+ });
377
+ },
378
+ });
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "name": "@psiclawops/hypermem-memory",
3
- "version": "0.9.2",
4
- "description": "HyperMem memory plugin for OpenClaw bridges HyperMem retrieval into the memory slot",
3
+ "version": "0.9.3",
4
+ "description": "HyperMem memory plugin for OpenClaw \u2014 bridges HyperMem retrieval into the memory slot",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
7
7
  "license": "Apache-2.0",
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@psiclawops/hypermem",
3
- "version": "0.9.2",
3
+ "version": "0.9.3",
4
4
  "description": "Agent-centric memory and context composition engine for OpenClaw",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
@@ -24,8 +24,8 @@
24
24
  },
25
25
  "scripts": {
26
26
  "build": "tsc",
27
- "prepublishOnly": "npm run build",
28
- "prepack": "npm run build",
27
+ "prepublishOnly": "npm run build:all",
28
+ "prepack": "npm run build:all",
29
29
  "health": "node bin/hypermem-status.mjs --master",
30
30
  "health:master": "node bin/hypermem-status.mjs --master",
31
31
  "install:runtime": "node scripts/install-runtime.mjs",
@@ -44,7 +44,8 @@
44
44
  "validate:version-parity": "node scripts/validate-version-parity.mjs",
45
45
  "validate:public-surface": "node scripts/validate-public-surface.mjs",
46
46
  "bench:memory": "node bench/data-access-bench.mjs --iterations 1000 --warmup 50",
47
- "validate:doctor": "node test/doctor-cli.mjs"
47
+ "validate:doctor": "node test/doctor-cli.mjs",
48
+ "build:all": "npm run build && npm --prefix plugin install --prefer-offline && npm --prefix plugin run build && npm --prefix memory-plugin install --prefer-offline && npm --prefix memory-plugin run build"
48
49
  },
49
50
  "dependencies": {
50
51
  "sqlite-vec": "^0.1.9",