opencode-memsearch 0.2.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,27 +10,34 @@ This plugin gives your OpenCode agent long-term memory. It automatically summari
10
10
  - **Cold-start context** — the last 30 lines of the 2 most recent memory files are injected into the system prompt when a new session starts
11
11
  - **Semantic search** — two custom tools (`memsearch_search` and `memsearch_expand`) let the agent search and drill into past memories
12
12
  - **Per-project isolation** — memory collections are scoped by project directory
13
- - **Local embeddings** — uses memsearch's local embedding provider, so no API calls are needed for vector search
13
+ - **Local embeddings** — works with memsearch's ONNX or local embedding providers, so no API keys are needed for vector search
14
+ - **Daemon mode** — optional background daemon keeps the embedding model loaded in memory, reducing search latency from ~5-10s to ~50ms
14
15
  - **Memory protocol** — a system prompt directive instructs the agent to check memory at session start and whenever it encounters a topic that might have prior context
15
16
 
16
17
  ## Prerequisites
17
18
 
18
- You need the `memsearch` CLI installed. The easiest way is via [uv](https://docs.astral.sh/uv/):
19
+ You need the `memsearch` CLI installed with ONNX embeddings. The easiest way is via [uv](https://docs.astral.sh/uv/):
19
20
 
20
21
  ```bash
21
22
  # Install uv (if you don't have it)
22
23
  curl -LsSf https://astral.sh/uv/install.sh | sh
23
24
 
24
- # Install memsearch with local embeddings
25
- uv tool install 'memsearch[local]'
25
+ # Install memsearch with ONNX embeddings (recommended)
26
+ uv tool install 'memsearch[onnx]'
27
+
28
+ # Configure the ONNX embedding provider
29
+ memsearch config set embedding.provider onnx
26
30
  ```
27
31
 
28
32
  Or install directly with pip:
29
33
 
30
34
  ```bash
31
- pip install 'memsearch[local]'
35
+ pip install 'memsearch[onnx]'
36
+ memsearch config set embedding.provider onnx
32
37
  ```
33
38
 
39
+ The ONNX provider uses the `bge-m3` embedding model locally on your machine — no API keys or network requests needed for vector search. If you prefer a different embedding provider (e.g., OpenAI, a local `all-MiniLM-L6-v2` via `memsearch[local]`, or Ollama), see the [memsearch documentation](https://github.com/nicobako/memsearch) for configuration options.
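For example, to point memsearch at an OpenAI-compatible endpoint instead — these `memsearch config set` keys are the same ones used elsewhere in this README; check the memsearch docs for the full list:

```bash
# Example: use a custom OpenAI-compatible endpoint (e.g. a local Ollama server)
memsearch config set embedding.provider openai
memsearch config set embedding.base_url http://localhost:11434/v1
memsearch config set embedding.model nomic-embed-text
```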
40
+
34
41
  If `memsearch` is not installed, the plugin's tools will return a clear error message asking the agent to tell you to install it.
35
42
 
36
43
  ## Install
@@ -92,17 +99,19 @@ You should add `.memsearch/` to your `.gitignore`.
92
99
 
93
100
  ## Seed script
94
101
 
95
- The repo includes a seed script (`scripts/seed-memories.ts`) that can backfill memory from existing OpenCode sessions. This is useful when first installing the plugin on a project you've already been working on:
102
+ The package includes a seed script that can backfill memory from existing OpenCode sessions. This is useful when first installing the plugin on a project you've already been working on.
103
+
104
+ The seed script requires [Bun](https://bun.sh/) to run.
96
105
 
97
106
  ```bash
98
107
  # Seed from the last 14 days of sessions (default)
99
- bun run scripts/seed-memories.ts
108
+ npx opencode-memsearch-seed
100
109
 
101
110
  # Seed from the last 30 days
102
- bun run scripts/seed-memories.ts --days 30
111
+ npx opencode-memsearch-seed --days 30
103
112
  ```
104
113
 
105
- The script reads directly from the OpenCode SQLite database, summarizes each conversation turn, and writes the results to `.memsearch/memory/`. The seed script respects the same [configuration](#configuration) as the plugin (config file and environment variables).
114
+ Run the script from your project directory. It reads directly from the OpenCode SQLite database, summarizes each conversation turn, and writes the results to `.memsearch/memory/`. The seed script respects the same [configuration](#configuration) as the plugin (config file and environment variables).
106
115
 
107
116
  ## Configuration
108
117
 
@@ -122,7 +131,7 @@ Both files use the same schema. Values from the project config override the glob
122
131
  ```json
123
132
  {
124
133
  "summarization_model": "anthropic/claude-sonnet-4-5",
125
- "auto_configure_embedding": true
134
+ "use_daemon": true
126
135
  }
127
136
  ```
128
137
 
@@ -131,7 +140,7 @@ All fields are optional. The full schema:
131
140
  | Field | Type | Default | Description |
132
141
  |-------|------|---------|-------------|
133
142
  | `summarization_model` | `string` | `"anthropic/claude-haiku-4-5"` | The OpenCode model ID used to summarize conversation turns |
134
- | `auto_configure_embedding` | `boolean` | `true` | Whether the plugin auto-configures memsearch to use local embeddings on startup |
143
+ | `use_daemon` | `boolean` | `true` | Whether to use a background daemon for faster search/index operations |
135
144
 
136
145
  ### Summarization model
137
146
 
@@ -179,47 +188,39 @@ To switch back to local mode:
179
188
  memsearch config set milvus.uri "~/.memsearch/milvus.db"
180
189
  ```
181
190
 
182
- ### Embedding provider
191
+ ### Daemon mode
183
192
 
184
- By default, the plugin auto-configures memsearch to use **local embeddings** (`embedding.provider = local`). This is important because memsearch's own default is `openai`, which would require an API key and make network requests for every index and search operation.
193
+ By default, the plugin starts a background daemon process that keeps the memsearch embedding model loaded in memory. This avoids the Python cold-start penalty (~5-10s) on every search, index, or expand operation, reducing latency to ~50ms.
185
194
 
186
- With local embeddings, the `all-MiniLM-L6-v2` model runs on your machine — no API calls needed for vector search.
195
+ The daemon:
196
+ - Starts automatically on session creation
197
+ - Listens on a Unix domain socket at `.memsearch/daemon.sock`
198
+ - Falls back to the CLI transparently if the daemon is unavailable
199
+ - Writes logs to `.memsearch/daemon.log`
200
+ - Stores its PID in `.memsearch/daemon.pid`
187
201
 
188
- To manage the embedding provider yourself (e.g., to use OpenAI embeddings or a custom endpoint), disable auto-configuration:
202
+ To disable the daemon and use the CLI for all operations:
189
203
 
190
204
  ```json
191
205
  {
192
- "auto_configure_embedding": false
206
+ "use_daemon": false
193
207
  }
194
208
  ```
195
209
 
196
210
  Or via environment variable:
197
211
 
198
212
  ```bash
199
- export MEMSEARCH_AUTO_CONFIGURE_EMBEDDING=false
200
- ```
201
-
202
- Then configure memsearch directly:
203
-
204
- ```bash
205
- # Example: use OpenAI embeddings
206
- memsearch config set embedding.provider openai
207
- memsearch config set embedding.api_key "env:OPENAI_API_KEY"
208
-
209
- # Example: use a custom OpenAI-compatible endpoint
210
- memsearch config set embedding.provider openai
211
- memsearch config set embedding.base_url http://localhost:11434/v1
212
- memsearch config set embedding.model nomic-embed-text
213
+ export MEMSEARCH_USE_DAEMON=false
213
214
  ```
214
215
 
215
- See the [memsearch documentation](https://github.com/nicobako/memsearch) for all available embedding options.
216
+ The daemon is most beneficial on machines where Python startup is slow (older hardware, CPU-only inference). On fast machines with NVMe storage, the difference may be negligible.
216
217
 
217
218
  ### Environment variables
218
219
 
219
220
  | Variable | Description |
220
221
  |----------|-------------|
221
222
  | `MEMSEARCH_SUMMARIZATION_MODEL` | Override the model used for summarization (takes precedence over config file) |
222
- | `MEMSEARCH_AUTO_CONFIGURE_EMBEDDING` | Set to `false` or `0` to disable automatic local embedding configuration |
223
+ | `MEMSEARCH_USE_DAEMON` | Set to `false` or `0` to disable the background daemon (uses CLI for all operations) |
223
224
  | `MEMSEARCH_DISABLE` | Set to any value to disable the plugin entirely (used internally to prevent recursion during summarization) |
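The `MEMSEARCH_USE_DAEMON` toggle follows a simple rule: the environment variable, when set, overrides the config file, and any value other than `0` or `false` (case-insensitive) counts as enabled. A minimal Python sketch of that precedence — the plugin itself implements this in TypeScript:

```python
import os

def should_use_daemon(config: dict) -> bool:
    """Decide whether daemon mode is enabled.

    MEMSEARCH_USE_DAEMON, when set, wins over the config file;
    the `use_daemon` config field defaults to True when absent.
    """
    env_val = os.environ.get("MEMSEARCH_USE_DAEMON")
    if env_val is not None:
        # Only "0" and "false" (case-insensitive) disable the daemon
        return env_val != "0" and env_val.lower() != "false"
    return config.get("use_daemon") is not False
```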
224
225
 
225
226
  ### Precedence
package/dist/index.js CHANGED
@@ -1,9 +1,12 @@
1
1
  // src/index.ts
2
2
  import { tool } from "@opencode-ai/plugin";
3
3
  import { createHash } from "crypto";
4
- import { readdir, readFile, appendFile, mkdir, writeFile, unlink } from "fs/promises";
5
- import { join, basename, resolve } from "path";
4
+ import { readdir, readFile, appendFile, mkdir, writeFile, unlink, access } from "fs/promises";
5
+ import { join, basename, resolve, dirname } from "path";
6
6
  import { tmpdir, homedir } from "os";
7
+ import { createConnection } from "net";
8
+ import { fileURLToPath } from "url";
9
+ var __dirname = "/Users/jdormit/opencode-memsearch/src";
7
10
  var DEFAULT_SUMMARIZATION_MODEL = "anthropic/claude-haiku-4-5";
8
11
  var GLOBAL_CONFIG_PATH = join(homedir(), ".config", "opencode", "memsearch.config.json");
9
12
  async function loadJsonConfig(path) {
@@ -23,12 +26,12 @@ async function loadConfig(projectDir) {
23
26
  function getSummarizationModel(config) {
24
27
  return process.env.MEMSEARCH_SUMMARIZATION_MODEL || config.summarization_model || DEFAULT_SUMMARIZATION_MODEL;
25
28
  }
26
- function shouldAutoConfigureEmbedding(config) {
27
- const envVal = process.env.MEMSEARCH_AUTO_CONFIGURE_EMBEDDING;
29
+ function shouldUseDaemon(config) {
30
+ const envVal = process.env.MEMSEARCH_USE_DAEMON;
28
31
  if (envVal !== undefined) {
29
32
  return envVal !== "0" && envVal.toLowerCase() !== "false";
30
33
  }
31
- return config.auto_configure_embedding !== false;
34
+ return config.use_daemon !== false;
32
35
  }
33
36
  function deriveCollectionName(directory) {
34
37
  const abs = resolve(directory);
@@ -64,21 +67,98 @@ Rules:
64
67
  - Do NOT ask follow-up questions
65
68
  - STOP immediately after the last bullet point`;
66
69
  var TEMP_DIR = join(tmpdir(), "memsearch-plugin");
70
+ function getDaemonScriptPath() {
71
+ const thisDir = typeof __dirname !== "undefined" ? __dirname : dirname(fileURLToPath(import.meta.url));
72
+ return join(thisDir, "..", "scripts", "memsearch-daemon.py");
73
+ }
74
+ function getDaemonSocketPath(memsearchDir) {
75
+ return join(memsearchDir, "daemon.sock");
76
+ }
77
+ function getDaemonPidPath(memsearchDir) {
78
+ return join(memsearchDir, "daemon.pid");
79
+ }
80
+ function daemonRequest(socketPath, request, timeoutMs = 30000) {
81
+ return new Promise((resolve2) => {
82
+ let responded = false;
83
+ const chunks = [];
84
+ const timer = setTimeout(() => {
85
+ if (!responded) {
86
+ responded = true;
87
+ sock.destroy();
88
+ resolve2(null);
89
+ }
90
+ }, timeoutMs);
91
+ const sock = createConnection({ path: socketPath }, () => {
92
+ sock.end(JSON.stringify(request));
93
+ });
94
+ sock.on("data", (chunk) => {
95
+ chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
96
+ });
97
+ sock.on("end", () => {
98
+ if (responded)
99
+ return;
100
+ responded = true;
101
+ clearTimeout(timer);
102
+ try {
103
+ const data = Buffer.concat(chunks).toString("utf-8");
104
+ resolve2(JSON.parse(data));
105
+ } catch {
106
+ resolve2(null);
107
+ }
108
+ });
109
+ sock.on("error", () => {
110
+ if (responded)
111
+ return;
112
+ responded = true;
113
+ clearTimeout(timer);
114
+ resolve2(null);
115
+ });
116
+ });
117
+ }
118
+ async function isDaemonAlive(socketPath) {
119
+ const resp = await daemonRequest(socketPath, { cmd: "ping" }, 5000);
120
+ return resp?.ok === true;
121
+ }
67
122
  var memsearchPlugin = async ({ client, $, directory }) => {
68
123
  if (process.env.MEMSEARCH_DISABLE) {
69
124
  return {};
70
125
  }
71
126
  const sessions = new Map;
72
127
  let memsearchCmd = null;
128
+ let memsearchPythonPath = null;
73
129
  async function detectMemsearch() {
74
130
  try {
75
131
  await $`which memsearch`.quiet();
76
132
  return ["memsearch"];
77
133
  } catch {}
134
+ return null;
135
+ }
136
+ async function detectMemsearchPython() {
137
+ if (memsearchPythonPath)
138
+ return memsearchPythonPath;
78
139
  try {
79
- await $`which uvx`.quiet();
80
- return ["uvx", "--from", "memsearch[local]", "memsearch"];
140
+ const memsearchBin = (await $`which memsearch`.quiet().text()).trim();
141
+ if (memsearchBin) {
142
+ const content = await readFile(memsearchBin, "utf-8");
143
+ const firstLine = content.split(`
144
+ `)[0];
145
+ if (firstLine.startsWith("#!")) {
146
+ const shebangPath = firstLine.slice(2).trim();
147
+ try {
148
+ await $`${shebangPath} -c "import memsearch"`.quiet();
149
+ memsearchPythonPath = shebangPath;
150
+ return shebangPath;
151
+ } catch {}
152
+ }
153
+ }
81
154
  } catch {}
155
+ for (const py of ["python3", "python"]) {
156
+ try {
157
+ await $`${py} -c "import memsearch"`.quiet();
158
+ memsearchPythonPath = py;
159
+ return py;
160
+ } catch {}
161
+ }
82
162
  return null;
83
163
  }
84
164
  async function ensureMemsearch() {
@@ -87,7 +167,7 @@ var memsearchPlugin = async ({ client, $, directory }) => {
87
167
  memsearchCmd = await detectMemsearch();
88
168
  return memsearchCmd;
89
169
  }
90
- const MEMSEARCH_NOT_FOUND_ERROR = "memsearch is not installed. Tell the user to install it by running: pip install 'memsearch[local]' — or, if they have uv: uv tool install 'memsearch[local]'. See https://github.com/jdormit/opencode-memsearch for details.";
170
+ const MEMSEARCH_NOT_FOUND_ERROR = "memsearch is not installed. Tell the user to install it by running: uv tool install 'memsearch[onnx]' — or with pip: pip install 'memsearch[onnx]'. See https://github.com/jdormit/opencode-memsearch for details.";
91
171
  async function runMemsearch(args, collectionName) {
92
172
  const cmd = memsearchCmd;
93
173
  if (!cmd)
@@ -109,17 +189,108 @@ var memsearchPlugin = async ({ client, $, directory }) => {
109
189
  return "";
110
190
  }
111
191
  }
112
- async function configureLocalEmbedding() {
113
- const cmd = memsearchCmd;
114
- if (!cmd)
115
- return;
116
- const provider = await getMemsearchConfig("embedding.provider");
117
- if (provider !== "local") {
118
- try {
119
- await $`${[...cmd, "config", "set", "embedding.provider", "local"]}`.quiet();
120
- } catch {}
192
+ async function startDaemon(memsearchDir, memoryDir, collectionName) {
193
+ const socketPath = getDaemonSocketPath(memsearchDir);
194
+ const pidPath = getDaemonPidPath(memsearchDir);
195
+ if (await isDaemonAlive(socketPath)) {
196
+ return true;
197
+ }
198
+ await stopDaemon(memsearchDir);
199
+ const pythonPath = await detectMemsearchPython();
200
+ if (!pythonPath)
201
+ return false;
202
+ const daemonScript = getDaemonScriptPath();
203
+ try {
204
+ await access(daemonScript);
205
+ } catch {
206
+ return false;
207
+ }
208
+ try {
209
+ const logPath = join(memsearchDir, "daemon.log");
210
+ const proc = Bun.spawn([
211
+ pythonPath,
212
+ daemonScript,
213
+ "--socket",
214
+ socketPath,
215
+ "--collection",
216
+ collectionName,
217
+ "--paths",
218
+ memoryDir,
219
+ "--pid-file",
220
+ pidPath
221
+ ], {
222
+ stdout: Bun.file(logPath),
223
+ stderr: Bun.file(logPath),
224
+ stdin: "ignore"
225
+ });
226
+ const startTime = Date.now();
227
+ const maxWaitMs = 60000;
228
+ while (Date.now() - startTime < maxWaitMs) {
229
+ await new Promise((r) => setTimeout(r, 500));
230
+ if (await isDaemonAlive(socketPath)) {
231
+ return true;
232
+ }
233
+ }
234
+ return false;
235
+ } catch {
236
+ return false;
121
237
  }
122
238
  }
239
+ async function stopDaemon(memsearchDir) {
240
+ const socketPath = getDaemonSocketPath(memsearchDir);
241
+ const pidPath = getDaemonPidPath(memsearchDir);
242
+ try {
243
+ await daemonRequest(socketPath, { cmd: "shutdown" }, 3000);
244
+ } catch {}
245
+ try {
246
+ const pidStr = await readFile(pidPath, "utf-8");
247
+ const pid = parseInt(pidStr.trim(), 10);
248
+ if (pid) {
249
+ try {
250
+ process.kill(pid);
251
+ } catch {}
252
+ }
253
+ } catch {}
254
+ try {
255
+ await unlink(socketPath);
256
+ } catch {}
257
+ try {
258
+ await unlink(pidPath);
259
+ } catch {}
260
+ }
261
+ async function daemonSearch(memsearchDir, collectionName, query, topK) {
262
+ const socketPath = getDaemonSocketPath(memsearchDir);
263
+ const resp = await daemonRequest(socketPath, {
264
+ cmd: "search",
265
+ query,
266
+ top_k: topK
267
+ });
268
+ if (resp?.ok && Array.isArray(resp.results)) {
269
+ return JSON.stringify(resp.results, null, 2);
270
+ }
271
+ return runMemsearch(["search", query, "--top-k", String(topK), "--json-output"], collectionName);
272
+ }
273
+ async function daemonExpand(memsearchDir, collectionName, chunkHash) {
274
+ const socketPath = getDaemonSocketPath(memsearchDir);
275
+ const resp = await daemonRequest(socketPath, {
276
+ cmd: "expand",
277
+ chunk_hash: chunkHash
278
+ });
279
+ if (resp?.ok && resp.result) {
280
+ return JSON.stringify(resp.result, null, 2);
281
+ }
282
+ return runMemsearch(["expand", chunkHash, "--json-output"], collectionName);
283
+ }
284
+ async function daemonIndex(memsearchDir, collectionName, memoryDir) {
285
+ const socketPath = getDaemonSocketPath(memsearchDir);
286
+ const resp = await daemonRequest(socketPath, {
287
+ cmd: "index",
288
+ paths: [memoryDir]
289
+ });
290
+ if (resp?.ok)
291
+ return;
292
+ runMemsearch(["index", memoryDir], collectionName);
293
+ }
123
294
  async function stopWatch(memsearchDir) {
124
295
  const pidFile = join(memsearchDir, ".watch.pid");
125
296
  try {
@@ -315,9 +486,7 @@ ${tail}
315
486
  await ensureMemsearch();
316
487
  const pluginConfig = await loadConfig(directory);
317
488
  const summarizationModel = getSummarizationModel(pluginConfig);
318
- if (memsearchCmd && shouldAutoConfigureEmbedding(pluginConfig)) {
319
- await configureLocalEmbedding();
320
- }
489
+ const useDaemon = shouldUseDaemon(pluginConfig);
321
490
  return {
322
491
  event: async ({ event }) => {
323
492
  if (event.type === "session.created") {
@@ -336,15 +505,32 @@ ${tail}
336
505
  sessions.set(sessionID, {
337
506
  directory: sessionDir,
338
507
  memoryDir,
508
+ memsearchDir,
339
509
  collectionName,
340
510
  isSummarizing: false,
341
511
  lastSummarizedMessageCount: 0,
342
- headingWritten: false
512
+ headingWritten: false,
513
+ daemonReady: false
343
514
  });
344
- await startWatch(memoryDir, memsearchDir, collectionName);
345
- const milvusUri = await getMemsearchConfig("milvus.uri");
346
- if (!milvusUri.startsWith("http") && !milvusUri.startsWith("tcp")) {
347
- runMemsearch(["index", memoryDir], collectionName);
515
+ if (useDaemon && memsearchCmd) {
516
+ startDaemon(memsearchDir, memoryDir, collectionName).then((ready) => {
517
+ const state = sessions.get(sessionID);
518
+ if (state) {
519
+ state.daemonReady = ready;
520
+ }
521
+ });
522
+ }
523
+ if (!useDaemon) {
524
+ await startWatch(memoryDir, memsearchDir, collectionName);
525
+ }
526
+ if (!useDaemon) {
527
+ const milvusUri = await getMemsearchConfig("milvus.uri");
528
+ if (!milvusUri.startsWith("http") && !milvusUri.startsWith("tcp")) {
529
+ runMemsearch(["index", memoryDir], collectionName);
530
+ }
531
+ } else {
532
+ const socketPath = getDaemonSocketPath(memsearchDir);
533
+ daemonRequest(socketPath, { cmd: "index", paths: [memoryDir] }, 30000).catch(() => {});
348
534
  }
349
535
  const coldStart = await getRecentMemory(memoryDir);
350
536
  if (coldStart) {
@@ -383,10 +569,12 @@ The above is recent memory context from past sessions. Use the memsearch_search
383
569
  state = {
384
570
  directory: sessionDir,
385
571
  memoryDir,
572
+ memsearchDir,
386
573
  collectionName,
387
574
  isSummarizing: false,
388
575
  lastSummarizedMessageCount: 0,
389
- headingWritten: false
576
+ headingWritten: false,
577
+ daemonReady: false
390
578
  };
391
579
  sessions.set(sessionID, state);
392
580
  } catch {
@@ -451,7 +639,11 @@ ${summary}
451
639
  `;
452
640
  await appendFile(memoryFile, entry);
453
641
  state.lastSummarizedMessageCount = messages.length;
454
- runMemsearch(["index", state.memoryDir], state.collectionName);
642
+ if (useDaemon) {
643
+ daemonIndex(state.memsearchDir, state.collectionName, state.memoryDir);
644
+ } else {
645
+ runMemsearch(["index", state.memoryDir], state.collectionName);
646
+ }
455
647
  } catch {} finally {
456
648
  state.isSummarizing = false;
457
649
  }
@@ -470,11 +662,28 @@ ${summary}
470
662
  return MEMSEARCH_NOT_FOUND_ERROR;
471
663
  }
472
664
  const collectionName = deriveCollectionName(context.directory);
665
+ const memsearchDir = join(context.directory, ".memsearch");
666
+ const topK = args.top_k ?? 5;
667
+ if (useDaemon) {
668
+ const raw2 = await daemonSearch(memsearchDir, collectionName, args.query, topK);
669
+ if (!raw2.trim()) {
670
+ return "No results found.";
671
+ }
672
+ try {
673
+ const results = JSON.parse(raw2);
674
+ if (!Array.isArray(results) || results.length === 0) {
675
+ return "No results found.";
676
+ }
677
+ return JSON.stringify(results, null, 2);
678
+ } catch {
679
+ return raw2;
680
+ }
681
+ }
473
682
  const raw = await runMemsearch([
474
683
  "search",
475
684
  args.query,
476
685
  "--top-k",
477
- String(args.top_k ?? 5),
686
+ String(topK),
478
687
  "--json-output"
479
688
  ], collectionName);
480
689
  if (!raw.trim()) {
@@ -502,7 +711,13 @@ ${summary}
502
711
  return MEMSEARCH_NOT_FOUND_ERROR;
503
712
  }
504
713
  const collectionName = deriveCollectionName(context.directory);
505
- const raw = await runMemsearch(["expand", args.chunk_hash, "--json-output"], collectionName);
714
+ const memsearchDir = join(context.directory, ".memsearch");
715
+ let raw;
716
+ if (useDaemon) {
717
+ raw = await daemonExpand(memsearchDir, collectionName, args.chunk_hash);
718
+ } else {
719
+ raw = await runMemsearch(["expand", args.chunk_hash, "--json-output"], collectionName);
720
+ }
506
721
  if (!raw.trim()) {
507
722
  return "Chunk not found.";
508
723
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "opencode-memsearch",
3
- "version": "0.2.0",
3
+ "version": "0.3.0",
4
4
  "description": "Persistent cross-session memory for OpenCode, powered by memsearch",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
@@ -11,6 +11,9 @@
11
11
  "import": "./dist/index.js"
12
12
  }
13
13
  },
14
+ "bin": {
15
+ "opencode-memsearch-seed": "scripts/seed-memories.ts"
16
+ },
14
17
  "files": [
15
18
  "dist",
16
19
  "scripts"
@@ -44,7 +47,7 @@
44
47
  },
45
48
  "devDependencies": {
46
49
  "@opencode-ai/plugin": "^1.3.3",
47
- "@types/bun": "latest",
50
+ "@types/bun": "^1.3.11",
48
51
  "typescript": "^5.8.0"
49
52
  }
50
53
  }
@@ -0,0 +1,267 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ memsearch-daemon — long-running process that keeps the embedding model loaded.
4
+
5
+ Serves search/index/expand requests over a Unix domain socket, avoiding the
6
+ ~8-11s Python+PyTorch cold-start penalty on every CLI invocation.
7
+
8
+ Protocol:
9
+ Client sends a JSON object (terminated by EOF/shutdown), daemon replies
10
+ with a JSON object.
11
+
12
+ Commands:
13
+ {"cmd": "search", "query": "...", "top_k": 5, "source_prefix": null}
14
+ {"cmd": "index", "paths": ["..."], "force": false}
15
+ {"cmd": "expand", "chunk_hash": "..."}
16
+ {"cmd": "ping"}
17
+ {"cmd": "shutdown"}
18
+
19
+ Started by the opencode-memsearch plugin on session creation.
20
+ """
21
+
22
+ from __future__ import annotations
23
+
24
+ import asyncio
25
+ import json
26
+ import logging
27
+ import os
28
+ import re
29
+ import signal
30
+ import sys
31
+ from pathlib import Path
32
+
33
+ logging.basicConfig(
34
+ level=logging.INFO,
35
+ format="%(asctime)s [memsearch-daemon] %(message)s",
36
+ datefmt="%H:%M:%S",
37
+ )
38
+ logger = logging.getLogger("memsearch-daemon")
39
+
40
+
41
+ def _extract_section(
42
+ all_lines: list[str],
43
+ start_line: int,
44
+ heading_level: int,
45
+ ) -> tuple[str, int, int]:
46
+ """Extract the full section containing the chunk (mirrors CLI logic)."""
47
+ section_start = start_line - 1
48
+ if heading_level > 0:
49
+ for i in range(start_line - 2, -1, -1):
50
+ line = all_lines[i]
51
+ if line.startswith("#"):
52
+ level = len(line) - len(line.lstrip("#"))
53
+ if level <= heading_level:
54
+ section_start = i
55
+ break
56
+
57
+ section_end = len(all_lines)
58
+ if heading_level > 0:
59
+ for i in range(start_line, len(all_lines)):
60
+ line = all_lines[i]
61
+ if line.startswith("#"):
62
+ level = len(line) - len(line.lstrip("#"))
63
+ if level <= heading_level:
64
+ section_end = i
65
+ break
66
+
67
+ content = "\n".join(all_lines[section_start:section_end])
68
+ return content, section_start + 1, section_end
69
+
70
+
71
+ async def handle_search(ms, params: dict) -> dict:
72
+ query = params.get("query", "")
73
+ top_k = params.get("top_k", 5)
74
+ source_prefix = params.get("source_prefix")
75
+ results = await ms.search(query, top_k=top_k, source_prefix=source_prefix)
76
+ return {"ok": True, "results": results}
77
+
78
+
79
+ async def handle_index(ms, params: dict) -> dict:
80
+ paths = params.get("paths", [])
81
+ force = params.get("force", False)
82
+ if paths:
83
+ ms._paths = [str(p) for p in paths]
84
+ count = await ms.index(force=force)
85
+ return {"ok": True, "indexed": count}
86
+
87
+
88
+ async def handle_expand(ms, params: dict) -> dict:
89
+ """Expand a chunk — mirrors the CLI expand command's --json-output."""
90
+ chunk_hash = params.get("chunk_hash", "")
91
+ if not chunk_hash:
92
+ return {"ok": False, "error": "chunk_hash is required"}
93
+
94
+ escaped = chunk_hash.replace("\\", "\\\\").replace('"', '\\"')
95
+ chunks = ms.store.query(filter_expr=f'chunk_hash == "{escaped}"')
96
+ if not chunks:
97
+ return {"ok": False, "error": f"Chunk not found: {chunk_hash}"}
98
+
99
+ chunk = chunks[0]
100
+ source = chunk["source"]
101
+ start_line = chunk["start_line"]
102
+ heading_level = chunk.get("heading_level", 0)
103
+ heading = chunk.get("heading", "")
104
+
105
+ source_path = Path(source)
106
+ if not source_path.exists():
107
+ # Fall back to just returning the stored content
108
+ return {
109
+ "ok": True,
110
+ "result": {
111
+ "chunk_hash": chunk_hash,
112
+ "source": source,
113
+ "heading": heading,
114
+ "start_line": start_line,
115
+ "end_line": chunk["end_line"],
116
+ "content": chunk.get("content", ""),
117
+ },
118
+ }
119
+
120
+ all_lines = source_path.read_text(encoding="utf-8").splitlines()
121
+ expanded, expanded_start, expanded_end = _extract_section(
122
+ all_lines, start_line, heading_level
123
+ )
124
+
125
+ anchor_match = re.search(
126
+ r"<!--\s*session:(\S+)\s+turn:(\S+)\s+transcript:(\S+)\s*-->",
127
+ expanded,
128
+ )
129
+ result: dict = {
130
+ "chunk_hash": chunk_hash,
131
+ "source": source,
132
+ "heading": heading,
133
+ "start_line": expanded_start,
134
+ "end_line": expanded_end,
135
+ "content": expanded,
136
+ }
137
+ if anchor_match:
138
+ result["anchor"] = {
139
+ "session": anchor_match.group(1),
140
+ "turn": anchor_match.group(2),
141
+ "transcript": anchor_match.group(3),
142
+ }
143
+
144
+ return {"ok": True, "result": result}
145
+
146
+
147
+ async def handle_client(reader, writer, ms, shutdown_event):
148
+ try:
149
+ data = await reader.read(1_048_576) # 1MB max request
150
+ if not data:
151
+ return
152
+
153
+ request = json.loads(data.decode("utf-8"))
154
+ cmd = request.get("cmd")
155
+
156
+ if cmd == "search":
157
+ response = await handle_search(ms, request)
158
+ elif cmd == "index":
159
+ response = await handle_index(ms, request)
160
+ elif cmd == "expand":
161
+ response = await handle_expand(ms, request)
162
+ elif cmd == "ping":
163
+ response = {"ok": True, "msg": "pong"}
164
+ elif cmd == "shutdown":
165
+ response = {"ok": True, "msg": "shutting down"}
166
+ shutdown_event.set()
167
+ else:
168
+ response = {"ok": False, "error": f"Unknown command: {cmd}"}
169
+
170
+ writer.write(json.dumps(response, default=str).encode("utf-8"))
171
+ await writer.drain()
172
+ except Exception as e:
173
+ try:
174
+ writer.write(json.dumps({"ok": False, "error": str(e)}).encode("utf-8"))
175
+ await writer.drain()
176
+ except Exception:
177
+ pass
178
+ finally:
179
+ writer.close()
180
+ try:
181
+ await writer.wait_closed()
182
+ except Exception:
183
+ pass
184
+
185
+
186
+ async def main():
187
+ import argparse
188
+
189
+ parser = argparse.ArgumentParser(description="memsearch daemon")
190
+ parser.add_argument("--socket", required=True, help="Unix socket path")
191
+ parser.add_argument("--collection", default=None, help="Milvus collection name")
192
+ parser.add_argument("--paths", nargs="*", default=[], help="Paths to index")
193
+ parser.add_argument("--pid-file", default=None, help="Write PID to this file")
194
+ args = parser.parse_args()
195
+
196
+ socket_path = args.socket
197
+
198
+ # Import memsearch (this is the slow part — loads PyTorch + model)
199
+ from memsearch.config import resolve_config
200
+ from memsearch.core import MemSearch
201
+
202
+ cfg = resolve_config()
203
+
204
+ kwargs: dict = {
205
+ "embedding_provider": cfg.embedding.provider,
206
+ "embedding_model": cfg.embedding.model or None,
207
+ "embedding_batch_size": cfg.embedding.batch_size,
208
+ "embedding_base_url": cfg.embedding.base_url or None,
209
+ "embedding_api_key": cfg.embedding.api_key or None,
210
+ "milvus_uri": cfg.milvus.uri,
211
+ "milvus_token": cfg.milvus.token or None,
212
+ "collection": args.collection or cfg.milvus.collection,
213
+ "max_chunk_size": cfg.chunking.max_chunk_size,
214
+ "overlap_lines": cfg.chunking.overlap_lines,
215
+ "reranker_model": cfg.reranker.model,
216
+ }
217
+
218
+ logger.info("Loading embedding model...")
219
+ ms = MemSearch(args.paths or None, **kwargs)
220
+ logger.info("Model loaded.")
221
+
222
+ # Clean up stale socket
223
+ if os.path.exists(socket_path):
224
+ os.unlink(socket_path)
225
+
226
+ # Ensure parent directory exists
227
+ Path(socket_path).parent.mkdir(parents=True, exist_ok=True)
228
+
229
+ shutdown_event = asyncio.Event()
230
+
231
+ server = await asyncio.start_unix_server(
232
+ lambda r, w: handle_client(r, w, ms, shutdown_event),
233
+ path=socket_path,
234
+ )
235
+
236
+ # Write PID file
237
+ if args.pid_file:
238
+ Path(args.pid_file).parent.mkdir(parents=True, exist_ok=True)
239
+ Path(args.pid_file).write_text(str(os.getpid()))
240
+
241
+ logger.info("Listening on %s (PID %d)", socket_path, os.getpid())
242
+
243
+ # Handle SIGTERM gracefully
244
+ loop = asyncio.get_running_loop()
245
+ for sig in (signal.SIGTERM, signal.SIGINT):
246
+ loop.add_signal_handler(sig, shutdown_event.set)
247
+
248
+ # Wait for shutdown signal
249
+ await shutdown_event.wait()
250
+ logger.info("Shutting down...")
251
+
252
+ server.close()
253
+ await server.wait_closed()
254
+ ms.close()
255
+
256
+ # Cleanup
257
+ if os.path.exists(socket_path):
258
+ os.unlink(socket_path)
259
+ if args.pid_file and os.path.exists(args.pid_file):
260
+ os.unlink(args.pid_file)
261
+
262
+
263
+ if __name__ == "__main__":
264
+ try:
265
+ asyncio.run(main())
266
+ except KeyboardInterrupt:
267
+ pass
package/scripts/seed-memories.ts CHANGED
@@ -3,7 +3,9 @@
3
3
  * seed-memories.ts — Seed memsearch memory files from recent OpenCode sessions.
4
4
  *
5
5
  * Usage:
6
- * bun run ~/.config/opencode/scripts/seed-memories.ts [--days 14]
6
+ * npx opencode-memsearch-seed [--days 14]
7
+ *
8
+ * Requires Bun (https://bun.sh/) to run.
7
9
  *
8
10
  * This script:
9
11
  * 1. Reads session + message data directly from the OpenCode SQLite database
@@ -25,8 +27,8 @@ import { $ } from "bun"
25
27
  interface PluginConfig {
26
28
  /** Model ID used for summarization (e.g. "anthropic/claude-haiku-4-5") */
27
29
  summarization_model?: string
28
- /** Whether to auto-configure memsearch to use local embeddings (default: true) */
29
- auto_configure_embedding?: boolean
30
+ /** Whether to use the daemon for faster search/index (default: true) */
31
+ use_daemon?: boolean
30
32
  }
31
33
 
32
34
  const DEFAULT_SUMMARIZATION_MODEL = "anthropic/claude-haiku-4-5"
@@ -313,11 +315,9 @@ async function detectMemsearch(): Promise<string[]> {
313
315
  await $`which memsearch`.quiet()
314
316
  return ["memsearch"]
315
317
  } catch {}
316
- try {
317
- await $`which uvx`.quiet()
318
- return ["uvx", "--from", "memsearch[local]", "memsearch"]
319
- } catch {}
320
- throw new Error("memsearch not found. Install it with: pip install 'memsearch[local]' or install uv")
318
+ throw new Error(
319
+ "memsearch is not installed. Install it by running: uv tool install 'memsearch[onnx]' — or with pip: pip install 'memsearch[onnx]'. See https://github.com/jdormit/opencode-memsearch for details."
320
+ )
321
321
  }
322
322
 
323
323
  // Summarize a transcript via `opencode run`