opencode-memsearch 0.1.0 → 0.2.0

package/README.md CHANGED
@@ -57,7 +57,7 @@ OpenCode will install the npm package automatically on startup.
  When the agent finishes responding (session goes idle), the plugin:

  1. Extracts the last conversation turn (user message + agent response)
- 2. Summarizes it into 2-6 bullet points using Claude Haiku via `opencode run`
+ 2. Summarizes it into 2-6 bullet points using an LLM (Claude Haiku by default, [configurable](#summarization-model)) via `opencode run`
  3. Appends the summary to `.memsearch/memory/YYYY-MM-DD.md`
  4. Re-indexes the memory directory into the vector database

@@ -102,23 +102,134 @@ bun run scripts/seed-memories.ts
  bun run scripts/seed-memories.ts --days 30
  ```

- The script reads directly from the OpenCode SQLite database, summarizes each conversation turn with Claude Haiku, and writes the results to `.memsearch/memory/`.
+ The script reads directly from the OpenCode SQLite database, summarizes each conversation turn, and writes the results to `.memsearch/memory/`. The seed script respects the same [configuration](#configuration) as the plugin (config file and environment variables).

  ## Configuration

- The plugin auto-configures memsearch to use local embeddings. If you want to use a remote Milvus instance instead of the default local database, configure it via the memsearch CLI:
+ The plugin can be configured via a JSON config file and/or environment variables. Environment variables take precedence over config file values, and project-level config takes precedence over global config.
+
+ ### Config file
+
+ The plugin looks for config in two locations (highest precedence first):
+
+ 1. **Project config**: `.memsearch/config.json` in your project root
+ 2. **Global config**: `~/.config/opencode/memsearch.config.json`
+
+ Both files use the same schema. Values from the project config override the global config.
+
+ **Example:**
+
+ ```json
+ {
+   "summarization_model": "anthropic/claude-sonnet-4-5",
+   "auto_configure_embedding": true
+ }
+ ```
+
+ All fields are optional. The full schema:
+
+ | Field | Type | Default | Description |
+ |-------|------|---------|-------------|
+ | `summarization_model` | `string` | `"anthropic/claude-haiku-4-5"` | The OpenCode model ID used to summarize conversation turns |
+ | `auto_configure_embedding` | `boolean` | `true` | Whether the plugin auto-configures memsearch to use local embeddings on startup |
+
+ ### Summarization model
+
+ Each conversation turn is summarized by an LLM before being stored. By default, the plugin uses `anthropic/claude-haiku-4-5` — a fast, cheap model that produces good summaries.
+
+ To use a different model, set it in your config file:
+
+ ```json
+ {
+   "summarization_model": "anthropic/claude-sonnet-4-5"
+ }
+ ```
+
+ Or override it with an environment variable:
+
+ ```bash
+ export MEMSEARCH_SUMMARIZATION_MODEL="openai/gpt-4.1-mini"
+ ```
+
+ The model must be available in your OpenCode configuration (i.e., you must have the provider configured and authenticated). Any model ID that works with `opencode run --model <id>` will work here.
+
+ ### Milvus storage
+
+ The plugin uses [Milvus](https://milvus.io/) (via memsearch) as its vector database. There are two modes:
+
+ #### Local mode (default)
+
+ By default, memsearch uses **Milvus Lite**, which stores data in a local `.db` file (typically `~/.memsearch/milvus.db`). This requires no server setup — it just works.
+
+ In local mode, the plugin re-indexes the memory directory on session start (to pick up any memories written since the last session) and again after each new summary is appended. File locking prevents concurrent access issues, so no background watcher is needed.
+
+ #### Remote mode
+
+ For concurrent access from multiple sessions or machines, you can point memsearch at a remote Milvus server:

  ```bash
  memsearch config set milvus.uri http://localhost:19530
  ```

- In remote mode, the plugin starts a file watcher process that automatically re-indexes memory files when they change.
+ In remote mode, the plugin starts a **file watcher** process that continuously re-indexes memory files whenever they change. The watcher runs as a background process with its PID stored in `.memsearch/.watch.pid`.
+
+ To switch back to local mode:
+
+ ```bash
+ memsearch config set milvus.uri "~/.memsearch/milvus.db"
+ ```
+
+ ### Embedding provider
+
+ By default, the plugin auto-configures memsearch to use **local embeddings** (`embedding.provider = local`). This is important because memsearch's own default is `openai`, which would require an API key and make network requests for every index and search operation.
+
+ With local embeddings, the `all-MiniLM-L6-v2` model runs on your machine — no API calls needed for vector search.
+
+ To manage the embedding provider yourself (e.g., to use OpenAI embeddings or a custom endpoint), disable auto-configuration:
+
+ ```json
+ {
+   "auto_configure_embedding": false
+ }
+ ```
+
+ Or via environment variable:
+
+ ```bash
+ export MEMSEARCH_AUTO_CONFIGURE_EMBEDDING=false
+ ```
+
+ Then configure memsearch directly:

- ## Environment variables
+ ```bash
+ # Example: use OpenAI embeddings
+ memsearch config set embedding.provider openai
+ memsearch config set embedding.api_key "env:OPENAI_API_KEY"
+
+ # Example: use a custom OpenAI-compatible endpoint
+ memsearch config set embedding.provider openai
+ memsearch config set embedding.base_url http://localhost:11434/v1
+ memsearch config set embedding.model nomic-embed-text
+ ```
+
+ See the [memsearch documentation](https://github.com/nicobako/memsearch) for all available embedding options.
+
+ ### Environment variables

  | Variable | Description |
  |----------|-------------|
- | `MEMSEARCH_DISABLE` | Set to any value to disable the plugin (used internally to prevent recursion during summarization) |
+ | `MEMSEARCH_SUMMARIZATION_MODEL` | Override the model used for summarization (takes precedence over config file) |
+ | `MEMSEARCH_AUTO_CONFIGURE_EMBEDDING` | Set to `false` or `0` to disable automatic local embedding configuration |
+ | `MEMSEARCH_DISABLE` | Set to any value to disable the plugin entirely (used internally to prevent recursion during summarization) |
+
+ ### Precedence
+
+ Configuration values are resolved in this order (highest precedence first):
+
+ 1. Environment variables
+ 2. Project config (`.memsearch/config.json`)
+ 3. Global config (`~/.config/opencode/memsearch.config.json`)
+ 4. Built-in defaults

  ## License
 
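The precedence rules described in the README section above can be sketched as a small standalone function. This is an illustrative re-implementation, not the plugin's actual code: the name `resolveModel` and its explicit `env` parameter are chosen here for demonstration, but the merge order (env var, then project config spread over global config, then the built-in default) mirrors what the diff below shows.

```javascript
// Sketch of config precedence: env var > project config > global config > default.
// The two config objects stand in for the parsed JSON files.
const DEFAULT_SUMMARIZATION_MODEL = "anthropic/claude-haiku-4-5";

function resolveModel(env, globalConfig, projectConfig) {
  // Later spreads win, so project values override global values.
  const merged = { ...globalConfig, ...projectConfig };
  return (
    env.MEMSEARCH_SUMMARIZATION_MODEL ||
    merged.summarization_model ||
    DEFAULT_SUMMARIZATION_MODEL
  );
}

// Env var wins over both config files:
console.log(resolveModel(
  { MEMSEARCH_SUMMARIZATION_MODEL: "openai/gpt-4.1-mini" },
  { summarization_model: "anthropic/claude-sonnet-4-5" },
  {}
)); // → "openai/gpt-4.1-mini"

// Project config wins over global config:
console.log(resolveModel(
  {},
  { summarization_model: "anthropic/claude-sonnet-4-5" },
  { summarization_model: "anthropic/claude-haiku-4-5" }
)); // → "anthropic/claude-haiku-4-5"

// Nothing set: built-in default:
console.log(resolveModel({}, {}, {})); // → "anthropic/claude-haiku-4-5"
```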
package/dist/index.js CHANGED
@@ -3,7 +3,33 @@ import { tool } from "@opencode-ai/plugin";
  import { createHash } from "crypto";
  import { readdir, readFile, appendFile, mkdir, writeFile, unlink } from "fs/promises";
  import { join, basename, resolve } from "path";
- import { tmpdir } from "os";
+ import { tmpdir, homedir } from "os";
+ var DEFAULT_SUMMARIZATION_MODEL = "anthropic/claude-haiku-4-5";
+ var GLOBAL_CONFIG_PATH = join(homedir(), ".config", "opencode", "memsearch.config.json");
+ async function loadJsonConfig(path) {
+   try {
+     const content = await readFile(path, "utf-8");
+     return JSON.parse(content);
+   } catch {
+     return {};
+   }
+ }
+ async function loadConfig(projectDir) {
+   const projectPath = join(projectDir, ".memsearch", "config.json");
+   const globalConfig = await loadJsonConfig(GLOBAL_CONFIG_PATH);
+   const projectConfig = await loadJsonConfig(projectPath);
+   return { ...globalConfig, ...projectConfig };
+ }
+ function getSummarizationModel(config) {
+   return process.env.MEMSEARCH_SUMMARIZATION_MODEL || config.summarization_model || DEFAULT_SUMMARIZATION_MODEL;
+ }
+ function shouldAutoConfigureEmbedding(config) {
+   const envVal = process.env.MEMSEARCH_AUTO_CONFIGURE_EMBEDDING;
+   if (envVal !== undefined) {
+     return envVal !== "0" && envVal.toLowerCase() !== "false";
+   }
+   return config.auto_configure_embedding !== false;
+ }
  function deriveCollectionName(directory) {
    const abs = resolve(directory);
    const sanitized = basename(abs).toLowerCase().replace(/[^a-z0-9]/g, "_").replace(/_+/g, "_").replace(/^_|_$/g, "").slice(0, 40);
@@ -37,7 +63,6 @@ Rules:
  - Do NOT continue the conversation after the bullet points
  - Do NOT ask follow-up questions
  - STOP immediately after the last bullet point`;
- var HAIKU_MODEL = "anthropic/claude-haiku-4-5";
  var TEMP_DIR = join(tmpdir(), "memsearch-plugin");
  var memsearchPlugin = async ({ client, $, directory }) => {
    if (process.env.MEMSEARCH_DISABLE) {
@@ -202,12 +227,12 @@ ${tail}
    return lines.join(`
 `);
  }
- async function summarizeTranscript(transcript, sessionID, turnIdx) {
+ async function summarizeTranscript(transcript, sessionID, turnIdx, model) {
    const tempFile = join(TEMP_DIR, `turn-${sessionID}-${turnIdx}.txt`);
    await mkdir(TEMP_DIR, { recursive: true });
    await writeFile(tempFile, transcript);
    try {
-     const rawOutput = await $`opencode run -f ${tempFile} --model ${HAIKU_MODEL} --format json ${SUMMARIZE_PROMPT}`.env({ ...process.env, MEMSEARCH_DISABLE: "1" }).nothrow().quiet().text();
+     const rawOutput = await $`opencode run -f ${tempFile} --model ${model} --format json ${SUMMARIZE_PROMPT}`.env({ ...process.env, MEMSEARCH_DISABLE: "1" }).nothrow().quiet().text();
      let summarizationSessionID;
      const textParts = [];
      for (const line of rawOutput.split(`
@@ -288,7 +313,9 @@ ${tail}
 `);
    }
    await ensureMemsearch();
-   if (memsearchCmd) {
+   const pluginConfig = await loadConfig(directory);
+   const summarizationModel = getSummarizationModel(pluginConfig);
+   if (memsearchCmd && shouldAutoConfigureEmbedding(pluginConfig)) {
      await configureLocalEmbedding();
    }
    return {
@@ -398,7 +425,7 @@ The above is recent memory context from past sessions. Use the memsearch_search
      return;
    let summary;
    try {
-     summary = await summarizeTranscript(transcript, sessionID, state.lastSummarizedMessageCount);
+     summary = await summarizeTranscript(transcript, sessionID, state.lastSummarizedMessageCount, summarizationModel);
    } catch {
      summary = "";
    }
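The boolean parsing for `MEMSEARCH_AUTO_CONFIGURE_EMBEDDING` added in this file can be exercised standalone. The sketch below mirrors the `shouldAutoConfigureEmbedding` logic from the hunk above, with the environment passed in explicitly (an assumption made here so the behavior can be demonstrated without touching `process.env`):

```javascript
// Mirrors the plugin's parsing: only "0" or a case-insensitive "false"
// disable auto-configuration; any other set value enables it; an unset
// variable defers to the config file, which defaults to enabled.
function shouldAutoConfigureEmbedding(env, config) {
  const envVal = env.MEMSEARCH_AUTO_CONFIGURE_EMBEDDING;
  if (envVal !== undefined) {
    return envVal !== "0" && envVal.toLowerCase() !== "false";
  }
  return config.auto_configure_embedding !== false;
}

console.log(shouldAutoConfigureEmbedding({}, {}));                                              // → true (default)
console.log(shouldAutoConfigureEmbedding({ MEMSEARCH_AUTO_CONFIGURE_EMBEDDING: "0" }, {}));     // → false
console.log(shouldAutoConfigureEmbedding({ MEMSEARCH_AUTO_CONFIGURE_EMBEDDING: "FALSE" }, {})); // → false
console.log(shouldAutoConfigureEmbedding({}, { auto_configure_embedding: false }));             // → false
// The env var takes precedence over the config file:
console.log(shouldAutoConfigureEmbedding(
  { MEMSEARCH_AUTO_CONFIGURE_EMBEDDING: "yes" },
  { auto_configure_embedding: false }
)); // → true
```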
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "opencode-memsearch",
-   "version": "0.1.0",
+   "version": "0.2.0",
    "description": "Persistent cross-session memory for OpenCode, powered by memsearch",
    "type": "module",
    "main": "dist/index.js",
package/scripts/seed-memories.ts CHANGED
@@ -8,18 +8,54 @@
  * This script:
  * 1. Reads session + message data directly from the OpenCode SQLite database
  * 2. For each session, formats each conversation turn as a transcript
- * 3. Summarizes each turn via `opencode run` with claude-haiku (one process per turn)
+ * 3. Summarizes each turn via `opencode run` (model is configurable, see README)
  * 4. Writes summaries to .memsearch/memory/YYYY-MM-DD.md files per project
  * 5. Indexes all memory files with memsearch
  */

  import { Database } from "bun:sqlite"
  import { createHash } from "crypto"
- import { appendFile, mkdir, writeFile, unlink } from "fs/promises"
+ import { appendFile, mkdir, readFile, writeFile, unlink } from "fs/promises"
  import { join, basename, resolve } from "path"
  import { homedir, tmpdir } from "os"
  import { $ } from "bun"

+ // --- Configuration ---
+
+ interface PluginConfig {
+   /** Model ID used for summarization (e.g. "anthropic/claude-haiku-4-5") */
+   summarization_model?: string
+   /** Whether to auto-configure memsearch to use local embeddings (default: true) */
+   auto_configure_embedding?: boolean
+ }
+
+ const DEFAULT_SUMMARIZATION_MODEL = "anthropic/claude-haiku-4-5"
+ const GLOBAL_CONFIG_PATH = join(homedir(), ".config", "opencode", "memsearch.config.json")
+
+ async function loadJsonConfig(path: string): Promise<Partial<PluginConfig>> {
+   try {
+     const content = await readFile(path, "utf-8")
+     return JSON.parse(content)
+   } catch {
+     return {}
+   }
+ }
+
+ async function loadConfig(projectDir: string): Promise<PluginConfig> {
+   const projectPath = join(projectDir, ".memsearch", "config.json")
+   const globalConfig = await loadJsonConfig(GLOBAL_CONFIG_PATH)
+   const projectConfig = await loadJsonConfig(projectPath)
+   return { ...globalConfig, ...projectConfig }
+ }
+
+ function getSummarizationModel(config: PluginConfig): string {
+   return (
+     process.env.MEMSEARCH_SUMMARIZATION_MODEL ||
+     config.summarization_model ||
+     DEFAULT_SUMMARIZATION_MODEL
+   )
+ }
+
  // --- Config ---

  const SUMMARIZE_PROMPT = `You are a third-person note-taker. The attached file contains a transcript of ONE conversation turn between a human and an AI coding agent. Tool calls are labeled [Tool Call] and their results [Tool Result] or [Tool Error].
@@ -39,8 +75,6 @@ Rules:
  - Do NOT ask follow-up questions
  - STOP immediately after the last bullet point`

- const HAIKU_MODEL = "anthropic/claude-haiku-4-5"
-
  const DB_PATH = join(homedir(), ".local", "share", "opencode", "opencode.db")
  const TEMP_DIR = join(tmpdir(), "memsearch-seed")
@@ -287,14 +321,14 @@ async function detectMemsearch(): Promise<string[]> {
  }

  // Summarize a transcript via `opencode run`
- async function summarizeWithOpencode(transcript: string, tempFile: string): Promise<string> {
+ async function summarizeWithOpencode(transcript: string, tempFile: string, model: string): Promise<string> {
    // Write transcript to temp file
    await writeFile(tempFile, transcript)

    try {
      // Disable all plugins during summarization to avoid memsearch plugin
      // interfering with the LLM output (e.g. injecting "[memsearch] Memory available")
-     const rawOutput = await $`opencode run -f ${tempFile} --model ${HAIKU_MODEL} ${SUMMARIZE_PROMPT}`
+     const rawOutput = await $`opencode run -f ${tempFile} --model ${model} ${SUMMARIZE_PROMPT}`
        .env({ ...process.env, OPENCODE_CONFIG_CONTENT: JSON.stringify({ plugin: [] }) })
        .nothrow()
        .quiet()
@@ -331,6 +365,11 @@ async function main() {
    const memsearchCmd = await detectMemsearch()
    console.log(`Using memsearch: ${memsearchCmd.join(" ")}`)

+   // Load config from cwd (the project root the seed script is run from)
+   const config = await loadConfig(process.cwd())
+   const summarizationModel = getSummarizationModel(config)
+   console.log(`Summarization model: ${summarizationModel}`)
+
    await mkdir(TEMP_DIR, { recursive: true })

    // Open database (read-only)
@@ -426,7 +465,7 @@ async function main() {
    const tempFile = join(TEMP_DIR, `turn-${sessionNum}-${turnIdx}.txt`)
    let summary = ""
    try {
-     summary = await summarizeWithOpencode(transcript, tempFile)
+     summary = await summarizeWithOpencode(transcript, tempFile, summarizationModel)
      if (summary) totalSummarized++
    } catch {
      // LLM failed