lancedb-opencode-pro 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,60 +10,23 @@ LanceDB-backed long-term memory provider for OpenCode.
 
 ## Install
 
-For normal use, install from npm into a fixed local plugin directory:
+### Primary (Recommended): npm package name
 
-```bash
-mkdir -p ~/.config/opencode/plugins/lancedb-opencode-pro
-npm install --prefix ~/.config/opencode/plugins/lancedb-opencode-pro lancedb-opencode-pro
-```
-
-Register the plugin as a `file://` path in `~/.config/opencode/opencode.json`:
+1. Register the published package name in `~/.config/opencode/opencode.json`:
 
 ```json
 {
   "$schema": "https://opencode.ai/config.json",
   "plugin": [
     "oh-my-opencode",
-    "file:///home/<user>/.config/opencode/plugins/lancedb-opencode-pro/node_modules/lancedb-opencode-pro/dist/index.js"
+    "lancedb-opencode-pro"
   ]
 }
 ```
 
-If you already use other plugins, keep them and append this `file://` entry.
-
-## Fallback Install From Local `.tgz`
-
-Use this flow when you want to install an unpublished build or test a release candidate on another machine.
-
-1. On the build host, build and pack:
-
-```bash
-npm ci
-npm run typecheck
-npm run build
-npm pack
-```
-
-This generates a file like:
+If you already use other plugins, keep them and append `"lancedb-opencode-pro"`.
 
-```text
-lancedb-opencode-pro-0.1.1.tgz
-```
-
-2. Copy the `.tgz` to the target host (example):
-
-```bash
-scp lancedb-opencode-pro-0.1.1.tgz <user>@<target-host>:/tmp/
-```
-
-3. On the target host, install into a fixed local plugin directory:
-
-```bash
-mkdir -p ~/.config/opencode/plugins/lancedb-opencode-pro
-npm install --prefix ~/.config/opencode/plugins/lancedb-opencode-pro /tmp/lancedb-opencode-pro-0.1.1.tgz
-```
-
-4. For OpenCode `1.2.27+`, create the sidecar config file `~/.config/opencode/lancedb-opencode-pro.json`:
+2. For OpenCode `1.2.27+`, create the sidecar config file `~/.config/opencode/lancedb-opencode-pro.json`:
 
 ```json
 {
@@ -86,14 +49,14 @@ npm install --prefix ~/.config/opencode/plugins/lancedb-opencode-pro /tmp/lanced
 }
 ```
 
-5. Set `embedding.baseUrl` to the Ollama endpoint that is reachable from that host.
+3. Set `embedding.baseUrl` to the Ollama endpoint that is reachable from that host.
 
 - Same machine as OpenCode: `http://127.0.0.1:11434`
 - Another machine on the network: for example `http://192.168.11.206:11434`
 
 You do not need `LANCEDB_OPENCODE_PRO_OLLAMA_BASE_URL` if the sidecar file already contains the correct `embedding.baseUrl`. Use the environment variable only when you want to override the file at runtime.
 
-6. Make sure Ollama is reachable from that host before starting OpenCode:
+4. Make sure Ollama is reachable from that host, then start or restart OpenCode:
 
 ```bash
 curl http://127.0.0.1:11434/api/tags
@@ -105,26 +68,57 @@ or, for a remote Ollama server:
 curl http://192.168.11.206:11434/api/tags
 ```
 
-7. Verify plugin file path and start/restart OpenCode:
+After the first successful memory operation, LanceDB files should appear under:
+
+```text
+~/.opencode/memory/lancedb
+```
+
+You can verify that the directory exists:
 
 ```bash
-ls -la ~/.config/opencode/plugins/lancedb-opencode-pro/node_modules/lancedb-opencode-pro/dist/index.js
+ls -la ~/.opencode/memory/lancedb
 ```
 
-Then start or restart OpenCode, and verify memory store initialization.
+## Fallback Install From Release `.tgz`
 
-After the first successful memory operation, LanceDB files should appear under:
+Use this only when installing from the npm registry is not possible (for example, a restricted network, offline staging, or a registry outage).
 
-```text
-~/.opencode/memory/lancedb
+1. Download the latest published release asset:
+
+```bash
+curl -fL "https://github.com/tryweb/lancedb-opencode-pro/releases/latest/download/lancedb-opencode-pro.tgz" -o /tmp/lancedb-opencode-pro.tgz
 ```
 
-You can also verify that the directory exists:
+2. Install into the fixed local plugin directory:
 
 ```bash
-ls -la ~/.opencode/memory/lancedb
+mkdir -p ~/.config/opencode/plugins/lancedb-opencode-pro
+npm install --prefix ~/.config/opencode/plugins/lancedb-opencode-pro /tmp/lancedb-opencode-pro.tgz
 ```
 
+3. Register the plugin as a `file://` path in `~/.config/opencode/opencode.json`:
+
+```json
+{
+  "$schema": "https://opencode.ai/config.json",
+  "plugin": [
+    "oh-my-opencode",
+    "file:///home/<user>/.config/opencode/plugins/lancedb-opencode-pro/node_modules/lancedb-opencode-pro/dist/index.js"
+  ]
+}
+```
+
+4. Reuse the same sidecar config from the primary install flow, then start/restart OpenCode.
+
+5. Verify the plugin file path:
+
+```bash
+ls -la ~/.config/opencode/plugins/lancedb-opencode-pro/node_modules/lancedb-opencode-pro/dist/index.js
+```
+
+Then verify memory store initialization.
+
 ### When to use environment variables
 
 Environment variables are optional. The recommended default is:
@@ -140,6 +134,26 @@ export LANCEDB_OPENCODE_PRO_OLLAMA_BASE_URL="http://192.168.11.206:11434"
 
 This override has higher priority than the sidecar file.
 
+### Secondary Fallback: Build From Source And Pack Locally
+
+Use this when you need an unpublished local build (for example, testing unreleased commits).
+
+```bash
+npm ci
+npm run typecheck
+npm run build
+npm pack
+```
+
+Then install the generated tarball:
+
+```bash
+mkdir -p ~/.config/opencode/plugins/lancedb-opencode-pro
+npm install --prefix ~/.config/opencode/plugins/lancedb-opencode-pro ./lancedb-opencode-pro-<version>.tgz
+```
+
+Use the same `file://` plugin registration shown in the fallback section above.
+
 ## OpenCode Config
 
 Use a sidecar config file. This is the supported configuration model for current OpenCode versions.
@@ -189,9 +203,14 @@ Supported environment variables:
 
 - `LANCEDB_OPENCODE_PRO_CONFIG_PATH`
 - `LANCEDB_OPENCODE_PRO_PROVIDER`
+- `LANCEDB_OPENCODE_PRO_EMBEDDING_PROVIDER` (`ollama` or `openai`, default `ollama`)
 - `LANCEDB_OPENCODE_PRO_DB_PATH`
 - `LANCEDB_OPENCODE_PRO_EMBEDDING_MODEL`
 - `LANCEDB_OPENCODE_PRO_OLLAMA_BASE_URL`
+- `LANCEDB_OPENCODE_PRO_OPENAI_API_KEY`
+- `LANCEDB_OPENCODE_PRO_OPENAI_MODEL`
+- `LANCEDB_OPENCODE_PRO_OPENAI_BASE_URL`
+- `LANCEDB_OPENCODE_PRO_OPENAI_TIMEOUT_MS`
 - `LANCEDB_OPENCODE_PRO_EMBEDDING_TIMEOUT_MS`
 - `LANCEDB_OPENCODE_PRO_RETRIEVAL_MODE`
 - `LANCEDB_OPENCODE_PRO_VECTOR_WEIGHT`
@@ -211,8 +230,119 @@ Supported environment variables:
 - `memory_delete`
 - `memory_clear`
 - `memory_stats`
+- `memory_feedback_missing`
+- `memory_feedback_wrong`
+- `memory_feedback_useful`
+- `memory_effectiveness`
 - `memory_port_plan`
 
+## Memory Effectiveness Feedback
+
+The provider can now record structured feedback about long-memory quality in addition to storing and recalling memories.
+
+- `memory_feedback_missing`: report information that should have been stored but was missed
+- `memory_feedback_wrong`: report a stored memory that should not have been kept
+- `memory_feedback_useful`: report whether a recalled memory was helpful
+- `memory_effectiveness`: return machine-readable capture, recall, and feedback metrics for the active scope
+
+Use `memory_search` or recalled memory ids from injected context when you need to reference a specific memory entry in feedback.
+
+### Viewing Metrics
+
+Use `memory_effectiveness` to inspect machine-readable effectiveness data for the active scope.
+
+```text
+memory_effectiveness
+```
+
+Example output:
+
+```json
+{
+  "scope": "project:my-project",
+  "totalEvents": 12,
+  "capture": {
+    "considered": 4,
+    "stored": 3,
+    "skipped": 1,
+    "successRate": 0.75,
+    "skipReasons": {
+      "below-min-chars": 1
+    }
+  },
+  "recall": {
+    "requested": 3,
+    "injected": 2,
+    "returnedResults": 2,
+    "hitRate": 0.67,
+    "injectionRate": 0.67
+  },
+  "feedback": {
+    "missing": 1,
+    "wrong": 0,
+    "useful": {
+      "positive": 2,
+      "negative": 0,
+      "helpfulRate": 1
+    },
+    "falsePositiveRate": 0,
+    "falseNegativeRate": 0.25
+  }
+}
+```
+
+Key fields:
+
+- `capture.successRate`: how often a considered candidate was stored.
+- `recall.hitRate`: how often a recall request returned at least one result.
+- `feedback.falsePositiveRate`: wrong-memory reports divided by stored memories.
+- `feedback.falseNegativeRate`: missing-memory reports relative to capture attempts.
+
+## OpenAI Embedding Configuration
+
+Default behavior stays on Ollama. To use OpenAI embeddings, set `embedding.provider` to `openai` and provide an API key and model.
+
+Example sidecar:
+
+```json
+{
+  "provider": "lancedb-opencode-pro",
+  "dbPath": "~/.opencode/memory/lancedb",
+  "embedding": {
+    "provider": "openai",
+    "model": "text-embedding-3-small",
+    "baseUrl": "https://api.openai.com/v1",
+    "apiKey": "sk-your-openai-key"
+  },
+  "retrieval": {
+    "mode": "hybrid",
+    "vectorWeight": 0.7,
+    "bm25Weight": 0.3,
+    "minScore": 0.2
+  },
+  "includeGlobalScope": true,
+  "minCaptureChars": 80,
+  "maxEntriesPerScope": 3000
+}
+```
+
+Recommended env overrides for OpenAI:
+
+```bash
+export LANCEDB_OPENCODE_PRO_EMBEDDING_PROVIDER="openai"
+export LANCEDB_OPENCODE_PRO_OPENAI_API_KEY="$OPENAI_API_KEY"
+export LANCEDB_OPENCODE_PRO_OPENAI_MODEL="text-embedding-3-small"
+export LANCEDB_OPENCODE_PRO_OPENAI_BASE_URL="https://api.openai.com/v1"
+```
+
+`lancedb-opencode-pro.json` is parsed as plain JSON, so `${...}` interpolation is not performed. Prefer environment variables for secrets.
+
+Validation behavior:
+
+- If `embedding.provider=openai` and the API key is missing, initialization fails with an explicit configuration error.
+- If `embedding.provider=openai` and the model is missing, initialization fails with an explicit configuration error.
+- Ollama remains the default provider when `embedding.provider` is omitted.
+
 ## Compose Port Planning (Cross-Project)
 
 Use `memory_port_plan` before writing `docker-compose.yml` to avoid host port collisions across projects on the same machine.
@@ -298,9 +428,10 @@ The project provides layered validation workflows that can run locally or inside
 |---|---|
 | `npm run test:foundation` | Write-read persistence, scope isolation, vector compatibility, timestamp ordering |
 | `npm run test:regression` | Auto-capture extraction, search output shape, delete/clear safety, pruning |
+| `npm run test:effectiveness` | Foundation + regression workflows covering effectiveness events, feedback commands, and summary output |
 | `npm run test:retrieval` | Recall@K and Robustness-δ@K against synthetic fixtures |
 | `npm run benchmark:latency` | Search p50/p99, insert avg, list avg with hard-gate enforcement |
-| `npm run verify` | Typecheck + build + foundation + regression + retrieval (quick release check) |
+| `npm run verify` | Typecheck + build + effectiveness workflow + retrieval (quick release check) |
 | `npm run verify:full` | All of the above + benchmark + `npm pack` (full release gate) |
 
 Threshold policy and benchmark profiles are documented in `docs/benchmark-thresholds.md`.
@@ -497,5 +628,5 @@ Treat the feature as verified only when all of these are true:
 ## Notes
 
 - Default storage path: `~/.opencode/memory/lancedb`
-- Embedding backend in v1: `ollama`
+- Embedding provider defaults to `ollama`; `openai` is supported via `embedding.provider=openai`
 - The provider keeps schema metadata (`schemaVersion`, `embeddingModel`, `vectorDim`) to guard against unsafe vector mixing.
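The sample `memory_effectiveness` output and the key-field definitions above can be cross-checked directly; a small sketch recomputing the sample rates from the raw counts (the two-decimal rounding is an assumption inferred from the sample values):

```javascript
// Recompute the sample metrics from the raw counts shown in the README diff above.
const capture = { considered: 4, stored: 3 };
const recall = { requested: 3, returnedResults: 2 };
const feedback = { missing: 1, wrong: 0 };

const round2 = (x) => Math.round(x * 100) / 100;

const successRate = capture.stored / capture.considered;           // 3 / 4
const hitRate = round2(recall.returnedResults / recall.requested); // 2 / 3, rounded
const falsePositiveRate = feedback.wrong / capture.stored;         // 0 / 3
const falseNegativeRate = feedback.missing / capture.considered;   // 1 / 4

console.log(successRate, hitRate, falsePositiveRate, falseNegativeRate);
// → 0.75 0.67 0 0.25, matching the example JSON
```

This confirms the example output is internally consistent with the stated rate definitions.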
package/dist/config.js CHANGED
@@ -3,6 +3,7 @@ import { join } from "node:path";
 import { clamp, expandHomePath, parseJsonObject, toBoolean, toNumber } from "./utils.js";
 const DEFAULT_DB_PATH = "~/.opencode/memory/lancedb";
 const DEFAULT_OLLAMA_BASE_URL = "http://127.0.0.1:11434";
+const DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1";
 const SIDECAR_FILE = "lancedb-opencode-pro.json";
 export function resolveMemoryConfig(config, worktree) {
     const legacyRaw = (config?.memory ?? {});
@@ -19,14 +20,29 @@ export function resolveMemoryConfig(config, worktree) {
     const weightSum = vectorWeight + bm25Weight;
     const normalizedVectorWeight = weightSum > 0 ? vectorWeight / weightSum : 0.7;
     const normalizedBm25Weight = weightSum > 0 ? bm25Weight / weightSum : 0.3;
-    return {
+    const embeddingProvider = resolveEmbeddingProvider(firstString(process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_PROVIDER, embeddingRaw.provider));
+    const embeddingModel = embeddingProvider === "openai"
+        ? firstString(process.env.LANCEDB_OPENCODE_PRO_OPENAI_MODEL, process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_MODEL, embeddingRaw.model)
+        : firstString(process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_MODEL, embeddingRaw.model) ?? "nomic-embed-text";
+    const embeddingBaseUrl = embeddingProvider === "openai"
+        ? firstString(process.env.LANCEDB_OPENCODE_PRO_OPENAI_BASE_URL, embeddingRaw.baseUrl) ?? DEFAULT_OPENAI_BASE_URL
+        : firstString(process.env.LANCEDB_OPENCODE_PRO_OLLAMA_BASE_URL, embeddingRaw.baseUrl) ?? DEFAULT_OLLAMA_BASE_URL;
+    const embeddingApiKey = embeddingProvider === "openai"
+        ? firstString(process.env.LANCEDB_OPENCODE_PRO_OPENAI_API_KEY, embeddingRaw.apiKey)
+        : undefined;
+    const timeoutEnv = embeddingProvider === "openai"
+        ? process.env.LANCEDB_OPENCODE_PRO_OPENAI_TIMEOUT_MS ?? process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_TIMEOUT_MS
+        : process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_TIMEOUT_MS;
+    const timeoutRaw = timeoutEnv ?? embeddingRaw.timeoutMs;
+    const resolvedConfig = {
         provider,
         dbPath,
         embedding: {
-            provider: "ollama",
-            model: firstString(process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_MODEL, embeddingRaw.model) ?? "nomic-embed-text",
-            baseUrl: firstString(process.env.LANCEDB_OPENCODE_PRO_OLLAMA_BASE_URL, embeddingRaw.baseUrl) ?? DEFAULT_OLLAMA_BASE_URL,
-            timeoutMs: Math.max(500, Math.floor(toNumber(process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_TIMEOUT_MS ?? embeddingRaw.timeoutMs, 6000))),
+            provider: embeddingProvider,
+            model: embeddingModel ?? "",
+            baseUrl: embeddingBaseUrl,
+            apiKey: embeddingApiKey,
+            timeoutMs: Math.max(500, Math.floor(toNumber(timeoutRaw, 6000))),
         },
         retrieval: {
             mode,
@@ -38,6 +54,25 @@ export function resolveMemoryConfig(config, worktree) {
         minCaptureChars: Math.max(30, Math.floor(toNumber(process.env.LANCEDB_OPENCODE_PRO_MIN_CAPTURE_CHARS ?? raw.minCaptureChars, 80))),
         maxEntriesPerScope: Math.max(50, Math.floor(toNumber(process.env.LANCEDB_OPENCODE_PRO_MAX_ENTRIES_PER_SCOPE ?? raw.maxEntriesPerScope, 3000))),
     };
+    validateEmbeddingConfig(resolvedConfig.embedding);
+    return resolvedConfig;
+}
+function resolveEmbeddingProvider(raw) {
+    if (!raw || raw === "ollama")
+        return "ollama";
+    if (raw === "openai")
+        return "openai";
+    throw new Error(`[lancedb-opencode-pro] Invalid embedding provider "${raw}". Expected "ollama" or "openai".`);
+}
+function validateEmbeddingConfig(embedding) {
+    if (embedding.provider !== "openai")
+        return;
+    if (!embedding.apiKey) {
+        throw new Error("[lancedb-opencode-pro] OpenAI embedding provider requires apiKey. Set embedding.apiKey or LANCEDB_OPENCODE_PRO_OPENAI_API_KEY.");
+    }
+    if (!embedding.model) {
+        throw new Error("[lancedb-opencode-pro] OpenAI embedding provider requires model. Set embedding.model or LANCEDB_OPENCODE_PRO_OPENAI_MODEL.");
+    }
 }
 function loadSidecarConfig(worktree) {
     const configPath = firstString(process.env.LANCEDB_OPENCODE_PRO_CONFIG_PATH);
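The hunk above routes every embedding setting through an env-over-sidecar precedence chain and then validates it. A minimal sketch of that precedence, reusing the `resolveEmbeddingProvider` logic shown above (the `firstString` helper here is a simplified stand-in for the package's own, which is not shown in this diff):

```javascript
// Mirrors resolveEmbeddingProvider from dist/config.js above.
function resolveEmbeddingProvider(raw) {
    if (!raw || raw === "ollama")
        return "ollama";
    if (raw === "openai")
        return "openai";
    throw new Error(`Invalid embedding provider "${raw}". Expected "ollama" or "openai".`);
}

// Simplified stand-in: first non-empty string wins, so env vars beat sidecar fields.
const firstString = (...values) =>
    values.find((v) => typeof v === "string" && v.length > 0);

const sidecar = { embedding: { provider: "openai" } };
delete process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_PROVIDER; // no env override set

const provider = resolveEmbeddingProvider(
    firstString(process.env.LANCEDB_OPENCODE_PRO_EMBEDDING_PROVIDER, sidecar.embedding.provider)
);
console.log(provider); // "openai"
```

With the env var exported, it would win over the sidecar value, matching the `firstString(env, sidecar)` ordering in `resolveMemoryConfig`.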
package/dist/embedder.d.ts CHANGED
@@ -12,3 +12,12 @@ export declare class OllamaEmbedder implements Embedder {
     embed(text: string): Promise<number[]>;
     dim(): Promise<number>;
 }
+export declare class OpenAIEmbedder implements Embedder {
+    private readonly config;
+    readonly model: string;
+    private cachedDim;
+    constructor(config: EmbeddingConfig);
+    embed(text: string): Promise<number[]>;
+    dim(): Promise<number>;
+}
+export declare function createEmbedder(config: EmbeddingConfig): Embedder;
package/dist/embedder.js CHANGED
@@ -3,6 +3,9 @@ const KNOWN_MODEL_DIMS = {
     "mxbai-embed-large": 1024,
     "all-minilm": 384,
     "snowflake-arctic-embed": 1024,
+    "text-embedding-3-small": 1536,
+    "text-embedding-3-large": 3072,
+    "text-embedding-ada-002": 1536,
 };
 function fallbackDim(model) {
     const normalized = model.toLowerCase().replace(/:.*$/, "");
@@ -70,3 +73,76 @@ export class OllamaEmbedder {
         }
     }
 }
+export class OpenAIEmbedder {
+    config;
+    model;
+    cachedDim = null;
+    constructor(config) {
+        this.config = config;
+        this.model = config.model;
+    }
+    async embed(text) {
+        if (!this.config.apiKey) {
+            throw new Error("OpenAI embedding request failed: missing apiKey. Set embedding.apiKey or LANCEDB_OPENCODE_PRO_OPENAI_API_KEY.");
+        }
+        const baseUrl = (this.config.baseUrl ?? "https://api.openai.com/v1").replace(/\/+$/, "");
+        const endpoint = `${baseUrl}/embeddings`;
+        const controller = new AbortController();
+        const timeout = setTimeout(() => controller.abort(), this.config.timeoutMs ?? 6000);
+        try {
+            const response = await fetch(endpoint, {
+                method: "POST",
+                headers: {
+                    "content-type": "application/json",
+                    authorization: `Bearer ${this.config.apiKey}`,
+                },
+                body: JSON.stringify({
+                    model: this.config.model,
+                    input: text,
+                    encoding_format: "float",
+                }),
+                signal: controller.signal,
+            });
+            if (!response.ok) {
+                const details = await response.text().catch(() => "");
+                const suffix = details ? ` - ${details.slice(0, 240)}` : "";
+                throw new Error(`OpenAI embedding request failed: HTTP ${response.status}${suffix}`);
+            }
+            const data = (await response.json());
+            const vector = data.data?.[0]?.embedding;
+            if (!Array.isArray(vector) || vector.length === 0) {
+                throw new Error("OpenAI embedding response missing embedding vector");
+            }
+            if (this.cachedDim === null) {
+                this.cachedDim = vector.length;
+            }
+            return vector;
+        }
+        finally {
+            clearTimeout(timeout);
+        }
+    }
+    async dim() {
+        if (this.cachedDim !== null)
+            return this.cachedDim;
+        try {
+            const probe = await this.embed("dimension probe");
+            this.cachedDim = probe.length;
+            return this.cachedDim;
+        }
+        catch {
+            const fb = fallbackDim(this.model);
+            if (fb !== null) {
+                console.warn(`[lancedb-opencode-pro] OpenAI embedding probe failed, using fallback dim ${fb} for model "${this.model}"`);
+                return fb;
+            }
+            throw new Error(`OpenAI embedding probe failed and no known fallback dimension for model "${this.model}"`);
+        }
+    }
+}
+export function createEmbedder(config) {
+    if (config.provider === "openai") {
+        return new OpenAIEmbedder(config);
+    }
+    return new OllamaEmbedder(config);
+}
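`OpenAIEmbedder.embed()` above guards `fetch` with an `AbortController` armed by `setTimeout` and always clears the timer in `finally`. The same pattern in isolation, as a generic helper that is not part of the package (assumes Node 18+ with a global `fetch`):

```javascript
// Generic fetch-with-timeout helper following the AbortController pattern
// used by OpenAIEmbedder.embed() above (hypothetical standalone version).
async function fetchWithTimeout(url, options, timeoutMs) {
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), timeoutMs);
    try {
        // Aborting the signal rejects the in-flight request once timeoutMs elapses.
        return await fetch(url, { ...options, signal: controller.signal });
    }
    finally {
        clearTimeout(timeout); // always clear, whether the request resolved or threw
    }
}
```

The `finally` block is the important part: without `clearTimeout`, a fast response would leave a live timer that later aborts an already-settled controller.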
package/dist/extract.d.ts CHANGED
@@ -1,2 +1,2 @@
-import type { CaptureCandidate } from "./types.js";
-export declare function extractCaptureCandidate(text: string, minChars: number): CaptureCandidate | null;
+import type { CaptureCandidateResult } from "./types.js";
+export declare function extractCaptureCandidate(text: string, minChars: number): CaptureCandidateResult;
package/dist/extract.js CHANGED
@@ -14,18 +14,21 @@ const FACT_SIGNALS = ["because", "root cause", "原因", "由於"];
 const PREF_SIGNALS = ["prefer", "preference", "偏好", "習慣"];
 export function extractCaptureCandidate(text, minChars) {
     const normalized = text.trim();
-    if (normalized.length < minChars)
-        return null;
+    if (normalized.length < minChars) {
+        return { candidate: null, skipReason: "below-min-chars" };
+    }
     const lower = normalized.toLowerCase();
     if (!POSITIVE_SIGNALS.some((signal) => lower.includes(signal.toLowerCase()))) {
-        return null;
+        return { candidate: null, skipReason: "no-positive-signal" };
     }
     const category = classifyCategory(lower);
     const importance = category === "decision" ? 0.9 : category === "fact" ? 0.75 : 0.65;
     return {
-        text: clipText(normalized, 1200),
-        category,
-        importance,
+        candidate: {
+            text: clipText(normalized, 1200),
+            category,
+            importance,
+        },
     };
 }
 function classifyCategory(text) {
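With this change the extractor always returns a result object that names the skip reason instead of a bare `null`. A condensed, self-contained sketch of the new control flow (the signal list is trimmed to a small sample and the category is fixed; the real lists and `classifyCategory`/`clipText` appear in the hunks above):

```javascript
// Condensed sketch of the updated extractCaptureCandidate: callers now learn
// *why* a capture was skipped, which feeds the capture.skipReasons metrics.
const POSITIVE_SIGNALS = ["decide", "because", "prefer"]; // trimmed sample list

function extractCaptureCandidate(text, minChars) {
    const normalized = text.trim();
    if (normalized.length < minChars) {
        return { candidate: null, skipReason: "below-min-chars" };
    }
    const lower = normalized.toLowerCase();
    if (!POSITIVE_SIGNALS.some((signal) => lower.includes(signal))) {
        return { candidate: null, skipReason: "no-positive-signal" };
    }
    // Simplified: the real code classifies the category and clips to 1200 chars.
    return { candidate: { text: normalized, category: "fact", importance: 0.75 } };
}
```

The `CaptureCandidateResult` shape lets `flushAutoCapture` (further below) record a "skipped" effectiveness event with the exact reason rather than silently dropping the text.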
package/dist/index.d.ts CHANGED
@@ -1,4 +1,4 @@
 import type { Plugin } from "@opencode-ai/plugin";
 declare const plugin: Plugin;
 export default plugin;
-export type { MemoryRuntimeConfig, MemoryRecord, SearchResult } from "./types.js";
+export type { EffectivenessSummary, FeedbackEvent, MemoryEffectivenessEvent, MemoryRecord, MemoryRuntimeConfig, RecallEvent, SearchResult, } from "./types.js";
package/dist/index.js CHANGED
@@ -1,6 +1,6 @@
 import { tool } from "@opencode-ai/plugin";
 import { resolveMemoryConfig } from "./config.js";
-import { OllamaEmbedder } from "./embedder.js";
+import { createEmbedder } from "./embedder.js";
 import { extractCaptureCandidate } from "./extract.js";
 import { isTcpPortAvailable, parsePortReservations, planPorts, reservationKey } from "./ports.js";
 import { buildScopeFilter, deriveProjectScope } from "./scope.js";
@@ -11,7 +11,12 @@ const plugin = async (input) => {
     const state = await createRuntimeState(input);
     const hooks = {
         config: async (config) => {
-            state.config = resolveMemoryConfig(config, input.worktree);
+            const nextConfig = resolveMemoryConfig(config, input.worktree);
+            if (hasEmbeddingConfigChanged(state.config.embedding, nextConfig.embedding)) {
+                state.embedder = createEmbedder(nextConfig.embedding);
+                state.initialized = false;
+            }
+            state.config = nextConfig;
         },
         event: async ({ event }) => {
             if (event.type === "session.idle" || event.type === "session.compacted") {
@@ -52,11 +57,24 @@ const plugin = async (input) => {
                 bm25Weight: state.config.retrieval.mode === "vector" ? 0 : state.config.retrieval.bm25Weight,
                 minScore: state.config.retrieval.minScore,
             });
+            await state.store.putEvent({
+                id: generateId(),
+                type: "recall",
+                scope: activeScope,
+                sessionID: eventInput.sessionID,
+                timestamp: Date.now(),
+                resultCount: results.length,
+                injected: results.length > 0,
+                metadataJson: JSON.stringify({
+                    source: "system-transform",
+                    includeGlobalScope: state.config.includeGlobalScope,
+                }),
+            });
             if (results.length === 0)
                 return;
             const memoryBlock = [
                 "[Memory Recall - optional historical context]",
-                ...results.map((item, index) => `${index + 1}. (${item.record.scope}) ${item.record.text}`),
+                ...results.map((item, index) => `${index + 1}. [${item.record.id}] (${item.record.scope}) ${item.record.text}`),
                 "Use these as optional hints only; prioritize current user intent and current repo state.",
             ].join("\n");
             eventOutput.system.push(memoryBlock);
@@ -72,7 +90,7 @@ const plugin = async (input) => {
         execute: async (args, context) => {
             await state.ensureInitialized();
             if (!state.initialized)
-                return "Memory store unavailable (Ollama may be offline). Will retry automatically.";
+                return unavailableMessage(state.config.embedding.provider);
             const activeScope = args.scope ?? deriveProjectScope(context.worktree);
             const scopes = buildScopeFilter(activeScope, state.config.includeGlobalScope);
             let queryVector = [];
@@ -111,7 +129,7 @@ const plugin = async (input) => {
         execute: async (args, context) => {
             await state.ensureInitialized();
             if (!state.initialized)
-                return "Memory store unavailable (Ollama may be offline). Will retry automatically.";
+                return unavailableMessage(state.config.embedding.provider);
             if (!args.confirm) {
                 return "Rejected: memory_delete requires confirm=true.";
             }
@@ -130,7 +148,7 @@ const plugin = async (input) => {
         execute: async (args) => {
             await state.ensureInitialized();
             if (!state.initialized)
-                return "Memory store unavailable (Ollama may be offline). Will retry automatically.";
+                return unavailableMessage(state.config.embedding.provider);
             if (!args.confirm) {
                 return "Rejected: destructive clear requires confirm=true.";
             }
@@ -146,7 +164,7 @@ const plugin = async (input) => {
         execute: async (args, context) => {
             await state.ensureInitialized();
             if (!state.initialized)
-                return "Memory store unavailable (Ollama may be offline). Will retry automatically.";
+                return unavailableMessage(state.config.embedding.provider);
             const scope = args.scope ?? deriveProjectScope(context.worktree);
             const entries = await state.store.list(scope, 20);
             const incompatibleVectors = await state.store.countIncompatibleVectors(buildScopeFilter(scope, state.config.includeGlobalScope), await state.embedder.dim());
@@ -162,6 +180,108 @@ const plugin = async (input) => {
             }, null, 2);
         },
     }),
+    memory_feedback_missing: tool({
+        description: "Record feedback for memory that should have been stored",
+        args: {
+            text: tool.schema.string().min(1),
+            labels: tool.schema.array(tool.schema.string().min(1)).default([]),
+            scope: tool.schema.string().optional(),
+        },
+        execute: async (args, context) => {
+            await state.ensureInitialized();
+            if (!state.initialized)
+                return unavailableMessage(state.config.embedding.provider);
+            const scope = args.scope ?? deriveProjectScope(context.worktree);
+            await state.store.putEvent({
+                id: generateId(),
+                type: "feedback",
+                feedbackType: "missing",
+                scope,
+                sessionID: context.sessionID,
+                timestamp: Date.now(),
+                text: args.text,
+                labels: args.labels,
+                metadataJson: JSON.stringify({ source: "memory_feedback_missing" }),
+            });
+            return "Recorded missing-memory feedback.";
+        },
+    }),
+    memory_feedback_wrong: tool({
+        description: "Record feedback for memory that should not be stored",
+        args: {
+            id: tool.schema.string().min(6),
+            reason: tool.schema.string().optional(),
+            scope: tool.schema.string().optional(),
+        },
+        execute: async (args, context) => {
+            await state.ensureInitialized();
+            if (!state.initialized)
+                return unavailableMessage(state.config.embedding.provider);
+            const scope = args.scope ?? deriveProjectScope(context.worktree);
+            const scopes = buildScopeFilter(scope, state.config.includeGlobalScope);
+            const exists = await state.store.hasMemory(args.id, scopes);
+            if (!exists) {
+                return `Memory ${args.id} not found in current scope.`;
+            }
+            await state.store.putEvent({
+                id: generateId(),
+                type: "feedback",
+                feedbackType: "wrong",
+                scope,
+                sessionID: context.sessionID,
+                timestamp: Date.now(),
+                memoryId: args.id,
+                reason: args.reason,
+                metadataJson: JSON.stringify({ source: "memory_feedback_wrong" }),
+            });
+            return `Recorded wrong-memory feedback for ${args.id}.`;
+        },
+    }),
+    memory_feedback_useful: tool({
+        description: "Record whether a recalled memory was helpful",
+        args: {
+            id: tool.schema.string().min(6),
+            helpful: tool.schema.boolean(),
+            scope: tool.schema.string().optional(),
+        },
+        execute: async (args, context) => {
+            await state.ensureInitialized();
+            if (!state.initialized)
+                return unavailableMessage(state.config.embedding.provider);
+            const scope = args.scope ?? deriveProjectScope(context.worktree);
+            const scopes = buildScopeFilter(scope, state.config.includeGlobalScope);
+            const exists = await state.store.hasMemory(args.id, scopes);
+            if (!exists) {
+                return `Memory ${args.id} not found in current scope.`;
+            }
+            await state.store.putEvent({
+                id: generateId(),
+                type: "feedback",
+                feedbackType: "useful",
+                scope,
+                sessionID: context.sessionID,
+                timestamp: Date.now(),
+                memoryId: args.id,
+                helpful: args.helpful,
+                metadataJson: JSON.stringify({ source: "memory_feedback_useful" }),
+            });
+            return `Recorded recall usefulness feedback for ${args.id}.`;
+        },
+    }),
+    memory_effectiveness: tool({
+        description: "Show effectiveness metrics for capture recall and feedback",
+        args: {
+            scope: tool.schema.string().optional(),
+        },
+        execute: async (args, context) => {
+            await state.ensureInitialized();
+            if (!state.initialized)
+                return unavailableMessage(state.config.embedding.provider);
+            const scope = args.scope ?? deriveProjectScope(context.worktree);
+            const summary = await state.store.summarizeEvents(scope, state.config.includeGlobalScope);
+            return JSON.stringify(summary, null, 2);
+        },
+    }),
     memory_port_plan: tool({
         description: "Plan non-conflicting host ports for compose services and optionally persist reservations",
         args: {
@@ -180,7 +300,7 @@ const plugin = async (input) => {
         execute: async (args, context) => {
             await state.ensureInitialized();
             if (!state.initialized)
-                return "Memory store unavailable (Ollama may be offline). Will retry automatically.";
+                return unavailableMessage(state.config.embedding.provider);
             if (args.rangeStart > args.rangeEnd) {
                 return "Invalid range: rangeStart must be <= rangeEnd.";
             }
@@ -261,7 +381,7 @@ const plugin = async (input) => {
 };
 async function createRuntimeState(input) {
     const resolved = resolveMemoryConfig(undefined, input.worktree);
-    const embedder = new OllamaEmbedder(resolved.embedding);
+    const embedder = createEmbedder(resolved.embedding);
     const store = new MemoryStore(resolved.dbPath);
     const state = {
         config: resolved,
@@ -308,36 +428,74 @@ async function getLastUserText(sessionID, client) {
 }
 async function flushAutoCapture(sessionID, state, client) {
     const fragments = state.captureBuffer.get(sessionID) ?? [];
-    if (fragments.length === 0)
+    if (fragments.length === 0) {
+        await recordCaptureEvent(state, {
+            sessionID,
+            scope: state.defaultScope,
+            outcome: "skipped",
+            skipReason: "empty-buffer",
+            text: "",
+        });
         return;
+    }
     state.captureBuffer.delete(sessionID);
     const combined = fragments.join("\n").trim();
-    const candidate = extractCaptureCandidate(combined, state.config.minCaptureChars);
-    if (!candidate)
-        return;
+    const activeScope = await resolveSessionScope(sessionID, client, state.defaultScope);
     await state.ensureInitialized();
-    if (!state.initialized)
+    if (!state.initialized) {
+        return;
+    }
+    await recordCaptureEvent(state, {
+        sessionID,
+        scope: activeScope,
+        outcome: "considered",
+        text: combined,
+    });
+    const result = extractCaptureCandidate(combined, state.config.minCaptureChars);
+    if (!result.candidate) {
+        await recordCaptureEvent(state, {
+            sessionID,
+            scope: activeScope,
+            outcome: "skipped",
+            skipReason: result.skipReason,
+            text: combined,
+        });
         return;
+    }
     let vector = [];
     try {
-        vector = await state.embedder.embed(candidate.text);
+        vector = await state.embedder.embed(result.candidate.text);
     }
     catch (error) {
        console.warn(`[lancedb-opencode-pro] embedding unavailable during auto-capture: ${toErrorMessage(error)}`);
+        await recordCaptureEvent(state, {
+            sessionID,
+            scope: activeScope,
+            outcome: "skipped",
+            skipReason: "embedding-unavailable",
+            text: combined,
+        });
         vector = [];
479
  }
329
480
  if (vector.length === 0) {
330
481
  console.warn("[lancedb-opencode-pro] auto-capture skipped because embedding vector is empty");
482
+ await recordCaptureEvent(state, {
483
+ sessionID,
484
+ scope: activeScope,
485
+ outcome: "skipped",
486
+ skipReason: "empty-embedding",
487
+ text: combined,
488
+ });
331
489
  return;
332
490
  }
333
- const activeScope = await resolveSessionScope(sessionID, client, state.defaultScope);
491
+ const memoryId = generateId();
334
492
  await state.store.put({
335
- id: generateId(),
336
- text: candidate.text,
493
+ id: memoryId,
494
+ text: result.candidate.text,
337
495
  vector,
338
- category: candidate.category,
496
+ category: result.candidate.category,
339
497
  scope: activeScope,
340
- importance: candidate.importance,
498
+ importance: result.candidate.importance,
341
499
  timestamp: Date.now(),
342
500
  schemaVersion: SCHEMA_VERSION,
343
501
  embeddingModel: state.config.embedding.model,
@@ -347,8 +505,31 @@ async function flushAutoCapture(sessionID, state, client) {
347
505
  sessionID,
348
506
  }),
349
507
  });
508
+ await recordCaptureEvent(state, {
509
+ sessionID,
510
+ scope: activeScope,
511
+ outcome: "stored",
512
+ memoryId,
513
+ text: result.candidate.text,
514
+ });
350
515
  await state.store.pruneScope(activeScope, state.config.maxEntriesPerScope);
351
516
  }
517
+ async function recordCaptureEvent(state, input) {
518
+ if (!state.initialized)
519
+ return;
520
+ await state.store.putEvent({
521
+ id: generateId(),
522
+ type: "capture",
523
+ scope: input.scope,
524
+ sessionID: input.sessionID,
525
+ timestamp: Date.now(),
526
+ outcome: input.outcome,
527
+ skipReason: input.skipReason,
528
+ memoryId: input.memoryId,
529
+ text: input.text,
530
+ metadataJson: JSON.stringify({ source: "auto-capture" }),
531
+ });
532
+ }
352
533
  async function resolveSessionScope(sessionID, client, fallback) {
353
534
  try {
354
535
  const response = await client.session.get({ path: { id: sessionID } });
@@ -369,4 +550,14 @@ function unwrapData(value) {
369
550
  }
370
551
  return value;
371
552
  }
553
+ function unavailableMessage(provider) {
554
+ return `Memory store unavailable (${provider} embedding may be offline). Will retry automatically.`;
555
+ }
556
+ function hasEmbeddingConfigChanged(current, next) {
557
+ return (current.provider !== next.provider
558
+ || current.model !== next.model
559
+ || (current.baseUrl ?? "") !== (next.baseUrl ?? "")
560
+ || (current.apiKey ?? "") !== (next.apiKey ?? "")
561
+ || (current.timeoutMs ?? 0) !== (next.timeoutMs ?? 0));
562
+ }
372
563
  export default plugin;
package/dist/store.d.ts CHANGED
@@ -1,14 +1,16 @@
- import type { MemoryRecord, SearchResult } from "./types.js";
+ import type { EffectivenessSummary, MemoryEffectivenessEvent, MemoryRecord, SearchResult } from "./types.js";
  export declare class MemoryStore {
  private readonly dbPath;
  private lancedb;
  private connection;
  private table;
+ private eventTable;
  private indexState;
  private scopeCache;
  constructor(dbPath: string);
  init(vectorDim: number): Promise<void>;
  put(record: MemoryRecord): Promise<void>;
+ putEvent(event: MemoryEffectivenessEvent): Promise<void>;
  search(params: {
  query: string;
  queryVector: number[];
@@ -23,6 +25,9 @@ export declare class MemoryStore {
  list(scope: string, limit: number): Promise<MemoryRecord[]>;
  pruneScope(scope: string, maxEntries: number): Promise<number>;
  countIncompatibleVectors(scopes: string[], expectedDim: number): Promise<number>;
+ hasMemory(id: string, scopes: string[]): Promise<boolean>;
+ listEvents(scopes: string[], limit: number): Promise<MemoryEffectivenessEvent[]>;
+ summarizeEvents(scope: string, includeGlobalScope: boolean): Promise<EffectivenessSummary>;
  getIndexHealth(): {
  vector: boolean;
  fts: boolean;
@@ -31,6 +36,8 @@ export declare class MemoryStore {
  private invalidateScope;
  private getCachedScopes;
  private requireTable;
+ private requireEventTable;
+ private readEventsByScopes;
  private readByScopes;
  private ensureIndexes;
  }
package/dist/store.js CHANGED
@@ -2,11 +2,13 @@ import { mkdir } from "node:fs/promises";
  import { dirname } from "node:path";
  import { tokenize } from "./utils.js";
  const TABLE_NAME = "memories";
+ const EVENTS_TABLE_NAME = "effectiveness_events";
  export class MemoryStore {
  dbPath;
  lancedb = null;
  connection = null;
  table = null;
+ eventTable = null;
  indexState = {
  vector: false,
  fts: false,
@@ -41,6 +43,31 @@ export class MemoryStore {
  this.table = await this.connection.createTable(TABLE_NAME, [bootstrap]);
  await this.table.delete("id = '__bootstrap__'");
  }
+ try {
+ this.eventTable = await this.connection.openTable(EVENTS_TABLE_NAME);
+ }
+ catch {
+ const bootstrapEvent = {
+ id: "__bootstrap__",
+ type: "capture",
+ scope: "global",
+ sessionID: "",
+ timestamp: 0,
+ memoryId: "",
+ text: "",
+ outcome: "considered",
+ skipReason: "",
+ resultCount: 0,
+ injected: false,
+ feedbackType: "",
+ helpful: -1,
+ reason: "",
+ labelsJson: "[]",
+ metadataJson: "{}",
+ };
+ this.eventTable = await this.connection.createTable(EVENTS_TABLE_NAME, [bootstrapEvent]);
+ await this.eventTable.delete("id = '__bootstrap__'");
+ }
  await this.ensureIndexes();
  }
  async put(record) {
@@ -48,6 +75,28 @@
  await table.add([record]);
  this.invalidateScope(record.scope);
  }
+ async putEvent(event) {
+ await this.requireEventTable().add([
+ {
+ id: event.id,
+ type: event.type,
+ scope: event.scope,
+ sessionID: event.sessionID ?? "",
+ timestamp: event.timestamp,
+ memoryId: event.memoryId ?? "",
+ text: event.text ?? "",
+ outcome: event.type === "capture" ? event.outcome : "",
+ skipReason: event.type === "capture" ? event.skipReason ?? "" : "",
+ resultCount: event.type === "recall" ? event.resultCount : 0,
+ injected: event.type === "recall" ? event.injected : false,
+ feedbackType: event.type === "feedback" ? event.feedbackType : "",
+ helpful: event.type === "feedback" ? (event.helpful === undefined ? -1 : event.helpful ? 1 : 0) : -1,
+ reason: event.type === "feedback" ? event.reason ?? "" : "",
+ labelsJson: event.type === "feedback" ? JSON.stringify(event.labels ?? []) : "[]",
+ metadataJson: event.metadataJson,
+ },
+ ]);
+ }
  async search(params) {
  const cached = await this.getCachedScopes(params.scopes);
  if (cached.records.length === 0)
@@ -104,6 +153,93 @@
  const rows = await this.readByScopes(scopes);
  return rows.filter((row) => row.vectorDim !== expectedDim).length;
  }
+ async hasMemory(id, scopes) {
+ const rows = await this.readByScopes(scopes);
+ return rows.some((row) => row.id === id);
+ }
+ async listEvents(scopes, limit) {
+ const rows = await this.readEventsByScopes(scopes);
+ return rows.sort((a, b) => b.timestamp - a.timestamp).slice(0, limit);
+ }
+ async summarizeEvents(scope, includeGlobalScope) {
+ const scopes = includeGlobalScope && scope !== "global" ? [scope, "global"] : [scope];
+ const events = await this.readEventsByScopes(scopes);
+ const captureSkipReasons = {};
+ let captureConsidered = 0;
+ let captureStored = 0;
+ let captureSkipped = 0;
+ let recallRequested = 0;
+ let recallInjected = 0;
+ let recallReturnedResults = 0;
+ let feedbackMissing = 0;
+ let feedbackWrong = 0;
+ let feedbackUsefulPositive = 0;
+ let feedbackUsefulNegative = 0;
+ for (const event of events) {
+ if (event.type === "capture") {
+ if (event.outcome === "considered")
+ captureConsidered += 1;
+ if (event.outcome === "stored")
+ captureStored += 1;
+ if (event.outcome === "skipped") {
+ captureSkipped += 1;
+ if (event.skipReason) {
+ captureSkipReasons[event.skipReason] = (captureSkipReasons[event.skipReason] ?? 0) + 1;
+ }
+ }
+ }
+ if (event.type === "recall") {
+ recallRequested += 1;
+ if (event.resultCount > 0)
+ recallReturnedResults += 1;
+ if (event.injected)
+ recallInjected += 1;
+ }
+ if (event.type === "feedback") {
+ if (event.feedbackType === "missing")
+ feedbackMissing += 1;
+ if (event.feedbackType === "wrong")
+ feedbackWrong += 1;
+ if (event.feedbackType === "useful") {
+ if (event.helpful)
+ feedbackUsefulPositive += 1;
+ else
+ feedbackUsefulNegative += 1;
+ }
+ }
+ }
+ const totalCaptureAttempts = captureStored + captureSkipped;
+ const totalUsefulFeedback = feedbackUsefulPositive + feedbackUsefulNegative;
+ return {
+ scope,
+ totalEvents: events.length,
+ capture: {
+ considered: captureConsidered,
+ stored: captureStored,
+ skipped: captureSkipped,
+ successRate: totalCaptureAttempts === 0 ? 0 : captureStored / totalCaptureAttempts,
+ skipReasons: captureSkipReasons,
+ },
+ recall: {
+ requested: recallRequested,
+ injected: recallInjected,
+ returnedResults: recallReturnedResults,
+ hitRate: recallRequested === 0 ? 0 : recallReturnedResults / recallRequested,
+ injectionRate: recallRequested === 0 ? 0 : recallInjected / recallRequested,
+ },
+ feedback: {
+ missing: feedbackMissing,
+ wrong: feedbackWrong,
+ useful: {
+ positive: feedbackUsefulPositive,
+ negative: feedbackUsefulNegative,
+ helpfulRate: totalUsefulFeedback === 0 ? 0 : feedbackUsefulPositive / totalUsefulFeedback,
+ },
+ falsePositiveRate: captureStored === 0 ? 0 : feedbackWrong / captureStored,
+ falseNegativeRate: totalCaptureAttempts === 0 ? 0 : feedbackMissing / totalCaptureAttempts,
+ },
+ };
+ }
  getIndexHealth() {
  return {
  vector: this.indexState.vector,
@@ -148,6 +284,44 @@
  }
  return this.table;
  }
+ requireEventTable() {
+ if (!this.eventTable) {
+ throw new Error("MemoryStore event table is not initialized");
+ }
+ return this.eventTable;
+ }
+ async readEventsByScopes(scopes) {
+ const table = this.requireEventTable();
+ if (scopes.length === 0)
+ return [];
+ const whereExpr = scopes.map((scope) => `scope = '${escapeSql(scope)}'`).join(" OR ");
+ const rows = await table
+ .query()
+ .where(`(${whereExpr})`)
+ .select([
+ "id",
+ "type",
+ "scope",
+ "sessionID",
+ "timestamp",
+ "memoryId",
+ "text",
+ "outcome",
+ "skipReason",
+ "resultCount",
+ "injected",
+ "feedbackType",
+ "helpful",
+ "reason",
+ "labelsJson",
+ "metadataJson",
+ ])
+ .limit(100000)
+ .toArray();
+ return rows
+ .map((row) => normalizeEventRow(row))
+ .filter((row) => row !== null);
+ }
  async readByScopes(scopes) {
  const table = this.requireTable();
  if (scopes.length === 0)
@@ -222,6 +396,52 @@ function normalizeRow(row) {
  metadataJson: String(row.metadataJson ?? "{}"),
  };
  }
+ function normalizeEventRow(row) {
+ if (typeof row.id !== "string" || typeof row.type !== "string" || typeof row.scope !== "string") {
+ return null;
+ }
+ const base = {
+ id: row.id,
+ scope: row.scope,
+ sessionID: typeof row.sessionID === "string" && row.sessionID.length > 0 ? row.sessionID : undefined,
+ timestamp: Number(row.timestamp ?? Date.now()),
+ memoryId: typeof row.memoryId === "string" && row.memoryId.length > 0 ? row.memoryId : undefined,
+ text: typeof row.text === "string" && row.text.length > 0 ? row.text : undefined,
+ metadataJson: String(row.metadataJson ?? "{}"),
+ };
+ if (row.type === "capture") {
+ return {
+ ...base,
+ type: "capture",
+ outcome: row.outcome === "stored" || row.outcome === "skipped" ? row.outcome : "considered",
+ skipReason: typeof row.skipReason === "string" && row.skipReason.length > 0
+ ? row.skipReason
+ : undefined,
+ };
+ }
+ if (row.type === "recall") {
+ return {
+ ...base,
+ type: "recall",
+ resultCount: Number(row.resultCount ?? 0),
+ injected: Boolean(row.injected),
+ };
+ }
+ if (row.type === "feedback") {
+ const labelsJson = typeof row.labelsJson === "string" ? row.labelsJson : "[]";
+ const labels = JSON.parse(labelsJson);
+ const helpfulValue = Number(row.helpful ?? -1);
+ return {
+ ...base,
+ type: "feedback",
+ feedbackType: row.feedbackType === "missing" || row.feedbackType === "wrong" ? row.feedbackType : "useful",
+ helpful: helpfulValue < 0 ? undefined : helpfulValue === 1,
+ labels: Array.isArray(labels) ? labels.filter((item) => typeof item === "string") : [],
+ reason: typeof row.reason === "string" && row.reason.length > 0 ? row.reason : undefined,
+ };
+ }
+ return null;
+ }
  function escapeSql(value) {
  return value.replace(/'/g, "''");
  }
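The `summarizeEvents` method above derives its rates from raw event counts: `successRate` is stored over stored-plus-skipped captures (the "considered" count is reported but excluded from the denominator), while the recall rates are ratios over requested recalls. A toy sketch of that arithmetic with a hypothetical event list (not data produced by the package):

```javascript
// Hypothetical events mimicking the shape summarizeEvents iterates over.
const events = [
  { type: "capture", outcome: "stored" },
  { type: "capture", outcome: "skipped", skipReason: "below-min-chars" },
  { type: "capture", outcome: "skipped", skipReason: "below-min-chars" },
  { type: "recall", resultCount: 2, injected: true },
  { type: "recall", resultCount: 0, injected: false },
];

const stored = events.filter((e) => e.type === "capture" && e.outcome === "stored").length;
const skipped = events.filter((e) => e.type === "capture" && e.outcome === "skipped").length;
const recalls = events.filter((e) => e.type === "recall");

// Same guards as summarizeEvents: a zero denominator yields a 0 rate.
const attempts = stored + skipped;
const successRate = attempts === 0 ? 0 : stored / attempts;
const hitRate = recalls.length === 0
  ? 0
  : recalls.filter((e) => e.resultCount > 0).length / recalls.length;

console.log(successRate.toFixed(2), hitRate.toFixed(2)); // 0.33 0.50
```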
package/dist/types.d.ts CHANGED
@@ -1,10 +1,14 @@
- export type EmbeddingProvider = "ollama";
+ export type EmbeddingProvider = "ollama" | "openai";
  export type RetrievalMode = "hybrid" | "vector";
  export type MemoryCategory = "preference" | "fact" | "decision" | "entity" | "other";
+ export type CaptureOutcome = "considered" | "skipped" | "stored";
+ export type CaptureSkipReason = "empty-buffer" | "below-min-chars" | "no-positive-signal" | "initialization-unavailable" | "embedding-unavailable" | "empty-embedding";
+ export type FeedbackType = "missing" | "wrong" | "useful";
  export interface EmbeddingConfig {
  provider: EmbeddingProvider;
  model: string;
  baseUrl?: string;
+ apiKey?: string;
  timeoutMs?: number;
  }
  export interface RetrievalConfig {
@@ -46,3 +50,64 @@ export interface CaptureCandidate {
  category: MemoryCategory;
  importance: number;
  }
+ export interface CaptureCandidateResult {
+ candidate: CaptureCandidate | null;
+ skipReason?: CaptureSkipReason;
+ }
+ interface MemoryEffectivenessEventBase {
+ id: string;
+ scope: string;
+ sessionID?: string;
+ timestamp: number;
+ memoryId?: string;
+ text?: string;
+ metadataJson: string;
+ }
+ export interface CaptureEvent extends MemoryEffectivenessEventBase {
+ type: "capture";
+ outcome: CaptureOutcome;
+ skipReason?: CaptureSkipReason;
+ }
+ export interface RecallEvent extends MemoryEffectivenessEventBase {
+ type: "recall";
+ resultCount: number;
+ injected: boolean;
+ }
+ export interface FeedbackEvent extends MemoryEffectivenessEventBase {
+ type: "feedback";
+ feedbackType: FeedbackType;
+ helpful?: boolean;
+ labels?: string[];
+ reason?: string;
+ }
+ export type MemoryEffectivenessEvent = CaptureEvent | RecallEvent | FeedbackEvent;
+ export interface EffectivenessSummary {
+ scope: string;
+ totalEvents: number;
+ capture: {
+ considered: number;
+ stored: number;
+ skipped: number;
+ successRate: number;
+ skipReasons: Partial<Record<CaptureSkipReason, number>>;
+ };
+ recall: {
+ requested: number;
+ injected: number;
+ returnedResults: number;
+ hitRate: number;
+ injectionRate: number;
+ };
+ feedback: {
+ missing: number;
+ wrong: number;
+ useful: {
+ positive: number;
+ negative: number;
+ helpfulRate: number;
+ };
+ falsePositiveRate: number;
+ falseNegativeRate: number;
+ };
+ }
+ export {};
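The optional `helpful?: boolean` on `FeedbackEvent` is flattened to a tri-state integer when written to the events table (`putEvent` stores -1 for unset, 0 for false, 1 for true) and decoded back in `normalizeEventRow`. A minimal sketch of that round trip (standalone helpers for illustration, not exports of the package):

```javascript
// Encode: optional boolean -> tri-state integer column value.
function encodeHelpful(helpful) {
  return helpful === undefined ? -1 : helpful ? 1 : 0;
}

// Decode: tri-state integer -> optional boolean, as normalizeEventRow does.
function decodeHelpful(value) {
  return value < 0 ? undefined : value === 1;
}

for (const value of [undefined, true, false]) {
  console.log(decodeHelpful(encodeHelpful(value)) === value); // true each time
}
```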
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "lancedb-opencode-pro",
- "version": "0.1.1",
+ "version": "0.1.3",
  "description": "LanceDB-backed long-term memory provider for OpenCode",
  "type": "module",
  "main": "dist/index.js",
@@ -25,11 +25,11 @@
  ],
  "repository": {
  "type": "git",
- "url": "git+https://gitlab-238.ichiayi.com/jonathan/lancedb-opencode-pro.git"
+ "url": "git+https://github.com/tryweb/lancedb-opencode-pro.git"
  },
- "homepage": "https://gitlab-238.ichiayi.com/jonathan/lancedb-opencode-pro",
+ "homepage": "https://github.com/tryweb/lancedb-opencode-pro",
  "bugs": {
- "url": "https://gitlab-238.ichiayi.com/jonathan/lancedb-opencode-pro/-/issues"
+ "url": "https://github.com/tryweb/lancedb-opencode-pro/issues"
  },
  "license": "MIT",
  "engines": {
@@ -46,10 +46,11 @@
  "test": "npm run typecheck",
  "test:foundation": "npm run build:test && node --test dist-test/test/foundation/foundation.test.js",
  "test:regression": "npm run build:test && node --test dist-test/test/regression/plugin.test.js",
+ "test:effectiveness": "npm run test:foundation && npm run test:regression",
  "test:retrieval": "npm run build:test && node --test dist-test/test/retrieval/retrieval.test.js",
  "benchmark:latency": "npm run build:test && node dist-test/test/benchmark/latency.js",
  "test:e2e": "node scripts/e2e-opencode-memory.mjs",
- "verify": "npm run typecheck && npm run build && npm run test:foundation && npm run test:regression && npm run test:retrieval",
+ "verify": "npm run typecheck && npm run build && npm run test:effectiveness && npm run test:retrieval",
  "verify:full": "npm run verify && npm run benchmark:latency && npm pack",
  "release:check": "npm run verify:full && npm publish --dry-run",
  "prepublishOnly": "npm run verify:full"
package/LICENSE DELETED
@@ -1,21 +0,0 @@
- MIT License
-
- Copyright (c) 2026 Jonathan Tsai <tryweb@ichiayi.com>
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.