@ory/lumen-opencode 0.0.26 → 0.0.30

This diff shows the published contents of two package versions as they appear in their public registry. It is provided for informational purposes only.
@@ -1,3 +1,3 @@
 {
-  ".": "0.0.26"
+  ".": "0.0.30"
 }
package/README.md CHANGED
@@ -162,7 +162,7 @@ opencode mcp list
   updating this repository or the published package
 - **Codex** - `cd "${CODEX_HOME:-$HOME/.codex}/lumen" && git pull`
 - **OpenCode** - update the version pin in `opencode.json` (e.g.
-  `@ory/lumen-opencode@0.0.27`) and restart OpenCode
+  `@ory/lumen-opencode@0.0.29`) and restart OpenCode
 
 On first Claude Code or Cursor session start, Lumen:
 
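The `${CODEX_HOME:-$HOME/.codex}` expansion in the Codex update command above can be previewed before pulling; a minimal sketch, assuming only POSIX shell (the `lumen_dir` name is illustrative, not part of Lumen):

```sh
# Resolve the directory the README's update command targets:
# $CODEX_HOME if it is set, otherwise ~/.codex.
lumen_dir="${CODEX_HOME:-$HOME/.codex}/lumen"
echo "would run: git -C $lumen_dir pull"
```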
@@ -274,13 +274,15 @@ for all 9 per-language benchmark deep dives.
 
 All configuration is via environment variables:
 
-| Variable                 | Default                  | Description                                |
-| ------------------------ | ------------------------ | ------------------------------------------ |
-| `LUMEN_EMBED_MODEL`      | see note ¹               | Embedding model (must be in registry)      |
-| `LUMEN_BACKEND`          | `ollama`                 | Embedding backend (`ollama` or `lmstudio`) |
-| `OLLAMA_HOST`            | `http://localhost:11434` | Ollama server URL                          |
-| `LM_STUDIO_HOST`         | `http://localhost:1234`  | LM Studio server URL                       |
-| `LUMEN_MAX_CHUNK_TOKENS` | `512`                    | Max tokens per chunk before splitting      |
+| Variable                 | Default                  | Description                                                      |
+| ------------------------ | ------------------------ | ---------------------------------------------------------------- |
+| `LUMEN_EMBED_MODEL`      | see note ¹               | Embedding model; use with `LUMEN_EMBED_DIMS` for unlisted models |
+| `LUMEN_BACKEND`          | `ollama`                 | Embedding backend (`ollama` or `lmstudio`)                       |
+| `OLLAMA_HOST`            | `http://localhost:11434` | Ollama server URL                                                |
+| `LM_STUDIO_HOST`         | `http://localhost:1234`  | LM Studio server URL                                             |
+| `LUMEN_MAX_CHUNK_TOKENS` | `512`                    | Max tokens per chunk before splitting                            |
+| `LUMEN_EMBED_DIMS`       | —                        | Override embedding dimensions (required for unlisted models)     |
+| `LUMEN_EMBED_CTX`        | `8192` (unlisted models) | Override context window length                                   |
 
 ¹ `ordis/jina-embeddings-v2-base-code` (Ollama),
 `nomic-ai/nomic-embed-code-GGUF` (LM Studio)
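The defaults in the new table can be made explicit in the environment; a minimal sketch for the default Ollama backend, with values copied from the table above (setting them is a no-op, but it documents the active configuration):

```sh
# Defaults from the configuration table; only variables you want to
# change actually need to be set.
export LUMEN_BACKEND=ollama
export OLLAMA_HOST=http://localhost:11434
export LUMEN_MAX_CHUNK_TOKENS=512
```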
@@ -302,6 +304,22 @@ Dimensions and context length are configured automatically per model:
 Switching models creates a separate index automatically. The model name is part
 of the database path hash, so different models never collide.
 
+### Using a custom or unlisted model
+
+If your model is not in the registry above, set `LUMEN_EMBED_DIMS` to bypass the
+registry check. `LUMEN_EMBED_CTX` is optional and defaults to `8192`.
+
+Both variables can also override values for _known_ models — useful when running
+a model variant with a longer context window or different output dimensions.
+
+```sh
+LUMEN_BACKEND=lmstudio
+LM_STUDIO_HOST=http://localhost:8801
+LUMEN_EMBED_MODEL=mlx-community/Qwen3-Embedding-8B-4bit-DWQ
+LUMEN_EMBED_DIMS=4096
+LUMEN_EMBED_CTX=40960 # optional, defaults to 8192
+```
+
 ## Controlling what gets indexed
 
 Lumen filters files through six layers: built-in directory and lock file skips →
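The hunk above notes that the model name is part of the database path hash, so each model gets its own index. Lumen's actual hashing scheme is not shown in this diff; the following is purely an illustrative sketch of the idea that two model names yield two distinct index paths (assumes coreutils `sha256sum`):

```sh
# Illustrative only: derive a short per-model suffix, so switching
# models lands in a different index directory.
index_suffix() {
  printf '%s' "$1" | sha256sum | cut -c1-12
}
echo "ollama model index:   $(index_suffix 'ordis/jina-embeddings-v2-base-code')"
echo "lmstudio model index: $(index_suffix 'nomic-ai/nomic-embed-code-GGUF')"
```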
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@ory/lumen-opencode",
-  "version": "0.0.26",
+  "version": "0.0.30",
   "description": "Precise local semantic code search plugin for OpenCode — indexes with Go AST/tree-sitter and embeds with Ollama or LM Studio",
   "type": "module",
   "main": ".opencode/plugins/lumen.js",
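The version bump above can be confirmed from a checkout without extra tooling; a minimal `sed` sketch (the sample manifest is created here only so the snippet is self-contained — in a real checkout you would point `sed` at the package's own `package.json`):

```sh
# Create a sample package.json fragment and extract its "version" field.
cat > /tmp/sample-package.json <<'EOF'
{
  "name": "@ory/lumen-opencode",
  "version": "0.0.30"
}
EOF
sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' /tmp/sample-package.json
# → 0.0.30
```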