@optave/codegraph 2.1.1-dev.3c12b64 → 2.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -50,14 +50,13 @@ Most tools in this space can't do that:
  | **Heavy infrastructure that's slow to restart** | code-graph-rag (Memgraph), axon (KuzuDB), badger-graph (Dgraph) | External databases add latency to every write. Bulk-inserting a full graph into Memgraph is not a sub-second operation |
  | **No persistence between runs** | pyan, cflow | Re-parse from scratch every time. No database, no delta, no incremental anything |
 
- **Codegraph solves this with incremental builds:**
+ **Codegraph solves this with three-tier incremental change detection:**
 
- 1. Every file gets an MD5 hash stored in SQLite
- 2. On rebuild, only files whose hash changed get re-parsed
- 3. Stale nodes and edges for changed files are cleaned, then re-inserted
- 4. Everything else is untouched
+ 1. **Tier 0 (journal, O(changed)):** If `codegraph watch` was running, a change journal records exactly which files were touched. The next build reads the journal and only processes those files — zero filesystem scanning
+ 2. **Tier 1 (mtime+size, O(n) stats, O(changed) reads):** No journal? Codegraph stats every file and compares mtime + size against stored values. Matching files are skipped without reading a single byte — 10-100x cheaper than hashing
+ 3. **Tier 2 (hash, O(changed) reads):** Files that fail the mtime/size check are read and MD5-hashed. Only files whose hash actually changed get re-parsed and re-inserted
 
- **Result:** change one file in a 3,000-file project → rebuild completes in **under a second**. Put it in a commit hook, a file watcher, or let your AI agent trigger it. The graph is always current.
+ **Result:** change one file in a 3,000-file project → rebuild completes in **under a second**. With watch mode active, rebuilds are near-instant — the journal makes the build proportional to the number of changed files, not the size of the codebase. Put it in a commit hook, a file watcher, or let your AI agent trigger it. The graph is always current.
 
  And because the core pipeline is pure local computation (tree-sitter + SQLite), there are no API calls, no network latency, and no cost. LLM-powered features (semantic search, richer embeddings) are a separate optional layer — they enhance the graph but never block it from being current.
 
@@ -80,7 +79,7 @@ Most code graph tools make you choose: **fast local analysis with no AI, or powe
  | Git diff impact | **Yes** | — | — | — | — | **Yes** | — | **Yes** |
  | Watch mode | **Yes** | — | **Yes** | — | — | — | — | — |
  | Cycle detection | **Yes** | — | **Yes** | — | — | — | — | **Yes** |
- | Incremental rebuilds | **Yes** | — | **Yes** | — | — | — | — | — |
+ | Incremental rebuilds | **O(changed)** | — | O(n) Merkle | — | — | — | — | — |
  | Zero config | **Yes** | — | **Yes** | — | — | — | — | — |
  | Embeddable JS library (`npm install`) | **Yes** | — | — | — | — | — | — | — |
  | LLM-optional (works without API keys) | **Yes** | **Yes** | **Yes** | — | **Yes** | **Yes** | **Yes** | **Yes** |
@@ -91,22 +90,22 @@ Most code graph tools make you choose: **fast local analysis with no AI, or powe
 
  | | Differentiator | In practice |
  |---|---|---|
- | **⚡** | **Always-fresh graph** | Sub-second incremental rebuilds via file-hash tracking. Run on every commit, every save, in watch mode the graph is never stale. Competitors re-index everything from scratch |
+ | **⚡** | **Always-fresh graph** | Three-tier change detection: journal (O(changed)) → mtime+size (O(n) stats) → hash (O(changed) reads). Sub-second rebuilds even on large codebases. Competitors re-index everything from scratch; Merkle-tree approaches still require O(n) filesystem scanning |
  | **🔓** | **Zero-cost core, LLM-enhanced when you want** | Full graph analysis with no API keys, no accounts, no cost. Optionally bring your own LLM provider for richer embeddings and AI-powered search — your code only goes to the provider you already chose |
  | **🔬** | **Function-level, not just files** | Traces `handleAuth()` → `validateToken()` → `decryptJWT()` and shows the 14 callers across 9 files that break if `decryptJWT` changes |
- | **🤖** | **Built for AI agents** | 13-tool [MCP server](https://modelcontextprotocol.io/) — AI assistants query your graph directly. Single-repo by default, your code doesn't leak to other projects |
+ | **🤖** | **Built for AI agents** | 17-tool [MCP server](https://modelcontextprotocol.io/) with `context` and `explain` compound commands — AI assistants get full function context in one call. Single-repo by default, your code doesn't leak to other projects |
  | **🌐** | **Multi-language, one CLI** | JS/TS + Python + Go + Rust + Java + C# + PHP + Ruby + HCL in a single graph — no juggling Madge, pyan, and cflow |
  | **💥** | **Git diff impact** | `codegraph diff-impact` shows changed functions, their callers, and full blast radius — ships with a GitHub Actions workflow |
  | **🧠** | **Semantic search** | Local embeddings by default, LLM-powered embeddings when opted in — multi-query with RRF ranking via `"auth; token; JWT"` |
 
  ### How other tools compare
 
- The key question is: **can you rebuild your graph on every commit in a large codebase without it costing money or taking minutes?** Most tools in this space either re-index everything from scratch (slow), require cloud API calls for core features (costly), or both. Codegraph's incremental builds keep the graph current in millisecondsand the core pipeline needs no API keys at all. LLM-powered features are opt-in, using whichever provider you already work with.
+ The key question is: **can you rebuild your graph on every commit in a large codebase without it costing money or taking minutes?** Most tools in this space either re-index everything from scratch (slow), require cloud API calls for core features (costly), or both. Codegraph's three-tier incremental detection achieves true O(changed) in the best case: when the watcher is running, rebuilds are proportional only to the number of files that changed, not the size of the codebase. The core pipeline needs no API keys at all. LLM-powered features are opt-in, using whichever provider you already work with.
 
  | Tool | What it does well | The tradeoff |
  |---|---|---|
  | [joern](https://github.com/joernio/joern) | Full CPG (AST + CFG + PDG) for vulnerability discovery, Scala query DSL, 14 languages, daily releases | No incremental builds — full re-parse on every change. Requires JDK 21, no built-in MCP, no watch mode |
- | [narsil-mcp](https://github.com/postrv/narsil-mcp) | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural search, Merkle-tree incremental indexing, single ~30MB binary | Primarily MCP-only — no standalone CLI query interface. Neural search requires API key or ONNX source build |
+ | [narsil-mcp](https://github.com/postrv/narsil-mcp) | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural search, Merkle-tree incremental indexing, single ~30MB binary | Merkle trees still require O(n) filesystem scanning on every rebuild. Primarily MCP-only — no standalone CLI query interface. Neural search requires API key or ONNX source build |
  | [code-graph-rag](https://github.com/vitali87/code-graph-rag) | Graph RAG with Memgraph, multi-provider AI, semantic search, code editing via AST | No incremental rebuilds — full re-index + re-embed through cloud APIs on every change. Requires Docker |
  | [cpg](https://github.com/Fraunhofer-AISEC/cpg) | Formal Code Property Graph (AST + CFG + PDG + DFG), ~10 languages, MCP module, LLVM IR support, academic specifications | No incremental builds. Requires JVM + Gradle, no zero config, no watch mode |
  | [GitNexus](https://github.com/abhigyanpatwari/GitNexus) | Knowledge graph with precomputed structural intelligence, 7 MCP tools, hybrid search (BM25 + semantic + RRF), clustering, process tracing | Full 6-phase pipeline re-run on changes. KuzuDB graph DB, browser mode limited to ~5,000 files. **PolyForm NC — no commercial use** |
@@ -133,15 +132,16 @@ Here is a cold, analytical breakdown to help you decide which tool fits your wor
  | Aspect | Optave Codegraph | Narsil-MCP |
  | :--- | :--- | :--- |
  | **Philosophy** | Lean, deterministic, AI-optimized | Comprehensive, feature-dense |
- | **AI Tool Count** | 13 focused tools | 90 distinct tools |
+ | **AI Tool Count** | 17 focused tools | 90 distinct tools |
  | **Language Support** | 11 languages | 32 languages |
  | **Primary Interface** | CLI-first with MCP integration | MCP-first (CLI is secondary) |
  | **Supply Chain Risk** | Low (minimal dependency tree) | Higher (requires massive dependency graph for embedded ML/scanners) |
- | **Graph Updates** | Sub-second incremental (file-hash) | Parallel re-indexing / Merkle trees |
+ | **Graph Updates** | **Three-tier O(changed)**: journal → mtime+size → hash. With watch mode, only changed files are touched | Merkle trees — O(n) filesystem scan on every rebuild to recompute tree hashes |
 
  #### Choose Codegraph if:
 
- * **You want to optimize AI agent reasoning.** Large Language Models degrade in performance and hallucinate when overwhelmed with choices. Codegraph’s tight 13-tool surface area ensures agents quickly understand their capabilities without wasting context window tokens.
+ * **You need the fastest possible incremental rebuilds.** Codegraph’s three-tier change detection (journal → mtime+size → hash) achieves true O(changed) when the watcher is running — only touched files are processed. Narsil’s Merkle trees still require O(n) filesystem scanning to recompute hashes on every rebuild, even when nothing changed. On a 3,000-file project, this is the difference between near-instant and noticeable.
+ * **You want to optimize AI agent reasoning.** Large Language Models degrade in performance and hallucinate when overwhelmed with choices. Codegraph’s tight 17-tool surface area ensures agents quickly understand their capabilities without wasting context window tokens.
  * **You are concerned about supply chain attacks.** To support 90 tools, SBOMs, and neural embeddings, a tool must pull in a massive dependency tree. Codegraph keeps its dependencies minimal, dramatically reducing the risk of malicious code sneaking onto your machine.
  * **You want deterministic blast-radius checks.** Features like `diff-impact` are built specifically to tell you exactly how a changed function cascades through your codebase before you merge a PR.
  * **You value a strong standalone CLI.** You want to query your code graph locally without necessarily spinning up an AI agent.
@@ -180,17 +180,20 @@ codegraph deps src/index.ts # file-level import/export map
 
  | | Feature | Description |
  |---|---|---|
- | 🔍 | **Symbol search** | Find any function, class, or method by name with callers/callees |
+ | 🔍 | **Symbol search** | Find any function, class, or method by name, with exact-match priority, relevance scoring, and `--file`/`--kind` filters |
  | 📁 | **File dependencies** | See what a file imports and what imports it |
  | 💥 | **Impact analysis** | Trace every file affected by a change (transitive) |
- | 🧬 | **Function-level tracing** | Call chains, caller trees, and function-level impact |
+ | 🧬 | **Function-level tracing** | Call chains, caller trees, and function-level impact with qualified call resolution |
+ | 🎯 | **Deep context** | `context` gives AI agents source, deps, callers, signature, and tests for a function in one call; `explain` gives structural summaries of files or functions |
+ | 📍 | **Fast lookup** | `where` shows exactly where a symbol is defined and used — minimal, fast |
  | 📊 | **Diff impact** | Parse `git diff`, find overlapping functions, trace their callers |
  | 🗺️ | **Module map** | Bird's-eye view of your most-connected files |
+ | 🏗️ | **Structure & hotspots** | Directory cohesion scores, fan-in/fan-out hotspot detection, module boundaries |
  | 🔄 | **Cycle detection** | Find circular dependencies at file or function level |
  | 📤 | **Export** | DOT (Graphviz), Mermaid, and JSON graph export |
  | 🧠 | **Semantic search** | Embeddings-powered natural language search with multi-query RRF ranking |
  | 👀 | **Watch mode** | Incrementally update the graph as files change |
- | 🤖 | **MCP server** | 13-tool MCP server for AI assistants; single-repo by default, opt-in multi-repo |
+ | 🤖 | **MCP server** | 17-tool MCP server for AI assistants; single-repo by default, opt-in multi-repo |
  | 🔒 | **Your code, your choice** | Zero-cost core with no API keys. Optionally enhance with your LLM provider — your code only goes where you send it |
 
  ## 📦 Commands
@@ -210,7 +213,19 @@ codegraph watch [dir] # Watch for changes, update graph incrementally
  codegraph query <name> # Find a symbol — shows callers and callees
  codegraph deps <file> # File imports/exports
  codegraph map # Top 20 most-connected files
- codegraph map -n 50 # Top 50
+ codegraph map -n 50 --no-tests # Top 50, excluding test files
+ codegraph where <name> # Where is a symbol defined and used?
+ codegraph where --file src/db.js # List symbols, imports, exports for a file
+ codegraph stats # Graph health: nodes, edges, languages, quality score
+ ```
+
+ ### Deep Context (AI-Optimized)
+
+ ```bash
+ codegraph context <name> # Full context: source, deps, callers, signature, tests
+ codegraph context <name> --depth 2 --no-tests # Include callee source 2 levels deep
+ codegraph explain <file> # Structural summary: public API, internals, data flow
+ codegraph explain <function> # Function summary: signature, calls, callers, tests
  ```
 
  ### Impact Analysis
@@ -225,6 +240,14 @@ codegraph diff-impact --staged # Impact of staged changes
  codegraph diff-impact HEAD~3 # Impact vs a specific ref
  ```
 
+ ### Structure & Hotspots
+
+ ```bash
+ codegraph structure # Directory overview with cohesion scores
+ codegraph hotspots # Files with extreme fan-in, fan-out, or density
+ codegraph hotspots --metric coupling --level directory --no-tests
+ ```
+
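The fan-in/fan-out scoring behind hotspot detection can be illustrated with a toy file-level edge list. This is a sketch under assumed data shapes, not Codegraph's internal schema:

```javascript
// Illustrative fan-in/fan-out hotspot scoring (hypothetical edge shape):
// fan-out = how many files a file imports, fan-in = how many import it.
function hotspots(edges, threshold = 2) {
  const stats = new Map(); // file -> { fanIn, fanOut }
  const get = (f) => {
    if (!stats.has(f)) stats.set(f, { fanIn: 0, fanOut: 0 });
    return stats.get(f);
  };
  for (const { from, to } of edges) {
    get(from).fanOut += 1; // `from` imports `to`
    get(to).fanIn += 1;    // `to` is imported by `from`
  }
  // Flag files whose combined coupling exceeds the threshold, worst first.
  return [...stats]
    .filter(([, s]) => s.fanIn + s.fanOut > threshold)
    .sort((a, b) => b[1].fanIn + b[1].fanOut - (a[1].fanIn + a[1].fanOut))
    .map(([file, s]) => ({ file, ...s }));
}
```

A file with extreme fan-in is a change-risk hotspot; extreme fan-out suggests a module doing too much.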
  ### Export & Visualization
 
  ```bash
@@ -268,9 +291,9 @@ A single trailing semicolon is ignored (falls back to single-query mode). The `-
  | `minilm` | all-MiniLM-L6-v2 | 384 | ~23 MB | Apache-2.0 | Fastest, good for quick iteration |
  | `jina-small` | jina-embeddings-v2-small-en | 512 | ~33 MB | Apache-2.0 | Better quality, still small |
  | `jina-base` | jina-embeddings-v2-base-en | 768 | ~137 MB | Apache-2.0 | High quality, 8192 token context |
- | `jina-code` (default) | jina-embeddings-v2-base-code | 768 | ~137 MB | Apache-2.0 | **Best for code search**, trained on code+text |
+ | `jina-code` | jina-embeddings-v2-base-code | 768 | ~137 MB | Apache-2.0 | Best for code search, trained on code+text (requires HF token) |
  | `nomic` | nomic-embed-text-v1 | 768 | ~137 MB | Apache-2.0 | Good quality, 8192 context |
- | `nomic-v1.5` | nomic-embed-text-v1.5 | 768 | ~137 MB | Apache-2.0 | Improved nomic, Matryoshka dimensions |
+ | `nomic-v1.5` (default) | nomic-embed-text-v1.5 | 768 | ~137 MB | Apache-2.0 | **Improved nomic, Matryoshka dimensions** |
  | `bge-large` | bge-large-en-v1.5 | 1024 | ~335 MB | MIT | Best general retrieval, top MTEB scores |
 
  The model used during `embed` is stored in the database, so `search` auto-detects it — no need to pass `--model` when searching.
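At its core, embedding search is nearest-neighbor ranking by cosine similarity over vectors like the ones these models produce. An illustrative sketch, not Codegraph's implementation:

```javascript
// Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|).
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored vectors against a query vector, most similar first.
// `docs` uses a hypothetical [{ id, vec }] shape for illustration.
function rank(queryVec, docs) {
  return docs
    .map((d) => ({ id: d.id, score: cosine(queryVec, d.vec) }))
    .sort((x, y) => y.score - x.score);
}
```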
@@ -304,13 +327,13 @@ By default, the MCP server only exposes the local project's graph. AI agents can
  | Flag | Description |
  |---|---|
  | `-d, --db <path>` | Custom path to `graph.db` |
- | `-T, --no-tests` | Exclude `.test.`, `.spec.`, `__test__` files |
+ | `-T, --no-tests` | Exclude `.test.`, `.spec.`, `__test__` files (available on `fn`, `fn-impact`, `context`, `explain`, `where`, `diff-impact`, `search`, `map`, `hotspots`, `deps`, `impact`) |
  | `--depth <n>` | Transitive trace depth (default varies by command) |
  | `-j, --json` | Output as JSON |
  | `-v, --verbose` | Enable debug output |
  | `--engine <engine>` | Parser engine: `native`, `wasm`, or `auto` (default: `auto`) |
- | `-k, --kind <kind>` | Filter by kind: `function`, `method`, `class`, `struct`, `enum`, `trait`, `record`, `module` (search) |
- | `--file <pattern>` | Filter by file path pattern (search) |
+ | `-k, --kind <kind>` | Filter by kind: `function`, `method`, `class`, `struct`, `enum`, `trait`, `record`, `module` (`fn`, `context`, `search`) |
+ | `-f, --file <path>` | Scope to a specific file (`fn`, `context`, `where`) |
  | `--rrf-k <n>` | RRF smoothing constant for multi-query search (default 60) |
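The `--rrf-k` constant plugs into the standard Reciprocal Rank Fusion formula: a document's fused score is the sum over queries of 1 / (k + rank). A sketch of that fusion step (illustrative only, not Codegraph's code):

```javascript
// Reciprocal Rank Fusion: merge several ranked result lists into one.
// score(doc) = sum over queries of 1 / (k + rank), rank being 1-based;
// k (default 60, per the --rrf-k flag) damps the dominance of top ranks.
function rrf(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([doc]) => doc);
}
```

A document ranked decently across all sub-queries beats one that tops a single list, which is why `"auth; token; JWT"` surfaces results relevant to the whole query.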
 
  ## 🌐 Language Support
@@ -361,36 +384,38 @@ Both engines produce identical output. Use `--engine native|wasm|auto` to contro
 
  ### Call Resolution
 
- Calls are resolved with priority and confidence scoring:
+ Calls are resolved with **qualified resolution** — method calls (`obj.method()`) are distinguished from standalone function calls, and built-in receivers (`console`, `Math`, `JSON`, `Array`, `Promise`, etc.) are filtered out automatically. Import scope is respected: a call to `foo()` only resolves to functions that are actually imported or defined in the same file, eliminating false positives from name collisions.
 
  | Priority | Source | Confidence |
  |---|---|---|
  | 1 | **Import-aware** — `import { foo } from './bar'` → link to `bar` | `1.0` |
  | 2 | **Same-file** — definitions in the current file | `1.0` |
- | 3 | **Same directory** — definitions in sibling files | `0.7` |
- | 4 | **Same parent directory** — definitions in sibling dirs | `0.5` |
- | 5 | **Global fallback** — match by name across codebase | `0.3` |
- | 6 | **Method hierarchy** — resolved through `extends`/`implements` | — |
+ | 3 | **Same directory** — definitions in sibling files (standalone calls only) | `0.7` |
+ | 4 | **Same parent directory** — definitions in sibling dirs (standalone calls only) | `0.5` |
+ | 5 | **Method hierarchy** — resolved through `extends`/`implements` | varies |
+
+ Method calls on unknown receivers skip global fallback entirely — `stmt.run()` will never resolve to a standalone `run` function in another file. Duplicate caller/callee edges are deduplicated automatically. Dynamic patterns like `fn.call()`, `fn.apply()`, `fn.bind()`, and `obj["method"]()` are also detected on a best-effort basis.
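The priority order above can be condensed into a toy resolver. The data shapes here are hypothetical, chosen only to show why method calls never fall back to sibling files:

```javascript
// Toy priority-ordered call resolver (not Codegraph's internals).
// call: { name, isMethod }; ctx: { imports, sameFile, sameDir }.
function resolveCall(call, ctx) {
  // Priority 1: import-aware — an explicit import wins at full confidence.
  if (ctx.imports[call.name]) {
    return { target: ctx.imports[call.name], confidence: 1.0 };
  }
  // Priority 2: definitions in the same file, also full confidence.
  if (ctx.sameFile.includes(call.name)) {
    return { target: `${call.name}@self`, confidence: 1.0 };
  }
  // Priority 3: sibling files, standalone calls only — a method call on an
  // unknown receiver stays unresolved rather than guessing by name.
  if (!call.isMethod && ctx.sameDir.includes(call.name)) {
    return { target: `${call.name}@sibling`, confidence: 0.7 };
  }
  return null; // unresolved: no false-positive edge is recorded
}
```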
 
- Dynamic patterns like `fn.call()`, `fn.apply()`, `fn.bind()`, and `obj["method"]()` are also detected on a best-effort basis.
+ Codegraph also extracts symbols from common callback patterns: Commander `.command().action()` callbacks (as `command:build`), Express route handlers (as `route:GET /api/users`), and event emitter listeners (as `event:data`).
 
  ## 📊 Performance
 
- Benchmarked on a ~3,200-file TypeScript project:
+ Self-measured on every release via CI ([full history](generated/BENCHMARKS.md)):
 
- | Metric | Value |
+ | Metric | Latest |
  |---|---|
- | Build time | ~30s |
- | Nodes | 19,000+ |
- | Edges | 120,000+ |
- | Query time | <100ms |
- | DB size | ~5 MB |
+ | Build speed (native) | **2.5 ms/file** |
+ | Build speed (WASM) | **5 ms/file** |
+ | Query time | **1ms** |
+ | ~50,000 files (est.) | **~125s build** |
+
+ Metrics are normalized per file for cross-version comparability. Times above are for a full initial build — incremental rebuilds only re-parse changed files.
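The 50,000-file estimate is just the native per-file rate scaled up; a quick sanity check:

```javascript
// 2.5 ms/file (native) across 50,000 files, converted to seconds.
const msPerFile = 2.5;
const files = 50_000;
const seconds = (msPerFile * files) / 1000;
console.log(seconds); // 125
```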
 
  ## 🤖 AI Agent Integration
 
  ### MCP Server
 
- Codegraph includes a built-in [Model Context Protocol](https://modelcontextprotocol.io/) server with 13 tools, so AI assistants can query your dependency graph directly:
+ Codegraph includes a built-in [Model Context Protocol](https://modelcontextprotocol.io/) server with 17 tools, so AI assistants can query your dependency graph directly:
 
  ```bash
  codegraph mcp # Single-repo mode (default) — only local project
@@ -404,20 +429,35 @@ codegraph mcp --repos a,b # Multi-repo with allowlist
 
  ### CLAUDE.md / Agent Instructions
 
- Add this to your project's `CLAUDE.md` to help AI agents use codegraph:
+ Add this to your project's `CLAUDE.md` to help AI agents use codegraph (full template in the [AI Agent Guide](docs/ai-agent-guide.md#claudemd-template)):
 
  ```markdown
  ## Code Navigation
 
  This project uses codegraph. The database is at `.codegraph/graph.db`.
 
- - **Before modifying a function**: `codegraph fn <name> --no-tests`
- - **Before modifying a file**: `codegraph deps <file>`
- - **To assess PR impact**: `codegraph diff-impact --no-tests`
- - **To find entry points**: `codegraph map`
- - **To trace breakage**: `codegraph fn-impact <name> --no-tests`
-
- Rebuild after major structural changes: `codegraph build`
+ ### Before modifying code, always:
+ 1. `codegraph where <name>` — find where the symbol lives
+ 2. `codegraph explain <file-or-function>` — understand the structure
+ 3. `codegraph context <name> -T` — get full context (source, deps, callers)
+ 4. `codegraph fn-impact <name> -T` — check blast radius before editing
+
+ ### After modifying code:
+ 5. `codegraph diff-impact --staged -T` — verify impact before committing
+
+ ### Other useful commands
+ - `codegraph build .` — rebuild the graph (incremental by default)
+ - `codegraph map` — module overview
+ - `codegraph fn <name> -T` — function call chain
+ - `codegraph deps <file>` — file-level dependencies
+ - `codegraph search "<query>"` — semantic search (requires `codegraph embed`)
+ - `codegraph cycles` — check for circular dependencies
+
+ ### Flags
+ - `-T` / `--no-tests` — exclude test files (use by default)
+ - `-j` / `--json` — JSON output for programmatic use
+ - `-f, --file <path>` — scope to a specific file
+ - `-k, --kind <kind>` — filter by symbol kind
 
  ### Semantic search
 
@@ -455,6 +495,8 @@ See **[docs/recommended-practices.md](docs/recommended-practices.md)** for integ
  - **Developer workflow** — watch mode, explore-before-you-edit, semantic search
  - **Secure credentials** — `apiKeyCommand` with 1Password, Bitwarden, Vault, macOS Keychain, `pass`
 
+ For AI-specific integration, see the **[AI Agent Guide](docs/ai-agent-guide.md)** — a comprehensive reference covering the 6-step agent workflow, complete command-to-MCP mapping, Claude Code hooks, and token-saving patterns.
+
  ## 🔁 CI / GitHub Actions
 
  Codegraph ships with a ready-to-use GitHub Actions workflow that comments impact analysis on every pull request.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@optave/codegraph",
- "version": "2.1.1-dev.3c12b64",
+ "version": "2.2.1",
  "description": "Local code graph CLI — parse codebases with tree-sitter, build dependency graphs, query them",
  "type": "module",
  "main": "src/index.js",
@@ -61,10 +61,10 @@
  "optionalDependencies": {
  "@huggingface/transformers": "^3.8.1",
  "@modelcontextprotocol/sdk": "^1.0.0",
- "@optave/codegraph-darwin-arm64": "2.1.1-dev.3c12b64",
- "@optave/codegraph-darwin-x64": "2.1.1-dev.3c12b64",
- "@optave/codegraph-linux-x64-gnu": "2.1.1-dev.3c12b64",
- "@optave/codegraph-win32-x64-msvc": "2.1.1-dev.3c12b64"
+ "@optave/codegraph-darwin-arm64": "2.2.1",
+ "@optave/codegraph-darwin-x64": "2.2.1",
+ "@optave/codegraph-linux-x64-gnu": "2.2.1",
+ "@optave/codegraph-win32-x64-msvc": "2.2.1"
  },
  "devDependencies": {
  "@biomejs/biome": "^2.4.4",