@optave/codegraph 2.1.1-dev.0e15f12 → 2.2.0

This diff shows the changes between publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the package contents as they appear in their respective public registries.
package/README.md CHANGED
@@ -93,7 +93,7 @@ Most code graph tools make you choose: **fast local analysis with no AI, or powe
  | **⚡** | **Always-fresh graph** | Three-tier change detection: journal (O(changed)) → mtime+size (O(n) stats) → hash (O(changed) reads). Sub-second rebuilds even on large codebases. Competitors re-index everything from scratch; Merkle-tree approaches still require O(n) filesystem scanning |
  | **🔓** | **Zero-cost core, LLM-enhanced when you want** | Full graph analysis with no API keys, no accounts, no cost. Optionally bring your own LLM provider for richer embeddings and AI-powered search — your code only goes to the provider you already chose |
  | **🔬** | **Function-level, not just files** | Traces `handleAuth()` → `validateToken()` → `decryptJWT()` and shows that 14 callers across 9 files break if `decryptJWT` changes |
- | **🤖** | **Built for AI agents** | 13-tool [MCP server](https://modelcontextprotocol.io/) — AI assistants query your graph directly. Single-repo by default, your code doesn't leak to other projects |
+ | **🤖** | **Built for AI agents** | 17-tool [MCP server](https://modelcontextprotocol.io/) — AI assistants query your graph directly. Single-repo by default, your code doesn't leak to other projects |
  | **🌐** | **Multi-language, one CLI** | JS/TS + Python + Go + Rust + Java + C# + PHP + Ruby + HCL in a single graph — no juggling Madge, pyan, and cflow |
  | **💥** | **Git diff impact** | `codegraph diff-impact` shows changed functions, their callers, and full blast radius — ships with a GitHub Actions workflow |
  | **🧠** | **Semantic search** | Local embeddings by default, LLM-powered embeddings when opted in — multi-query with RRF ranking via `"auth; token; JWT"` |
@@ -132,7 +132,7 @@ Here is a cold, analytical breakdown to help you decide which tool fits your wor
  | Aspect | Optave Codegraph | Narsil-MCP |
  | :--- | :--- | :--- |
  | **Philosophy** | Lean, deterministic, AI-optimized | Comprehensive, feature-dense |
- | **AI Tool Count** | 13 focused tools | 90 distinct tools |
+ | **AI Tool Count** | 17 focused tools | 90 distinct tools |
  | **Language Support** | 11 languages | 32 languages |
  | **Primary Interface** | CLI-first with MCP integration | MCP-first (CLI is secondary) |
  | **Supply Chain Risk** | Low (minimal dependency tree) | Higher (requires massive dependency graph for embedded ML/scanners) |
@@ -141,7 +141,7 @@ Here is a cold, analytical breakdown to help you decide which tool fits your wor
  #### Choose Codegraph if:

  * **You need the fastest possible incremental rebuilds.** Codegraph’s three-tier change detection (journal → mtime+size → hash) achieves true O(changed) when the watcher is running — only touched files are processed. Narsil’s Merkle trees still require O(n) filesystem scanning to recompute hashes on every rebuild, even when nothing changed. On a 3,000-file project, this is the difference between near-instant and noticeable.
- * **You want to optimize AI agent reasoning.** Large Language Models degrade in performance and hallucinate when overwhelmed with choices. Codegraph’s tight 13-tool surface area ensures agents quickly understand their capabilities without wasting context window tokens.
+ * **You want to optimize AI agent reasoning.** Large Language Models degrade in performance and hallucinate when overwhelmed with choices. Codegraph’s tight 17-tool surface area ensures agents quickly understand their capabilities without wasting context window tokens.
  * **You are concerned about supply chain attacks.** To support 90 tools, SBOMs, and neural embeddings, a tool must pull in a massive dependency tree. Codegraph keeps its dependencies minimal, dramatically reducing the risk of malicious code sneaking onto your machine.
  * **You want deterministic blast-radius checks.** Features like `diff-impact` are built specifically to tell you exactly how a changed function cascades through your codebase before you merge a PR.
  * **You value a strong standalone CLI.** You want to query your code graph locally without necessarily spinning up an AI agent.
@@ -190,7 +190,7 @@ codegraph deps src/index.ts # file-level import/export map
  | 📤 | **Export** | DOT (Graphviz), Mermaid, and JSON graph export |
  | 🧠 | **Semantic search** | Embeddings-powered natural language search with multi-query RRF ranking |
  | 👀 | **Watch mode** | Incrementally update the graph as files change |
- | 🤖 | **MCP server** | 13-tool MCP server for AI assistants; single-repo by default, opt-in multi-repo |
+ | 🤖 | **MCP server** | 17-tool MCP server for AI assistants; single-repo by default, opt-in multi-repo |
  | 🔒 | **Your code, your choice** | Zero-cost core with no API keys. Optionally enhance with your LLM provider — your code only goes where you send it |

  ## 📦 Commands
@@ -391,7 +391,7 @@ Metrics are normalized per file for cross-version comparability. Times above are

  ### MCP Server

- Codegraph includes a built-in [Model Context Protocol](https://modelcontextprotocol.io/) server with 13 tools, so AI assistants can query your dependency graph directly:
+ Codegraph includes a built-in [Model Context Protocol](https://modelcontextprotocol.io/) server with 17 tools, so AI assistants can query your dependency graph directly:

  ```bash
  codegraph mcp # Single-repo mode (default) — only local project
@@ -405,20 +405,35 @@ codegraph mcp --repos a,b # Multi-repo with allowlist

  ### CLAUDE.md / Agent Instructions

- Add this to your project's `CLAUDE.md` to help AI agents use codegraph:
+ Add this to your project's `CLAUDE.md` to help AI agents use codegraph (full template in the [AI Agent Guide](docs/ai-agent-guide.md#claudemd-template)):

  ```markdown
  ## Code Navigation

  This project uses codegraph. The database is at `.codegraph/graph.db`.

- - **Before modifying a function**: `codegraph fn <name> --no-tests`
- - **Before modifying a file**: `codegraph deps <file>`
- - **To assess PR impact**: `codegraph diff-impact --no-tests`
- - **To find entry points**: `codegraph map`
- - **To trace breakage**: `codegraph fn-impact <name> --no-tests`
-
- Rebuild after major structural changes: `codegraph build`
+ ### Before modifying code, always:
+ 1. `codegraph where <name>` — find where the symbol lives
+ 2. `codegraph explain <file-or-function>` — understand the structure
+ 3. `codegraph context <name> -T` — get full context (source, deps, callers)
+ 4. `codegraph fn-impact <name> -T` — check blast radius before editing
+
+ ### After modifying code:
+ 5. `codegraph diff-impact --staged -T` — verify impact before committing
+
+ ### Other useful commands
+ - `codegraph build .` — rebuild the graph (incremental by default)
+ - `codegraph map` — module overview
+ - `codegraph fn <name> -T` — function call chain
+ - `codegraph deps <file>` — file-level dependencies
+ - `codegraph search "<query>"` — semantic search (requires `codegraph embed`)
+ - `codegraph cycles` — check for circular dependencies
+
+ ### Flags
+ - `-T` / `--no-tests` — exclude test files (use by default)
+ - `-j` / `--json` — JSON output for programmatic use
+ - `-f, --file <path>` — scope to a specific file
+ - `-k, --kind <kind>` — filter by symbol kind

  ### Semantic search

@@ -456,6 +471,8 @@ See **[docs/recommended-practices.md](docs/recommended-practices.md)** for integ
  - **Developer workflow** — watch mode, explore-before-you-edit, semantic search
  - **Secure credentials** — `apiKeyCommand` with 1Password, Bitwarden, Vault, macOS Keychain, `pass`

+ For AI-specific integration, see the **[AI Agent Guide](docs/ai-agent-guide.md)** — a comprehensive reference covering the 6-step agent workflow, complete command-to-MCP mapping, Claude Code hooks, and token-saving patterns.
+

  ## 🔁 CI / GitHub Actions
  Codegraph ships with a ready-to-use GitHub Actions workflow that comments impact analysis on every pull request.
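The README's semantic-search rows mention multi-query RRF ranking for queries like `"auth; token; JWT"`. As a hedged illustration only (not the package's actual implementation), Reciprocal Rank Fusion merges the ranked result lists of the sub-queries by summing `1 / (k + rank)` per item; the constant `k = 60` and the list shapes below are assumptions:

```javascript
// Reciprocal Rank Fusion sketch: fuse several ranked lists into one.
// A symbol ranked well by every sub-query beats one that tops a single list.
function rrfMerge(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((id, rank) => {
      // rank is 0-based here, so the best item contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) || 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1]) // highest fused score first
    .map(([id]) => id);
}

// Hypothetical per-query rankings for "auth", "token", "JWT":
const merged = rrfMerge([
  ['validateToken', 'decryptJWT', 'handleAuth'],
  ['decryptJWT', 'validateToken', 'signJWT'],
  ['validateToken', 'signJWT', 'decryptJWT'],
]);
console.log(merged[0]); // 'validateToken'
```

`validateToken` wins because it places first or second in every list, even though `decryptJWT` tops one of them.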
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@optave/codegraph",
- "version": "2.1.1-dev.0e15f12",
+ "version": "2.2.0",
  "description": "Local code graph CLI — parse codebases with tree-sitter, build dependency graphs, query them",
  "type": "module",
  "main": "src/index.js",
@@ -61,10 +61,10 @@
  "optionalDependencies": {
  "@huggingface/transformers": "^3.8.1",
  "@modelcontextprotocol/sdk": "^1.0.0",
- "@optave/codegraph-darwin-arm64": "2.1.1-dev.0e15f12",
- "@optave/codegraph-darwin-x64": "2.1.1-dev.0e15f12",
- "@optave/codegraph-linux-x64-gnu": "2.1.1-dev.0e15f12",
- "@optave/codegraph-win32-x64-msvc": "2.1.1-dev.0e15f12"
+ "@optave/codegraph-darwin-arm64": "2.2.0",
+ "@optave/codegraph-darwin-x64": "2.2.0",
+ "@optave/codegraph-linux-x64-gnu": "2.2.0",
+ "@optave/codegraph-win32-x64-msvc": "2.2.0"
  },
  "devDependencies": {
  "@biomejs/biome": "^2.4.4",
package/src/cli.js CHANGED
@@ -62,18 +62,20 @@ program
  .command('query <name>')
  .description('Find a function/class, show callers and callees')
  .option('-d, --db <path>', 'Path to graph.db')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((name, opts) => {
- queryName(name, opts.db, { json: opts.json });
+ queryName(name, opts.db, { noTests: !opts.tests, json: opts.json });
  });

  program
  .command('impact <file>')
  .description('Show what depends on this file (transitive)')
  .option('-d, --db <path>', 'Path to graph.db')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((file, opts) => {
- impactAnalysis(file, opts.db, { json: opts.json });
+ impactAnalysis(file, opts.db, { noTests: !opts.tests, json: opts.json });
  });

  program
@@ -81,27 +83,30 @@ program
  .description('High-level module overview with most-connected nodes')
  .option('-d, --db <path>', 'Path to graph.db')
  .option('-n, --limit <number>', 'Number of top nodes', '20')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((opts) => {
- moduleMap(opts.db, parseInt(opts.limit, 10), { json: opts.json });
+ moduleMap(opts.db, parseInt(opts.limit, 10), { noTests: !opts.tests, json: opts.json });
  });

  program
  .command('stats')
  .description('Show graph health overview: nodes, edges, languages, cycles, hotspots, embeddings')
  .option('-d, --db <path>', 'Path to graph.db')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((opts) => {
- stats(opts.db, { json: opts.json });
+ stats(opts.db, { noTests: !opts.tests, json: opts.json });
  });

  program
  .command('deps <file>')
  .description('Show what this file imports and what imports it')
  .option('-d, --db <path>', 'Path to graph.db')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((file, opts) => {
- fileDeps(file, opts.db, { json: opts.json });
+ fileDeps(file, opts.db, { noTests: !opts.tests, json: opts.json });
  });

  program
@@ -159,7 +164,7 @@ program
  .option('-k, --kind <kind>', 'Filter to a specific symbol kind')
  .option('--no-source', 'Metadata only (skip source extraction)')
  .option('--include-tests', 'Include test source code')
- .option('-T, --no-tests', 'Exclude test files from callers')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((name, opts) => {
  if (opts.kind && !ALL_SYMBOL_KINDS.includes(opts.kind)) {
@@ -181,7 +186,7 @@ program
  .command('explain <target>')
  .description('Structural summary of a file or function (no LLM needed)')
  .option('-d, --db <path>', 'Path to graph.db')
- .option('-T, --no-tests', 'Exclude test/spec files')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((target, opts) => {
  explain(target, opts.db, { noTests: !opts.tests, json: opts.json });
@@ -192,7 +197,7 @@ program
  .description('Find where a symbol is defined and used (minimal, fast lookup)')
  .option('-d, --db <path>', 'Path to graph.db')
  .option('-f, --file <path>', 'File overview: list symbols, imports, exports')
- .option('-T, --no-tests', 'Exclude test/spec files')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action((name, opts) => {
  if (!name && !opts.file) {
@@ -229,10 +234,11 @@ program
  .option('-d, --db <path>', 'Path to graph.db')
  .option('-f, --format <format>', 'Output format: dot, mermaid, json', 'dot')
  .option('--functions', 'Function-level graph instead of file-level')
+ .option('-T, --no-tests', 'Exclude test/spec files')
  .option('-o, --output <file>', 'Write to file instead of stdout')
  .action((opts) => {
  const db = new Database(findDbPath(opts.db), { readonly: true });
- const exportOpts = { fileLevel: !opts.functions };
+ const exportOpts = { fileLevel: !opts.functions, noTests: !opts.tests };

  let output;
  switch (opts.format) {
@@ -240,7 +246,7 @@ program
  output = exportMermaid(db, exportOpts);
  break;
  case 'json':
- output = JSON.stringify(exportJSON(db), null, 2);
+ output = JSON.stringify(exportJSON(db, exportOpts), null, 2);
  break;
  default:
  output = exportDOT(db, exportOpts);
@@ -262,10 +268,11 @@ program
  .description('Detect circular dependencies in the codebase')
  .option('-d, --db <path>', 'Path to graph.db')
  .option('--functions', 'Function-level cycle detection')
+ .option('-T, --no-tests', 'Exclude test/spec files')
  .option('-j, --json', 'Output as JSON')
  .action((opts) => {
  const db = new Database(findDbPath(opts.db), { readonly: true });
- const cycles = findCycles(db, { fileLevel: !opts.functions });
+ const cycles = findCycles(db, { fileLevel: !opts.functions, noTests: !opts.tests });
  db.close();

  if (opts.json) {
@@ -395,7 +402,7 @@ program
  .option('-d, --db <path>', 'Path to graph.db')
  .option('-m, --model <name>', 'Override embedding model (auto-detects from DB)')
  .option('-n, --limit <number>', 'Max results', '15')
- .option('-T, --no-tests', 'Exclude test/spec files')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('--min-score <score>', 'Minimum similarity threshold', '0.2')
  .option('-k, --kind <kind>', 'Filter by kind: function, method, class')
  .option('--file <pattern>', 'Filter by file path pattern')
@@ -420,6 +427,7 @@ program
  .option('-d, --db <path>', 'Path to graph.db')
  .option('--depth <n>', 'Max directory depth')
  .option('--sort <metric>', 'Sort by: cohesion | fan-in | fan-out | density | files', 'files')
+ .option('-T, --no-tests', 'Exclude test/spec files')
  .option('-j, --json', 'Output as JSON')
  .action(async (dir, opts) => {
  const { structureData, formatStructure } = await import('./structure.js');
@@ -427,6 +435,7 @@ program
  directory: dir,
  depth: opts.depth ? parseInt(opts.depth, 10) : undefined,
  sort: opts.sort,
+ noTests: !opts.tests,
  });
  if (opts.json) {
  console.log(JSON.stringify(data, null, 2));
@@ -444,6 +453,7 @@ program
  .option('-n, --limit <number>', 'Number of results', '10')
  .option('--metric <metric>', 'fan-in | fan-out | density | coupling', 'fan-in')
  .option('--level <level>', 'file | directory', 'file')
+ .option('-T, --no-tests', 'Exclude test/spec files from results')
  .option('-j, --json', 'Output as JSON')
  .action(async (opts) => {
  const { hotspotsData, formatHotspots } = await import('./structure.js');
@@ -451,6 +461,7 @@ program
  metric: opts.metric,
  level: opts.level,
  limit: parseInt(opts.limit, 10),
+ noTests: !opts.tests,
  });
  if (opts.json) {
  console.log(JSON.stringify(data, null, 2));
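Every action in this file passes `{ noTests: !opts.tests }` because Commander's negated-option convention stores a `--no-X` flag as `opts.X`, defaulting to `true` and flipping to `false` when the flag is given. The real CLI uses Commander; the stand-in parser below is a hypothetical sketch that mimics just that one rule to show why the sign flip is needed:

```javascript
// Stand-in for Commander's negated-option behavior: `--no-tests`
// (short alias -T) stores opts.tests = false; when absent, tests = true.
function parseFlags(argv) {
  const opts = { tests: true, json: false };
  for (const arg of argv) {
    if (arg === '--no-tests' || arg === '-T') opts.tests = false;
    if (arg === '--json' || arg === '-j') opts.json = true;
  }
  return opts;
}

// The query layer takes a positive `noTests` flag, so actions flip the sign.
function toQueryOpts(opts) {
  return { noTests: !opts.tests, json: opts.json };
}

console.log(toQueryOpts(parseFlags(['-T']))); // { noTests: true, json: false }
console.log(toQueryOpts(parseFlags([])));     // { noTests: false, json: false }
```

This is why a user-facing `--no-tests` flag ends up as `noTests: true` in `queryName`, `stats`, `fileDeps`, and the rest.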
package/src/cycles.js CHANGED
@@ -1,14 +1,16 @@
  import { loadNative } from './native.js';
+ import { isTestFile } from './queries.js';

  /**
  * Detect circular dependencies in the codebase using Tarjan's SCC algorithm.
  * Dispatches to native Rust implementation when available, falls back to JS.
  * @param {object} db - Open SQLite database
- * @param {object} opts - { fileLevel: true }
+ * @param {object} opts - { fileLevel: true, noTests: false }
  * @returns {string[][]} Array of cycles, each cycle is an array of file paths
  */
  export function findCycles(db, opts = {}) {
  const fileLevel = opts.fileLevel !== false;
+ const noTests = opts.noTests || false;

  // Build adjacency list from SQLite (stays in JS — only the algorithm can move to Rust)
  let edges;
@@ -22,6 +24,9 @@ export function findCycles(db, opts = {}) {
  WHERE n1.file != n2.file AND e.kind IN ('imports', 'imports-type')
  `)
  .all();
+ if (noTests) {
+ edges = edges.filter((e) => !isTestFile(e.source) && !isTestFile(e.target));
+ }
  } else {
  edges = db
  .prepare(`
@@ -37,6 +42,13 @@ export function findCycles(db, opts = {}) {
  AND n1.id != n2.id
  `)
  .all();
+ if (noTests) {
+ edges = edges.filter((e) => {
+ const sourceFile = e.source.split('|').pop();
+ const targetFile = e.target.split('|').pop();
+ return !isTestFile(sourceFile) && !isTestFile(targetFile);
+ });
+ }
  }

  // Try native Rust implementation
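The doc comment above says `findCycles` runs Tarjan's SCC algorithm, natively in Rust when possible with a JS fallback. As a rough sketch of what such a fallback looks like (assumed shape, not the package's actual code), Tarjan finds strongly connected components in one DFS pass over the edge list; any component with more than one node is a dependency cycle:

```javascript
// Minimal Tarjan SCC over an edge list [{ source, target }], returning
// only components of size > 1, i.e. circular dependencies. Sketch only.
function findCyclesJS(edges) {
  const adj = new Map();
  for (const { source, target } of edges) {
    if (!adj.has(source)) adj.set(source, []);
    if (!adj.has(target)) adj.set(target, []);
    adj.get(source).push(target);
  }
  let index = 0;
  const idx = new Map(), low = new Map(), onStack = new Set();
  const stack = [], cycles = [];
  function strongConnect(v) {
    idx.set(v, index); low.set(v, index); index++;
    stack.push(v); onStack.add(v);
    for (const w of adj.get(v)) {
      if (!idx.has(w)) {
        strongConnect(w);
        low.set(v, Math.min(low.get(v), low.get(w)));
      } else if (onStack.has(w)) {
        low.set(v, Math.min(low.get(v), idx.get(w)));
      }
    }
    if (low.get(v) === idx.get(v)) {
      // v is the root of an SCC: pop the component off the stack
      const scc = [];
      let w;
      do { w = stack.pop(); onStack.delete(w); scc.push(w); } while (w !== v);
      if (scc.length > 1) cycles.push(scc); // lone nodes are not cycles
    }
  }
  for (const v of adj.keys()) if (!idx.has(v)) strongConnect(v);
  return cycles;
}

const cycles = findCyclesJS([
  { source: 'a.js', target: 'b.js' },
  { source: 'b.js', target: 'a.js' },
  { source: 'b.js', target: 'c.js' }, // c.js is reachable but not cyclic
]);
console.log(cycles); // one two-file cycle containing a.js and b.js
```

Note how the diff's `noTests` filter runs *before* this step: pruning test edges from the adjacency list is cheaper than post-filtering discovered cycles.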
package/src/export.js CHANGED
@@ -1,10 +1,12 @@
  import path from 'node:path';
+ import { isTestFile } from './queries.js';

  /**
  * Export the dependency graph in DOT (Graphviz) format.
  */
  export function exportDOT(db, opts = {}) {
  const fileLevel = opts.fileLevel !== false;
+ const noTests = opts.noTests || false;
  const lines = [
  'digraph codegraph {',
  ' rankdir=LR;',
@@ -14,7 +16,7 @@ export function exportDOT(db, opts = {}) {
  ];

  if (fileLevel) {
- const edges = db
+ let edges = db
  .prepare(`
  SELECT DISTINCT n1.file AS source, n2.file AS target
  FROM edges e
@@ -23,6 +25,7 @@ export function exportDOT(db, opts = {}) {
  WHERE n1.file != n2.file AND e.kind IN ('imports', 'imports-type', 'calls')
  `)
  .all();
+ if (noTests) edges = edges.filter((e) => !isTestFile(e.source) && !isTestFile(e.target));

  // Try to use directory nodes from DB (built by structure analysis)
  const hasDirectoryNodes =
@@ -89,7 +92,7 @@ export function exportDOT(db, opts = {}) {
  lines.push(` "${source}" -> "${target}";`);
  }
  } else {
- const edges = db
+ let edges = db
  .prepare(`
  SELECT n1.name AS source_name, n1.kind AS source_kind, n1.file AS source_file,
  n2.name AS target_name, n2.kind AS target_kind, n2.file AS target_file,
@@ -101,6 +104,8 @@ export function exportDOT(db, opts = {}) {
  AND e.kind = 'calls'
  `)
  .all();
+ if (noTests)
+ edges = edges.filter((e) => !isTestFile(e.source_file) && !isTestFile(e.target_file));

  for (const e of edges) {
  const sId = `${e.source_file}:${e.source_name}`.replace(/[^a-zA-Z0-9_]/g, '_');
@@ -120,10 +125,11 @@ export function exportDOT(db, opts = {}) {
  */
  export function exportMermaid(db, opts = {}) {
  const fileLevel = opts.fileLevel !== false;
+ const noTests = opts.noTests || false;
  const lines = ['graph LR'];

  if (fileLevel) {
- const edges = db
+ let edges = db
  .prepare(`
  SELECT DISTINCT n1.file AS source, n2.file AS target
  FROM edges e
@@ -132,6 +138,7 @@ export function exportMermaid(db, opts = {}) {
  WHERE n1.file != n2.file AND e.kind IN ('imports', 'imports-type', 'calls')
  `)
  .all();
+ if (noTests) edges = edges.filter((e) => !isTestFile(e.source) && !isTestFile(e.target));

  for (const { source, target } of edges) {
  const s = source.replace(/[^a-zA-Z0-9]/g, '_');
@@ -139,7 +146,7 @@ export function exportMermaid(db, opts = {}) {
  lines.push(` ${s}["${source}"] --> ${t}["${target}"]`);
  }
  } else {
- const edges = db
+ let edges = db
  .prepare(`
  SELECT n1.name AS source_name, n1.file AS source_file,
  n2.name AS target_name, n2.file AS target_file
@@ -150,6 +157,8 @@ export function exportMermaid(db, opts = {}) {
  AND e.kind = 'calls'
  `)
  .all();
+ if (noTests)
+ edges = edges.filter((e) => !isTestFile(e.source_file) && !isTestFile(e.target_file));

  for (const e of edges) {
  const sId = `${e.source_file}_${e.source_name}`.replace(/[^a-zA-Z0-9]/g, '_');
@@ -164,14 +173,17 @@ export function exportMermaid(db, opts = {}) {
  /**
  * Export as JSON adjacency list.
  */
- export function exportJSON(db) {
- const nodes = db
+ export function exportJSON(db, opts = {}) {
+ const noTests = opts.noTests || false;
+
+ let nodes = db
  .prepare(`
  SELECT id, name, kind, file, line FROM nodes WHERE kind = 'file'
  `)
  .all();
+ if (noTests) nodes = nodes.filter((n) => !isTestFile(n.file));

- const edges = db
+ let edges = db
  .prepare(`
  SELECT DISTINCT n1.file AS source, n2.file AS target, e.kind
  FROM edges e
@@ -180,6 +192,7 @@ export function exportJSON(db) {
  WHERE n1.file != n2.file
  `)
  .all();
+ if (noTests) edges = edges.filter((e) => !isTestFile(e.source) && !isTestFile(e.target));

  return { nodes, edges };
  }
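`exportMermaid` sanitizes node ids with `replace(/[^a-zA-Z0-9]/g, '_')` because Mermaid node ids cannot contain slashes or dots, while the bracketed label keeps the readable path. A reduced sketch of the file-level path combining that sanitization with the new `noTests` filter (the edge-row shape and `TEST_PATTERN` regex are taken from this diff; everything else is illustrative):

```javascript
// Same test-file pattern queries.js exports in this diff.
const TEST_PATTERN = /\.(test|spec)\.|__test__|__tests__|\.stories\./;
const isTestFile = (p) => TEST_PATTERN.test(p);

// Reduced sketch of a Mermaid file-graph exporter: sanitize ids for
// Mermaid, keep readable labels, optionally drop test-file edges.
function toMermaid(edges, { noTests = false } = {}) {
  const lines = ['graph LR'];
  if (noTests) edges = edges.filter((e) => !isTestFile(e.source) && !isTestFile(e.target));
  for (const { source, target } of edges) {
    const s = source.replace(/[^a-zA-Z0-9]/g, '_');
    const t = target.replace(/[^a-zA-Z0-9]/g, '_');
    lines.push(`  ${s}["${source}"] --> ${t}["${target}"]`);
  }
  return lines.join('\n');
}

const out = toMermaid(
  [
    { source: 'src/cli.js', target: 'src/queries.js' },
    { source: 'src/queries.test.js', target: 'src/queries.js' }, // filtered out
  ],
  { noTests: true },
);
console.log(out);
// graph LR
//   src_cli_js["src/cli.js"] --> src_queries_js["src/queries.js"]
```

Filtering on both endpoints means an edge survives only when neither side is a test file, matching the `!isTestFile(e.source) && !isTestFile(e.target)` predicate used throughout the diff.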
package/src/mcp.js CHANGED
@@ -30,6 +30,7 @@ const BASE_TOOLS = [
  description: 'Traversal depth for transitive callers',
  default: 2,
  },
+ no_tests: { type: 'boolean', description: 'Exclude test files', default: false },
  },
  required: ['name'],
  },
@@ -41,6 +42,7 @@ const BASE_TOOLS = [
  type: 'object',
  properties: {
  file: { type: 'string', description: 'File path (partial match supported)' },
+ no_tests: { type: 'boolean', description: 'Exclude test files', default: false },
  },
  required: ['file'],
  },
@@ -52,6 +54,7 @@ const BASE_TOOLS = [
  type: 'object',
  properties: {
  file: { type: 'string', description: 'File path to analyze' },
+ no_tests: { type: 'boolean', description: 'Exclude test files', default: false },
  },
  required: ['file'],
  },
@@ -71,6 +74,7 @@ const BASE_TOOLS = [
  type: 'object',
  properties: {
  limit: { type: 'number', description: 'Number of top files to show', default: 20 },
+ no_tests: { type: 'boolean', description: 'Exclude test files', default: false },
  },
  },
  },
@@ -282,6 +286,7 @@ const BASE_TOOLS = [
  description: 'Rank files or directories',
  },
  limit: { type: 'number', description: 'Number of results to return', default: 10 },
+ no_tests: { type: 'boolean', description: 'Exclude test files', default: false },
  },
  },
  },
@@ -408,13 +413,13 @@ export async function startMCPServer(customDbPath, options = {}) {
  let result;
  switch (name) {
  case 'query_function':
- result = queryNameData(args.name, dbPath);
+ result = queryNameData(args.name, dbPath, { noTests: args.no_tests });
  break;
  case 'file_deps':
- result = fileDepsData(args.file, dbPath);
+ result = fileDepsData(args.file, dbPath, { noTests: args.no_tests });
  break;
  case 'impact_analysis':
- result = impactAnalysisData(args.file, dbPath);
+ result = impactAnalysisData(args.file, dbPath, { noTests: args.no_tests });
  break;
  case 'find_cycles': {
  const db = new Database(findDbPath(dbPath), { readonly: true });
@@ -424,7 +429,7 @@ export async function startMCPServer(customDbPath, options = {}) {
  break;
  }
  case 'module_map':
- result = moduleMapData(dbPath, args.limit || 20);
+ result = moduleMapData(dbPath, args.limit || 20, { noTests: args.no_tests });
  break;
  case 'fn_deps':
  result = fnDepsData(args.name, dbPath, {
@@ -536,6 +541,7 @@ export async function startMCPServer(customDbPath, options = {}) {
  metric: args.metric,
  level: args.level,
  limit: args.limit,
+ noTests: args.no_tests,
  });
  break;
  }
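The mcp.js changes thread a snake_case `no_tests` tool argument (the JSON-Schema convention used in the tool definitions) through to the camelCase `noTests` option the query layer expects. A hypothetical dispatcher table sketching that shape (the real server uses the MCP SDK and a `switch`; the handler names and return shape here are invented for illustration):

```javascript
// Hypothetical dispatch table: each tool handler translates the MCP
// call's snake_case args into the camelCase opts of the query layer.
const handlers = {
  query_function: (args) => ({ fn: 'queryNameData', opts: { noTests: args.no_tests } }),
  file_deps: (args) => ({ fn: 'fileDepsData', opts: { noTests: args.no_tests } }),
  module_map: (args) => ({
    fn: 'moduleMapData',
    limit: args.limit || 20, // same default the diff applies inline
    opts: { noTests: args.no_tests },
  }),
};

function dispatch(name, args) {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

const call = dispatch('module_map', { no_tests: true });
console.log(call.opts.noTests); // true
console.log(call.limit);        // 20
```

Keeping the translation at the dispatch boundary means the data functions stay agnostic of MCP naming conventions.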
package/src/queries.js CHANGED
@@ -17,7 +17,7 @@ function safePath(repoRoot, file) {
  }

  const TEST_PATTERN = /\.(test|spec)\.|__test__|__tests__|\.stories\./;
- function isTestFile(filePath) {
+ export function isTestFile(filePath) {
  return TEST_PATTERN.test(filePath);
  }

@@ -190,16 +190,18 @@ function kindIcon(kind) {

  // ─── Data-returning functions ───────────────────────────────────────────

- export function queryNameData(name, customDbPath) {
+ export function queryNameData(name, customDbPath, opts = {}) {
  const db = openReadonlyOrFail(customDbPath);
- const nodes = db.prepare(`SELECT * FROM nodes WHERE name LIKE ?`).all(`%${name}%`);
+ const noTests = opts.noTests || false;
+ let nodes = db.prepare(`SELECT * FROM nodes WHERE name LIKE ?`).all(`%${name}%`);
+ if (noTests) nodes = nodes.filter((n) => !isTestFile(n.file));
  if (nodes.length === 0) {
  db.close();
  return { query: name, results: [] };
  }

  const results = nodes.map((node) => {
- const callees = db
+ let callees = db
  .prepare(`
  SELECT n.name, n.kind, n.file, n.line, e.kind as edge_kind
  FROM edges e JOIN nodes n ON e.target_id = n.id
@@ -207,7 +209,7 @@ export function queryNameData(name, customDbPath) {
  `)
  .all(node.id);

- const callers = db
+ let callers = db
  .prepare(`
  SELECT n.name, n.kind, n.file, n.line, e.kind as edge_kind
  FROM edges e JOIN nodes n ON e.source_id = n.id
@@ -215,6 +217,11 @@ export function queryNameData(name, customDbPath) {
  `)
  .all(node.id);

+ if (noTests) {
+ callees = callees.filter((c) => !isTestFile(c.file));
+ callers = callers.filter((c) => !isTestFile(c.file));
+ }
+
  return {
  name: node.name,
  kind: node.kind,
@@ -728,11 +735,40 @@ export function listFunctionsData(customDbPath, opts = {}) {
  return { count: rows.length, functions: rows };
  }

- export function statsData(customDbPath) {
+ export function statsData(customDbPath, opts = {}) {
  const db = openReadonlyOrFail(customDbPath);
+ const noTests = opts.noTests || false;
+
+ // Build set of test file IDs for filtering nodes and edges
+ let testFileIds = null;
+ if (noTests) {
+ const allFileNodes = db.prepare("SELECT id, file FROM nodes WHERE kind = 'file'").all();
+ testFileIds = new Set();
+ const testFiles = new Set();
+ for (const n of allFileNodes) {
+ if (isTestFile(n.file)) {
+ testFileIds.add(n.id);
+ testFiles.add(n.file);
+ }
+ }
+ // Also collect non-file node IDs that belong to test files
+ const allNodes = db.prepare('SELECT id, file FROM nodes').all();
+ for (const n of allNodes) {
+ if (testFiles.has(n.file)) testFileIds.add(n.id);
+ }
+ }

  // Node breakdown by kind
- const nodeRows = db.prepare('SELECT kind, COUNT(*) as c FROM nodes GROUP BY kind').all();
+ let nodeRows;
+ if (noTests) {
+ const allNodes = db.prepare('SELECT id, kind, file FROM nodes').all();
+ const filtered = allNodes.filter((n) => !testFileIds.has(n.id));
+ const counts = {};
+ for (const n of filtered) counts[n.kind] = (counts[n.kind] || 0) + 1;
+ nodeRows = Object.entries(counts).map(([kind, c]) => ({ kind, c }));
+ } else {
+ nodeRows = db.prepare('SELECT kind, COUNT(*) as c FROM nodes GROUP BY kind').all();
+ }
  const nodesByKind = {};
  let totalNodes = 0;
  for (const r of nodeRows) {
@@ -741,7 +777,18 @@ export function statsData(customDbPath) {
  }

  // Edge breakdown by kind
- const edgeRows = db.prepare('SELECT kind, COUNT(*) as c FROM edges GROUP BY kind').all();
+ let edgeRows;
+ if (noTests) {
+ const allEdges = db.prepare('SELECT source_id, target_id, kind FROM edges').all();
+ const filtered = allEdges.filter(
+ (e) => !testFileIds.has(e.source_id) && !testFileIds.has(e.target_id),
+ );
+ const counts = {};
+ for (const e of filtered) counts[e.kind] = (counts[e.kind] || 0) + 1;
+ edgeRows = Object.entries(counts).map(([kind, c]) => ({ kind, c }));
+ } else {
+ edgeRows = db.prepare('SELECT kind, COUNT(*) as c FROM edges GROUP BY kind').all();
+ }
  const edgesByKind = {};
  let totalEdges = 0;
  for (const r of edgeRows) {
@@ -756,7 +803,8 @@ export function statsData(customDbPath) {
  extToLang.set(ext, entry.id);
  }
  }
- const fileNodes = db.prepare("SELECT file FROM nodes WHERE kind = 'file'").all();
+ let fileNodes = db.prepare("SELECT file FROM nodes WHERE kind = 'file'").all();
+ if (noTests) fileNodes = fileNodes.filter((n) => !isTestFile(n.file));
  const byLanguage = {};
  for (const row of fileNodes) {
  const ext = path.extname(row.file).toLowerCase();
@@ -766,23 +814,30 @@ export function statsData(customDbPath) {
  const langCount = Object.keys(byLanguage).length;

  // Cycles
- const fileCycles = findCycles(db, { fileLevel: true });
- const fnCycles = findCycles(db, { fileLevel: false });
+ const fileCycles = findCycles(db, { fileLevel: true, noTests });
+ const fnCycles = findCycles(db, { fileLevel: false, noTests });

  // Top 5 coupling hotspots (fan-in + fan-out, file nodes)
+ const testFilter = noTests
+ ? `AND n.file NOT LIKE '%.test.%'
+ AND n.file NOT LIKE '%.spec.%'
+ AND n.file NOT LIKE '%__test__%'
+ AND n.file NOT LIKE '%__tests__%'
+ AND n.file NOT LIKE '%.stories.%'`
+ : '';
  const hotspotRows = db
  .prepare(`
  SELECT n.file,
  (SELECT COUNT(*) FROM edges WHERE target_id = n.id) as fan_in,
  (SELECT COUNT(*) FROM edges WHERE source_id = n.id) as fan_out
  FROM nodes n
- WHERE n.kind = 'file'
+ WHERE n.kind = 'file' ${testFilter}
  ORDER BY (SELECT COUNT(*) FROM edges WHERE target_id = n.id)
  + (SELECT COUNT(*) FROM edges WHERE source_id = n.id) DESC
- LIMIT 5
  `)
  .all();
- const hotspots = hotspotRows.map((r) => ({
+ const filteredHotspots = noTests ? hotspotRows.filter((r) => !isTestFile(r.file)) : hotspotRows;
+ const hotspots = filteredHotspots.slice(0, 5).map((r) => ({
  file: r.file,
  fanIn: r.fan_in,
  fanOut: r.fan_out,
@@ -808,14 +863,17 @@ export function statsData(customDbPath) {
  }

  // Graph quality metrics
+ const qualityTestFilter = testFilter.replace(/n\.file/g, 'file');
  const totalCallable = db
- .prepare("SELECT COUNT(*) as c FROM nodes WHERE kind IN ('function', 'method')")
+ .prepare(
+ `SELECT COUNT(*) as c FROM nodes WHERE kind IN ('function', 'method') ${qualityTestFilter}`,
+ )
  .get().c;
  const callableWithCallers = db
  .prepare(`
  SELECT COUNT(DISTINCT e.target_id) as c FROM edges e
  JOIN nodes n ON e.target_id = n.id
- WHERE e.kind = 'calls' AND n.kind IN ('function', 'method')
+ WHERE e.kind = 'calls' AND n.kind IN ('function', 'method') ${testFilter}
  `)
  .get().c;
879
  const callerCoverage = totalCallable > 0 ? callableWithCallers / totalCallable : 0;
@@ -881,7 +939,7 @@ export function statsData(customDbPath) {
881
939
  }
882
940
 
883
941
  export function stats(customDbPath, opts = {}) {
884
- const data = statsData(customDbPath);
942
+ const data = statsData(customDbPath, { noTests: opts.noTests });
885
943
  if (opts.json) {
886
944
  console.log(JSON.stringify(data, null, 2));
887
945
  return;
@@ -979,7 +1037,7 @@ export function stats(customDbPath, opts = {}) {
979
1037
  // ─── Human-readable output (original formatting) ───────────────────────
980
1038
 
981
1039
  export function queryName(name, customDbPath, opts = {}) {
982
- const data = queryNameData(name, customDbPath);
1040
+ const data = queryNameData(name, customDbPath, { noTests: opts.noTests });
983
1041
  if (opts.json) {
984
1042
  console.log(JSON.stringify(data, null, 2));
985
1043
  return;
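The `noTests` paths above all key off a shared `isTestFile` predicate imported from `src/queries.js`. Its implementation is not shown in this diff, but the SQL `NOT LIKE` patterns in the hotspot query suggest a simple substring match. A minimal sketch of such a predicate — an assumption inferred from those patterns, not the package's actual code:

```javascript
// Hypothetical sketch of an isTestFile-style predicate, inferred from the
// NOT LIKE patterns in the hotspot query above; the real helper may differ.
const TEST_PATH_PATTERNS = ['.test.', '.spec.', '__test__', '__tests__', '.stories.'];

function looksLikeTestFile(filePath) {
  // SQL LIKE '%x%' is a plain substring match, so includes() mirrors it.
  return TEST_PATH_PATTERNS.some((p) => filePath.includes(p));
}

console.log(looksLikeTestFile('src/auth/login.test.js')); // true
console.log(looksLikeTestFile('src/auth/login.js')); // false
```

Note that `'%__tests__%'` is subsumed by `'%__test__%'` under substring matching; the SQL filter lists both explicitly, so the sketch does too.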
package/src/structure.js CHANGED
@@ -2,6 +2,7 @@ import path from 'node:path';
 import { normalizePath } from './constants.js';
 import { openReadonlyOrFail } from './db.js';
 import { debug } from './logger.js';
+import { isTestFile } from './queries.js';
 
 // ─── Build-time: insert directory nodes, contains edges, and metrics ────
 
@@ -233,6 +234,7 @@ export function structureData(customDbPath, opts = {}) {
   const filterDir = opts.directory || null;
   const maxDepth = opts.depth || null;
   const sortBy = opts.sort || 'files';
+  const noTests = opts.noTests || false;
 
   // Get all directory nodes with their metrics
   let dirs = db
@@ -263,7 +265,7 @@ export function structureData(customDbPath, opts = {}) {
 
   // Get file metrics for each directory
   const result = dirs.map((d) => {
-    const files = db
+    let files = db
      .prepare(`
        SELECT n.name, nm.line_count, nm.symbol_count, nm.import_count, nm.export_count, nm.fan_in, nm.fan_out
        FROM edges e
@@ -272,6 +274,7 @@ export function structureData(customDbPath, opts = {}) {
        WHERE e.source_id = ? AND e.kind = 'contains' AND n.kind = 'file'
      `)
      .all(d.id);
+    if (noTests) files = files.filter((f) => !isTestFile(f.name));
 
    const subdirs = db
      .prepare(`
@@ -282,14 +285,15 @@ export function structureData(customDbPath, opts = {}) {
      `)
      .all(d.id);
 
+    const fileCount = noTests ? files.length : d.file_count || 0;
    return {
      directory: d.name,
-      fileCount: d.file_count || 0,
+      fileCount,
      symbolCount: d.symbol_count || 0,
      fanIn: d.fan_in || 0,
      fanOut: d.fan_out || 0,
      cohesion: d.cohesion,
-      density: d.file_count > 0 ? (d.symbol_count || 0) / d.file_count : 0,
+      density: fileCount > 0 ? (d.symbol_count || 0) / fileCount : 0,
      files: files.map((f) => ({
        file: f.name,
        lineCount: f.line_count || 0,