@iceinvein/code-intelligence-mcp 2.1.0 → 2.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +215 -387
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,38 +1,28 @@
  # Code Intelligence MCP Server

- > **Semantic search and code navigation for LLM agents.**
+ > **Give your AI coding agent a deep understanding of your codebase.**

  [![NPM Version](https://img.shields.io/npm/v/@iceinvein/code-intelligence-mcp?style=flat-square&color=blue)](https://www.npmjs.com/package/@iceinvein/code-intelligence-mcp)
  [![License](https://img.shields.io/badge/license-MIT-green?style=flat-square)](LICENSE)
  [![MCP](https://img.shields.io/badge/MCP-Enabled-orange?style=flat-square)](https://modelcontextprotocol.io)
+ [![Platform](https://img.shields.io/badge/platform-macOS%20(Apple%20Silicon)-lightgrey?style=flat-square)]()

- ---
-
- This server indexes your codebase locally to provide **fast, semantic, and structure-aware** code navigation to tools like Claude Code, OpenCode, Trae, and Cursor.
-
- ## Why Use This Server?
-
- Unlike basic text search, this server builds a local knowledge graph to understand your code.
+ A local code indexing engine that gives LLM agents like **Claude Code**, **Cursor**, **Trae**, and **OpenCode** semantic search, call graphs, type hierarchies, and impact analysis across your codebase. Written in Rust with Metal GPU acceleration.

- * **Advanced Hybrid Search**: Combines keyword search ([BM25](#glossary) via Tantivy) with semantic vector search (via LanceDB + jina-code-embeddings-0.5b) using [Reciprocal Rank Fusion (RRF)](#glossary) — a technique that merges ranked results from different search systems by position rather than raw score.
- * **Smart Context Assembly**: Token-aware budgeting with query-aware truncation that keeps relevant lines within context limits.
- * **On-Device LLM Descriptions**: Automatically generates natural-language descriptions for every symbol using a local **Qwen2.5-Coder-1.5B** model (llama.cpp with Metal GPU), enriching search with human-readable summaries. This bridges the vocabulary gap between how developers search ("auth handler") and how code is named (`authenticate_request`).
- * **PageRank Scoring**: Graph-based symbol importance scoring (similar to Google's original algorithm) that identifies central, heavily-used components by analyzing call graphs and type relationships.
- * **Learns from Feedback**: Optional learning system that adapts to user selections over time.
- * **Production First**: Multi-layer test detection (file paths, symbol names, and AST-level `#[test]`/`mod tests` analysis) ensures implementation code ranks above test helpers.
- * **Multi-Repo Support**: Index and search across multiple repositories/monorepos simultaneously.
- * **OS-Native File Watching**: Uses the `notify` crate with macOS FSEvents for instant re-indexing on file changes.
- * **Fast & Local**: Written in **Rust** with Metal GPU acceleration on Apple Silicon. Parallel indexing with persistent caching.
+ **Zero config. Runs via `npx`. Indexes in the background.**

  ---

- ## Quick Start
-
- Runs directly via `npx` without requiring a local Rust toolchain.
+ ## Install

  ### Claude Code

- Add to your MCP settings (global `~/.claude.json` or project-level `.mcp.json`):
+ ```bash
+ claude mcp add code-intelligence -- npx -y @iceinvein/code-intelligence-mcp
+ ```
+
+ <details>
+ <summary>Or add manually to <code>~/.claude.json</code></summary>

  ```json
  {
@@ -45,18 +35,26 @@ Add to your MCP settings (global `~/.claude.json` or project-level `.mcp.json`):
  }
  }
  ```
+ </details>

- Or install via the CLI:
+ ### Cursor

- ```bash
- claude mcp add code-intelligence -- npx -y @iceinvein/code-intelligence-mcp
- ```
+ Add to `.cursor/mcp.json`:

- Once connected, Claude Code gains 23 MCP tools for semantic search (`search_code`), symbol navigation (`get_definition`, `find_references`), call/type graphs (`get_call_hierarchy`, `get_type_graph`), impact analysis (`find_affected_code`, `trace_data_flow`), and more. The server auto-detects the working directory and begins indexing in the background.
+ ```json
+ {
+   "mcpServers": {
+     "code-intelligence": {
+       "command": "npx",
+       "args": ["-y", "@iceinvein/code-intelligence-mcp"]
+     }
+   }
+ }
+ ```

  ### OpenCode / Trae

- Add to your `opencode.json` (or global config):
+ Add to `opencode.json`:

  ```json
  {
@@ -70,55 +68,114 @@ Add to your `opencode.json` (or global config):
  }
  }

- *The server will automatically download the embedding model (~531MB) and LLM (~1.1GB) on first launch, then index your project in the background.*
+ > On first launch, the server downloads the embedding model (~531 MB) and LLM (~1.1 GB), then indexes your project in the background. Models are cached in `~/.code-intelligence/models/`.

  ---

- ## Standalone Server Mode
+ ## What It Does

- By default, each MCP client spawns its own server process (stdio transport). If you run multiple clients against the same repo, a per-repo leader lock (`flock()`) ensures only one instance performs indexing, file watching, and LLM description generation. The leader loads the LLM (~1.1GB) during indexing and automatically frees it once descriptions are complete. Follower instances never load the LLM — they open the search index read-only and pick up the leader's changes. All instances load their own copy of the embedding model (~531MB) for query-time vector search.
+ Unlike basic text search (grep/ripgrep), this server builds a **local knowledge graph** of your code and exposes it through 23 MCP tools.

- **Standalone mode** runs a single long-lived HTTP server that all clients share. The main advantage is cross-repo deduplication — in stdio mode, each instance loads its own embedding model regardless of which repo it's on. With 5 instances across 3 repos, that's 5 copies (~2.6GB). Standalone loads the models once and shares them across all repos and clients.
+ | Capability | How It Works |
+ |---|---|
+ | **Hybrid search** | BM25 keyword search (Tantivy) + semantic vector search (LanceDB) merged via Reciprocal Rank Fusion |
+ | **On-device LLM descriptions** | Qwen2.5-Coder-1.5B generates natural-language summaries for every symbol, bridging the gap between how you search ("auth handler") and how code is named (`authenticate_request`) |
+ | **Graph intelligence** | Call hierarchies, type graphs, dependency trees, and PageRank-based importance scoring |
+ | **Impact analysis** | Find all code affected by a change before you make it |
+ | **Smart ranking** | Test detection, export boosting, directory semantics, intent detection, edge expansion, and score-gap filtering |
+ | **Multi-repo** | Index and search across multiple repositories simultaneously |
+ | **Auto-reindex** | OS-native file watching (FSEvents) keeps the index fresh as you code |
+
+ ---

- ### Starting the Server
+ ## Tools (23)
+
+ ### Search & Navigation
+
+ | Tool | What It Does |
+ |---|---|
+ | `search_code` | Semantic + keyword hybrid search. Handles natural language ("how does auth work?") and structural queries ("class User") |
+ | `get_definition` | Jump to a symbol's full definition |
+ | `find_references` | Find all usages of a function, class, or variable |
+ | `get_call_hierarchy` | Upstream callers and downstream callees |
+ | `get_type_graph` | Inheritance chains, type aliases, implements relationships |
+ | `explore_dependency_graph` | Module-level import/export dependencies |
+ | `get_file_symbols` | All symbols defined in a file |
+ | `get_usage_examples` | Real-world usage examples from the codebase |
+
+ ### Analysis
+
+ | Tool | What It Does |
+ |---|---|
+ | `find_affected_code` | Reverse dependency analysis — what breaks if this changes? |
+ | `trace_data_flow` | Follow variable reads and writes through the code |
+ | `find_similar_code` | Semantically similar code to a given symbol |
+ | `get_similarity_cluster` | Symbols in the same semantic cluster |
+ | `explain_search` | Scoring breakdown explaining why results ranked as they did |
+ | `summarize_file` | File summary with symbol counts and key exports |
+ | `get_module_summary` | All exported symbols from a module with signatures |
+
+ ### Testing, Frameworks & Discovery
+
+ | Tool | What It Does |
+ |---|---|
+ | `find_tests_for_symbol` | Find tests that cover a given symbol |
+ | `search_todos` | Search TODO/FIXME comments |
+ | `search_decorators` | Find TypeScript/JavaScript decorators |
+ | `search_framework_patterns` | Find framework-specific patterns (routes, middleware, WebSocket handlers) |
+
+ ### Index Management
+
+ | Tool | What It Does |
+ |---|---|
+ | `hydrate_symbols` | Load full context for a set of symbol IDs |
+ | `report_selection` | Feedback loop — tell the server which result was useful |
+ | `refresh_index` | Manually trigger re-indexing |
+ | `get_index_stats` | Index statistics (files, symbols, edges, last updated) |
+
+ ---
+
+ ## Supported Languages
+
+ Rust, TypeScript/TSX, JavaScript, Python, Go, Java, C, C++
+
+ ---
+
+ ## Standalone Mode (Multi-Client)
+
+ By default, each MCP client spawns its own server process. If you run multiple clients (e.g. 5 Claude Code sessions across 3 repos), standalone mode loads the models **once** and shares them:

  ```bash
- # Default: localhost:3333
  npx @iceinvein/code-intelligence-mcp-standalone
+ ```

- # Custom host/port
- npx @iceinvein/code-intelligence-mcp-standalone --port 4444 --host 0.0.0.0
+ Then point all clients to `http://localhost:3333/mcp`:

- # From source
- ./target/release/code-intelligence-mcp-server --standalone
- ./target/release/code-intelligence-mcp-server --standalone --port 4444
+ <details>
+ <summary>Claude Code</summary>

- # Via environment variable
- CIMCP_MODE=standalone ./target/release/code-intelligence-mcp-server
+ ```bash
+ claude mcp add --transport http code-intelligence http://localhost:3333/mcp
  ```
+ </details>

- ### Connecting MCP Clients
-
- Point your MCP clients to the standalone server using Streamable HTTP transport:
+ <details>
+ <summary>Cursor</summary>

- **Claude Code** (`~/.claude.json` or project-level `.mcp.json`):
  ```json
  {
    "mcpServers": {
      "code-intelligence": {
-       "type": "streamable-http",
        "url": "http://localhost:3333/mcp"
      }
    }
  }
  ```
+ </details>

- Or via the CLI:
- ```bash
- claude mcp add --transport http code-intelligence http://localhost:3333/mcp
- ```
+ <details>
+ <summary>OpenCode</summary>

- **OpenCode** (`opencode.json`):
  ```json
  {
    "mcp": {
@@ -130,67 +187,79 @@ claude mcp add --transport http code-intelligence http://localhost:3333/mcp
  }
  }
  ```
+ </details>

- **Cursor** (`.cursor/mcp.json`):
- ```json
- {
-   "mcpServers": {
-     "code-intelligence": {
-       "url": "http://localhost:3333/mcp"
-     }
-   }
- }
+ The server auto-detects each client's workspace via the MCP `roots` capability; indexes are maintained per repo, while the models are shared.
+
+ ```
+       ┌───────────┐   ┌───────────┐   ┌───────────┐
+       │ Claude A  │   │ Cursor B  │   │ Trae C    │
+       └─────┬─────┘   └─────┬─────┘   └─────┬─────┘
+             │               │               │
+             └──────── POST /mcp ────────────┘
+                             │
+              ┌──────────────┴──────────────┐
+              │     Standalone Server       │
+              │   (shared models, once)     │
+              ├─────────────────────────────┤
+              │ Repo A    Repo B    Repo C  │
+              │ indexes   indexes   indexes │
+              └─────────────────────────────┘
  ```

- The server auto-detects each client's workspace root via the MCP `roots` capability — no `BASE_DIR` needed.
+ ---

- ### How It Works
+ ## Configuration

- ```mermaid
- flowchart TB
-     A[Claude Code - Session A] & B[Cursor - Session B] & C[Trae - Session C]
-     A & B & C -- "POST /mcp (Streamable HTTP)" --> Server
+ Works out of the box with no configuration. All settings are optional environment variables.

-     Server["Standalone MCP Server<br/>(single process, shared embedding model)"]
+ <details>
+ <summary><strong>Environment variables</strong></summary>

-     Server --> RA["Repo A indexes<br/>SQLite + Tantivy + LanceDB"]
-     Server --> RB["Repo B indexes<br/>SQLite + Tantivy + LanceDB"]
-     Server --> RC["Repo C indexes<br/>SQLite + Tantivy + LanceDB"]
- ```
+ **Core:**

- Each client session is bound to its workspace root. The server maintains separate indexes per repo but shares the embedding model across all of them.
+ | Variable | Default | Description |
+ |---|---|---|
+ | `WATCH_MODE` | `true` | Auto-reindex on file changes |
+ | `INDEX_PATTERNS` | `**/*.ts,**/*.rs,...` | Glob patterns to index |
+ | `EXCLUDE_PATTERNS` | `**/node_modules/**,...` | Glob patterns to exclude |
+ | `REPO_ROOTS` | — | Comma-separated paths for multi-repo |

- ### Data Storage
+ **Embeddings:**

- Both embedded (stdio) and standalone (HTTP) modes store all data in `~/.code-intelligence/`:
+ | Variable | Default | Description |
+ |---|---|---|
+ | `EMBEDDINGS_BACKEND` | `llamacpp` | `llamacpp` or `hash` (fast testing, no model download) |
+ | `EMBEDDINGS_DEVICE` | `metal` | `metal` (GPU) or `cpu` |
234
 
167
- ```text
168
- ~/.code-intelligence/
169
- ├── server.toml # Optional config file (standalone only)
170
- ├── models/ # Shared models (loaded once, shared across repos)
171
- │ ├── jina-code-embeddings-0.5b-gguf/ # Embedding model (~531MB, GGUF via llama.cpp)
172
- │ └── qwen2.5-coder-1.5b-gguf/ # LLM model (~1.1GB)
173
- ├── logs/
174
- │ └── server.log
175
- └── repos/
176
- ├── registry.json # Tracks all known repos
177
- ├── a1b2c3d4e5f6a7b8/ # Per-repo data (SHA256 hash of repo path)
178
- │ ├── code-intelligence.db
179
- │ ├── tantivy-index/
180
- │ └── vectors/
181
- └── f8e7d6c5b4a3f2e1/
182
- └── ...
183
- ```
235
+ **Ranking:**
236
+
237
+ | Variable | Default | Description |
238
+ |---|---|---|
239
+ | `HYBRID_ALPHA` | `0.7` | Vector vs keyword weight (0 = all keyword, 1 = all vector) |
240
+ | `RANK_EXPORTED_BOOST` | `1.0` | Boost for exported/public symbols |
241
+ | `RANK_TEST_PENALTY` | `0.1` | Penalty multiplier for test files |
242
+ | `RANK_POPULARITY_WEIGHT` | `0.05` | PageRank influence on ranking |
243
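The `HYBRID_ALPHA` knob sets the vector-vs-keyword balance. As a rough, hypothetical illustration of what that weight trades off (the server's actual fusion is rank-based RRF, so this linear blend is not its real scoring code):

```python
# Illustrative only: HYBRID_ALPHA is documented as a vector-vs-keyword weight
# (0 = all keyword, 1 = all vector). This linear blend is a hypothetical sketch
# of the trade-off, not the server's implementation.
def blend(vector_score: float, keyword_score: float, alpha: float = 0.7) -> float:
    return alpha * vector_score + (1.0 - alpha) * keyword_score

print(blend(0.9, 0.2))             # default alpha=0.7 leans toward the vector score
print(blend(0.9, 0.2, alpha=0.0))  # alpha=0 ignores the vector score entirely
```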
+
+ **Context:**

- The same repo always maps to the same hash regardless of mode, so embedded and standalone can share the same index data.
+ | Variable | Default | Description |
+ |---|---|---|
+ | `MAX_CONTEXT_TOKENS` | `8192` | Token budget for assembled context |
+ | `MAX_CONTEXT_BYTES` | `200000` | Byte-based fallback limit |

- ### Configuration
+ **Learning (off by default):**

- Standalone mode is configured via `~/.code-intelligence/server.toml` (created on first run with defaults). Environment variables and CLI flags override TOML settings.
+ | Variable | Default | Description |
+ |---|---|---|
+ | `LEARNING_ENABLED` | `false` | Track user selections to personalize results |
+ | `LEARNING_SELECTION_BOOST` | `0.1` | Max boost from selection history |
+ | `LEARNING_FILE_AFFINITY_BOOST` | `0.05` | Max boost from file access frequency |

- **Priority:** CLI flags > Environment variables > `server.toml` > Defaults
+ </details>

- **Example `server.toml`:**
+ <details>
+ <summary><strong>Standalone server config (<code>~/.code-intelligence/server.toml</code>)</strong></summary>

  ```toml
  [server]
@@ -198,332 +267,91 @@ host = "127.0.0.1"
  port = 3333

  [embeddings]
- backend = "llamacpp"    # llamacpp (default) or hash (testing)
- device = "metal"        # cpu or metal (macOS GPU)
+ backend = "llamacpp"
+ device = "metal"

  [repos.defaults]
  index_patterns = "**/*.ts,**/*.tsx,**/*.rs,**/*.py,**/*.go"
  exclude_patterns = "**/node_modules/**,**/dist/**,**/.git/**"
- watch_mode = true       # Auto-reindex on file changes
+ watch_mode = true

  [lifecycle]
  warm_ttl_seconds = 300  # How long idle repos stay in memory
  ```

- **Environment variable overrides (same as embedded mode):**
+ Priority: CLI flags > Environment variables > `server.toml` > Defaults

- | Variable | Example | Description |
- | :--- | :--- | :--- |
- | `CIMCP_MODE` | `standalone` | Alternative to `--standalone` flag |
- | `EMBEDDINGS_BACKEND` | `hash` | Override embedding backend (`llamacpp` or `hash`) |
- | `EMBEDDINGS_DEVICE` | `metal` | Override device (cpu/metal) |
- | `EMBEDDINGS_MODEL_DIR` | `/path/to/model` | Override model directory |
+ </details>

  ---

- ## Capabilities
-
- Available tools for the agent (23 tools total):
-
- ### Core Search & Navigation
-
- | Tool | Description |
- | :--- | :--- |
- | `search_code` | **Primary Search.** Finds code by meaning ("how does auth work?") or structure ("class User"). Supports query decomposition (e.g., "authentication and authorization"). |
- | `get_definition` | Retrieves the full definition of a specific symbol with disambiguation support. |
- | `find_references` | Finds all usages of a function, class, or variable. |
- | `get_call_hierarchy` | Lists upstream callers and downstream callees. |
- | `get_type_graph` | Explores inheritance (extends/implements) and type aliases. |
- | `explore_dependency_graph` | Explores module-level dependencies upstream or downstream. |
- | `get_file_symbols` | Lists all symbols defined in a specific file. |
- | `get_usage_examples` | Returns real-world examples of how a symbol is used in the codebase. |
-
- ### Advanced Analysis
-
- | Tool | Description |
- | :--- | :--- |
- | `explain_search` | Returns detailed scoring breakdown to understand why results ranked as they did. |
- | `find_similar_code` | Finds code semantically similar to a given symbol or code snippet. |
- | `trace_data_flow` | Traces variable reads and writes through the codebase to understand data flow. |
- | `find_affected_code` | Finds code that would be affected if a symbol changes (reverse dependencies). |
- | `get_similarity_cluster` | Returns symbols in the same semantic similarity cluster as a given symbol. |
- | `summarize_file` | Generates a summary of file contents including symbol counts, structure, and key exports. |
- | `get_module_summary` | Lists all exported symbols from a module/file with their signatures. |
-
- ### Testing, Frameworks & Documentation
-
- | Tool | Description |
- | :--- | :--- |
- | `search_todos` | Searches for TODO and FIXME comments to track technical debt. |
- | `find_tests_for_symbol` | Finds test files that test a given symbol or source file. |
- | `search_decorators` | Searches for TypeScript/JavaScript decorators (@Component, @Controller, @Get, @Post, etc.). |
- | `search_framework_patterns` | Searches for framework-specific patterns (e.g., Elysia routes, WebSocket handlers, middleware) with method/path filtering. |
-
- ### Context & Learning
-
- | Tool | Description |
- | :--- | :--- |
- | `hydrate_symbols` | Hydrates full context for a set of symbol IDs. |
- | `report_selection` | Records user selection feedback for learning (call when user selects a result). |
- | `refresh_index` | Manually triggers a re-index of the codebase. |
- | `get_index_stats` | Returns index statistics (files, symbols, edges, last updated). |
+ ## How Ranking Works

- ---
-
- ## Supported Languages
+ The search pipeline runs keyword search (BM25) and semantic vector search in parallel, merges them with Reciprocal Rank Fusion, then applies structural signals:

- The server supports semantic navigation and symbol extraction for the following languages:
+ - **Intent detection** — "struct User" boosts definitions, "who calls login" triggers graph lookup, "User schema" boosts models 50-75x
+ - **Query decomposition** — "authentication and authorization" automatically splits into sub-queries
+ - **LLM-enriched index** — on-device Qwen2.5-Coder generates descriptions bridging vocabulary gaps
+ - **PageRank** — graph-based importance scoring identifies central, heavily-used symbols
+ - **Morphological expansion** — `watch` matches `watcher`, `index` matches `reindex`
+ - **Multi-layer test detection** — file paths, symbol names, and AST-level analysis (`#[test]`, `mod tests`)
+ - **Edge expansion** — high-ranking symbols pull in structurally related code (callers, type members)
+ - **Export boost** — public API surface ranks above private helpers
+ - **Score-gap detection** — drops trailing results that fall off a relevance cliff
+ - **Token-aware truncation** — context assembly keeps query-relevant lines within token budgets
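The Reciprocal Rank Fusion step described above can be sketched in a few lines. This is a generic illustration, not the server's Rust implementation; `k = 60` is the conventional constant from the RRF literature, and the file names are hypothetical:

```python
# Generic Reciprocal Rank Fusion: merge ranked lists by position, not raw score.
# k dampens the influence of top ranks; 60 is the conventional default.
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranked in rankings:
        for pos, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + pos)
    return sorted(scores, key=lambda d: scores[d], reverse=True)

keyword = ["auth.rs", "login.rs", "user.rs"]      # BM25 order (hypothetical files)
semantic = ["login.rs", "session.rs", "auth.rs"]  # vector-search order
print(rrf_merge([keyword, semantic]))  # login.rs and auth.rs rise to the top
```

Because only rank positions enter the sum, a result that places well in both lists beats one that tops a single list, which is why RRF is robust when the two searches score on incomparable scales.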

- * **Rust**
- * **TypeScript / TSX**
- * **JavaScript**
- * **Python**
- * **Go**
- * **Java**
- * **C**
- * **C++**
+ For the full deep dive, see [System Architecture](SYSTEM_ARCHITECTURE.md).

  ---

- ## Smart Ranking & Context Enhancement
+ ## Data Storage

- The search pipeline runs two parallel searches — keyword (BM25 via Tantivy) and semantic (vector embeddings via LanceDB) — then merges them using Reciprocal Rank Fusion (RRF). On top of this hybrid base, the ranking engine applies structural signals to optimize for relevance:
+ All data lives in `~/.code-intelligence/`:

- 1. **PageRank Symbol Importance**: Graph-based scoring that identifies central, heavily-used components (similar to Google's PageRank).
- 2. **Reciprocal Rank Fusion (RRF)**: Combines keyword, vector, and graph search results using statistically optimal rank fusion.
- 3. **Query Decomposition**: Complex queries ("X and Y") are automatically split into sub-queries for better coverage.
- 4. **Token-Aware Truncation**: Context assembly keeps query-relevant lines within token budgets using BM25-style relevance scoring.
- 5. **LLM-Enriched Indexing**: On-device Qwen2.5-Coder generates natural-language descriptions for each symbol, bridging the vocabulary gap between how developers search and how code is named.
- 6. **Morphological Variants**: Function names are expanded with stems and derivations (e.g., `watch` → `watcher`, `index` → `reindex`) to improve recall for natural-language queries.
- 7. **Multi-Layer Test Detection**: Three mechanisms — file path patterns (`*.test.ts`), symbol name heuristics (`test_*`), and SQL-based AST analysis (`#[test]`, `mod tests`) — with a final enforcement pass that prevents test code from escaping via edge expansion.
- 8. **Edge Expansion**: High-ranking symbols pull in structurally related code (callers, type members) with importance filtering to avoid noise from private helpers.
- 9. **Directory Semantics**: Implementation directories (`src`, `lib`, `app`) are boosted, while build artifacts (`dist`, `build`) and `node_modules` are penalized.
- 10. **Exported Symbol Boost**: Exported/public symbols receive a ranking boost as they represent the primary API surface.
- 11. **Glue Code Filtering**: Re-export files (e.g., `index.ts`) are deprioritized in favor of the actual implementation.
- 12. **JSDoc Boost**: Symbols with documentation receive a ranking boost, and examples are included in search results.
- 13. **Learning from Feedback** (optional): Tracks user selections to personalize future search results.
- 14. **Package-Aware Scoring** (multi-repo): Boosts results from the same package when working in monorepos.
-
- ### Intent Detection
-
- The system detects query intent and adjusts ranking accordingly:
-
- | Query Pattern | Intent | Effect |
- | :--- | :--- | :--- |
- | "struct User" | Definition | Boosts type definitions (1.5x) |
- | "who calls login" | Callers | Triggers graph lookup |
- | "verify login" | Testing | Boosts test files |
- | "User schema" | Schema/Model | Boosts schema/model files (50-75x) |
- | "auth and authz" | Multi-query decomposition | Splits into sub-queries, merges via RRF |
-
- For a deep dive into the system's design, see [System Architecture](SYSTEM_ARCHITECTURE.md).
-
- ---
-
- ## Glossary
-
- Key terms used throughout this documentation:
-
- | Term | Full Name | What It Means |
- |------|-----------|---------------|
- | **MCP** | Model Context Protocol | An open protocol for connecting LLM-based tools (like Claude Code, Cursor, OpenCode) to external data sources and capabilities. This server implements MCP to expose code search and navigation tools. |
- | **BM25** | Best Matching 25 | A probabilistic text search algorithm (used by Tantivy). Ranks results by how often your search terms appear in a document (term frequency) weighted by how rare those terms are across all documents (inverse document frequency / IDF). The standard algorithm behind most full-text search engines. |
- | **IDF** | Inverse Document Frequency | A component of BM25 that measures how rare a term is. A term like `authenticate` appearing in only 3 files has high IDF (very discriminating), while `error` appearing in 200 files has low IDF (less useful for ranking). |
- | **RRF** | Reciprocal Rank Fusion | A technique for merging ranked result lists from different search systems. Instead of comparing raw scores (which have different scales), RRF uses rank positions: a result ranked #1 in keyword search and #3 in vector search gets a combined score based on those positions. This makes it robust when combining fundamentally different search approaches. |
- | **GGUF** | GGML Unified Format | A binary format for storing quantized (compressed) neural network weights. Used by llama.cpp to run both the embedding model and the LLM efficiently on consumer hardware. Q4_K_M quantization reduces the 1.5B parameter model from ~3GB to ~1.1GB with minimal quality loss. |
- | **LLM** | Large Language Model | In this project, a local Qwen2.5-Coder-1.5B model that generates one-sentence natural-language descriptions for each code symbol (function, class, type). These descriptions are indexed alongside the code, helping BM25 match natural-language queries to technically-named code. |
- | **PageRank** | — | A graph algorithm (originally from Google Search) adapted here to score symbol importance. Symbols that are called/referenced by many other symbols get higher PageRank scores, indicating they are central to the codebase. |
- | **Tree-Sitter** | — | A parser generator that builds concrete syntax trees (CSTs) for source code. Used to extract symbols (functions, classes, types), their relationships (calls, imports, type hierarchies), and structural information from 8 supported languages. |
-
- ---
-
- ## Configuration (Optional)
-
- Works without configuration by default. You can customize behavior via environment variables:
-
- ### Core Settings
-
- ```json
- "env": {
-   "BASE_DIR": "/path/to/repo",                  // Required: Repository root
-   "WATCH_MODE": "true",                         // Watch for file changes (Default: true)
-   "INDEX_PATTERNS": "**/*.ts,**/*.go",          // File patterns to index
-   "EXCLUDE_PATTERNS": "**/node_modules/**",
-   "REPO_ROOTS": "/path/to/repo1,/path/to/repo2" // Multi-repo support
- }
- ```
-
- ### Embedding Model
-
- ```json
- "env": {
-   "EMBEDDINGS_BACKEND": "llamacpp", // llamacpp (default) or hash (testing)
-   "EMBEDDINGS_DEVICE": "cpu",       // cpu or metal (macOS GPU)
-   "EMBEDDING_BATCH_SIZE": "32"
- }
- ```
-
- ### Context Assembly
-
- ```json
- "env": {
-   "MAX_CONTEXT_TOKENS": "8192",   // Token budget for context (default: 8192)
-   "TOKEN_ENCODING": "o200k_base", // tiktoken encoding model
-   "MAX_CONTEXT_BYTES": "200000"   // Legacy byte-based limit (fallback)
- }
- ```
-
- ### Ranking & Retrieval
-
- ```json
- "env": {
-   "RANK_EXPORTED_BOOST": "1.0",     // Boost for exported symbols
-   "RANK_TEST_PENALTY": "0.1",       // Penalty for test files
-   "RANK_POPULARITY_WEIGHT": "0.05", // PageRank influence
-   "RRF_ENABLED": "true",            // Enable Reciprocal Rank Fusion
-   "HYBRID_ALPHA": "0.7"             // Vector vs keyword weight (0-1)
- }
- ```
-
- ### Learning System (Optional)
-
- ```json
- "env": {
-   "LEARNING_ENABLED": "false",           // Enable selection tracking (default: false)
-   "LEARNING_SELECTION_BOOST": "0.1",     // Boost for previously selected symbols
-   "LEARNING_FILE_AFFINITY_BOOST": "0.05" // Boost for frequently accessed files
- }
  ```
397
-
398
- ### Performance
399
-
400
- ```json
401
- "env": {
402
- "PARALLEL_WORKERS": "1", // Indexing parallelism (default: 1 for SQLite)
403
- "EMBEDDING_CACHE_ENABLED": "true", // Persistent embedding cache
404
- "PAGERANK_ITERATIONS": "20", // PageRank computation iterations
405
- "METRICS_ENABLED": "true", // Prometheus metrics
406
- "METRICS_PORT": "9090"
407
- }
408
- ```
409
-
410
- ### Query Expansion
411
-
412
- ```json
413
- "env": {
414
- "SYNONYM_EXPANSION_ENABLED": "true", // Expand "auth" → "authentication"
415
- "ACRONYM_EXPANSION_ENABLED": "true" // Expand "db" → "database"
416
- }
417
- ```
418
-
419
- ---
420
-
- ## Architecture
-
- ```mermaid
- flowchart LR
-     Client[MCP Client] <==> Tools
-
-     subgraph Server [Code Intelligence Server]
-         direction TB
-         Tools[Tool Router]
-
-         subgraph Indexer [Indexing Pipeline]
-             direction TB
-             Watch[OS-Native File Watcher] --> Scan[File Scan]
-             Scan --> Parse[Tree-Sitter]
-             Parse --> Extract[Symbol Extraction]
-             Extract --> PageRank[PageRank Compute]
-             Extract --> Embed[jina-code-0.5b Embeddings - llama.cpp]
-             Extract --> LLMDesc[LLM Descriptions - Qwen2.5-Coder]
-             Extract --> JSDoc[JSDoc/Decorator/TODO Extract]
-         end
-
-         subgraph Storage [Storage Engine]
-             direction TB
-             SQLite[(SQLite)]
-             Tantivy[(Tantivy)]
-             Lance[(LanceDB)]
-             Cache[(Embedding Cache)]
-         end
-
-         subgraph Retrieval [Retrieval Engine]
-             direction TB
-             QueryExpand[Query Expansion]
-             Hybrid[Hybrid Search RRF]
-             Signals[Ranking Signals]
-             Context[Token-Aware Assembly]
-         end
-
-         Handlers[Tool Handlers]
-         Tools --> Handlers
-         Handlers -- Index --> Watch
-         PageRank --> SQLite
-         Embed --> Lance
-         Embed --> Cache
-         LLMDesc --> SQLite
-         JSDoc --> SQLite
-
-         Handlers -- Query --> QueryExpand
-         QueryExpand --> Hybrid
-         Hybrid --> Signals
-         Signals --> Context
-         Context --> Handlers
-     end
+ ~/.code-intelligence/
+ ├── models/                      # Shared (embedding ~531 MB, LLM ~1.1 GB)
+ ├── repos/
+ │   ├── registry.json            # Tracks all known repos
+ │   └── <hash>/                  # Per-repo (SHA256 of repo path)
+ │       ├── code-intelligence.db # SQLite (symbols, edges, metadata)
+ │       ├── tantivy-index/       # BM25 full-text search
+ │       └── vectors/             # LanceDB vector embeddings
+ ├── logs/
+ └── server.toml                  # Standalone config (optional)
  ```
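Each repo's on-disk state lives under a directory named by a hash of the repo path. Assuming a plain hex-encoded SHA-256 of the absolute path (the exact derivation, e.g. any normalization or truncation, may differ), the directory name can be computed like this:

```python
import hashlib

def repo_dir_name(repo_path: str) -> str:
    """Hex SHA-256 of the repo path, used as the per-repo directory name."""
    return hashlib.sha256(repo_path.encode("utf-8")).hexdigest()

name = repo_dir_name("/Users/me/projects/my-app")
```

Hashing the path keeps per-repo directories stable across runs while avoiding collisions between repos with the same basename.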

  ---

  ## Development

- 1. **Prerequisites**: Rust (stable), `protobuf`.
- 2. **Build**: `cargo build --release`
- 3. **Run**: `./scripts/start_mcp.sh`
- 4. **Test**: `cargo test` or `EMBEDDINGS_BACKEND=hash cargo test` (faster, skips model download)
-
- ### Quick Testing with Hash Backend
-
- For faster development iteration, use the hash embedding backend, which skips model downloads:
-
  ```bash
- EMBEDDINGS_BACKEND=hash BASE_DIR=/path/to/repo ./target/release/code-intelligence-mcp-server
+ cargo build --release
+ cargo test                          # Full test suite
+ EMBEDDINGS_BACKEND=hash cargo test  # Fast (no model download)
+ ./scripts/start_mcp.sh              # Start MCP server
  ```
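`EMBEDDINGS_BACKEND=hash` swaps the GGUF model for a cheap deterministic embedder, so tests never download weights. One common way such a backend can work is feature hashing; this is an illustrative sketch under that assumption, not the server's actual implementation:

```python
import hashlib

def hash_embedding(text: str, dim: int = 64) -> list[float]:
    """Deterministic pseudo-embedding: hash each token into a fixed-size
    vector, then L2-normalize. Stable across runs, no model needed."""
    vec = [0.0] * dim
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode("utf-8")).digest()
        index = int.from_bytes(digest[:4], "big") % dim
        sign = 1.0 if digest[4] % 2 == 0 else -1.0
        vec[index] += sign
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

emb = hash_embedding("fn parse_symbols")
```

The resulting vectors carry no real semantics, which is exactly why they are only suitable for exercising the indexing and storage pipeline in tests.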
 
- ### Project Structure
+ <details>
+ <summary>Project structure</summary>

- ```text
+ ```
  src/
- ├── indexer/
- │   ├── extract/        # Language-specific symbol extractors (Rust, TS, Python, Go, Java, C, C++)
- │   └── pipeline/       # Indexing pipeline stages (scan, parse, embed, watch, describe)
- │       └── package/    # Package detection (npm, Cargo, Go, Python)
- ├── storage/
- │   ├── sqlite/         # SQLite schema, queries, operations
- │   ├── tantivy.rs      # BM25 full-text search with n-gram tokenization
- │   └── vector.rs       # LanceDB vector embeddings
- ├── retrieval/
- │   ├── ranking/        # Scoring signals, RRF, diversity, edge expansion, reranker
- │   ├── assembler/      # Token-aware context assembly and formatting
- │   ├── hyde/           # Hypothetical document expansion
- │   ├── mod.rs          # Search pipeline orchestrator
- │   ├── hybrid.rs       # Hybrid BM25 + vector scoring loop
- │   └── postprocess.rs  # Final enforcement, vector promotion
- ├── graph/              # PageRank, call hierarchy, type graphs
- ├── handlers/           # MCP tool handlers
- ├── server/             # MCP protocol routing (embedded + standalone)
- │   ├── mod.rs          # Shared tool dispatch, embedded handler
- │   └── standalone.rs   # Standalone HTTP handler with session routing
- ├── tools/              # Tool definitions (23 MCP tools)
- ├── embeddings/         # jina-code-0.5b embedding model (GGUF via llama.cpp)
- ├── llm/                # On-device LLM (Qwen2.5-Coder-1.5B via llama.cpp, for descriptions)
- ├── reranker/           # Reranker trait and cache (currently disabled)
- ├── path/               # Cross-platform path normalization (camino)
- ├── text.rs             # Text processing (synonym expansion, morphological variants)
- ├── metrics/            # Prometheus metrics
- ├── config.rs           # Configuration (embedded + standalone)
- ├── session.rs          # Multi-repo session management (standalone)
- └── registry.rs         # Repo registry with path hashing (standalone)
+ ├── indexer/     # File scanning, Tree-Sitter parsing, symbol extraction, embeddings, LLM descriptions
+ ├── storage/     # SQLite, Tantivy (BM25), LanceDB (vectors)
+ ├── retrieval/   # Hybrid search, ranking signals, RRF, context assembly, reranker
+ ├── graph/       # PageRank, call hierarchy, type graphs
+ ├── handlers/    # MCP tool implementations
+ ├── server/      # MCP protocol routing (embedded + standalone)
+ ├── tools/       # Tool definitions (23 MCP tools)
+ ├── embeddings/  # jina-code-0.5b (GGUF via llama.cpp)
+ ├── llm/         # Qwen2.5-Coder-1.5B (GGUF via llama.cpp)
+ ├── reranker/    # Cross-encoder reranker
+ └── path/        # UTF-8 path normalization (camino)
  ```
+ </details>
+
+ ---

  ## License

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@iceinvein/code-intelligence-mcp",
-   "version": "2.1.0",
+   "version": "2.2.1",
    "description": "Code Intelligence MCP Server - Smart context for your LLM coding agent",
    "bin": {
      "code-intelligence-mcp": "bin/run.js"