smart-coding-mcp 2.1.1 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -5,18 +5,7 @@
  [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
  [![Node.js](https://img.shields.io/badge/Node.js-%3E%3D18-green.svg)](https://nodejs.org/)

- An extensible Model Context Protocol (MCP) server that provides intelligent semantic code search for AI assistants. Built with local AI models using Matryoshka Representation Learning (MRL) for flexible embedding dimensions (64-768d), with runtime workspace switching and comprehensive status reporting.
-
- ### Available Tools
-
- | Tool | Description | Example |
- | ---------------------- | ------------------------------------------------- | ----------------------------------------------- |
- | `semantic_search` | Find code by meaning, not just keywords | `"Where do we validate user input?"` |
- | `index_codebase` | Manually trigger reindexing | Use after major refactoring or branch switches |
- | `clear_cache` | Reset the embeddings cache | Useful when cache becomes corrupted |
- | `d_check_last_version` | Get latest version of any package (20 ecosystems) | `"express"`, `"npm:react"`, `"pip:requests"` |
- | `e_set_workspace` | Change project path at runtime | Switch to different project without restart |
- | `f_get_status` | Get server info: version, index status, config | Check indexing progress, model info, cache size |
+ An extensible Model Context Protocol (MCP) server that provides intelligent semantic code search for AI assistants. Built with local AI models using Matryoshka Representation Learning (MRL) for flexible embedding dimensions (64-768d).

  ## What This Does

@@ -26,83 +15,139 @@ This MCP server solves that by indexing your codebase with AI embeddings. Your A

  ![Example](example.png)

- ## Why Use This
+ ## Available Tools
+
+ ### 🔍 `a_semantic_search` - Find Code by Meaning
+
+ The primary tool for codebase exploration. Uses AI embeddings to understand what you're looking for, not just match keywords.
+
+ **How it works:** Converts your natural language query into a vector, then finds code chunks with similar meaning using cosine similarity + exact match boosting.

- **Better Code Understanding**
+ **Best for:**
+ - Exploring unfamiliar codebases: `"How does authentication work?"`
+ - Finding related code: `"Where do we validate user input?"`
+ - Conceptual searches: `"error handling patterns"`
+ - Typo-tolerant queries: `"embeding modle initializashun"` still finds embedding code

- - Search finds code by concept, not just matching words
- - Works with typos and variations in terminology
- - Natural language queries like "where do we validate user input?"
+ **Example queries:**
+ ```
+ "Where do we handle cache persistence?"
+ "How is the database connection managed?"
+ "Find all API endpoint definitions"
+ ```

- **Performance**
+ ---

- - Pre-indexed embeddings are faster than scanning files at runtime
- - Smart project detection skips dependencies automatically (node_modules, vendor, etc.)
- - Incremental updates - only re-processes changed files
+ ### 📦 `d_check_last_version` - Package Version Lookup

- **Privacy**
+ Fetches the latest version of any package from its official registry. Supports 20+ ecosystems.

- - Everything runs locally on your machine
- - Your code never leaves your system
- - No API calls to external services
+ **How it works:** Queries official package registries (npm, PyPI, Crates.io, etc.) in real-time. No guessing, no stale training data.

- ## Performance & Resource Management
+ **Supported ecosystems:** npm, PyPI, Crates.io, Maven, Go, RubyGems, NuGet, Packagist, Hex, pub.dev, Homebrew, Conda, and more.

- **Progressive Indexing**
+ **Best for:**
+ - Before adding dependencies: `"express"` → `4.18.2`
+ - Checking for updates: `"pip:requests"` → `2.31.0`
+ - Multi-ecosystem projects: `"npm:react"`, `"go:github.com/gin-gonic/gin"`

- - Search works immediately, even while indexing continues (like video buffering)
- - Incremental saves every 5 batches - no data loss if interrupted
- - Real-time indexing status shown when searching during indexing
+ **Example usage:**
+ ```
+ "What's the latest version of lodash?"
+ "Check if there's a newer version of axios"
+ ```

- **Resource Throttling**
+ ---

- - CPU usage limited to 50% by default (configurable)
- - Your laptop stays responsive during indexing
- - Configurable delays between batches
- - Worker thread limits respect system resources
+ ### 🔄 `b_index_codebase` - Manual Reindexing

- **SQLite Cache**
+ Triggers a full reindex of your codebase. Normally not needed since indexing is automatic and incremental.

- - 5-10x faster than JSON for large codebases
- - Write-Ahead Logging (WAL) for better concurrency
- - Binary blob storage for smaller cache size
- - Automatic migration from JSON
+ **How it works:** Scans all files, generates new embeddings, and updates the SQLite cache. Uses progressive indexing so you can search while it runs.

- **Optimized Defaults**
+ **When to use:**
+ - After major refactoring or branch switches
+ - After pulling large changes from remote
+ - If search results seem stale or incomplete
+ - After changing embedding configuration (dimension, model)
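+
+ **Example prompts** (illustrative):
+
+ ```
+ "Reindex the codebase"
+ "Rebuild the search index, I just switched branches"
+ ```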

- - 128d embeddings by default (2x faster than 256d, minimal quality loss)
- - Smart batch sizing based on project size
- - Parallel processing with auto-tuned worker threads
+ ---

- ## Installation
+ ### 🗑️ `c_clear_cache` - Reset Everything
+
+ Deletes the embeddings cache entirely, forcing a complete reindex on next search.
+
+ **How it works:** Removes the `.smart-coding-cache/` directory. Next search or index operation starts fresh.
+
+ **When to use:**
+ - Cache corruption (rare, but possible)
+ - Switching embedding models or dimensions
+ - Starting fresh after major codebase restructure
+ - Troubleshooting search issues
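+
+ **Example prompts** (illustrative):
+
+ ```
+ "Clear the smart coding cache"
+ "Reset the embeddings cache and start fresh"
+ ```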
+
+ ---
+
+ ### 📂 `e_set_workspace` - Switch Projects
+
+ Changes the workspace path at runtime without restarting the server.
+
+ **How it works:** Updates the internal workspace reference, creates a cache folder for the new path, and optionally triggers reindexing.
+
+ **When to use:**
+ - Working on multiple projects in one session
+ - Monorepo navigation between packages
+ - Switching between related repositories
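+
+ **Example prompts** (illustrative):
+
+ ```
+ "Set the workspace to /path/to/other-project"
+ "Switch to the backend repository"
+ ```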
+
+ ---

- Install globally via npm:
+ ### ℹ️ `f_get_status` - Server Health Check
+
+ Returns comprehensive status information about the MCP server.
+
+ **What it shows:**
+ - Server version and uptime
+ - Workspace path and cache location
+ - Indexing status (ready, indexing, percentage complete)
+ - Files indexed and chunk count
+ - Model configuration (name, dimension, device)
+ - Cache size and type
+
+ **When to use:**
+ - Start of session to verify everything is working
+ - Debugging connection or indexing issues
+ - Checking indexing progress on large codebases
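+
+ **Example prompts** (illustrative):
+
+ ```
+ "What's the indexing status?"
+ "Show the smart-coding-mcp server info"
+ ```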
+
+ ---
+
+ ## Installation

  ```bash
  npm install -g smart-coding-mcp
  ```

- To update to the latest version:
+ To update:

  ```bash
  npm update -g smart-coding-mcp
  ```

- ## Configuration
+ ## IDE Integration

- Add to your MCP configuration file. The location depends on your IDE and OS:
+ Detailed setup instructions for your preferred environment:

- | IDE | OS | Config Path |
- | -------------------- | ------- | ----------------------------------------------------------------- |
- | **Claude Desktop** | macOS | `~/Library/Application Support/Claude/claude_desktop_config.json` |
- | **Claude Desktop** | Windows | `%APPDATA%\Claude\claude_desktop_config.json` |
- | **Cascade (Cursor)** | All | Configured via UI Settings > Features > MCP |
- | **Antigravity** | macOS | `~/.gemini/antigravity/mcp_config.json` |
- | **Antigravity** | Windows | `%USERPROFILE%\.gemini\antigravity\mcp_config.json` |
+ | IDE / App | Setup Guide | `${workspaceFolder}` Support |
+ | ------------------ | -------------------------------------------------- | ---------------------------- |
+ | **VS Code** | [**View Guide**](docs/ide-setup/vscode.md) | ✅ Yes |
+ | **Cursor** | [**View Guide**](docs/ide-setup/cursor.md) | ✅ Yes |
+ | **Windsurf** | [**View Guide**](docs/ide-setup/windsurf.md) | ❌ Absolute paths only |
+ | **Claude Desktop** | [**View Guide**](docs/ide-setup/claude-desktop.md) | ❌ Absolute paths only |
+ | **OpenCode** | [**View Guide**](docs/ide-setup/opencode.md) | ❌ Absolute paths only |
+ | **Raycast** | [**View Guide**](docs/ide-setup/raycast.md) | ❌ Absolute paths only |
+ | **Antigravity** | [**View Guide**](docs/ide-setup/antigravity.md) | ❌ Absolute paths only |

- Add the server configuration to the `mcpServers` object in your config file:
+ ### Quick Setup

- ### Option 1: Absolute Path (Recommended)
+ Add to your MCP config file:

  ```json
  {
@@ -115,16 +160,27 @@ Add the server configuration to the `mcpServers` object in your config file:
  }
  ```

- ### Option 2: Multi-Project Support
+ ### Config File Locations
+
+ | IDE | OS | Path |
+ | ------------------ | ------- | ----------------------------------------------------------------- |
+ | **Claude Desktop** | macOS | `~/Library/Application Support/Claude/claude_desktop_config.json` |
+ | **Claude Desktop** | Windows | `%APPDATA%\Claude\claude_desktop_config.json` |
+ | **OpenCode** | Global | `~/.config/opencode/opencode.json` |
+ | **OpenCode** | Project | `opencode.json` in project root |
+ | **Windsurf** | macOS | `~/.codeium/windsurf/mcp_config.json` |
+ | **Windsurf** | Windows | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` |
+
+ ### Multi-Project Setup

  ```json
  {
  "mcpServers": {
- "smart-coding-mcp-frontend": {
+ "smart-coding-frontend": {
  "command": "smart-coding-mcp",
  "args": ["--workspace", "/path/to/frontend"]
  },
- "smart-coding-mcp-backend": {
+ "smart-coding-backend": {
  "command": "smart-coding-mcp",
  "args": ["--workspace", "/path/to/backend"]
  }
@@ -132,55 +188,28 @@ Add the server configuration to the `mcpServers` object in your config file:
  }
  ```

- ### Option 3: Auto-Detection (May Not Work)
-
- > ⚠️ **Warning:** Most MCP clients (including Antigravity and Claude Desktop) do NOT support `${workspaceFolder}` variable expansion. The server will exit with an error if the variable is not expanded.
-
- For clients that support dynamic variables (VS Code, Cursor):
-
- ```json
- {
- "mcpServers": {
- "smart-coding-mcp": {
- "command": "smart-coding-mcp",
- "args": ["--workspace", "${workspaceFolder}"]
- }
- }
- }
- ```
-
- | Client | Supports `${workspaceFolder}` |
- | ---------------- | ----------------------------- |
- | VS Code | Yes |
- | Cursor (Cascade) | Yes |
- | Antigravity | No ❌ |
- | Claude Desktop | No ❌ |
-
  ## Environment Variables

- Override configuration settings via environment variables in your MCP config:
-
- | Variable | Type | Default | Description |
- | ---------------------------------- | ------- | -------------------------------- | ------------------------------------------ |
- | `SMART_CODING_VERBOSE` | boolean | `false` | Enable detailed logging |
- | `SMART_CODING_BATCH_SIZE` | number | `100` | Files to process in parallel |
- | `SMART_CODING_MAX_FILE_SIZE` | number | `1048576` | Max file size in bytes (1MB) |
- | `SMART_CODING_CHUNK_SIZE` | number | `25` | Lines of code per chunk |
- | `SMART_CODING_MAX_RESULTS` | number | `5` | Max search results |
- | `SMART_CODING_SMART_INDEXING` | boolean | `true` | Enable smart project detection |
- | `SMART_CODING_WATCH_FILES` | boolean | `false` | Enable file watching for auto-reindex |
- | `SMART_CODING_SEMANTIC_WEIGHT` | number | `0.7` | Weight for semantic similarity (0-1) |
- | `SMART_CODING_EXACT_MATCH_BOOST` | number | `1.5` | Boost for exact text matches |
- | `SMART_CODING_EMBEDDING_MODEL` | string | `nomic-ai/nomic-embed-text-v1.5` | AI embedding model to use |
- | `SMART_CODING_EMBEDDING_DIMENSION` | number | `128` | MRL dimension (64, 128, 256, 512, 768) |
- | `SMART_CODING_DEVICE` | string | `cpu` | Inference device (`cpu`, `webgpu`, `auto`) |
- | `SMART_CODING_CHUNKING_MODE` | string | `smart` | Code chunking (`smart`, `ast`, `line`) |
- | `SMART_CODING_WORKER_THREADS` | string | `auto` | Worker threads (`auto` or 1-32) |
- | `SMART_CODING_MAX_CPU_PERCENT` | number | `50` | Max CPU usage during indexing (10-100%) |
- | `SMART_CODING_BATCH_DELAY` | number | `100` | Delay between batches in ms (0-5000) |
- | `SMART_CODING_MAX_WORKERS` | string | `auto` | Override max worker threads limit |
-
- **Example with environment variables:**
+ Customize behavior via environment variables:
+
+ | Variable | Default | Description |
+ | ---------------------------------- | -------------------------------- | ------------------------------------------ |
+ | `SMART_CODING_VERBOSE` | `false` | Enable detailed logging |
+ | `SMART_CODING_MAX_RESULTS` | `5` | Max search results returned |
+ | `SMART_CODING_BATCH_SIZE` | `100` | Files to process in parallel |
+ | `SMART_CODING_MAX_FILE_SIZE` | `1048576` | Max file size in bytes (1MB) |
+ | `SMART_CODING_CHUNK_SIZE` | `25` | Lines of code per chunk |
+ | `SMART_CODING_EMBEDDING_DIMENSION` | `128` | MRL dimension (64, 128, 256, 512, 768) |
+ | `SMART_CODING_EMBEDDING_MODEL` | `nomic-ai/nomic-embed-text-v1.5` | AI embedding model |
+ | `SMART_CODING_DEVICE` | `cpu` | Inference device (`cpu`, `webgpu`, `auto`) |
+ | `SMART_CODING_SEMANTIC_WEIGHT` | `0.7` | Weight for semantic vs exact matching |
+ | `SMART_CODING_EXACT_MATCH_BOOST` | `1.5` | Boost multiplier for exact text matches |
+ | `SMART_CODING_MAX_CPU_PERCENT` | `50` | Max CPU usage during indexing (10-100%) |
+ | `SMART_CODING_CHUNKING_MODE` | `smart` | Code chunking (`smart`, `ast`, `line`) |
+ | `SMART_CODING_WATCH_FILES` | `false` | Auto-reindex on file changes |
+ | `SMART_CODING_AUTO_INDEX_DELAY` | `5000` | Delay before background indexing (ms), `false` to disable |
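+
+ A note on `SMART_CODING_EMBEDDING_DIMENSION`: MRL-trained models concentrate the most useful information in the leading dimensions, so a lower setting conceptually amounts to truncating the full 768-d vector and re-normalizing it. A minimal sketch of that idea (not necessarily the package's exact implementation):
+
+ ```javascript
+ // Conceptual MRL truncation: keep the leading dimensions, then re-normalize.
+ function truncateEmbedding(fullVector, dimension = 128) {
+   const truncated = fullVector.slice(0, dimension);
+   const norm = Math.sqrt(truncated.reduce((sum, v) => sum + v * v, 0));
+   return truncated.map((v) => v / norm);
+ }
+ ```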
+
+ **Example with env vars:**

  ```json
  {
@@ -190,15 +219,25 @@ Override configuration settings via environment variables in your MCP config:
  "args": ["--workspace", "/path/to/project"],
  "env": {
  "SMART_CODING_VERBOSE": "true",
- "SMART_CODING_BATCH_SIZE": "200",
- "SMART_CODING_MAX_FILE_SIZE": "2097152"
+ "SMART_CODING_MAX_RESULTS": "10",
+ "SMART_CODING_EMBEDDING_DIMENSION": "256"
  }
  }
  }
  }
  ```
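+
+ For example, to throttle indexing harder and disable background auto-indexing on a very large repository (values chosen for illustration; see the table above):
+
+ ```json
+ {
+   "mcpServers": {
+     "smart-coding-mcp": {
+       "command": "smart-coding-mcp",
+       "args": ["--workspace", "/path/to/project"],
+       "env": {
+         "SMART_CODING_MAX_CPU_PERCENT": "30",
+         "SMART_CODING_AUTO_INDEX_DELAY": "false"
+       }
+     }
+   }
+ }
+ ```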

- **Note**: The server starts instantly and indexes in the background, so your IDE won't be blocked waiting for indexing to complete.
+ ## Performance
+
+ **Progressive Indexing** - Search works immediately while indexing continues in the background. No waiting for large codebases.
+
+ **Resource Throttling** - CPU limited to 50% by default. Your machine stays responsive during indexing.
+
+ **SQLite Cache** - 5-10x faster than JSON. Automatic migration from older JSON caches.
+
+ **Incremental Updates** - Only changed files are re-indexed. Saves every 5 batches, so no data loss if interrupted.
+
+ **Optimized Defaults** - 128d embeddings (2x faster than 256d with minimal quality loss), smart batch sizing, parallel processing.
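+
+ To illustrate what the incremental updates above imply (a rough sketch; the real cache layout and hashing may differ): each file's content hash is compared against the cached hash, and only mismatched files are re-embedded.
+
+ ```javascript
+ // Illustrative hash-based change detection (Node 18+), not the package's actual code.
+ import { createHash } from "node:crypto";
+ import { readFile } from "node:fs/promises";
+
+ async function needsReindex(filePath, cachedHashes) {
+   const content = await readFile(filePath, "utf8");
+   const hash = createHash("sha256").update(content).digest("hex");
+   return cachedHashes.get(filePath) !== hash; // only changed files get re-embedded
+ }
+ ```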

  ## How It Works

@@ -265,97 +304,23 @@ flowchart TB
  | **Inference** | transformers.js + ONNX Runtime |
  | **Chunking** | Smart regex / Tree-sitter AST |
  | **Search** | Cosine similarity + exact match boost |
-
- ### Search Flow
-
- Query → Vector embedding → Cosine similarity → Ranked results
-
- ## Examples
-
- **Natural language search:**
-
- Query: "How do we handle cache persistence?"
-
- Result:
-
- ```javascript
- // lib/cache.js (Relevance: 38.2%)
- async save() {
- await fs.writeFile(cacheFile, JSON.stringify(this.vectorStore));
- await fs.writeFile(hashFile, JSON.stringify(this.fileHashes));
- }
- ```
-
- **Typo tolerance:**
-
- Query: "embeding modle initializashun"
-
- Still finds embedding model initialization code despite multiple typos.
-
- **Conceptual search:**
-
- Query: "error handling and exceptions"
-
- Finds all try/catch blocks and error handling patterns.
+ | **Cache** | SQLite with WAL mode |
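+
+ As a rough sketch of the search scoring (illustrative; the actual implementation may differ), the hybrid score combines cosine similarity with a boost for literal matches, mirroring the documented `SMART_CODING_SEMANTIC_WEIGHT` (0.7) and `SMART_CODING_EXACT_MATCH_BOOST` (1.5) defaults:
+
+ ```javascript
+ // Illustrative hybrid scoring: semantic similarity plus an exact-match boost.
+ function cosineSimilarity(a, b) {
+   let dot = 0, normA = 0, normB = 0;
+   for (let i = 0; i < a.length; i++) {
+     dot += a[i] * b[i];
+     normA += a[i] * a[i];
+     normB += b[i] * b[i];
+   }
+   return dot / (Math.sqrt(normA) * Math.sqrt(normB));
+ }
+
+ function hybridScore(queryVec, chunkVec, queryText, chunkText,
+                      semanticWeight = 0.7, exactMatchBoost = 1.5) {
+   let score = semanticWeight * cosineSimilarity(queryVec, chunkVec);
+   if (chunkText.toLowerCase().includes(queryText.toLowerCase())) {
+     score *= exactMatchBoost; // reward chunks that literally contain the query
+   }
+   return score;
+ }
+ ```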

  ## Privacy

- - AI model runs entirely on your machine
- - No network requests to external services
- - No telemetry or analytics
- - Cache stored locally in `.smart-coding-cache/`
-
- ## Technical Details
-
- **Embedding Model**: nomic-embed-text-v1.5 via transformers.js v3
-
- - Matryoshka Representation Learning (MRL) for flexible dimensions
- - Configurable output: 64, 128, 256, 512, or 768 dimensions
- - Longer context (8192 tokens vs 256 for MiniLM)
- - Better code understanding through specialized training
- - WebGPU support for up to 100x faster inference (when available)
-
- **Legacy Model**: all-MiniLM-L6-v2 (fallback)
+ Everything runs **100% locally**:

- - Fast inference, small footprint (~100MB)
- - Fixed 384-dimensional output
-
- **Vector Similarity**: Cosine similarity
-
- - Efficient comparison of embeddings
- - Normalized vectors for consistent scoring
-
- **Hybrid Scoring**: Combines semantic similarity with exact text matching
-
- - Semantic weight: 0.7 (configurable)
- - Exact match boost: 1.5x (configurable)
+ - AI model runs on your machine (no API calls)
+ - Code never leaves your system
+ - No telemetry or analytics
+ - Cache stored in `.smart-coding-cache/`

  ## Research Background

- This project builds on research from Cursor showing that semantic search improves AI coding agent performance by 12.5% on average across question-answering tasks. The key insight is that AI assistants benefit more from relevant context than from large amounts of context.
-
- See: https://cursor.com/blog/semsearch
+ This project builds on [research from Cursor](https://cursor.com/blog/semsearch) showing that semantic search improves AI coding agent performance by 12.5% on average. The key insight: AI assistants benefit more from **relevant** context than from **large amounts** of context.

  ## License

- MIT License
-
- Copyright (c) 2025 Omar Haris
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
+ MIT License - Copyright (c) 2025 Omar Haris

- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
+ See [LICENSE](LICENSE) for full text.
package/config.json CHANGED
@@ -68,5 +68,6 @@
  "exactMatchBoost": 1.5,
  "workerThreads": "auto",
  "maxCpuPercent": 50,
- "batchDelay": 100
+ "batchDelay": 100,
+ "autoIndexDelay": 5000
  }
package/docs/ide-setup/antigravity.md ADDED
@@ -0,0 +1,110 @@
+ # Antigravity (Gemini IDE) Integration
+
+ Antigravity is Google's AI-powered IDE built on VS Code with deep Gemini integration.
+
+ ## MCP Configuration
+
+ Edit your MCP config file:
+
+ - **macOS:** `~/.gemini/antigravity/mcp_config.json`
+ - **Windows:** `%USERPROFILE%\.gemini\antigravity\mcp_config.json`
+
+ ```json
+ {
+ "mcpServers": {
+ "smart-coding-mcp": {
+ "command": "npx",
+ "args": [
+ "-y",
+ "smart-coding-mcp",
+ "--workspace",
+ "/absolute/path/to/project"
+ ]
+ }
+ }
+ }
+ ```
+
+ > **Note:** Antigravity does **NOT** support `${workspaceFolder}`. You must use absolute paths.
+
+ ---
+
+ ## Configuring Agent Rules
+
+ Antigravity supports powerful agent rules to control AI behavior.
+
+ ### Rule Locations
+
+ | Scope | Location |
+ | ------------------- | ----------------------------------------- |
+ | **Global** | `~/.gemini/GEMINI.md` |
+ | **Workspace** | `.agent/rules/*.md` |
+ | **Project Context** | `GEMINI.md` or `AGENT.md` at project root |
+
+ ### Creating a Rule to Use Smart Coding MCP
+
+ 1. **Create the rules directory** in your project:
+
+ ```bash
+ mkdir -p .agent/rules
+ ```
+
+ 2. **Create a rule file** `.agent/rules/smart-mcp.md`:
+
+ ```markdown
+ ---
+ trigger: always_on
+ description: Mandatory usage of Smart Coding MCP tools for dependencies and search
+ ---
+
+ # Smart Coding MCP Usage Rules
+
+ You must prioritize using the **Smart Coding MCP** tools for the following tasks.
+
+ ## 1. Dependency Management
+
+ **Trigger:** When checking, adding, or updating package versions (npm, python, go, rust, etc.).
+ **Action:**
+
+ - **MUST** use the `d_check_last_version` tool.
+ - **DO NOT** guess versions or trust internal training data.
+ - **DO NOT** use generic web search unless `d_check_last_version` fails.
+
+ ## 2. Codebase Research
+
+ **Trigger:** When asking about "how" something works, finding logic, or understanding architecture.
+ **Action:**
+
+ - **MUST** use `a_semantic_search` as the FIRST tool for any codebase research
+ - **DO NOT** use `Glob` or `Grep` for exploratory searches
+ - Use `Grep` ONLY for exact literal string matching (e.g., finding a specific error message)
+ - Use `Glob` ONLY when you already know the exact filename pattern
+
+ ## 3. Environment & Status
+
+ **Trigger:** When starting a session or debugging the environment.
+ **Action:**
+
+ - Use `e_set_workspace` if the current workspace path is incorrect.
+ - Use `f_get_status` to verify the MCP server is healthy and indexed.
+ ```
+
+ ### Activation Modes
+
+ | Mode | Description |
+ | ---------------- | ---------------------------------- |
+ | `always_on` | Rule is always applied |
+ | `manual` | Activated when mentioned in prompt |
+ | `model_decision` | AI decides based on description |
+ | `glob` | Applied to matching file patterns |
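+
+ For example, a rule the model applies only when it judges it relevant (a hypothetical rule file, using only the `trigger` and `description` fields shown above):
+
+ ```markdown
+ ---
+ trigger: model_decision
+ description: Check the latest version with d_check_last_version before suggesting dependency upgrades
+ ---
+
+ # Dependency Upgrade Rule
+
+ - Verify the latest version with `d_check_last_version` before editing package.json.
+ ```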
+
+ ### Global Rules
+
+ For rules that apply to all projects, edit `~/.gemini/GEMINI.md`:
+
+ ```markdown
+ # Global Agent Rules
+
+ - Always verify package versions before installing
+ - Prefer semantic search when available
+ ```
package/docs/ide-setup/claude-desktop.md ADDED
@@ -0,0 +1,89 @@
+ # Claude Desktop Integration
+
+ The official Claude Desktop app is the reference implementation for MCP.
+
+ ## Configuration Location
+
+ Edit the configuration file located at:
+
+ - **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
+ - **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
+
+ ## Configuration
+
+ Open the file in any text editor (VS Code, TextEdit, Notepad) and add the server:
+
+ ```json
+ {
+ "mcpServers": {
+ "smart-coding-mcp": {
+ "command": "npx",
+ "args": [
+ "-y",
+ "smart-coding-mcp",
+ "--workspace",
+ "/absolute/path/to/your/project"
+ ]
+ }
+ }
+ }
+ ```
+
+ > **Important:** Claude Desktop does **NOT** support `${workspaceFolder}`. You must provide the absolute path to the directory you want to index.
+
+ ---
+
+ ## Configuring Rules (Project Instructions)
+
+ Claude Desktop uses **Projects** with custom instructions.
+
+ ### Creating Project Instructions
+
+ 1. **Create a new Project** in Claude Desktop
+ 2. Click **"Set project instructions"**
+ 3. Add your guidelines:
+
+ ```
+ # Smart Coding MCP Usage Rules
+
+ You must prioritize using the **Smart Coding MCP** tools for the following tasks.
+
+ ## 1. Dependency Management
+
+ **Trigger:** When checking, adding, or updating package versions (npm, python, go, rust, etc.).
+ **Action:**
+
+ - **MUST** use the `d_check_last_version` tool.
+ - **DO NOT** guess versions or trust internal training data.
+ - **DO NOT** use generic web search unless `d_check_last_version` fails.
+
+ ## 2. Codebase Research
+
+ **Trigger:** When asking about "how" something works, finding logic, or understanding architecture.
+ **Action:**
+
+ - **MUST** use `a_semantic_search` as the FIRST tool for any codebase research
+ - **DO NOT** use `Glob` or `Grep` for exploratory searches
+ - Use `Grep` ONLY for exact literal string matching (e.g., finding a specific error message)
+ - Use `Glob` ONLY when you already know the exact filename pattern
+
+ ## 3. Environment & Status
+
+ **Trigger:** When starting a session or debugging the environment.
+ **Action:**
+
+ - Use `e_set_workspace` if the current workspace path is incorrect.
+ - Use `f_get_status` to verify the MCP server is healthy and indexed.
+ ```
+
+ 4. Click **"Save instructions"**
+
+ These instructions apply to all chats within that project.
+
+ ---
+
+ ## Verification
+
+ 1. Restart Claude Desktop.
+ 2. Look for the plug icon 🔌 in the top right of the chat interface.
+ 3. Click it to see the list of connected servers. You should see `smart-coding-mcp` with status "Connected".