smart-coding-mcp 2.2.0 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,157 @@
+ # OpenCode Integration
+
+ OpenCode is a powerful AI coding agent built for the terminal by SST.
+
+ ## Configuration Location
+
+ Edit the configuration file located at:
+
+ - **Global:** `~/.config/opencode/opencode.json`
+ - **Project:** `opencode.json` in your project root
+
+ ## MCP Configuration
+
+ Add the server to the `mcp` object in your `opencode.json`:
+
+ ```json
+ {
+   "$schema": "https://opencode.ai/config.json",
+   "mcp": {
+     "smart-coding-mcp": {
+       "type": "local",
+       "command": ["npx", "-y", "smart-coding-mcp", "--workspace", "/absolute/path/to/your/project"],
+       "enabled": true
+     }
+   }
+ }
+ ```
+
+ ### With Environment Variables
+
+ ```json
+ {
+   "$schema": "https://opencode.ai/config.json",
+   "mcp": {
+     "smart-coding-mcp": {
+       "type": "local",
+       "command": ["npx", "-y", "smart-coding-mcp", "--workspace", "/path/to/project"],
+       "environment": {
+         "SMART_CODING_VERBOSE": "true",
+         "SMART_CODING_MAX_RESULTS": "10"
+       },
+       "enabled": true
+     }
+   }
+ }
+ ```
+
+ > **Note:** OpenCode does **NOT** support `${workspaceFolder}`. You must use absolute paths.
+
+ ---
+
+ ## Configuring Rules (AGENTS.md)
+
+ OpenCode uses `AGENTS.md` files for custom instructions, similar to `CLAUDE.md` or Cursor's rules.
+
+ ### Rule Locations
+
+ | Scope       | Location                       |
+ | ----------- | ------------------------------ |
+ | **Global**  | `~/.config/opencode/AGENTS.md` |
+ | **Project** | `AGENTS.md` in project root    |
+
+ ### Creating a Rule
+
+ 1. **Create an `AGENTS.md` file** in your project root:
+
+ ```markdown
+ # Smart Coding MCP Usage Rules
+
+ You must prioritize using the **Smart Coding MCP** tools for the following tasks.
+
+ ## 1. Dependency Management
+
+ **Trigger:** When checking, adding, or updating package versions (npm, python, go, rust, etc.).
+ **Action:**
+
+ - **MUST** use the `d_check_last_version` tool.
+ - **DO NOT** guess versions or trust internal training data.
+ - **DO NOT** use generic web search unless `d_check_last_version` fails.
+
+ ## 2. Codebase Research
+
+ **Trigger:** When asking how something works, finding logic, or understanding architecture.
+ **Action:**
+
+ - **MUST** use `a_semantic_search` as the FIRST tool for any codebase research.
+ - **DO NOT** use `Glob` or `Grep` for exploratory searches.
+ - Use `Grep` ONLY for exact literal string matching (e.g., finding a specific error message).
+ - Use `Glob` ONLY when you already know the exact filename pattern.
+
+ ## 3. Environment & Status
+
+ **Trigger:** When starting a session or debugging the environment.
+ **Action:**
+
+ - Use `e_set_workspace` if the current workspace path is incorrect.
+ - Use `f_get_status` to verify the MCP server is healthy and indexed.
+ ```
+
+ ### Alternative: Using the `opencode.json` Instructions Field
+
+ You can reference external instruction files using glob patterns:
+
+ ```json
+ {
+   "$schema": "https://opencode.ai/config.json",
+   "instructions": [
+     "AGENTS.md",
+     "docs/guidelines.md",
+     ".cursor/rules/*.md"
+   ],
+   "mcp": {
+     "smart-coding-mcp": {
+       "type": "local",
+       "command": ["npx", "-y", "smart-coding-mcp", "--workspace", "/path/to/project"],
+       "enabled": true
+     }
+   }
+ }
+ ```
+
+ ### Global Rules
+
+ For rules that apply to all projects, edit `~/.config/opencode/AGENTS.md`:
+
+ ```markdown
+ # Global Agent Rules
+
+ - Always verify package versions with `d_check_last_version` before installing.
+ - Prefer semantic search (`a_semantic_search`) when available.
+ - Check MCP server status with `f_get_status` at session start.
+ ```
+
+ ---
+
+ ## MCP Management Commands
+
+ OpenCode provides CLI commands for managing MCP servers:
+
+ ```bash
+ # List all configured MCP servers
+ opencode mcp list
+
+ # Authenticate with a server (if required)
+ opencode mcp auth smart-coding-mcp
+
+ # Remove credentials
+ opencode mcp logout smart-coding-mcp
+ ```
+
+ ---
+
+ ## Verification
+
+ 1. Start OpenCode in your project directory.
+ 2. Run `opencode mcp list` to verify the server is connected.
+ 3. Ask the AI to "check the status of the smart coding MCP server"; it should call `f_get_status`.
@@ -70,9 +70,10 @@ Raycast uses **AI Commands** and **Presets** for custom instructions.
  - **DO NOT** use generic web search unless `d_check_last_version` fails.

  ## 2. Codebase Research
- - **PREFER** `a_semantic_search` over `grep_search` or `find_by_name`.
- - Use `a_semantic_search` for natural language queries (e.g., "where is the auth logic").
- - Use standard tools (`grep`) only for exact string matching.
+ - **MUST** use `a_semantic_search` as the FIRST tool for any codebase research.
+ - **DO NOT** use `Glob` or `Grep` for exploratory searches.
+ - Use `Grep` ONLY for exact literal string matching (e.g., finding a specific error message).
+ - Use `Glob` ONLY when you already know the exact filename pattern.

  ## 3. Environment & Status
  - Use `e_set_workspace` if the current workspace path is incorrect.
@@ -93,9 +93,10 @@ Cline uses `.clinerules` files to control AI behavior.
  **Trigger:** When asking about "how" something works, finding logic, or understanding architecture.
  **Action:**

- - **PREFER** `a_semantic_search` (Smart Coding) over `grep_search` or `find_by_name`.
- - Use `a_semantic_search` for natural language queries (e.g., "where is the auth logic").
- - Use standard tools (`grep`) only for exact string matching.
+ - **MUST** use `a_semantic_search` as the FIRST tool for any codebase research.
+ - **DO NOT** use `Glob` or `Grep` for exploratory searches.
+ - Use `Grep` ONLY for exact literal string matching (e.g., finding a specific error message).
+ - Use `Glob` ONLY when you already know the exact filename pattern.

  ## 3. Environment & Status

@@ -65,9 +65,10 @@ You must prioritize using the **Smart Coding MCP** tools for the following tasks
  **Trigger:** When asking about "how" something works, finding logic, or understanding architecture.
  **Action:**

- - **PREFER** `a_semantic_search` (Smart Coding) over `grep_search` or `find_by_name`.
- - Use `a_semantic_search` for natural language queries (e.g., "where is the auth logic").
- - Use standard tools (`grep`) only for exact string matching.
+ - **MUST** use `a_semantic_search` as the FIRST tool for any codebase research.
+ - **DO NOT** use `Glob` or `Grep` for exploratory searches.
+ - Use `Grep` ONLY for exact literal string matching (e.g., finding a specific error message).
+ - Use `Glob` ONLY when you already know the exact filename pattern.

  ## 3. Environment & Status

@@ -35,16 +35,23 @@ export class CodebaseIndexer {

    /**
     * Initialize worker thread pool for parallel embedding
+    * Note: Workers are disabled for nomic models due to ONNX runtime thread-safety issues
     */
    async initializeWorkers() {
-     // Force single-threaded mode for nomic models (transformers.js v3 worker thread issue)
+     // Workers don't work with nomic/transformers.js due to ONNX WASM thread-safety issues
      const isNomicModel = this.config.embeddingModel?.includes('nomic');
      if (isNomicModel) {
-       console.error("[Indexer] Single-threaded mode (nomic model - workers disabled for stability)");
+       console.error("[Indexer] Single-threaded mode (nomic model - ONNX workers incompatible)");
        return;
      }

-     const numWorkers = this.config.workerThreads === "auto"
+     // Check if workers are explicitly disabled
+     if (this.config.workerThreads === 0 || this.config.disableWorkers) {
+       console.error("[Indexer] Single-threaded mode (workers disabled by config)");
+       return;
+     }
+
+     const numWorkers = this.config.workerThreads === "auto"
        ? this.throttle.maxWorkers // Use throttled worker count
        : this.throttle.getWorkerCount(this.config.workerThreads);

@@ -266,30 +273,44 @@ export class CodebaseIndexer {
      if (this.config.verbose) {
        console.error(`[Indexer] Processing: ${fileName}...`);
      }
-
+
      try {
        // Check file size first
        const stats = await fs.stat(file);
-
+
        // Skip directories
        if (stats.isDirectory()) {
          return 0;
        }
-
+
        if (stats.size > this.config.maxFileSize) {
          if (this.config.verbose) {
            console.error(`[Indexer] Skipped ${fileName} (too large: ${(stats.size / 1024 / 1024).toFixed(2)}MB)`);
          }
          return 0;
        }
-
+
+       // OPTIMIZATION: Check mtime first (fast) before reading file content
+       const currentMtime = stats.mtimeMs;
+       const cachedMtime = this.cache.getFileMtime(file);
+
+       // If mtime unchanged, file definitely unchanged - skip without reading
+       if (cachedMtime && currentMtime === cachedMtime) {
+         if (this.config.verbose) {
+           console.error(`[Indexer] Skipped ${fileName} (unchanged - mtime)`);
+         }
+         return 0;
+       }
+
        const content = await fs.readFile(file, "utf-8");
        const hash = hashContent(content);
-
-       // Skip if file hasn't changed
+
+       // Skip if file hasn't changed (content check after mtime indicated change)
        if (this.cache.getFileHash(file) === hash) {
+         // Content same but mtime different - update cached mtime
+         this.cache.setFileHash(file, hash, currentMtime);
          if (this.config.verbose) {
-           console.error(`[Indexer] Skipped ${fileName} (unchanged)`);
+           console.error(`[Indexer] Skipped ${fileName} (unchanged - hash)`);
          }
          return 0;
        }
@@ -321,7 +342,7 @@ export class CodebaseIndexer {
        }
      }

-     this.cache.setFileHash(file, hash);
+     this.cache.setFileHash(file, hash, currentMtime);
      if (this.config.verbose) {
        console.error(`[Indexer] Completed ${fileName} (${addedChunks} chunks)`);
      }
@@ -377,6 +398,65 @@ export class CodebaseIndexer {
      return files;
    }

+   /**
+    * Sort files by priority for progressive indexing
+    * Priority: recently modified files first (users likely searching for recent work)
+    */
+   async sortFilesByPriority(files) {
+     const startTime = Date.now();
+
+     // Get mtime for all files in parallel
+     const filesWithMtime = await Promise.all(
+       files.map(async (file) => {
+         try {
+           const stats = await fs.stat(file);
+           return { file, mtime: stats.mtimeMs };
+         } catch {
+           return { file, mtime: 0 };
+         }
+       })
+     );
+
+     // Sort by mtime descending (most recently modified first)
+     filesWithMtime.sort((a, b) => b.mtime - a.mtime);
+
+     if (this.config.verbose) {
+       console.error(`[Indexer] Priority sort: ${files.length} files in ${Date.now() - startTime}ms`);
+     }
+
+     return filesWithMtime.map(f => f.file);
+   }
+
+   /**
+    * Start background indexing (non-blocking)
+    * Allows search to work immediately with partial results
+    */
+   startBackgroundIndexing(force = false) {
+     if (this.isIndexing) {
+       console.error("[Indexer] Background indexing already in progress");
+       return;
+     }
+
+     console.error("[Indexer] Starting background indexing...");
+
+     // Run indexAll in background (don't await)
+     this.indexAll(force).then(result => {
+       console.error(`[Indexer] Background indexing complete: ${result.message || 'done'}`);
+     }).catch(err => {
+       console.error(`[Indexer] Background indexing error: ${err.message}`);
+     });
+   }
+
+   /**
+    * Get current indexing status for progressive search
+    */
+   getIndexingStatus() {
+     return {
+       ...this.indexingStatus,
+       isReady: !this.indexingStatus.inProgress || this.indexingStatus.processedFiles > 0
+     };
+   }
+
    /**
     * Pre-filter files by hash (skip unchanged files before processing)
     */
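The recency ordering inside `sortFilesByPriority` boils down to a descending sort on `mtimeMs`, with failed `stat` calls pinned to mtime 0 so unreadable paths sink to the end. A minimal stand-alone version of that sorting step, with illustrative names:

```javascript
// Sort { file, mtime } records so the most recently modified file
// comes first; entries with mtime 0 (stat failures) end up last.
function sortByMtimeDesc(filesWithMtime) {
  return [...filesWithMtime]
    .sort((a, b) => b.mtime - a.mtime)
    .map((f) => f.file);
}
```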
@@ -461,16 +541,21 @@ export class CodebaseIndexer {
      console.error(`[Indexer] Starting optimized indexing in ${this.config.searchDirectory}...`);

      // Step 1: Fast file discovery with fdir
-     const files = await this.discoverFiles();
-
+     let files = await this.discoverFiles();
+
      if (files.length === 0) {
        console.error("[Indexer] No files found to index");
        this.sendProgress(100, 100, "No files found to index");
        return { skipped: false, filesProcessed: 0, chunksCreated: 0, message: "No files found to index" };
      }

+     // Step 1.1: Sort files by priority (recently modified first) for progressive indexing
+     // This ensures search results are useful even while indexing is in progress
+     files = await this.sortFilesByPriority(files);
+     console.error(`[Indexer] Progressive mode: recently modified files will be indexed first`);
+
      // Send progress: discovery complete
-     this.sendProgress(5, 100, `Discovered ${files.length} files`);
+     this.sendProgress(5, 100, `Discovered ${files.length} files (sorted by priority)`);

      // Step 1.5: Prune deleted or excluded files from cache
      if (!force) {
@@ -494,13 +579,15 @@ export class CodebaseIndexer {
        }
      }

-     // Step 2: Process files in adaptive batches with lazy filtering
-     // Instead of pre-filtering all files (expensive), check hashes during processing
-     const adaptiveBatchSize = files.length > 10000 ? 500 :
-                               files.length > 1000 ? 200 :
+     // Step 2: Process files with progressive indexing
+     // Use batch size of 1 for immediate search availability (progressive indexing)
+     // Each file is processed, embedded, and saved immediately so search can find it
+     const adaptiveBatchSize = this.config.progressiveIndexing !== false ? 1 :
+                               files.length > 10000 ? 500 :
+                               files.length > 1000 ? 200 :
                                this.config.batchSize || 100;

-     console.error(`[Indexer] Processing ${files.length} files with lazy filtering (batch size: ${adaptiveBatchSize})`);
+     console.error(`[Indexer] Processing ${files.length} files (progressive mode: batch size ${adaptiveBatchSize})`);

      // Step 3: Initialize worker threads (always use when multi-core available)
      const useWorkers = os.cpus().length > 1;
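The chained ternary above reads more easily as a pure function. This is a sketch of the same selection logic under the config fields the diff shows (`progressiveIndexing`, `batchSize`), not an export of the package itself:

```javascript
// Progressive mode (the new default, anything but an explicit false)
// forces batches of 1 so each file becomes searchable as soon as it
// is embedded; otherwise batch size scales with the file count.
function pickBatchSize(config, fileCount) {
  if (config.progressiveIndexing !== false) return 1;
  if (fileCount > 10000) return 500;
  if (fileCount > 1000) return 200;
  return config.batchSize || 100;
}
```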
@@ -529,20 +616,32 @@ export class CodebaseIndexer {
      for (const file of batch) {
        try {
          const stats = await fs.stat(file);
-
+
          // Skip directories and oversized files
          if (stats.isDirectory()) continue;
          if (stats.size > this.config.maxFileSize) {
            skippedFiles++;
            continue;
          }
-
-         // Read content and check hash
+
+         // OPTIMIZATION: Check mtime first (fast) before reading file content
+         const currentMtime = stats.mtimeMs;
+         const cachedMtime = this.cache.getFileMtime(file);
+
+         // If mtime unchanged, file definitely unchanged - skip without reading
+         if (cachedMtime && currentMtime === cachedMtime) {
+           skippedFiles++;
+           continue;
+         }
+
+         // mtime changed (or new file) - read content and verify with hash
          const content = await fs.readFile(file, "utf-8");
          const hash = hashContent(content);
-
-         // Skip unchanged files inline (lazy check)
+
+         // Check if content actually changed (mtime can change without content change)
          if (this.cache.getFileHash(file) === hash) {
+           // Content same but mtime different - update cached mtime
+           this.cache.setFileHash(file, hash, currentMtime);
            skippedFiles++;
            continue;
          }
@@ -557,11 +656,12 @@ export class CodebaseIndexer {
              text: chunk.text,
              startLine: chunk.startLine,
              endLine: chunk.endLine,
-             hash
+             hash,
+             mtime: currentMtime
            });
          }
-
-         fileHashes.set(file, hash);
+
+         fileHashes.set(file, { hash, mtime: currentMtime });
        } catch (error) {
          // Skip files with read errors
          skippedFiles++;
@@ -612,9 +712,9 @@ export class CodebaseIndexer {
        }
      }

-     // Update file hashes
-     for (const [file, hash] of fileHashes) {
-       this.cache.setFileHash(file, hash);
+     // Update file hashes with mtime
+     for (const [file, { hash, mtime }] of fileHashes) {
+       this.cache.setFileHash(file, hash, mtime);
      }

      processedFiles += filesProcessedInBatch.size;
@@ -626,25 +726,30 @@ export class CodebaseIndexer {
      this.indexingStatus.totalFiles = Math.max(estimatedTotal, processedFiles);
      this.indexingStatus.percentage = estimatedTotal > 0 ? Math.floor((processedFiles / estimatedTotal) * 100) : 100;

-     // Incremental save to SQLite (every N batches)
-     const saveInterval = this.config.incrementalSaveInterval || 5;
-     if (batchCounter % saveInterval === 0) {
+     // Progressive indexing: save after EVERY batch so search can find new results immediately
+     // This is critical for background indexing - users can search while indexing continues
+     if (chunksToInsert.length > 0) {
        if (typeof this.cache.saveIncremental === 'function') {
          await this.cache.saveIncremental();
+       } else {
+         // Fallback: full save (slower but ensures data is persisted)
+         await this.cache.save();
        }
      }
-
+
      // Apply CPU throttling (delay between batches)
      await this.throttle.throttledBatch(null);

-     // Progress indicator every batch
-     if (processedFiles > 0 && (processedFiles % (adaptiveBatchSize * 2) === 0 || i + adaptiveBatchSize >= files.length)) {
+     // Progress indicator - show progress after each file in progressive mode
+     const progressInterval = adaptiveBatchSize === 1 ? 1 : adaptiveBatchSize * 2;
+     if (processedFiles > 0 && ((processedFiles + skippedFiles) % progressInterval === 0 || i + adaptiveBatchSize >= files.length)) {
        const elapsed = ((Date.now() - totalStartTime) / 1000).toFixed(1);
-       const rate = (processedFiles / parseFloat(elapsed)).toFixed(0);
-       console.error(`[Indexer] Progress: ${processedFiles} changed, ${skippedFiles} skipped (${rate} files/sec)`);
-
+       const totalProcessed = processedFiles + skippedFiles;
+       const rate = totalProcessed > 0 ? (totalProcessed / parseFloat(elapsed)).toFixed(1) : '0';
+       console.error(`[Indexer] Progress: ${processedFiles} indexed, ${skippedFiles} skipped of ${files.length} (${rate} files/sec)`);
+
        // Send MCP progress notification (10-95% range for batch processing)
-       const progressPercent = Math.min(95, Math.floor(10 + (i / files.length) * 85));
+       const progressPercent = Math.min(95, Math.floor(10 + (totalProcessed / files.length) * 85));
        this.sendProgress(progressPercent, 100, `Indexed ${processedFiles} files, ${skippedFiles} skipped (${rate}/sec)`);
      }
    }
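The notification math above maps batch progress into a fixed 10-95% band, reserving the edges for discovery (below 10%) and finalization (above 95%). Isolated as a pure function for illustration (the name `progressPercent` matches the local variable in the diff, not a public API):

```javascript
// Scale files handled so far into the 10-95% reporting band.
function progressPercent(totalProcessed, totalFiles) {
  return Math.min(95, Math.floor(10 + (totalProcessed / totalFiles) * 85));
}
```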
package/index.js CHANGED
@@ -191,21 +191,23 @@ async function initialize() {
      "[Server] Configuration loaded. Model will load on first use (lazy initialization)."
    );

-   // Optional: auto-index after delay
-   if (config.autoIndexDelay && config.autoIndexDelay > 0) {
+   // Progressive background indexing: starts after a short delay, doesn't block
+   // Search works right away with partial results while indexing continues
+   if (config.autoIndexDelay !== false && config.autoIndexDelay > 0) {
      console.error(
-       `[Server] Auto-indexing will start after ${config.autoIndexDelay}ms delay...`
+       `[Server] Progressive indexing will start in ${config.autoIndexDelay}ms (search available immediately)...`
      );
      setTimeout(async () => {
-       console.error("[Server] Starting auto-indexing...");
        try {
          await ensureInitialized();
-         await indexer.indexAll();
+         // Use background indexing - non-blocking!
+         // Search can return partial results while indexing continues
+         indexer.startBackgroundIndexing();
          if (config.watchFiles) {
            indexer.setupFileWatcher();
          }
        } catch (err) {
-         console.error("[Server] Auto-indexing error:", err.message);
+         console.error("[Server] Background indexing error:", err.message);
        }
      }, config.autoIndexDelay);
    }
package/lib/cache.js CHANGED
@@ -83,11 +83,21 @@ export class EmbeddingsCache {
    }

    getFileHash(file) {
-     return this.fileHashes.get(file);
+     const entry = this.fileHashes.get(file);
+     // Support both old format (string) and new format ({ hash, mtime })
+     if (typeof entry === 'string') {
+       return entry;
+     }
+     return entry?.hash;
+   }
+
+   getFileMtime(file) {
+     const entry = this.fileHashes.get(file);
+     return entry?.mtime;
    }

-   setFileHash(file, hash) {
-     this.fileHashes.set(file, hash);
+   setFileHash(file, hash, mtime = null) {
+     this.fileHashes.set(file, { hash, mtime });
    }

    deleteFileHash(file) {
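The dual-format read in `getFileHash`/`getFileMtime` is a small migration shim: caches written by 2.2.0 store a bare hash string per file, while 2.3.0 stores `{ hash, mtime }`, and readers accept both so existing caches keep working. The same idea as a stand-alone sketch, using a hypothetical `readEntry` helper:

```javascript
// Normalize a cache entry from either format into { hash, mtime }.
// Old format: "abc123" (string). New format: { hash, mtime }.
function readEntry(entry) {
  if (typeof entry === "string") return { hash: entry, mtime: undefined };
  return { hash: entry?.hash, mtime: entry?.mtime };
}
```

Because old entries have no mtime, the mtime fast path simply falls through to the hash check for them, which is exactly the pre-2.3.0 behavior.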