@mrxkun/mcfast-mcp 3.4.0 → 3.4.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +85 -16
  2. package/package.json +2 -2
  3. package/src/index.js +161 -166
package/README.md CHANGED
@@ -22,14 +22,47 @@ Standard AI agents often struggle with multi-file edits, broken syntax, and "hal
 
  ---
 
- ## 🚀 Key Features (v3.3)
+ ## 🚀 Key Features (v3.4)
+
+ ### 1. **Batch Read Mode** 🆕 NEW in v3.4
+ Read multiple files in parallel with a single request (see the example below):
+ - **Parallel Processing**: Up to 50 files read simultaneously
+ - **Line Range Support**: Read specific line ranges to save tokens
+ - **Smart Truncation**: Auto-truncate large files in batch mode
+ - **Progress Tracking**: Real-time progress for batch operations
+
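For example, a batch `read` request built from the v3.4 input schema (the `filePaths` and `max_lines_per_file` parameters appear in the src/index.js hunk later in this diff; the file names here are illustrative):

```javascript
// Batch mode: up to 50 files are read in parallel.
{
  filePaths: ["src/index.js", "src/utils.js", "README.md"],
  start_line: 1,          // optional, 1-indexed, inclusive
  end_line: 40,           // optional, 1-indexed, inclusive
  max_lines_per_file: 100 // batch-mode default
}
```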
+ ### 2. **Parallel Edit with Dependencies** 🆕 NEW in v3.4
+ Execute complex multi-file edits with intelligent parallelization:
+ - **Dependency Graph**: Specify which edits depend on others
+ - **Automatic Parallelization**: Independent edits run in parallel
+ - **Sequential Dependencies**: Dependent edits wait for prerequisites
+ - **Atomic Operations**: All-or-nothing execution with rollback
+
+ Example:
+ ```javascript
+ {
+   operations: [
+     { id: "1", path: "A.js", instruction: "...", depends_on: [] },
+     { id: "2", path: "B.js", instruction: "...", depends_on: ["1"] },
+     { id: "3", path: "C.js", instruction: "...", depends_on: [] }
+   ]
+ }
+ // 1 and 3 run in parallel, 2 waits for 1
+ ```
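The comment in the example states the scheduling contract: independent operations start immediately, dependents wait for their prerequisites. A minimal sketch of that idea (illustrative only; `runWithDependencies` and `execute` are hypothetical names, and the package's real implementation also handles rollback):

```javascript
// Assumes an acyclic graph and that every depends_on id exists.
async function runWithDependencies(operations, execute) {
  const byId = new Map(operations.map((op) => [op.id, op]));
  const started = new Map(); // op.id -> Promise of its result

  const run = (op) => {
    if (!started.has(op.id)) {
      started.set(op.id, Promise
        .all((op.depends_on ?? []).map((id) => run(byId.get(id))))
        .then(() => execute(op))); // starts once all prerequisites settle
    }
    return started.get(op.id);
  };

  return Promise.all(operations.map(run));
}
```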
 
- ### 1. **AST-Aware Refactoring**
+ ### 3. **Smart Caching Layer** 🆕 NEW in v3.4
+ Intelligent caching for improved performance (sketched below):
+ - **Content Caching**: Cache file reads (30 min TTL)
+ - **Edit Result Caching**: Cache successful edits (1 hour TTL)
+ - **Batch Operation Caching**: Cache batch results (15 min TTL)
+ - **Automatic Invalidation**: Cache clears on file changes
+
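A rough sketch of the TTL scheme these bullets describe (the constants mirror the documented TTLs; this is not the package's actual cache code):

```javascript
// Per-class TTLs as documented above; entries are also invalidated on file change.
const CACHE_TTL_MS = {
  content: 30 * 60 * 1000, // file reads: 30 minutes
  edit: 60 * 60 * 1000,    // successful edits: 1 hour
  batch: 15 * 60 * 1000    // batch results: 15 minutes
};

function isFresh(entry, kind) {
  return Date.now() - entry.cachedAt < CACHE_TTL_MS[kind];
}
```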
+ ### 4. **AST-Aware Refactoring**
  mcfast doesn't just "search and replace" text. It parses your code into a Tree-sitter AST to perform:
  - **Scope-Aware Rename**: Rename functions, variables, or classes safely across your entire project.
  - **Smart Symbol Search**: Find true references, ignoring comments and strings.
 
- ### 2. **Hybrid Fuzzy Patching** ⚡ NEW in v3.3
+ ### 5. **Hybrid Fuzzy Patching** ⚡ NEW in v3.3
  Multi-layered matching strategy with intelligent fallback:
  1. **Exact Line Match** (Hash Map) - O(1) lookup for identical code blocks
  2. **Myers Diff Algorithm** - Shortest Edit Script in O((M+N)D) time
@@ -37,19 +70,19 @@ Multi-layered matching strategy with intelligent fallback:
 
  This hybrid approach significantly improves accuracy and reduces false matches for complex refactoring tasks.
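Conceptually, the fallback chain tries the cheap layer first and escalates only on a miss. A minimal sketch of layer 1 handing off to layer 2 (the function and parameter names are hypothetical, and the remaining layers of the list fall outside this hunk):

```javascript
// Layer 1: index file lines in a hash map for O(1) exact-block lookup;
// on a miss, defer to the next layer (e.g. a Myers-diff matcher).
function locatePatchTarget(fileLines, patchLines, nextLayer) {
  const startsAt = new Map();
  fileLines.forEach((line, i) => {
    if (!startsAt.has(line)) startsAt.set(line, []);
    startsAt.get(line).push(i);
  });
  for (const start of startsAt.get(patchLines[0]) ?? []) {
    const block = fileLines.slice(start, start + patchLines.length);
    if (block.join("\n") === patchLines.join("\n")) return start; // exact match
  }
  return nextLayer(fileLines, patchLines); // escalate to the diff-based layer
}
```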
 
- ### 3. **Context-Aware Search** 🆕 NEW in v3.3
+ ### 6. **Context-Aware Search** 🆕 NEW in v3.3
  Automatic junk directory exclusion powered by intelligent pattern matching:
  - Automatically filters `node_modules`, `.git`, `dist`, `build`, `.next`, `coverage`, `__pycache__`, and more
  - No manual configuration required
  - Respects `.gitignore` patterns automatically
 
- ### 4. **Advanced Fuzzy Patching**
+ ### 7. **Advanced Fuzzy Patching**
  Tired of "Line number mismatch" errors? mcfast uses a multi-layered matching strategy:
  - **Levenshtein Distance**: Measures text similarity with early termination.
  - **Token Analysis**: Matches code based on logic even if whitespace or formatting differs.
  - **Structural Matching**: Validates that the patch "fits" the code structure.
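To make "early termination" concrete: a bounded Levenshtein comparison can stop as soon as an entire row of the DP table exceeds the similarity budget (a sketch of the technique, not the package's actual matcher):

```javascript
// Returns the edit distance if it is <= maxDist, otherwise -1 (terminated early).
function boundedLevenshtein(a, b, maxDist) {
  if (Math.abs(a.length - b.length) > maxDist) return -1; // cheap length check
  let prev = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    const curr = [i];
    let rowMin = i;
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      curr[j] = Math.min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost);
      rowMin = Math.min(rowMin, curr[j]);
    }
    if (rowMin > maxDist) return -1; // early termination: row already over budget
    prev = curr;
  }
  return prev[b.length] <= maxDist ? prev[b.length] : -1;
}
```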
 
- ### 5. **Auto-Rollback (Auto-Healing)**
+ ### 8. **Auto-Rollback (Auto-Healing)**
  mcfast integrates language-specific linters to ensure your build stays green:
  - **JS/TS**: `node --check`
  - **Go**: `gofmt -e`
@@ -57,7 +90,7 @@ mcfast integrates language-specific linters to ensure your build stays green:
  - **Python/PHP/Ruby**: Syntax validation.
  *If validation fails, mcfast automatically restores from a hidden backup.*
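The validate-then-restore flow those linters feed might look roughly like this (a sketch for the JS/TS case; `backupPath` is a hypothetical stand-in for the hidden backup):

```javascript
import { execFileSync } from "child_process";
import { copyFileSync } from "fs";

// Run the language's syntax check; on failure, restore the hidden backup.
function validateOrRollback(filePath, backupPath) {
  try {
    execFileSync("node", ["--check", filePath]); // JS/TS check, per the list above
  } catch {
    copyFileSync(backupPath, filePath); // auto-healing: undo the failed edit
    throw new Error(`Syntax check failed; restored ${filePath} from backup`);
  }
}
```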
 
- ### 6. **Organize Imports**
+ ### 9. **Organize Imports**
  Supports JS, TS, Go, Python, and more. Automatically sorts and cleans up your import blocks using high-speed S-expression queries.
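For context, the S-expression queries referred to here are Tree-sitter query patterns; one that captures JavaScript import statements for sorting could be as small as this (illustrative, not the package's exact query):

```javascript
// Tree-sitter query against the JavaScript grammar: capture each import
// statement node so the whole block can be sorted and rewritten in place.
const IMPORT_QUERY = "(import_statement) @import";
```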
 
  ---
 
@@ -102,15 +135,51 @@ Add the following to your `claude_desktop_config.json`:
 
  ---
 
- ## 🧰 Available Tools
-
- mcfast exposes a unified set of tools to your AI agent:
-
- * **`edit`**: The primary tool. It decides whether to use `ast_refactor`, `fuzzy_patch`, or `search_replace` based on the task complexity.
- * **`search`**: Fast grep-style search with in-memory AST indexing.
- * **`read`**: Smart reader that returns code chunks with line numbers, optimized for token savings.
- * **`list_files`**: High-performance globbing with `.gitignore` support and context-aware filtering.
- * **`reapply`**: If an edit fails validation, the AI can use this to retry with a different strategy.
+ ## 🧰 6 Unified Tools
+
+ mcfast exposes a complete set of 6 tools to your AI agent, each optimized for specific use cases:
+
+ | Tool | Purpose | Best For |
+ |------|---------|----------|
+ | **`edit`** | Multi-file code editing | Complex refactoring, renaming, multi-file updates |
+ | **`parallel_edit`** | Multi-file edits with dependencies | Complex workflows requiring ordered execution |
+ | **`search`** | Fast pattern search | Finding code patterns, symbols, references |
+ | **`read`** | Read file content | Viewing specific files with line ranges |
+ | **`list_files`** | Directory exploration | Discovering project structure, finding files |
+ | **`reapply`** | Retry failed edits | Auto-recovery with enhanced context |
+
+ ### Tool Details
+
+ **`edit`** (aliases: `apply_fast`, `edit_file`, `apply_search_replace`)
+ - Primary editing tool with auto-strategy detection
+ - Supports: AST refactoring, fuzzy patching, search-replace
+ - Handles single-file and multi-file edits seamlessly
+
+ **`parallel_edit`** (NEW in v3.4)
+ - Multi-file coordination with dependency resolution
+ - Execute operations in parallel when possible
+ - Automatic rollback on failure
+ - Perfect for complex refactoring workflows
+
+ **`search`** (aliases: `search_code`, `search_code_ai`)
+ - Fast pattern matching with regex support
+ - In-memory AST indexing for symbol-aware searches
+ - Automatic junk directory filtering (node_modules, .git, dist, etc.)
+
+ **`read`** (alias: `read_file`)
+ - Read file content with optional line ranges
+ - Optimized for token efficiency (read only what you need)
+ - Supports all 10 language parsers
+
+ **`list_files`** (alias: `list_files_fast`)
+ - Recursive directory listing with depth control
+ - Respects `.gitignore` automatically
+ - Perfect for exploring codebase structure
+
+ **`reapply`**
+ - Smart retry mechanism for failed edits
+ - Enhanced context and error analysis
+ - Max 3 automatic retries with strategy adjustment
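For example, a `search` call using parameter names visible in the `handleSearch` signature later in this diff (`query`, `path`, `regex`); the values are illustrative:

```javascript
{
  query: "handleRead", // pattern or symbol to find
  path: "./src",       // search root; junk directories are filtered automatically
  regex: false         // set true to treat query as a regular expression
}
```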
 
  ---
 
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "@mrxkun/mcfast-mcp",
-   "version": "3.4.0",
-   "description": "Ultra-fast code editing with fuzzy patching, auto-rollback, and 7 unified tools.",
+   "version": "3.4.2",
+   "description": "Ultra-fast code editing with fuzzy patching, auto-rollback, and 6 unified tools.",
    "type": "module",
    "bin": {
      "mcfast-mcp": "src/index.js"
package/src/index.js CHANGED
@@ -263,15 +263,16 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
  // CORE TOOL 3: read
  {
    name: "read",
-   description: "Read file contents. CRITICAL: Use `start_line` and `end_line` for large files to reduce token usage and latency.",
+   description: "Read file contents. CRITICAL: Use `start_line` and `end_line` for large files to reduce token usage and latency. Supports both single file and batch mode.",
    inputSchema: {
      type: "object",
      properties: {
-       filePath: { type: "string", description: "Absolute path to the file" },
+       filePath: { type: "string", description: "Absolute path to a single file (use this OR filePaths, not both)" },
+       filePaths: { type: "array", items: { type: "string" }, description: "Array of file paths for batch read mode (use this OR filePath, not both)" },
        start_line: { type: "number", description: "Start line (1-indexed, inclusive)" },
-       end_line: { type: "number", description: "End line (1-indexed, inclusive)" }
-     },
-     required: ["filePath"]
+       end_line: { type: "number", description: "End line (1-indexed, inclusive)" },
+       max_lines_per_file: { type: "number", description: "Max lines per file in batch mode (default: 100)" }
+     }
    }
  },
  // CORE TOOL 4: list_files
@@ -880,190 +881,184 @@ async function handleSearch({ query, files, path, mode = 'auto', regex = false,
  /**
   * UNIFIED HANDLER 3: handleRead (v2.0)
   * Renamed from handleReadFile for consistency
+  * Now supports both single file and batch read modes
   */
- async function handleRead({ filePath, start_line, end_line }) {
-   return await handleReadFile({ path: filePath, start_line, end_line });
- }
-
- // Local search implementation (no API required)
- async function reportAudit(params) {
-   if (!TOKEN) return;
-   try {
-     await fetch(`${API_URL}/logs/audit`, {
-       method: "POST",
-       headers: {
-         "Content-Type": "application/json",
-         "Authorization": `Bearer ${TOKEN}`,
-       },
-       body: JSON.stringify(params),
-     });
-   } catch (e) {
-     console.error("Failed to report audit:", e.message);
+ async function handleRead({ filePath, filePaths, start_line, end_line, max_lines_per_file = 100 }) {
+   // Batch mode: read multiple files in parallel
+   if (filePaths && Array.isArray(filePaths) && filePaths.length > 0) {
+     return await handleBatchRead(filePaths, start_line, end_line, max_lines_per_file);
    }
+
+   // Single file mode (backward compatible)
+   return await handleReadFile({ path: filePath, start_line, end_line });
  }
 
- // Unified Search Implementation (v4.0 - Early Termination with Stream)
- async function handleSearchFilesystem({ query, path: searchPath = process.cwd(), include = "**/*", exclude = [], isRegex = false, caseSensitive = false }) {
+ /**
+  * BATCH READ HANDLER: Read multiple files in parallel
+  */
+ async function handleBatchRead(filePaths, start_line, end_line, max_lines_per_file) {
    const start = Date.now();
-   const MAX_RESULTS = 100;
-   const results = [];
-   let strategy = 'unknown';
-
-   try {
-     const { spawn } = await import('child_process');
-     const { promisify } = await import('util');
-     const sleep = promisify(setTimeout);
-
-     const escapedQuery = query.replace(/"/g, '\\"');
-     const caseFlag = caseSensitive ? '' : '-i';
-     const regexFlag = isRegex ? '-e' : '-F';
-
-     // Try ripgrep first with streaming and early termination
-     try {
-       strategy = 'ripgrep';
-       const rgProcess = spawn('rg', [
-         '-n', '--no-heading', '--with-filename',
-         caseFlag, regexFlag,
-         escapedQuery,
-         searchPath
-       ], {
-         stdio: ['ignore', 'pipe', 'pipe']
-       });
-
-       const readline = (await import('readline')).createInterface({
-         input: rgProcess.stdout,
-         crlfDelay: Infinity
-       });
-
-       for await (const line of readline) {
-         if (results.length >= MAX_RESULTS) {
-           rgProcess.kill();
-           break;
-         }
-         results.push(line);
-       }
-
-       rgProcess.stderr.on('data', () => { });
-       await new Promise(resolve => rgProcess.on('close', resolve));
-
-       if (results.length > 0 || rgProcess.exitCode === 0) {
-         return formatSearchResults(query, strategy, results, start, MAX_RESULTS);
-       }
-     } catch (rgErr) {
-       // Try git grep
+   const MAX_BATCH_SIZE = 50;
+
+   if (filePaths.length > MAX_BATCH_SIZE) {
+     return {
+       content: [{ type: "text", text: `❌ Batch read limit exceeded: ${filePaths.length} files (max ${MAX_BATCH_SIZE})` }],
+       isError: true
+     };
+   }
+
+   console.error(`${colors.cyan}[BATCH READ]${colors.reset} ${filePaths.length} files`);
+
+   // Read all files in parallel with Promise.all
+   const results = await Promise.all(
+     filePaths.map(async (filePath) => {
        try {
-         strategy = 'git_grep';
-         const gitProcess = spawn('git', [
-           'grep', '-n', '-I',
-           caseFlag ? '' : '-i',
-           regexFlag ? '-E' : '-F',
-           escapedQuery
-         ], {
-           cwd: searchPath,
-           stdio: ['ignore', 'pipe', 'pipe']
+         const result = await handleReadFileInternal({
+           path: filePath,
+           start_line,
+           end_line,
+           max_lines: max_lines_per_file
          });
+         return { path: filePath, ...result, success: true };
+       } catch (error) {
+         return {
+           path: filePath,
+           success: false,
+           error: error.message,
+           content: `❌ Error reading ${filePath}: ${error.message}`
+         };
+       }
+     })
+   );
+
+   // Calculate stats
+   const successful = results.filter(r => r.success).length;
+   const failed = results.filter(r => !r.success).length;
+   const totalLines = results.reduce((sum, r) => sum + (r.lines || 0), 0);
+   const totalSize = results.reduce((sum, r) => sum + (r.size || 0), 0);
+
+   // Format output
+   let output = `📚 Batch Read Complete (${filePaths.length} files)\n`;
+   output += `✅ Successful: ${successful} ❌ Failed: ${failed}\n`;
+   output += `📊 Total: ${totalLines} lines, ${(totalSize / 1024).toFixed(1)} KB\n`;
+   output += `⏱️ Latency: ${Date.now() - start}ms\n`;
+   output += `========================================\n\n`;
+
+   // Add each file's content
+   results.forEach((result, index) => {
+     output += `[${index + 1}/${filePaths.length}] `;
+     if (result.success) {
+       output += `✅ ${result.path} (${result.lines} lines)\n`;
+       output += `----------------------------------------\n`;
+       // Truncate content if too long
+       const contentPreview = result.content?.substring(0, 5000);
+       output += contentPreview;
+       if (result.content?.length > 5000) {
+         output += `\n... (${result.content.length - 5000} more chars)`;
+       }
+     } else {
+       output += `❌ ${result.path}\n`;
+       output += `Error: ${result.error}\n`;
+     }
+     output += `\n\n`;
+   });
+
+   // Report audit for batch operation
+   reportAudit({
+     tool: 'read_file',
+     instruction: `Batch read: ${filePaths.join(', ').substring(0, 200)}`,
+     status: failed === 0 ? 'success' : 'partial',
+     latency_ms: Date.now() - start,
+     files_count: filePaths.length,
+     input_tokens: Math.ceil(filePaths.join(',').length / 4),
+     output_tokens: Math.ceil(output.length / 4)
+   });
+
+   return {
+     content: [{ type: "text", text: output }]
+   };
+ }
 
-       const readline = (await import('readline')).createInterface({
-         input: gitProcess.stdout,
-         crlfDelay: Infinity
-       });
+ /**
+  * INTERNAL: Handle single file read with line range and max_lines limit
+  */
+ async function handleReadFileInternal({ path: filePath, start_line, end_line, max_lines }) {
+   const absolutePath = path.resolve(filePath);
+   const stats = await fs.stat(absolutePath);
 
-       for await (const line of readline) {
-         if (results.length >= MAX_RESULTS) {
-           gitProcess.kill();
-           break;
-         }
-         results.push(line);
-       }
+   if (!stats.isFile()) {
+     throw new Error(`Path is not a file: ${absolutePath}`);
+   }
 
-       gitProcess.stderr.on('data', () => { });
-       await new Promise(resolve => gitProcess.on('close', resolve));
-
-       return formatSearchResults(query, strategy, results, start, MAX_RESULTS);
-     } catch (gitErr) {
-       // Fallback to native grep
-       strategy = 'native_grep';
-       const grepProcess = spawn('grep', [
-         '-r', '-n', '-I',
-         caseFlag ? '' : '-i',
-         regexFlag ? '-E' : '-F',
-         '--exclude-dir=node_modules', '--exclude-dir=.git',
-         '--exclude-dir=.next', '--exclude-dir=dist', '--exclude-dir=build',
-         escapedQuery,
-         searchPath
-       ], {
-         stdio: ['ignore', 'pipe', 'pipe']
-       });
+   const STREAM_THRESHOLD = 1024 * 1024; // 1MB
+
+   let startLine = start_line ? parseInt(start_line) : 1;
+   let endLine = end_line ? parseInt(end_line) : -1;
+   let outputContent;
+   let totalLines;
 
-       const readline = (await import('readline')).createInterface({
-         input: grepProcess.stdout,
-         crlfDelay: Infinity
-       });
+   if ((stats.size > STREAM_THRESHOLD && (start_line || end_line)) || stats.size > 10 * 1024 * 1024) {
+     const { Readable } = await import('stream');
+     const { createInterface } = await import('readline');
 
-       for await (const line of readline) {
-         if (results.length >= MAX_RESULTS) {
-           grepProcess.kill();
-           break;
-         }
-         results.push(line);
-       }
+     let currentLine = 0;
+     const lines = [];
 
-       grepProcess.stderr.on('data', () => { });
-       await new Promise(resolve => grepProcess.on('close', resolve));
+     const stream = (await import('fs')).createReadStream(absolutePath, { encoding: 'utf8' });
+     const rl = createInterface({ input: stream, crlfDelay: Infinity });
 
-       return formatSearchResults(query, strategy, results, start, MAX_RESULTS);
+     for await (const line of rl) {
+       currentLine++;
+       if (startLine && endLine) {
+         if (currentLine >= startLine && currentLine <= endLine) {
+           lines.push(line);
+         }
+         if (currentLine >= endLine) break;
+       } else if (startLine && currentLine >= startLine) {
+         lines.push(line);
+         if (max_lines && lines.length >= max_lines) break;
+       } else if (lines.length < (max_lines || 2000)) {
+         lines.push(line);
+       } else {
+         break;
        }
      }
 
-     return formatSearchResults(query, strategy, results, start, MAX_RESULTS);
-
-   } catch (error) {
-     reportAudit({
-       tool: 'search_filesystem',
-       instruction: query,
-       status: 'error',
-       error_message: error.message,
-       latency_ms: Date.now() - start,
-       input_tokens: Math.ceil(query.length / 4),
-       output_tokens: 0
-     });
-     return {
-       content: [{ type: "text", text: `❌ search_filesystem error: ${error.message}` }],
-       isError: true
-     };
-   }
- }
-
- function formatSearchResults(query, strategy, results, start, maxResults) {
-   let output = `⚡ search_filesystem (${strategy}) found ${results.length} results for "${query}"\n\n`;
-
-   if (results.length === 0) {
-     output += "No matches found.";
+     stream.destroy();
+     outputContent = lines.join('\n');
+     totalLines = currentLine;
    } else {
-     output += results.join('\n');
-     if (results.length >= maxResults) {
-       output += `\n... and more matches (early termination at ${maxResults}).`;
+     const content = await fs.readFile(absolutePath, 'utf8');
+     const lines = content.split('\n');
+     totalLines = lines.length;
+
+     if (startLine < 1) startLine = 1;
+     if (endLine < 1 || endLine > totalLines) endLine = totalLines;
+     if (startLine > endLine) {
+       throw new Error(`Invalid line range: start_line (${startLine}) > end_line (${endLine})`);
     }
-   }
 
-   const estimatedOutputTokens = Math.ceil(output.length / 4);
-
-   reportAudit({
-     tool: 'search_filesystem',
-     instruction: query,
-     strategy,
-     status: 'success',
-     latency_ms: Date.now() - start,
-     files_count: 0,
-     input_tokens: Math.ceil(query.length / 4),
-     output_tokens: estimatedOutputTokens,
-     result_summary: JSON.stringify(results.slice(0, maxResults))
-   });
+     if (start_line || end_line) {
+       outputContent = lines.slice(startLine - 1, endLine).join('\n');
+     } else if (max_lines && lines.length > max_lines) {
+       outputContent = lines.slice(0, max_lines).join('\n');
+     } else {
+       outputContent = content;
+     }
+   }
 
-   return { content: [{ type: "text", text: output }] };
+   return {
+     content: outputContent,
+     lines: outputContent.split('\n').length,
+     size: outputContent.length,
+     totalLines,
+     path: filePath
+   };
  }
 
- // Native high-performance search
+ /**
+  * Native high-performance search
+  */
  async function handleWarpgrep({ query, include = ".", isRegex = false, caseSensitive = false }) {
    const start = Date.now();
    try {