@levnikolaevich/hex-line-mcp 1.3.6 → 1.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +34 -40
- package/dist/hook.mjs +54 -26
- package/dist/server.mjs +219 -96
- package/output-style.md +23 -21
- package/package.json +2 -2
package/README.md
CHANGED
@@ -32,26 +32,26 @@ Advanced / occasional:
 
 | Tool | Description | Key Feature |
 |------|-------------|-------------|
-| `read_file` | Read file with hash-annotated lines, checksums, and revision | Partial reads via `offset`/`limit` |
+| `read_file` | Read file with hash-annotated lines, checksums, and revision | Partial reads via `offset`/`limit` or `ranges`, compact output by default |
 | `edit_file` | Revision-aware anchor edits (`set_line`, `replace_lines`, `insert_after`, `replace_between`) | Batched same-file edits + conservative auto-rebase |
 | `write_file` | Create new file or overwrite, auto-creates parent dirs | Path validation, no hash overhead |
-| `grep_search` | Search with ripgrep, 3 output modes, per-group checksums |
+| `grep_search` | Search with ripgrep, 3 output modes, per-group checksums | Plain `files`/`count`, compact edit-ready `content` |
 | `outline` | AST-based structural overview via tree-sitter WASM | 95% token reduction (10 lines instead of 500) |
 | `verify` | Check if held checksums / revision are still current | Staleness check without full re-read |
 | `directory_tree` | Compact directory tree with root .gitignore support | Skips node_modules/.git, shows file sizes |
 | `get_file_info` | File metadata without reading content | Size, lines, mtime, type, binary detection |
 | `setup_hooks` | Configure Claude hooks + install output style | Gemini/Codex get guidance only; no hooks |
 | `changes` | Compare file against git ref, shows added/removed/modified symbols | AST-level semantic diff |
-| `bulk_replace` | Search-and-replace across multiple files by glob |
+| `bulk_replace` | Search-and-replace across multiple files by glob | Compact summary (default) or capped diffs via `format`, dry_run, max_files |
 
 ### Hooks (PreToolUse + PostToolUse)
 
 | Event | Trigger | Action |
 |-------|---------|--------|
-| **PreToolUse** | Read/Edit/Write/Grep on text files |
+| **PreToolUse** | Read/Edit/Write/Grep on text files | Size-aware redirect: cheap small operations may pass, expensive ones are redirected |
 | **PreToolUse** | Bash with dangerous commands | Blocks `rm -rf /`, `git push --force`, etc. Agent must confirm with user |
 | **PostToolUse** | Bash with 50+ lines output | RTK: deduplicates, truncates, shows filtered summary to Claude as feedback |
-| **SessionStart** | Session begins | Injects
+| **SessionStart** | Session begins | Injects a short no-discovery workflow for hex-line tools |
 
 
 ### Bash Redirects
@@ -90,45 +90,33 @@ The `setup_hooks` tool automatically installs the output style to `~/.claude/out
 
 ## Benchmarking
 
-
+Two benchmark layers:
 
-- `
-- `
-- `diagnostics` — modeled tool-level measurements for engineering inspection
-
-Public benchmark mode reports only comparative multi-step workflows:
+- `/benchmark-compare` — balanced built-in vs hex-line comparison inside Claude Code, validated by scenario manifests and saved diffs
+- `npm run benchmark` — hex-line standalone workflow metrics (Node.js, all real library calls, no simulations)
 
 ```bash
 npm run benchmark -- --repo /path/to/repo
-```
-
-Optional diagnostics stay available separately:
-
-```bash
 npm run benchmark:diagnostic -- --repo /path/to/repo
-npm run benchmark:diagnostic:graph -- --repo /path/to/repo
 ```
 
-
+Current standalone workflow metrics on the `hex-line-mcp` repo (all real library calls):
 
-
+| # | Workflow | Hex-line output | Ops |
+|---|----------|---------:|----:|
+| W1 | Debug hook file-listing redirect | 882 chars | 2 |
+| W2 | Adjust `setup_hooks` guidance and verify | 1,719 chars | 3 |
+| W3 | Repo-wide benchmark wording refresh | 213 chars | 1 |
+| W4 | Inspect large smoke test before edit | 2,322 chars | 3 |
+| W5 | Follow-up edit after unrelated line shift | 1,267 chars | 3 |
 
-
-|----|----------|---------:|---------:|--------:|----:|
-| W1 | Debug hook file-listing redirect | 23,143 chars | 882 chars | 96% | 3→2 |
-| W2 | Adjust `setup_hooks` guidance and verify | 24,877 chars | 1,637 chars | 93% | 3→3 |
-| W3 | Repo-wide benchmark wording refresh | 137,796 chars | 38,918 chars | 72% | 15→1 |
-| W4 | Inspect large smoke test before edit | 49,566 chars | 2,104 chars | 96% | 3→3 |
-
-Workflow summary: `89%` average token savings, `24→9` tool calls (`63%` fewer).
-
-These workflows are derived from recent real Claude sessions, but executed against local reproducible fixtures in the repository. They should be read as workflow-efficiency measurements, not as correctness or semantic-quality claims.
+Workflow total: `6,403` chars across `12` ops. Run `/benchmark-compare` for the balanced scenario suite with activation checks and diff-based correctness.
 
 ### Optional Graph Enrichment
 
-If a project already has `.codegraph/index.db`, `hex-line` can add lightweight graph hints to `read_file`, `outline`, `grep_search`, and `edit_file`.
+If a project already has `.hex-skills/codegraph/index.db`, `hex-line` can add lightweight graph hints to `read_file`, `outline`, `grep_search`, and `edit_file`.
 
-- Graph enrichment is optional. If `.codegraph/index.db` is missing, `hex-line` falls back to standard behavior silently.
+- Graph enrichment is optional. If `.hex-skills/codegraph/index.db` is missing, `hex-line` falls back to standard behavior silently.
 - `better-sqlite3` is optional. If it is unavailable, `hex-line` still works without graph hints.
 - `edit_file` reports **Call impact**, not full semantic blast radius. The warning uses call-graph callers only.
 
@@ -160,11 +148,11 @@ Use `replace_between` inside `edit_file` when you know stable start/end anchors
 
 ### Literal rename / refactor
 
-Use `bulk_replace` for text rename patterns across one or more files. Do not use it as a substitute for structured block rewrites.
+Use `bulk_replace` for text rename patterns across one or more files. Returns compact summary by default; pass `format: "full"` for capped diffs. Do not use it as a substitute for structured block rewrites.
 
 ### read_file
 
-Read a file with FNV-1a hash-annotated lines, range checksums, file checksum, and revision. Supports directory listing.
+Read a file with FNV-1a hash-annotated lines, range checksums, file checksum, and revision. Supports batch reads, multi-range reads, and directory listing.
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
@@ -172,17 +160,22 @@ Read a file with FNV-1a hash-annotated lines, range checksums, file checksum, an
 | `paths` | string[] | no | Array of file paths to read (batch mode) |
 | `offset` | number | no | Start line, 1-indexed (default: 1) |
 | `limit` | number | no | Max lines to return (default: 2000, 0 = all) |
+| `ranges` | array | no | Explicit line ranges, e.g. `[{ "start": 10, "end": 30 }]` |
+| `include_graph` | boolean | no | Opt in to graph annotations when the graph index exists |
 | `plain` | boolean | no | Omit hashes, output `lineNum\|content` instead |
 
-
+Default output is compact:
 
 ```
+File: lib/search.mjs
+meta: lines 1-20 of 282
+revision: rev-12-a1b2c3d4
+file: 1-282:beefcafe
+
 ab.1 import { resolve } from "node:path";
 cd.2 import { readFileSync } from "node:fs";
 ...
 checksum: 1-50:f7e2a1b0
-revision: rev-12-a1b2c3d4
-file: 1-120:beefcafe
 ```
 
 ### edit_file
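The hash-annotated read output above (`ab.1 import ...`) pairs a short line tag with a 1-based line number. This diff does not show how tags are derived; below is a minimal standalone sketch, assuming 32-bit FNV-1a (the README names the hash) and a two-character base32 tag inferred from the anchor regex `[a-z2-7]{2}\.\d+` that appears later in `server.mjs`. The helpers `lineTag` and `annotate` and the exact bit-selection are illustrative, not the package's actual code:

```javascript
// Minimal sketch of FNV-1a (32-bit) line hashing behind hash-annotated output.
function fnv1a(str) {
  let h = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by FNV prime, modulo 2^32
  }
  return h >>> 0;
}

const BASE32 = "abcdefghijklmnopqrstuvwxyz234567";
function lineTag(hash) {
  // Illustrative choice: pick two base32 characters from the hash bits.
  return BASE32[(hash >>> 5) & 31] + BASE32[hash & 31];
}

// Produce "<tag>.<lineNum>\t<content>" per line, as in the example above.
function annotate(text) {
  return text.split("\n").map((line, i) => `${lineTag(fnv1a(line))}.${i + 1}\t${line}`);
}
```

Because the tag depends on the line's content, a stale anchor (`tag.lineNum` pair that no longer matches) is detectable before an edit is applied.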
@@ -215,6 +208,7 @@ Result footer includes:
 - `revision: ...`
 - `file: ...`
 - `changed_ranges: ...` when relevant
+- `remapped_refs: ...` when stale anchors were uniquely relocated
 - `retry_checksum: ...` on local conflicts
 
 ### write_file
@@ -228,7 +222,7 @@ Create a new file or overwrite an existing one. Creates parent directories autom
 
 ### grep_search
 
-Search file contents using ripgrep. Three output modes: `content` (hash-annotated with checksums), `files` (
+Search file contents using ripgrep. Three output modes: `content` (hash-annotated with checksums), `files` (plain path list), `count` (plain `file:count` list).
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
@@ -248,7 +242,7 @@ Search file contents using ripgrep. Three output modes: `content` (hash-annotate
 | `total_limit` | number | no | Total match events across all files; multiline matches count as 1 (0 = unlimited) |
 | `plain` | boolean | no | Omit hash tags, return `file:line:content` |
 
-
+`content` mode returns per-group checksums enabling direct `replace_lines` from grep results without intermediate `read_file`.
 
 ### outline
 
@@ -269,7 +263,7 @@ Check if range checksums from a prior read are still valid, optionally relative
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
 | `path` | string | yes | File path |
-| `checksums` | string | yes |
+| `checksums` | string[] | yes | Array of checksum strings, e.g. `["1-50:f7e2a1b0"]` |
 | `base_revision` | string | no | Prior revision to compare against latest state |
 
 Returns a single-line confirmation or lists changed ranges.
@@ -330,7 +324,7 @@ Configuration constants in `hook.mjs`:
 
 ### SessionStart: Tool Preferences
 
-Injects
+Injects a short operational workflow into agent context at session start: no `ToolSearch`, prefer `outline -> read_file -> edit_file -> verify`, and use targeted reads over full-file reads.
 
 ## Architecture
 
package/dist/hook.mjs
CHANGED
@@ -54,7 +54,7 @@ function normalizeOutput(text, opts = {}) {
 }
 
 // hook.mjs
-import { readFileSync } from "node:fs";
+import { readFileSync, statSync } from "node:fs";
 import { resolve } from "node:path";
 import { homedir } from "node:os";
 import { fileURLToPath } from "node:url";
@@ -91,13 +91,13 @@ var BINARY_EXT = /* @__PURE__ */ new Set([
 ]);
 var REVERSE_TOOL_HINTS = {
   "mcp__hex-line__read_file": "Read (file_path, offset, limit)",
-  "mcp__hex-line__edit_file": "Edit (
+  "mcp__hex-line__edit_file": "Edit (old_string, new_string, replace_all)",
   "mcp__hex-line__write_file": "Write (file_path, content)",
   "mcp__hex-line__grep_search": "Grep (pattern, path)",
   "mcp__hex-line__directory_tree": "Glob (pattern) or Bash(ls)",
   "mcp__hex-line__get_file_info": "Bash(stat/wc)",
   "mcp__hex-line__outline": "Read with offset/limit",
-  "mcp__hex-line__verify": "
+  "mcp__hex-line__verify": "Read (re-read file to check freshness)",
   "mcp__hex-line__changes": "Bash(git diff)",
   "mcp__hex-line__bulk_replace": "Edit (text rename/refactor across files)",
   "mcp__hex-line__setup_hooks": "Not available (hex-line disabled)"
@@ -114,10 +114,10 @@ var TOOL_HINTS = {
   stat: "mcp__hex-line__get_file_info (not stat/wc/file)",
   grep: "mcp__hex-line__grep_search (not grep/rg). Params: output, literal, context_before, context_after, multiline",
   sed: "mcp__hex-line__edit_file for hash edits, or mcp__hex-line__bulk_replace for text rename (not sed -i)",
-  diff: "mcp__hex-line__changes (not diff). Git
+  diff: "mcp__hex-line__changes (not diff). Git diff with change symbols",
   outline: "mcp__hex-line__outline (before reading large code files)",
   verify: "mcp__hex-line__verify (staleness / revision check without re-read)",
-  changes: "mcp__hex-line__changes (
+  changes: "mcp__hex-line__changes (git diff with change symbols)",
   bulk: "mcp__hex-line__bulk_replace (multi-file search-replace)",
   setup: "mcp__hex-line__setup_hooks (configure hooks for agents)"
 };
@@ -183,10 +183,31 @@ var CMD_PATTERNS = [
 var LINE_THRESHOLD = 50;
 var HEAD_LINES = 15;
 var TAIL_LINES = 15;
+var LARGE_FILE_BYTES = 15 * 1024;
+var LARGE_EDIT_CHARS = 1200;
 function extOf(filePath) {
   const dot = filePath.lastIndexOf(".");
   return dot !== -1 ? filePath.slice(dot).toLowerCase() : "";
 }
+function getFilePath(toolInput) {
+  return toolInput.file_path || toolInput.path || "";
+}
+function resolveToolPath(filePath) {
+  if (!filePath) return "";
+  if (filePath.startsWith("~/")) return resolve(homedir(), filePath.slice(2));
+  return resolve(process.cwd(), filePath);
+}
+function getFileSize(filePath) {
+  if (!filePath) return null;
+  try {
+    return statSync(resolveToolPath(filePath)).size;
+  } catch {
+    return null;
+  }
+}
+function isPartialRead(toolInput) {
+  return [toolInput.offset, toolInput.limit, toolInput.start_line, toolInput.end_line, toolInput.ranges].some((value) => value !== void 0 && value !== null && value !== "");
+}
 function detectCommandType(cmd) {
   for (const [re, type] of CMD_PATTERNS) {
     if (re.test(cmd)) return type;
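The helpers added above feed the new size-aware PreToolUse gate: partial reads and files at or under `LARGE_FILE_BYTES` pass through, while large or unknown-size full reads are redirected to hex-line tools. A condensed standalone sketch of that decision (`shouldRedirectRead` is a hypothetical name introduced here; the shipped hook inlines this logic and calls `process.exit(0)` or `block(...)` directly):

```javascript
const LARGE_FILE_BYTES = 15 * 1024;

// Mirrors the hook's isPartialRead: any explicit offset/limit/range counts.
function isPartialRead(toolInput) {
  return [toolInput.offset, toolInput.limit, toolInput.start_line, toolInput.end_line, toolInput.ranges]
    .some((value) => value !== undefined && value !== null && value !== "");
}

function shouldRedirectRead(toolInput, fileSize) {
  // Partial reads are cheap: let them through.
  if (isPartialRead(toolInput)) return false;
  // Small files (<= 15 KB) are cheap even when read in full.
  if (fileSize !== null && fileSize <= LARGE_FILE_BYTES) return false;
  // Large or unknown-size full reads get redirected to hex-line tools.
  return true;
}
```

Note the asymmetry: a `null` size (unreadable or missing file) is treated as expensive, so the redirect is the safe default.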
@@ -249,7 +270,8 @@ function handlePreToolUse(data) {
   }
   const hintKey = TOOL_REDIRECT_MAP[toolName];
   if (hintKey) {
-    const filePath = toolInput
+    const filePath = getFilePath(toolInput);
+    const fileSize = getFileSize(filePath);
     if (BINARY_EXT.has(extOf(filePath))) {
       process.exit(0);
     }
@@ -272,10 +294,30 @@ function handlePreToolUse(data) {
         process.exit(0);
       }
     }
-
-
-
-
+    if (toolName === "Read") {
+      if (isPartialRead(toolInput) || fileSize !== null && fileSize <= LARGE_FILE_BYTES) {
+        process.exit(0);
+      }
+      const target = filePath ? `Use mcp__hex-line__outline or mcp__hex-line__read_file with path="${filePath}"` : "Use mcp__hex-line__directory_tree or mcp__hex-line__read_file";
+      block(target, "For large or unknown full reads: call outline first, then read_file with offset/limit or ranges. Do not use built-in Read here.");
+    }
+    if (toolName === "Edit") {
+      const oldText = String(toolInput.old_string || "");
+      const isLargeEdit = Boolean(toolInput.replace_all) || oldText.length > LARGE_EDIT_CHARS || fileSize !== null && fileSize > LARGE_FILE_BYTES;
+      if (!isLargeEdit) {
+        process.exit(0);
+      }
+      const target = filePath ? `Use mcp__hex-line__grep_search or mcp__hex-line__read_file, then mcp__hex-line__edit_file with path="${filePath}"` : "Use mcp__hex-line__grep_search or mcp__hex-line__read_file, then mcp__hex-line__edit_file";
+      block(target, "For large or repeated edits: locate anchors/checksums first, then call edit_file once with batched edits.");
+    }
+    if (toolName === "Write") {
+      const pathNote = filePath ? ` with path="${filePath}"` : "";
+      block(`Use mcp__hex-line__write_file${pathNote}`, TOOL_HINTS.Write);
+    }
+    if (toolName === "Grep") {
+      const pathNote = filePath ? ` with path="${filePath}"` : "";
+      block(`Use mcp__hex-line__grep_search${pathNote}`, TOOL_HINTS.Grep);
+    }
   }
   if (toolName === "Bash") {
     const command = (toolInput.command || "").trim();
@@ -381,22 +423,8 @@ function handleSessionStart() {
     } catch {
     }
   }
-
-
-    process.exit(0);
-  }
-  const seen = /* @__PURE__ */ new Set();
-  const lines = [];
-  for (const hint of Object.values(TOOL_HINTS)) {
-    const tool = hint.split(" ")[0];
-    if (!seen.has(tool)) {
-      seen.add(tool);
-      lines.push(`- ${hint}`);
-    }
-  }
-  lines.push("Exceptions: images, PDFs, notebooks, .claude/settings.json, .claude/settings.local.json \u2192 built-in Read; Glob always OK");
-  lines.push("Bash OK for: npm/node/git/docker/curl, pipes, scripts");
-  const msg = "Hex-line MCP available. Workflow:\n- Discovery: read_file, grep_search, outline, directory_tree\n- Same-file edits: prefer ONE edit_file call per file, carry revision/base_revision\n- Hash edits: edit_file (set_line, replace_lines, insert_after, replace_between)\n- Large rewrites: replace_between instead of reciting old blocks\n- Text rename: bulk_replace (multi-file search-replace)\n- Verify staleness: verify before considering reread\n- Write new: write_file\n" + lines.join("\n");
+  const prefix = styleActive ? "Hex-line MCP available. Output style active.\n" : "Hex-line MCP available.\n";
+  const msg = prefix + "Call hex-line tools directly. Do not use ToolSearch for hex-line tools.\nWorkflow:\n- Discovery: outline for large code files, read_file for targeted reads, grep_search for symbol/text lookup\n- Read cheaply: prefer offset/limit or ranges; avoid full-file Read on large files\n- Edit safely: read/grep first, then one batched edit_file call per file with base_revision when available\n- Verify before reread: use verify to check checksums or revision freshness\n- Multi-file rename/refactor: use bulk_replace\n- New files: use write_file\nExceptions: images, PDFs, notebooks, .claude/settings.json, .claude/settings.local.json use built-in Read. Glob is always OK.";
   process.stdout.write(JSON.stringify({ systemMessage: msg }));
   process.exit(0);
 }
package/dist/server.mjs
CHANGED
@@ -6,7 +6,7 @@ import { dirname as dirname4 } from "node:path";
 import { z as z2 } from "zod";
 
 // ../hex-common/src/runtime/mcp-bootstrap.mjs
-async function createServerRuntime({ name, version: version2
+async function createServerRuntime({ name, version: version2 }) {
   let McpServer, StdioServerTransport2;
   try {
     ({ McpServer } = await import("@modelcontextprotocol/sdk/server/mcp.js"));
@@ -14,11 +14,16 @@ async function createServerRuntime({ name, version: version2, installDir }) {
   } catch {
     process.stderr.write(
       `${name}: @modelcontextprotocol/sdk not found.
-Run:
+Run: npm install @modelcontextprotocol/sdk
 `
     );
     process.exit(1);
   }
+  const shutdown = () => {
+    process.exit(0);
+  };
+  process.on("SIGTERM", shutdown);
+  process.on("SIGINT", shutdown);
   return {
     server: new McpServer({ name, version: version2 }),
     StdioServerTransport: StdioServerTransport2
@@ -250,6 +255,8 @@ function listDirectory(dirPath, opts = {}) {
 }
 var MAX_OUTPUT_CHARS = 8e4;
 var MAX_DIFF_CHARS = 3e4;
+var MAX_BULK_OUTPUT_CHARS = 3e4;
+var MAX_PER_FILE_DIFF_LINES = 50;
 function readText(filePath) {
   return readFileSync(filePath, "utf-8").replace(/\r\n/g, "\n");
 }
@@ -340,7 +347,7 @@ function getGraphDB(filePath) {
   try {
     const projectRoot = findProjectRoot(filePath);
     if (!projectRoot) return null;
-    const dbPath = join3(projectRoot, ".codegraph", "index.db");
+    const dbPath = join3(projectRoot, ".hex-skills/codegraph", "index.db");
     if (!existsSync2(dbPath)) return null;
     if (_dbs.has(dbPath)) return _dbs.get(dbPath);
     const require2 = createRequire(import.meta.url);
@@ -454,7 +461,7 @@ function getRelativePath(filePath) {
 function findProjectRoot(filePath) {
   let dir = dirname2(filePath);
   for (let i = 0; i < 10; i++) {
-    if (existsSync2(join3(dir, ".codegraph", "index.db"))) return dir;
+    if (existsSync2(join3(dir, ".hex-skills/codegraph", "index.db"))) return dir;
     const parent = dirname2(dir);
     if (parent === dir) break;
     dir = parent;
@@ -662,6 +669,24 @@ function buildRangeChecksum(snapshot, startLine, endLine) {
 
 // lib/read.mjs
 var DEFAULT_LIMIT = 2e3;
+function parseRangeEntry(entry, total) {
+  if (typeof entry === "string") {
+    const match = entry.trim().match(/^(\d+)(?:-(\d*)?)?$/);
+    if (!match) throw new Error(`Invalid range "${entry}". Use "10", "10-25", or "10-"`);
+    const start2 = Number(match[1]);
+    const end2 = match[2] === void 0 || match[2] === "" ? total : Number(match[2]);
+    return { start: start2, end: end2 };
+  }
+  if (!entry || typeof entry !== "object") {
+    throw new Error("ranges entries must be strings or {start,end} objects");
+  }
+  const start = Number(entry.start ?? 1);
+  const end = entry.end === void 0 || entry.end === null ? total : Number(entry.end);
+  if (!Number.isFinite(start) || !Number.isFinite(end)) {
+    throw new Error("ranges entries must contain numeric start/end values");
+  }
+  return { start, end };
+}
 function readFile2(filePath, opts = {}) {
   filePath = normalizePath(filePath);
   const real = validatePath(filePath);
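Restating `parseRangeEntry` from the hunk above as a standalone snippet makes the accepted input shapes easy to check; `total` (the file's line count) caps open-ended ranges:

```javascript
// Verbatim copy of parseRangeEntry from the diff above, for illustration.
function parseRangeEntry(entry, total) {
  if (typeof entry === "string") {
    const match = entry.trim().match(/^(\d+)(?:-(\d*)?)?$/);
    if (!match) throw new Error(`Invalid range "${entry}". Use "10", "10-25", or "10-"`);
    const start2 = Number(match[1]);
    const end2 = match[2] === void 0 || match[2] === "" ? total : Number(match[2]);
    return { start: start2, end: end2 };
  }
  if (!entry || typeof entry !== "object") {
    throw new Error("ranges entries must be strings or {start,end} objects");
  }
  const start = Number(entry.start ?? 1);
  const end = entry.end === void 0 || entry.end === null ? total : Number(entry.end);
  if (!Number.isFinite(start) || !Number.isFinite(end)) {
    throw new Error("ranges entries must contain numeric start/end values");
  }
  return { start, end };
}

// Accepted shapes, assuming a 100-line file:
parseRangeEntry("10-25", 100);              // { start: 10, end: 25 }
parseRangeEntry("40-", 100);                // { start: 40, end: 100 } (open-ended)
parseRangeEntry({ start: 5, end: 9 }, 100); // { start: 5, end: 9 }
```

The caller in `readFile2` then clamps each result to `[1, total]`, so out-of-bounds entries degrade gracefully instead of throwing.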
@@ -679,10 +704,13 @@ ${text}
   const total = lines.length;
   let ranges;
   if (opts.ranges && opts.ranges.length > 0) {
-    ranges = opts.ranges.map((
-
-
-
+    ranges = opts.ranges.map((entry) => {
+      const parsed = parseRangeEntry(entry, total);
+      return {
+        start: Math.max(1, parsed.start),
+        end: Math.min(total, parsed.end)
+      };
+    });
   } else {
     const startLine = Math.max(1, opts.offset || 1);
     const maxLines = opts.limit && opts.limit > 0 ? opts.limit : DEFAULT_LIMIT;
@@ -712,24 +740,22 @@ ${text}
     range.end = actualEnd;
     parts.push(formatted.join("\n"));
     const cs = rangeChecksum(lineHashes, range.start, actualEnd);
-    parts.push(`
-checksum: ${cs}`);
+    parts.push(`checksum: ${cs}`);
     if (cappedAtLine) break;
   }
   const sizeKB = (stat.size / 1024).toFixed(1);
-  const
-
-  let header = `File: ${filePath} (${total} lines, ${sizeKB}KB, ${ago})`;
+  const ago = relativeTime(stat.mtime);
+  let meta = `${total} lines, ${sizeKB}KB, ${ago}`;
   if (ranges.length === 1) {
     const r = ranges[0];
     if (r.start > 1 || r.end < total) {
-
+      meta += `, showing ${r.start}-${r.end}`;
     }
     if (r.end < total) {
-
+      meta += `, ${total - r.end} more below`;
     }
   }
-  const db = getGraphDB(real);
+  const db = opts.includeGraph ? getGraphDB(real) : null;
   const relFile = db ? getRelativePath(real) : null;
   let graphLine = "";
   if (db && relFile) {
@@ -743,18 +769,12 @@ checksum: ${cs}`);
 Graph: ${items.join(" | ")}`;
     }
   }
-  let result =
+  let result = `File: ${filePath}${graphLine}
+meta: ${meta}
 revision: ${snapshot.revision}
 file: ${snapshot.fileChecksum}
 
-
-${parts.join("\n")}
-\`\`\``;
-  if (total > 200 && (!opts.offset || opts.offset <= 1) && !cappedAtLine) {
-    result += `
-
-\u26A1 Tip: This file has ${total} lines. Use outline first, then read_file with offset/limit for 75% fewer tokens.`;
-  }
+${parts.join("\n\n")}`;
   if (cappedAtLine) {
     result += `
 
@@ -787,6 +807,26 @@ function buildErrorSnippet(lines, centerIdx, radius = 5) {
   }).join("\n");
   return { start: start + 1, end, text };
 }
+function stripAnchorOrDiffPrefix(line) {
+  let next = line;
+  next = next.replace(/^\s*(?:>>| )?[a-z2-7]{2}\.\d+\t/, "");
+  next = next.replace(/^.+:(?:>>| )[a-z2-7]{2}\.\d+\t/, "");
+  next = next.replace(/^[ +-]\d+\|\s?/, "");
+  return next;
+}
+function sanitizeEditText(text) {
+  const original = String(text ?? "");
+  const hadTrailingNewline = original.endsWith("\n");
+  let lines = original.split("\n");
+  const nonEmpty = lines.filter((line) => line.length > 0);
+  if (nonEmpty.length > 0 && nonEmpty.every((line) => /^\+(?!\+)/.test(line))) {
+    lines = lines.map((line) => line.startsWith("+") && !line.startsWith("++") ? line.slice(1) : line);
+  }
+  lines = lines.map(stripAnchorOrDiffPrefix);
+  let cleaned = lines.join("\n");
+  if (hadTrailingNewline && !cleaned.endsWith("\n")) cleaned += "\n";
+  return cleaned;
+}
 function findLine(lines, lineNum, expectedTag, hashIndex) {
   const idx = lineNum - 1;
   if (idx < 0 || idx >= lines.length) {
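The new `sanitizeEditText` above defends against a model pasting diff-prefixed or anchor-prefixed text straight back into an edit. Restated standalone with a few worked inputs:

```javascript
// Verbatim copies of the two helpers from the diff above, for illustration.
function stripAnchorOrDiffPrefix(line) {
  let next = line;
  next = next.replace(/^\s*(?:>>| )?[a-z2-7]{2}\.\d+\t/, ""); // "ab.1<TAB>" hash anchors
  next = next.replace(/^.+:(?:>>| )[a-z2-7]{2}\.\d+\t/, "");  // "file:ab.1<TAB>" grep output
  next = next.replace(/^[ +-]\d+\|\s?/, "");                  // " 12| " plain-read prefixes
  return next;
}

function sanitizeEditText(text) {
  const original = String(text ?? "");
  const hadTrailingNewline = original.endsWith("\n");
  let lines = original.split("\n");
  const nonEmpty = lines.filter((line) => line.length > 0);
  // If every non-empty line looks like a diff addition ("+..."), drop the markers.
  if (nonEmpty.length > 0 && nonEmpty.every((line) => /^\+(?!\+)/.test(line))) {
    lines = lines.map((line) => line.startsWith("+") && !line.startsWith("++") ? line.slice(1) : line);
  }
  lines = lines.map(stripAnchorOrDiffPrefix);
  let cleaned = lines.join("\n");
  if (hadTrailingNewline && !cleaned.endsWith("\n")) cleaned += "\n";
  return cleaned;
}

// Worked inputs:
sanitizeEditText("+const a = 1;\n+const b = 2;"); // "const a = 1;\nconst b = 2;"
sanitizeEditText("ab.1\tconst a = 1;");           // "const a = 1;"
sanitizeEditText(" 12| foo");                     // "foo"
```

The `(?!\+)` guard means lines starting with `++` (legal C-family code like `++i`) are left alone, so only uniform diff paste-throughs are stripped.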
@@ -902,6 +942,7 @@ function buildConflictMessage({
|
|
|
902
942
|
centerIdx,
|
|
903
943
|
changedRanges,
|
|
904
944
|
retryChecksum,
|
|
945
|
+
remaps,
|
|
905
946
|
details
|
|
906
947
|
}) {
|
|
907
948
|
const safeCenter = Math.max(0, Math.min(lines.length - 1, centerIdx));
|
|
@@ -914,6 +955,9 @@ file: ${fileChecksum}`;
|
|
|
914
955
|
changed_ranges: ${describeChangedRanges(changedRanges)}`;
|
|
915
956
|
if (retryChecksum) msg += `
|
|
916
957
|
retry_checksum: ${retryChecksum}`;
|
|
958
|
+
if (remaps?.length) msg += `
|
|
959
|
+
remapped_refs:
|
|
960
|
+
${remaps.map(({ from, to }) => `${from} -> ${to}`).join("\n")}`;
|
|
917
961
|
msg += `
|
|
918
962
|
|
|
919
963
|
${details}
|
|
@@ -948,6 +992,8 @@ function editFile(filePath, edits, opts = {}) {
|
|
|
948
992
|
const hadTrailingNewline = original.endsWith("\n");
|
|
949
993
|
const hashIndex = currentSnapshot.uniqueTagIndex;
|
|
950
994
|
let autoRebased = false;
|
|
995
|
+
const remaps = [];
|
|
996
|
+
const remapKeys = /* @__PURE__ */ new Set();
|
|
951
997
|
const anchored = [];
|
|
952
998
|
for (const e of edits) {
|
|
953
999
|
if (e.set_line || e.replace_lines || e.insert_after || e.replace_between) anchored.push(e);
|
|
@@ -1004,12 +1050,24 @@ function editFile(filePath, edits, opts = {}) {
|
|
|
1004
1050
|
centerIdx,
|
|
1005
1051
|
changedRanges: staleRevision && hasBaseSnapshot ? changedRanges : null,
|
|
1006
1052
|
retryChecksum,
|
|
1053
|
+
remaps,
|
|
1007
1054
|
details
|
|
1008
1055
|
});
|
|
1009
1056
|
};
|
|
1057
|
+
const trackRemap = (ref, idx) => {
|
|
1058
|
+
const actualRef = `${lineTag(fnv1a(lines[idx]))}.${idx + 1}`;
|
|
1059
|
+
const expectedRef = `${ref.tag}.${ref.line}`;
|
|
1060
|
+
if (actualRef === expectedRef) return;
|
|
1061
|
+
const key = `${expectedRef}->${actualRef}`;
|
|
1062
|
+
if (remapKeys.has(key)) return;
|
|
1063
|
+
remapKeys.add(key);
|
|
1064
|
+
remaps.push({ from: expectedRef, to: actualRef });
|
|
1065
|
+
};
|
|
1010
1066
|
const locateOrConflict = (ref, reason = "stale_anchor") => {
|
|
1011
1067
|
try {
|
|
1012
|
-
|
|
1068
|
+
const idx = findLine(lines, ref.line, ref.tag, hashIndex);
|
|
1069
|
+
trackRemap(ref, idx);
|
|
1070
|
+
return idx;
|
|
1013
1071
|
} catch (e) {
|
|
1014
1072
|
if (conflictPolicy !== "conservative" || !staleRevision) throw e;
|
|
1015
1073
|
const centerIdx = Math.max(0, Math.min(lines.length - 1, ref.line - 1));
|
|
@@ -1051,7 +1109,7 @@ function editFile(filePath, edits, opts = {}) {
|
|
|
1051
1109
|
lines.splice(idx, 1);
|
|
1052
1110
|
} else {
|
|
1053
1111
|
const origLine = [lines[idx]];
|
|
1054
|
-
const raw =
|
|
1112
|
+
const raw = sanitizeEditText(txt).split("\n");
|
|
1055
1113
|
const newLines = opts.restoreIndent ? restoreIndent(origLine, raw) : raw;
|
|
1056
1114
|
lines.splice(idx, 1, ...newLines);
|
|
1057
1115
|
}
|
|
@@ -1063,7 +1121,7 @@ function editFile(filePath, edits, opts = {}) {
|
|
|
1063
1121
|
if (typeof idx === "string") return idx;
|
|
1064
1122
|
const conflict = ensureRevisionContext(idx + 1, idx + 1, idx);
|
|
1065
1123
|
if (conflict) return conflict;
|
|
1066
|
-
let insertLines = e.insert_after.text.split("\n");
|
|
1124
|
+
let insertLines = sanitizeEditText(e.insert_after.text).split("\n");
|
|
1067
1125
|
if (opts.restoreIndent) insertLines = restoreIndent([lines[idx]], insertLines);
|
|
1068
1126
|
lines.splice(idx + 1, 0, ...insertLines);
|
|
1069
1127
|
continue;
|
|
@@ -1127,7 +1185,7 @@ Retry with fresh checksum ${actual}, or use set_line with hashes above.`
         lines.splice(si, ei - si + 1);
       } else {
         const origRange = lines.slice(si, ei + 1);
-        let newLines =
+        let newLines = sanitizeEditText(txt).split("\n");
         if (opts.restoreIndent) newLines = restoreIndent(origRange, newLines);
         lines.splice(si, ei - si + 1, ...newLines);
       }
@@ -1151,7 +1209,7 @@ Retry with fresh checksum ${actual}, or use set_line with hashes above.`
       const conflict = ensureRevisionContext(targetRange.start, targetRange.end, si);
       if (conflict) return conflict;
       const txt = e.replace_between.new_text;
-      let newLines =
+      let newLines = sanitizeEditText(txt ?? "").split("\n");
       const sliceStart = boundaryMode === "exclusive" ? si + 1 : si;
       const removeCount = boundaryMode === "exclusive" ? Math.max(0, ei - si - 1) : ei - si + 1;
       const origRange = lines.slice(sliceStart, sliceStart + removeCount);
@@ -1212,6 +1270,11 @@ file: ${nextSnapshot.fileChecksum}`;
     if (autoRebased && staleRevision && hasBaseSnapshot) {
       msg += `
 changed_ranges: ${describeChangedRanges(changedRanges)}`;
+    }
+    if (remaps.length > 0) {
+      msg += `
+remapped_refs:
+${remaps.map(({ from, to }) => `${from} -> ${to}`).join("\n")}`;
     }
     msg += `
 Updated ${filePath} (${content.split("\n").length} lines)`;
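The hunk above appends a `remapped_refs:` trailer to the edit result whenever auto-rebase relocated an anchor, one `from -> to` pair per remap. A minimal standalone sketch of that formatting (the `line#tag` ref shape in the demo values is our assumption, not taken from the diff):

```javascript
// Build the remapped_refs trailer the way the edit_file result message does:
// empty string when nothing moved, otherwise one "from -> to" line per remap.
function formatRemaps(remaps) {
  if (remaps.length === 0) return "";
  return "\nremapped_refs:\n" + remaps.map(({ from, to }) => `${from} -> ${to}`).join("\n");
}

console.log(formatRemaps([
  { from: "120#a1b2", to: "124#a1b2" },  // hypothetical line#hash refs
  { from: "200#c3d4", to: "205#c3d4" }
]));
```

Emitting nothing at all for the empty case keeps the success message unchanged when no anchors drifted.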
@@ -1327,9 +1390,7 @@ async function filesMode(pattern, target, opts) {
   if (code !== 0 && code !== null) throw new Error(`GREP_ERROR: rg exit ${code} \u2014 ${stderr.trim() || "unknown error"}`);
   const lines = stdout.trimEnd().split("\n").filter(Boolean);
   const normalized = lines.map((l) => l.replace(/\\/g, "/"));
-  return
-${normalized.join("\n")}
-\`\`\``;
+  return normalized.join("\n");
 }
 async function countMode(pattern, target, opts) {
   const realArgs = ["-c"];
@@ -1346,9 +1407,7 @@ async function countMode(pattern, target, opts) {
   if (code !== 0 && code !== null) throw new Error(`GREP_ERROR: rg exit ${code} \u2014 ${stderr.trim() || "unknown error"}`);
   const lines = stdout.trimEnd().split("\n").filter(Boolean);
   const normalized = lines.map((l) => l.replace(/\\/g, "/"));
-  return
-${normalized.join("\n")}
-\`\`\``;
+  return normalized.join("\n");
 }
 async function contentMode(pattern, target, opts, plain, totalLimit) {
   const realArgs = ["--json"];
@@ -1455,16 +1514,12 @@ async function contentMode(pattern, target, opts, plain, totalLimit) {
       if (totalLimit > 0 && matchCount >= totalLimit) {
         flushGroup();
         formatted.push(`--- total_limit reached (${totalLimit}) ---`);
-        return
-${formatted.join("\n")}
-\`\`\``;
+        return formatted.join("\n");
       }
     }
   }
   flushGroup();
-  return
-${formatted.join("\n")}
-\`\`\``;
+  return formatted.join("\n");
 }

 // lib/outline.mjs
@@ -1602,6 +1657,26 @@ function extractOutline(rootNode, config, sourceLines) {
   walk(rootNode, 0);
   return { entries, skippedRanges };
 }
+function fallbackOutline(sourceLines) {
+  const entries = [];
+  for (let index = 0; index < sourceLines.length; index++) {
+    const line = sourceLines[index];
+    const trimmed = line.trim();
+    if (!trimmed) continue;
+    const match = trimmed.match(
+      /^(?:export\s+)?(?:async\s+)?function\s+[\w$]+|^(?:export\s+)?(?:const|let|var)\s+[\w$]+\s*=|^(?:export\s+)?class\s+[\w$]+|^(?:export\s+)?interface\s+[\w$]+|^(?:export\s+)?type\s+[\w$]+\s*=|^(?:export\s+)?enum\s+[\w$]+|^(?:export\s+default\s+)?[\w$]+\s*=>/
+    );
+    if (!match) continue;
+    entries.push({
+      start: index + 1,
+      end: index + 1,
+      depth: 0,
+      text: trimmed.slice(0, 120),
+      name: trimmed.match(/([\w$]+)/)?.[1] || null
+    });
+  }
+  return entries;
+}
 async function outlineFromContent(content, ext) {
   const config = LANG_CONFIGS[ext];
   const grammar = grammarForExtension(ext);
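The new `fallbackOutline` scans lines with a declaration-matching regex when the tree-sitter parse yields no structural entries. A trimmed standalone sketch of the same heuristic (regex shortened to three of the alternatives from the diff; the helper name and sample input are ours):

```javascript
// Heuristic outline: flag lines that look like top-level declarations.
// Subset of the fallbackOutline regex from the diff above.
const DECL_RE = /^(?:export\s+)?(?:async\s+)?function\s+[\w$]+|^(?:export\s+)?(?:const|let|var)\s+[\w$]+\s*=|^(?:export\s+)?class\s+[\w$]+/;

function heuristicOutline(sourceLines) {
  const entries = [];
  for (let i = 0; i < sourceLines.length; i++) {
    const trimmed = sourceLines[i].trim();
    if (!trimmed || !DECL_RE.test(trimmed)) continue;
    entries.push({
      start: i + 1,                          // 1-indexed, single-line entries only
      end: i + 1,
      text: trimmed.slice(0, 120),
      name: trimmed.match(/([\w$]+)/)?.[1] || null  // first token, as in the diff
    });
  }
  return entries;
}

const sample = [
  "import fs from 'node:fs';",
  "export function readConfig(path) {",
  "  return fs.readFileSync(path, 'utf-8');",
  "}",
  "const cache = new Map();"
];
console.log(heuristicOutline(sample).map((e) => `${e.start}: ${e.name}`));
// → [ '2: export', '5: const' ]
```

Note the `name` field is just the first identifier-like token on the line (so `export` for an exported function), matching what the diffed code does.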
@@ -1618,8 +1693,9 @@ async function outlineFromContent(content, ext) {
   const tree = parser.parse(content);
   return extractOutline(tree.rootNode, config, sourceLines);
 }
-function formatOutline(entries, skippedRanges, sourceLineCount, db, relFile) {
+function formatOutline(entries, skippedRanges, sourceLineCount, db, relFile, note = "") {
   const lines = [];
+  if (note) lines.push(note, "");
   if (skippedRanges.length > 0) {
     const first = skippedRanges[0].start;
     const last = skippedRanges[skippedRanges.length - 1].end;
@@ -1645,11 +1721,13 @@ async function fileOutline(filePath) {
   }
   const content = readUtf8Normalized(real);
   const result = await outlineFromContent(content, ext);
+  const entries = result.entries.length > 0 ? result.entries : fallbackOutline(content.split("\n"));
+  const note = result.entries.length > 0 || entries.length === 0 ? "" : "Fallback outline: heuristic symbols shown because parser returned no structural entries.";
   const db = getGraphDB(real);
   const relFile = db ? getRelativePath(real) : null;
   return `File: ${filePath}

-${formatOutline(
+${formatOutline(entries, result.skippedRanges, content.split("\n").length, db, relFile, note)}`;
 }

 // lib/verify.mjs
@@ -1964,19 +2042,18 @@ function fileInfo(filePath) {
 }

 // lib/setup.mjs
-import { readFileSync as readFileSync4, writeFileSync as writeFileSync2, existsSync as existsSync5, mkdirSync } from "node:fs";
-import { resolve as resolve6, dirname as dirname3 } from "node:path";
+import { readFileSync as readFileSync4, writeFileSync as writeFileSync2, existsSync as existsSync5, mkdirSync, copyFileSync } from "node:fs";
+import { resolve as resolve6, dirname as dirname3, join as join5 } from "node:path";
 import { fileURLToPath } from "node:url";
 import { homedir } from "node:os";
+var STABLE_HOOK_DIR = resolve6(homedir(), ".claude", "hex-line");
+var STABLE_HOOK_PATH = join5(STABLE_HOOK_DIR, "hook.mjs").replace(/\\/g, "/");
+var HOOK_COMMAND = `node ${STABLE_HOOK_PATH}`;
 var __filename = fileURLToPath(import.meta.url);
 var __dirname = dirname3(__filename);
-var
-var
-var HOOK_SIGNATURE = "hex-line
-var NPX_MARKERS = ["_npx", "npx-cache", ".npm/_npx"];
-function isEphemeralInstall(scriptPath) {
-  return NPX_MARKERS.some((m) => scriptPath.includes(m));
-}
+var SOURCE_HOOK = resolve6(__dirname, "..", "hook.mjs");
+var DIST_HOOK = resolve6(__dirname, "hook.mjs");
+var HOOK_SIGNATURE = "hex-line";
 var CLAUDE_HOOKS = {
   SessionStart: {
     matcher: "*",
@@ -2038,7 +2115,7 @@ function writeHooksToFile(settingsPath, label) {
     return `Claude (${label}): already configured`;
   }
   writeJson(settingsPath, config);
-  return `Claude (${label}): hooks -> ${
+  return `Claude (${label}): hooks -> ${STABLE_HOOK_PATH} OK`;
 }
 function cleanLocalHooks() {
   const localPath = resolve6(process.cwd(), ".claude/settings.local.json");
@@ -2084,10 +2161,14 @@ function installOutputStyle() {
   return msg;
 }
 function setupClaude() {
-  if (isEphemeralInstall(HOOK_SCRIPT)) {
-    return "Claude: SKIPPED \u2014 hook.mjs is in npx cache (ephemeral). Install permanently: npm i -g @levnikolaevich/hex-line-mcp, then re-run setup_hooks.";
-  }
   const results = [];
+  const hookSource = existsSync5(DIST_HOOK) ? DIST_HOOK : SOURCE_HOOK;
+  if (!existsSync5(hookSource)) {
+    return "Claude: FAILED \u2014 hook.mjs not found. Reinstall @levnikolaevich/hex-line-mcp.";
+  }
+  mkdirSync(STABLE_HOOK_DIR, { recursive: true });
+  copyFileSync(hookSource, STABLE_HOOK_PATH);
+  results.push(`hook.mjs -> ${STABLE_HOOK_PATH}`);
   const globalPath = resolve6(homedir(), ".claude/settings.json");
   results.push(writeHooksToFile(globalPath, "global"));
   results.push(cleanLocalHooks());
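The setup change above replaces the old "skip if running from the npx cache" check with an unconditional copy: `hook.mjs` is copied from the installed package (dist first, source fallback) to a stable per-user path, so the registered hook command keeps working after upgrades or cache eviction. A sketch of that copy-to-stable-path step, demoed in a temp directory with illustrative paths (not the real install layout):

```javascript
import { mkdirSync, copyFileSync, existsSync, writeFileSync, mkdtempSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Copy the first candidate file that exists to a stable directory,
// creating the directory first. Mirrors the dist-then-source fallback.
function installStableHook(candidates, stableDir, name) {
  const source = candidates.find((p) => existsSync(p));
  if (!source) throw new Error("hook source not found");
  mkdirSync(stableDir, { recursive: true });
  const dest = join(stableDir, name);
  copyFileSync(source, dest);
  return dest;
}

// Demo entirely inside a temp dir.
const root = mkdtempSync(join(tmpdir(), "hook-"));
const src = join(root, "hook.mjs");
writeFileSync(src, "// hex-line hook\n");
const dest = installStableHook([join(root, "missing.mjs"), src], join(root, "stable"), "hook.mjs");
console.log(readFileSync(dest, "utf-8").startsWith("// hex-line")); // true
```

Registering the stable path (rather than the package's own path) is what decouples the hook config from the package's install location.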
@@ -2265,8 +2346,8 @@ Summary: ${summary}`);
 }

 // lib/bulk-replace.mjs
-import { writeFileSync as writeFileSync3, readdirSync as readdirSync3 } from "node:fs";
-import { resolve as resolve7, relative as relative3, join as
+import { writeFileSync as writeFileSync3, readdirSync as readdirSync3, renameSync, unlinkSync } from "node:fs";
+import { resolve as resolve7, relative as relative3, join as join6 } from "node:path";
 var ignoreMod;
 try {
   ignoreMod = await import("ignore");
@@ -2282,7 +2363,7 @@ function walkFiles(dir, rootDir, ig) {
   }
   for (const e of entries) {
     if (e.name === ".git" || e.name === "node_modules") continue;
-    const full =
+    const full = join6(dir, e.name);
     const rel = relative3(rootDir, full).replace(/\\/g, "/");
     if (ig && ig.ignores(rel)) continue;
     if (e.isDirectory()) {
@@ -2294,21 +2375,21 @@ function walkFiles(dir, rootDir, ig) {
   return results;
 }
 function globMatch(filename, pattern) {
-  const re = pattern.replace(/\./g, "\\.").replace(/\*\*/g, "\0").replace(/\*/g, "[^/]*").replace(/\0/g, ".*").replace(/\?/g, ".");
+  const re = pattern.replace(/\./g, "\\.").replace(/\{([^}]+)\}/g, (_, alts) => "(" + alts.split(",").join("|") + ")").replace(/\*\*/g, "\0").replace(/\*/g, "[^/]*").replace(/\0/g, ".*").replace(/\?/g, ".");
   return new RegExp("^" + re + "$").test(filename);
 }
 function loadGitignore2(rootDir) {
   if (!ignoreMod) return null;
   const ig = (ignoreMod.default || ignoreMod)();
   try {
-    const content = readText(
+    const content = readText(join6(rootDir, ".gitignore"));
     ig.add(content);
   } catch {
   }
   return ig;
 }
 function bulkReplace(rootDir, globPattern, replacements, opts = {}) {
-  const { dryRun = false, maxFiles = 100 } = opts;
+  const { dryRun = false, maxFiles = 100, format = "compact" } = opts;
   const abs = resolve7(normalizePath(rootDir));
   const ig = loadGitignore2(abs);
   const allFiles = walkFiles(abs, abs, ig);
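The updated `globMatch` adds brace alternation (`{md,mjs}` becomes `(md|mjs)`) before the existing `**`/`*`/`?` rewrites, which is what makes the default glob `**/*.{md,mjs,json,yml,ts,js}` actually work. A standalone copy of that glob-to-regex conversion with a quick check:

```javascript
// Glob -> RegExp, as in the updated globMatch: escape dots, expand {a,b}
// into (a|b), map ** to .* and * to [^/]* via a \0 placeholder so the
// single-star rewrite cannot clobber the double-star one.
function globToRegExp(pattern) {
  const re = pattern
    .replace(/\./g, "\\.")
    .replace(/\{([^}]+)\}/g, (_, alts) => "(" + alts.split(",").join("|") + ")")
    .replace(/\*\*/g, "\0")
    .replace(/\*/g, "[^/]*")
    .replace(/\0/g, ".*")
    .replace(/\?/g, ".");
  return new RegExp("^" + re + "$");
}

const re = globToRegExp("**/*.{md,mjs}");
console.log(re.test("docs/guide.md"));    // true
console.log(re.test("src/a/b/tool.mjs")); // true
console.log(re.test("src/app.ts"));       // false
```

The placeholder trick matters because replacing `*` first would also consume both stars of `**`; routing `**` through `\0` keeps the two wildcard meanings distinct.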
@@ -2321,55 +2402,92 @@ function bulkReplace(rootDir, globPattern, replacements, opts = {}) {
     return `TOO_MANY_FILES: Found ${files.length} files, max_files is ${maxFiles}. Use more specific glob or increase max_files.`;
   }
   const results = [];
-  let changed = 0, skipped = 0, errors = 0;
-  const MAX_OUTPUT2 = MAX_OUTPUT_CHARS;
-  let totalChars = 0;
+  let changed = 0, skipped = 0, errors = 0, totalReplacements = 0;
   for (const file of files) {
     try {
       const original = readText(file);
       let content = original;
+      let replacementCount = 0;
       for (const { old: oldText, new: newText } of replacements) {
-
+        if (oldText === newText) continue;
+        const parts = content.split(oldText);
+        replacementCount += parts.length - 1;
+        content = parts.join(newText);
       }
       if (content === original) {
         skipped++;
         continue;
       }
-      const diff = simpleDiff(original.split("\n"), content.split("\n"));
       if (!dryRun) {
-
+        const tempPath = `${file}.hexline-tmp-${process.pid}`;
+        try {
+          writeFileSync3(tempPath, content, "utf-8");
+          renameSync(tempPath, file);
+        } catch (error) {
+          try {
+            unlinkSync(tempPath);
+          } catch {
+          }
+          throw error;
+        }
       }
-      const relPath =
-
-${diff || "(no visible diff)"}`);
+      const relPath = relative3(abs, file).replace(/\\/g, "/");
+      totalReplacements += replacementCount;
       changed++;
-
-
-
-
-
+      if (format === "full") {
+        const diff = simpleDiff(original.split("\n"), content.split("\n"));
+        let diffText = diff || "(no visible diff)";
+        const diffLines3 = diffText.split("\n");
+        if (diffLines3.length > MAX_PER_FILE_DIFF_LINES) {
+          const omitted = diffLines3.length - MAX_PER_FILE_DIFF_LINES;
+          diffText = diffLines3.slice(0, MAX_PER_FILE_DIFF_LINES).join("\n") + `
+--- ${omitted} lines omitted ---`;
+        }
+        results.push(`--- ${relPath}: ${replacementCount} replacements
+${diffText}`);
+      } else {
+        results.push(`--- ${relPath}: ${replacementCount} replacements`);
       }
     } catch (e) {
       results.push(`ERROR: ${file}: ${e.message}`);
       errors++;
     }
   }
-  const header = `Bulk replace: ${changed} files changed, ${skipped} skipped, ${errors} errors (dry_run: ${dryRun})`;
-
+  const header = `Bulk replace: ${changed} files changed (${totalReplacements} replacements), ${skipped} skipped, ${errors} errors (dry_run: ${dryRun})`;
+  let output = results.length ? `${header}

 ${results.join("\n\n")}` : header;
+  if (output.length > MAX_BULK_OUTPUT_CHARS) {
+    output = output.slice(0, MAX_BULK_OUTPUT_CHARS) + `
+OUTPUT_CAPPED: Output exceeded ${MAX_BULK_OUTPUT_CHARS} chars.`;
+  }
+  return output;
 }

 // server.mjs
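Two techniques in the bulk-replace hunk above are worth isolating: literal replacement via `split`/`join`, which yields the per-file occurrence count for free (splitting on `oldText` gives `count + 1` parts), and the crash-safe write, which writes to a sibling temp file and renames it over the target. A sketch of both, run against a temp directory (the temp-file suffix is illustrative):

```javascript
import { writeFileSync, renameSync, unlinkSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Literal replace + count: no regex escaping needed, and the number of
// split parts minus one is exactly the number of occurrences replaced.
function replaceCounted(content, oldText, newText) {
  const parts = content.split(oldText);
  return { content: parts.join(newText), count: parts.length - 1 };
}

// Atomic-ish write: write the full new content to a temp file, then rename
// over the target so readers never observe a half-written file; clean up
// the temp file on failure.
function writeAtomic(file, content) {
  const tempPath = `${file}.tmp-${process.pid}`;
  try {
    writeFileSync(tempPath, content, "utf-8");
    renameSync(tempPath, file);
  } catch (error) {
    try { unlinkSync(tempPath); } catch {}
    throw error;
  }
}

const dir = mkdtempSync(join(tmpdir(), "bulk-"));
const target = join(dir, "a.txt");
writeAtomic(target, "foo bar foo");
const { content, count } = replaceCounted(readFileSync(target, "utf-8"), "foo", "baz");
writeAtomic(target, content);
console.log(count, readFileSync(target, "utf-8")); // 2 baz bar baz
```

Keeping the temp file on the same directory (and thus the same filesystem) is what lets `rename` replace the target in a single step.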
-var version = true ? "1.
+var version = true ? "1.5.0" : (await null).createRequire(import.meta.url)("./package.json").version;
 var { server, StdioServerTransport } = await createServerRuntime({
   name: "hex-line-mcp",
-  version
-  installDir: "mcp/hex-line-mcp"
+  version
 });
 var replacementPairsSchema = z2.array(
   z2.object({ old: z2.string().min(1), new: z2.string() })
 ).min(1);
+var readRangeSchema = z2.union([
+  z2.string(),
+  z2.object({
+    start: flexNum().optional(),
+    end: flexNum().optional()
+  })
+]);
+function parseReadRanges(rawRanges) {
+  if (!rawRanges) return void 0;
+  const parsed = Array.isArray(rawRanges) ? rawRanges : JSON.parse(rawRanges);
+  if (!Array.isArray(parsed) || parsed.length === 0) {
+    throw new Error("ranges must be a non-empty array");
+  }
+  return parsed;
+}
 function coerceEdit(e) {
   if (!e || typeof e !== "object" || Array.isArray(e)) return e;
   if (e.set_line || e.replace_lines || e.insert_after || e.replace_between || e.replace) return e;
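`parseReadRanges` above only validates shape — it accepts either an array or a JSON string and rejects empty or non-array input, passing entries through untouched. A standalone copy, plus a hypothetical normalizer (not in the diff) showing how a consumer might collapse the two accepted entry shapes (`"10-25"` strings and `{start, end}` objects) into one:

```javascript
// parseReadRanges as in the diff: accept an array or a JSON string,
// reject empty/non-array input, pass entries through untouched.
function parseReadRanges(rawRanges) {
  if (!rawRanges) return undefined;
  const parsed = Array.isArray(rawRanges) ? rawRanges : JSON.parse(rawRanges);
  if (!Array.isArray(parsed) || parsed.length === 0) {
    throw new Error("ranges must be a non-empty array");
  }
  return parsed;
}

// Hypothetical normalizer (ours, not from the diff): one shape for slicing.
function normalizeRange(entry) {
  if (typeof entry === "string") {
    const [start, end] = entry.split("-").map(Number);
    return { start, end: end ?? start };
  }
  return { start: entry.start ?? 1, end: entry.end ?? Infinity };
}

console.log(parseReadRanges('["10-25", {"start": 40, "end": 55}]').map(normalizeRange));
// → [ { start: 10, end: 25 }, { start: 40, end: 55 } ]
```

Accepting a JSON string alongside a real array matches the tool's other inputs, which tolerate clients that stringify structured parameters.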
@@ -2390,23 +2508,26 @@ function coerceEdit(e) {
 }
 server.registerTool("read_file", {
   title: "Read File",
-  description: "Read
+  description: "Read file lines with hashes, checksums, and revision metadata.",
   inputSchema: z2.object({
     path: z2.string().optional().describe("File or directory path"),
     paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
     offset: flexNum().describe("Start line (1-indexed, default: 1)"),
     limit: flexNum().describe("Max lines (default: 2000, 0 = all)"),
+    ranges: z2.union([z2.string(), z2.array(readRangeSchema)]).optional().describe('Line ranges, e.g. ["10-25", {"start":40,"end":55}]'),
+    include_graph: flexBool().describe("Include graph annotations"),
     plain: flexBool().describe("Omit hashes (lineNum|content)")
   }),
   annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true }
 }, async (rawParams) => {
-  const { path: p, paths: multi, offset, limit, plain } = coerceParams(rawParams);
+  const { path: p, paths: multi, offset, limit, ranges: rawRanges, include_graph, plain } = coerceParams(rawParams);
   try {
+    const ranges = parseReadRanges(rawRanges);
     if (multi && multi.length > 0 && !p) {
       const results = [];
       for (const fp of multi) {
         try {
-          results.push(readFile2(fp, { offset, limit, plain }));
+          results.push(readFile2(fp, { offset, limit, ranges, includeGraph: include_graph, plain }));
         } catch (e) {
           results.push(`File: ${fp}

@@ -2416,14 +2537,14 @@ ERROR: ${e.message}`);
       return { content: [{ type: "text", text: results.join("\n\n---\n\n") }] };
     }
     if (!p) throw new Error("Either 'path' or 'paths' is required");
-    return { content: [{ type: "text", text: readFile2(p, { offset, limit, plain }) }] };
+    return { content: [{ type: "text", text: readFile2(p, { offset, limit, ranges, includeGraph: include_graph, plain }) }] };
   } catch (e) {
     return { content: [{ type: "text", text: e.message }], isError: true };
   }
 });
 server.registerTool("edit_file", {
   title: "Edit File",
-  description: "Apply
+  description: "Apply verified partial edits to one file.",
   inputSchema: z2.object({
     path: z2.string().describe("File to edit"),
     edits: z2.union([z2.string(), z2.array(z2.any())]).describe(
@@ -2482,7 +2603,7 @@ server.registerTool("write_file", {
 });
 server.registerTool("grep_search", {
   title: "Search Files",
-  description: "Search file contents with ripgrep
+  description: "Search file contents with ripgrep and return edit-ready matches.",
   inputSchema: z2.object({
     pattern: z2.string().describe("Search pattern (regex by default, literal if literal:true)"),
     path: z2.string().optional().describe("Search dir/file (default: cwd)"),
@@ -2559,19 +2680,20 @@ server.registerTool("outline", {
 });
 server.registerTool("verify", {
   title: "Verify Checksums",
-  description: "
+  description: "Verify held checksums without rereading the file.",
   inputSchema: z2.object({
     path: z2.string().describe("File path"),
-    checksums: z2.string().describe('
+    checksums: z2.array(z2.string()).describe('Checksum strings, e.g. ["1-50:f7e2a1b0", "51-100:abcd1234"]'),
     base_revision: z2.string().optional().describe("Optional prior revision to compare against latest state.")
   }),
   annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true }
 }, async (rawParams) => {
   const { path: p, checksums, base_revision } = coerceParams(rawParams);
   try {
-
-
-
+    if (!Array.isArray(checksums) || checksums.length === 0) {
+      throw new Error("checksums must be a non-empty array of strings");
+    }
+    return { content: [{ type: "text", text: verifyChecksums(p, checksums, { baseRevision: base_revision }) }] };
   } catch (e) {
     return { content: [{ type: "text", text: e.message }], isError: true };
   }
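The `verify` tool now takes `checksums` as an array of strings shaped like `"1-50:f7e2a1b0"` per its schema example. A sketch of parsing one such entry into its range and hash parts (the parsing helper and the exact regex are ours, inferred from that example format):

```javascript
// Parse a held checksum entry like "1-50:f7e2a1b0" (format taken from the
// verify tool's schema example) into start line, end line, and hash.
function parseChecksumEntry(entry) {
  const match = entry.match(/^(\d+)-(\d+):([0-9a-f]+)$/);
  if (!match) throw new Error(`Bad checksum entry: ${entry}`);
  return { start: Number(match[1]), end: Number(match[2]), checksum: match[3] };
}

console.log(parseChecksumEntry("1-50:f7e2a1b0"));
// → { start: 1, end: 50, checksum: 'f7e2a1b0' }
```

Switching the schema from one string to `z2.array(z2.string())` lets a client hold and verify several ranges from one earlier read in a single call.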
@@ -2645,13 +2767,14 @@ server.registerTool("changes", {
 });
 server.registerTool("bulk_replace", {
   title: "Bulk Replace",
-  description: "Search-and-replace across multiple files
+  description: "Search-and-replace across multiple files with compact or full diff output.",
   inputSchema: z2.object({
     replacements: z2.union([z2.string(), replacementPairsSchema]).describe('JSON array of {old, new} pairs: [{"old":"foo","new":"bar"}]'),
     glob: z2.string().optional().describe('File glob (default: "**/*.{md,mjs,json,yml,ts,js}")'),
     path: z2.string().optional().describe("Root directory (default: cwd)"),
     dry_run: flexBool().describe("Preview without writing (default: false)"),
-    max_files: flexNum().describe("Max files to process (default: 100)")
+    max_files: flexNum().describe("Max files to process (default: 100)"),
+    format: z2.enum(["compact", "full"]).optional().describe('"compact" (default) = summary only, "full" = include capped diffs')
   }),
   annotations: { readOnlyHint: false, destructiveHint: true, idempotentHint: false }
 }, async (rawParams) => {
@@ -2669,7 +2792,7 @@ server.registerTool("bulk_replace", {
       params.path || process.cwd(),
       params.glob || "**/*.{md,mjs,json,yml,ts,js}",
       replacements,
-      { dryRun: params.dry_run || false, maxFiles: params.max_files || 100 }
+      { dryRun: params.dry_run || false, maxFiles: params.max_files || 100, format: params.format }
     );
     return { content: [{ type: "text", text: result }] };
   } catch (e) {
package/output-style.md
CHANGED
@@ -1,31 +1,33 @@
 ---
 name: hex-line
-description: hex-line MCP tool preferences
+description: hex-line MCP tool preferences with compact coding style
 keep-coding-instructions: true
 ---

 # MCP Tool Preferences

-**PREFER** hex-line MCP for code files
+**PREFER** hex-line MCP for code files. Hash-annotated reads and verified edits keep context cheap and safe.

 | Instead of | Use | Why |
 |-----------|-----|-----|
 | Read | `mcp__hex-line__read_file` | Hash-annotated, revision-aware |
 | Edit | `mcp__hex-line__edit_file` | Hash-verified anchors + conservative auto-rebase |
-| Write | `mcp__hex-line__write_file` |
-| Grep | `mcp__hex-line__grep_search` |
+| Write | `mcp__hex-line__write_file` | No prior Read needed |
+| Grep | `mcp__hex-line__grep_search` | Edit-ready matches |
 | Edit (text rename) | `mcp__hex-line__bulk_replace` | Multi-file text rename/refactor |
+| Bash `find`/`tree` | `mcp__hex-line__directory_tree` | Pattern search, gitignore-aware |

 ## Efficient File Reading

-For
-1. `outline` first
-2. `read_file` with offset
+For unfamiliar code files >100 lines, prefer:
+1. `outline` first
+2. `read_file` with `offset`/`limit` or `ranges`
+3. `paths` or `ranges` when batching several targets

-Avoid reading a large file in full
+Avoid reading a large file in full. Prefer compact, targeted reads.

 Bash OK for: npm/node/git/docker/curl, pipes, compound commands.
-**Built-in OK for:** images, PDFs, notebooks, Glob (always), `.claude/settings.json
+**Built-in OK for:** images, PDFs, notebooks, Glob (always), `.claude/settings.json`, `.claude/settings.local.json`.

 ## Edit Workflow

@@ -33,24 +35,24 @@ Prefer:
 1. collect all known hunks for one file
 2. send one `edit_file` call with batched edits
 3. carry `revision` from `read_file` into `base_revision` on follow-up edits
-4. use `replace_between`
-5. use `verify` before rereading
+4. use `set_line`, `replace_lines`, `insert_after`, `replace_between` based on scope
+5. use `verify` before rereading after staleness

 Avoid:
 - chained same-file `edit_file` calls when all edits are already known
 - full-file rewrites for local changes
 - using `bulk_replace` for structural block rewrites

-#
+# Response Style

-
+Keep responses compact and operational. Explain only what is needed to complete the task or justify a non-obvious decision.

-
-
-
-"`✶ Insight ────────────────────────────────`
-[2-3 key educational points]
-`────────────────────────────────`"
+Prefer:
+- short progress updates
+- direct tool calls without discovery chatter
+- concise summaries of edits and verification

-
+Avoid:
+- mandatory educational blocks
+- long prose around tool usage
+- repeating obvious implementation details
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@levnikolaevich/hex-line-mcp",
-  "version": "1.
+  "version": "1.5.0",
   "mcpName": "io.github.levnikolaevich/hex-line-mcp",
   "type": "module",
   "description": "Hash-verified file editing MCP + token efficiency hook for AI coding agents. 11 tools: read, edit, write, grep, outline, verify, directory_tree, file_info, setup_hooks, changes, bulk_replace.",
@@ -28,7 +28,7 @@
   "_dep_notes": {
     "web-tree-sitter": "Pinned ^0.25.0: v0.26 ABI incompatible with tree-sitter-wasms 0.1.x (built with tree-sitter-cli 0.20.8). Language.load() silently fails.",
     "zod": "Pinned ^3.25.0: zod 4 breaks zod-to-json-schema (used by MCP SDK internally). Tool parameter descriptions not sent to clients. Revisit when MCP SDK switches to z.toJSONSchema().",
-    "better-sqlite3": "Optional. Used only by lib/graph-enrich.mjs for readonly access to hex-graph .codegraph/index.db. Graceful fallback if absent."
+    "better-sqlite3": "Optional. Used only by lib/graph-enrich.mjs for readonly access to hex-graph .hex-skills/codegraph/index.db. Graceful fallback if absent."
   },
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.27.0",
|