@levnikolaevich/hex-line-mcp 1.3.5 → 1.4.0

package/README.md CHANGED
@@ -42,7 +42,7 @@ Advanced / occasional:
  | `get_file_info` | File metadata without reading content | Size, lines, mtime, type, binary detection |
  | `setup_hooks` | Configure Claude hooks + install output style | Gemini/Codex get guidance only; no hooks |
  | `changes` | Compare file against git ref, shows added/removed/modified symbols | AST-level semantic diff |
- | `bulk_replace` | Search-and-replace across multiple files by glob | Per-file diffs, dry_run, max_files safety |
+ | `bulk_replace` | Search-and-replace across multiple files by glob | Compact summary (default) or capped diffs via `format`, dry_run, max_files |
 
  ### Hooks (PreToolUse + PostToolUse)
 
@@ -90,45 +90,33 @@ The `setup_hooks` tool automatically installs the output style to `~/.claude/out
 
  ## Benchmarking
 
- `hex-line-mcp` now distinguishes:
+ Two-tier benchmark architecture:
 
- - `tests` — correctness and regression safety
- - `benchmarks` — comparative workflow efficiency against built-in tools
- - `diagnostics` — modeled tool-level measurements for engineering inspection
-
- Public benchmark mode reports only comparative multi-step workflows:
+ - `/benchmark-compare` — real 1:1 comparison (runs inside Claude Code, calls BOTH built-in and hex-line tools on the same files)
+ - `npm run benchmark` — hex-line standalone metrics (Node.js, all real library calls, no simulations)
 
  ```bash
  npm run benchmark -- --repo /path/to/repo
- ```
-
- Optional diagnostics stay available separately:
-
- ```bash
  npm run benchmark:diagnostic -- --repo /path/to/repo
- npm run benchmark:diagnostic:graph -- --repo /path/to/repo
  ```
 
- The diagnostics output includes synthetic tool-level comparisons such as read, grep, verify, and graph-enrichment helpers. Those numbers are useful for inspecting output shape and token behavior, but they are not the public workflow benchmark score.
-
- Current sample run on the `hex-line-mcp` repo with session-derived workflows:
-
- | ID | Workflow | Built-in | hex-line | Savings | Ops |
- |----|----------|---------:|---------:|--------:|----:|
- | W1 | Debug hook file-listing redirect | 23,143 chars | 882 chars | 96% | 3→2 |
- | W2 | Adjust `setup_hooks` guidance and verify | 24,877 chars | 1,637 chars | 93% | 3→3 |
- | W3 | Repo-wide benchmark wording refresh | 137,796 chars | 38,918 chars | 72% | 15→1 |
- | W4 | Inspect large smoke test before edit | 49,566 chars | 2,104 chars | 96% | 3→3 |
+ Current hex-line workflow metrics on the `hex-line-mcp` repo (all real library calls):
 
- Workflow summary: `89%` average token savings, `24→9` tool calls (`63%` fewer).
+ | # | Workflow | Hex-line output | Ops |
+ |---|----------|----------------:|----:|
+ | W1 | Debug hook file-listing redirect | 882 chars | 2 |
+ | W2 | Adjust `setup_hooks` guidance and verify | 1,719 chars | 3 |
+ | W3 | Repo-wide benchmark wording refresh | 213 chars | 1 |
+ | W4 | Inspect large smoke test before edit | 2,322 chars | 3 |
+ | W5 | Follow-up edit after unrelated line shift | 1,267 chars | 3 |
 
- These workflows are derived from recent real Claude sessions, but executed against local reproducible fixtures in the repository. They should be read as workflow-efficiency measurements, not as correctness or semantic-quality claims.
+ Workflow total: `6,403` chars across `12` ops. Run `/benchmark-compare` in Claude Code for the full built-in vs hex-line comparison with real tool calls on both sides.
 
  ### Optional Graph Enrichment
 
- If a project already has `.codegraph/index.db`, `hex-line` can add lightweight graph hints to `read_file`, `outline`, `grep_search`, and `edit_file`.
+ If a project already has `.hex-skills/codegraph/index.db`, `hex-line` can add lightweight graph hints to `read_file`, `outline`, `grep_search`, and `edit_file`.
 
- - Graph enrichment is optional. If `.codegraph/index.db` is missing, `hex-line` falls back to standard behavior silently.
+ - Graph enrichment is optional. If `.hex-skills/codegraph/index.db` is missing, `hex-line` falls back to standard behavior silently.
  - `better-sqlite3` is optional. If it is unavailable, `hex-line` still works without graph hints.
  - `edit_file` reports **Call impact**, not full semantic blast radius. The warning uses call-graph callers only.
 
@@ -160,7 +148,7 @@ Use `replace_between` inside `edit_file` when you know stable start/end anchors
 
  ### Literal rename / refactor
 
- Use `bulk_replace` for text rename patterns across one or more files. Do not use it as a substitute for structured block rewrites.
+ Use `bulk_replace` for text rename patterns across one or more files. It returns a compact summary by default; pass `format: "full"` for capped per-file diffs. Do not use it as a substitute for structured block rewrites.
 
  ### read_file
 
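The `format` switch described in the README hunks above can be sketched as a simple output selector. This is illustrative only: `renderBulkResult` and the shape of its input are assumptions, not the package's internals; only the compact-default/full-capped behavior and the 30k cap constant come from the diff.

```javascript
// Illustrative sketch of bulk_replace's two output modes (not the actual
// implementation). "compact" emits one summary line per file; "full" emits
// per-file diffs truncated once a character budget is exceeded.
const MAX_BULK_OUTPUT_CHARS = 30000; // mirrors MAX_BULK_OUTPUT_CHARS in dist/server.mjs

function renderBulkResult(perFileResults, format = "compact") {
  if (format !== "full") {
    // Compact summary: file name plus replacement count only.
    return perFileResults
      .map(({ file, changes }) => `${file}: ${changes} replacement(s)`)
      .join("\n");
  }
  // Full mode: concatenate diffs, capping total output size.
  let out = "";
  for (const { file, diff } of perFileResults) {
    out += `--- ${file}\n${diff}\n`;
    if (out.length > MAX_BULK_OUTPUT_CHARS) {
      return out.slice(0, MAX_BULK_OUTPUT_CHARS) + "\n... (output truncated)";
    }
  }
  return out;
}
```

The compact default keeps multi-file renames cheap for the agent; the capped full mode is there when the caller actually needs to inspect the diffs.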
package/dist/hook.mjs CHANGED
@@ -91,13 +91,13 @@ var BINARY_EXT = /* @__PURE__ */ new Set([
  ]);
  var REVERSE_TOOL_HINTS = {
  "mcp__hex-line__read_file": "Read (file_path, offset, limit)",
- "mcp__hex-line__edit_file": "Edit (revision-aware hash edits, block rewrite, auto-rebase)",
+ "mcp__hex-line__edit_file": "Edit (old_string, new_string, replace_all)",
  "mcp__hex-line__write_file": "Write (file_path, content)",
  "mcp__hex-line__grep_search": "Grep (pattern, path)",
  "mcp__hex-line__directory_tree": "Glob (pattern) or Bash(ls)",
  "mcp__hex-line__get_file_info": "Bash(stat/wc)",
  "mcp__hex-line__outline": "Read with offset/limit",
- "mcp__hex-line__verify": "Verify held checksums / revision without reread",
+ "mcp__hex-line__verify": "Read (re-read file to check freshness)",
  "mcp__hex-line__changes": "Bash(git diff)",
  "mcp__hex-line__bulk_replace": "Edit (text rename/refactor across files)",
  "mcp__hex-line__setup_hooks": "Not available (hex-line disabled)"
@@ -114,10 +114,10 @@ var TOOL_HINTS = {
  stat: "mcp__hex-line__get_file_info (not stat/wc/file)",
  grep: "mcp__hex-line__grep_search (not grep/rg). Params: output, literal, context_before, context_after, multiline",
  sed: "mcp__hex-line__edit_file for hash edits, or mcp__hex-line__bulk_replace for text rename (not sed -i)",
- diff: "mcp__hex-line__changes (not diff). Git-based semantic diff",
+ diff: "mcp__hex-line__changes (not diff). Git diff with change symbols",
  outline: "mcp__hex-line__outline (before reading large code files)",
  verify: "mcp__hex-line__verify (staleness / revision check without re-read)",
- changes: "mcp__hex-line__changes (semantic AST diff)",
+ changes: "mcp__hex-line__changes (git diff with change symbols)",
  bulk: "mcp__hex-line__bulk_replace (multi-file search-replace)",
  setup: "mcp__hex-line__setup_hooks (configure hooks for agents)"
  };
package/dist/server.mjs CHANGED
@@ -6,7 +6,7 @@ import { dirname as dirname4 } from "node:path";
  import { z as z2 } from "zod";
 
  // ../hex-common/src/runtime/mcp-bootstrap.mjs
- async function createServerRuntime({ name, version: version2, installDir }) {
+ async function createServerRuntime({ name, version: version2 }) {
  let McpServer, StdioServerTransport2;
  try {
  ({ McpServer } = await import("@modelcontextprotocol/sdk/server/mcp.js"));
@@ -14,11 +14,16 @@ async function createServerRuntime({ name, version: version2, installDir }) {
  } catch {
  process.stderr.write(
  `${name}: @modelcontextprotocol/sdk not found.
- Run: cd ${installDir} && npm install
+ Run: npm install @modelcontextprotocol/sdk
  `
  );
  process.exit(1);
  }
+ const shutdown = () => {
+ process.exit(0);
+ };
+ process.on("SIGTERM", shutdown);
+ process.on("SIGINT", shutdown);
  return {
  server: new McpServer({ name, version: version2 }),
  StdioServerTransport: StdioServerTransport2
@@ -249,6 +254,9 @@ function listDirectory(dirPath, opts = {}) {
  return { text: lines.join("\n"), total };
  }
  var MAX_OUTPUT_CHARS = 8e4;
+ var MAX_DIFF_CHARS = 3e4;
+ var MAX_BULK_OUTPUT_CHARS = 3e4;
+ var MAX_PER_FILE_DIFF_LINES = 50;
  function readText(filePath) {
  return readFileSync(filePath, "utf-8").replace(/\r\n/g, "\n");
  }
@@ -339,7 +347,7 @@ function getGraphDB(filePath) {
  try {
  const projectRoot = findProjectRoot(filePath);
  if (!projectRoot) return null;
- const dbPath = join3(projectRoot, ".codegraph", "index.db");
+ const dbPath = join3(projectRoot, ".hex-skills/codegraph", "index.db");
  if (!existsSync2(dbPath)) return null;
  if (_dbs.has(dbPath)) return _dbs.get(dbPath);
  const require2 = createRequire(import.meta.url);
@@ -453,7 +461,7 @@ function getRelativePath(filePath) {
  function findProjectRoot(filePath) {
  let dir = dirname2(filePath);
  for (let i = 0; i < 10; i++) {
- if (existsSync2(join3(dir, ".codegraph", "index.db"))) return dir;
+ if (existsSync2(join3(dir, ".hex-skills/codegraph", "index.db"))) return dir;
  const parent = dirname2(dir);
  if (parent === dir) break;
  dir = parent;
@@ -1036,125 +1044,131 @@ function editFile(filePath, edits, opts = {}) {
  autoRebased = true;
  return null;
  };
- for (const e of sorted) {
- if (e.set_line) {
- const { tag, line } = parseRef(e.set_line.anchor);
- const idx = locateOrConflict({ tag, line });
- if (typeof idx === "string") return idx;
- const conflict = ensureRevisionContext(idx + 1, idx + 1, idx);
- if (conflict) return conflict;
- const txt = e.set_line.new_text;
- if (!txt && txt !== 0) {
- lines.splice(idx, 1);
- } else {
- const origLine = [lines[idx]];
- const raw = String(txt).split("\n");
- const newLines = opts.restoreIndent ? restoreIndent(origLine, raw) : raw;
- lines.splice(idx, 1, ...newLines);
- }
- continue;
- }
- if (e.insert_after) {
- const { tag, line } = parseRef(e.insert_after.anchor);
- const idx = locateOrConflict({ tag, line });
- if (typeof idx === "string") return idx;
- const conflict = ensureRevisionContext(idx + 1, idx + 1, idx);
- if (conflict) return conflict;
- let insertLines = e.insert_after.text.split("\n");
- if (opts.restoreIndent) insertLines = restoreIndent([lines[idx]], insertLines);
- lines.splice(idx + 1, 0, ...insertLines);
- continue;
- }
- if (e.replace_lines) {
- const s = parseRef(e.replace_lines.start_anchor);
- const en = parseRef(e.replace_lines.end_anchor);
- const si = locateOrConflict(s);
- if (typeof si === "string") return si;
- const ei = locateOrConflict(en);
- if (typeof ei === "string") return ei;
- const actualStart = si + 1;
- const actualEnd = ei + 1;
- const rc = e.replace_lines.range_checksum;
- if (!rc) throw new Error("range_checksum required for replace_lines. Read the range first via read_file, then pass its checksum.");
- if (staleRevision && conflictPolicy === "conservative") {
- const conflict = ensureRevisionContext(actualStart, actualEnd, si);
+ for (let _ei = 0; _ei < sorted.length; _ei++) {
+ const e = sorted[_ei];
+ try {
+ if (e.set_line) {
+ const { tag, line } = parseRef(e.set_line.anchor);
+ const idx = locateOrConflict({ tag, line });
+ if (typeof idx === "string") return idx;
+ const conflict = ensureRevisionContext(idx + 1, idx + 1, idx);
  if (conflict) return conflict;
- const baseCheck = hasBaseSnapshot ? verifyChecksumAgainstSnapshot(baseSnapshot, rc) : null;
- if (!baseCheck?.ok) {
- return conflictIfNeeded(
- "stale_checksum",
- si,
- baseCheck?.actual || null,
- baseCheck?.actual ? `Provided checksum ${rc} does not match base revision ${opts.baseRevision}.` : `Checksum range from ${rc} is outside the available base revision.`
- );
+ const txt = e.set_line.new_text;
+ if (!txt && txt !== 0) {
+ lines.splice(idx, 1);
+ } else {
+ const origLine = [lines[idx]];
+ const raw = String(txt).split("\n");
+ const newLines = opts.restoreIndent ? restoreIndent(origLine, raw) : raw;
+ lines.splice(idx, 1, ...newLines);
  }
- } else {
- const { start: csStart, end: csEnd, hex: csHex } = parseChecksum(rc);
- if (csStart > actualStart || csEnd < actualEnd) {
- const snip = buildErrorSnippet(origLines, actualStart - 1);
- throw new Error(
- `CHECKSUM_RANGE_GAP: range ${csStart}-${csEnd} does not cover edit range ${actualStart}-${actualEnd}.
+ continue;
+ }
+ if (e.insert_after) {
+ const { tag, line } = parseRef(e.insert_after.anchor);
+ const idx = locateOrConflict({ tag, line });
+ if (typeof idx === "string") return idx;
+ const conflict = ensureRevisionContext(idx + 1, idx + 1, idx);
+ if (conflict) return conflict;
+ let insertLines = e.insert_after.text.split("\n");
+ if (opts.restoreIndent) insertLines = restoreIndent([lines[idx]], insertLines);
+ lines.splice(idx + 1, 0, ...insertLines);
+ continue;
+ }
+ if (e.replace_lines) {
+ const s = parseRef(e.replace_lines.start_anchor);
+ const en = parseRef(e.replace_lines.end_anchor);
+ const si = locateOrConflict(s);
+ if (typeof si === "string") return si;
+ const ei = locateOrConflict(en);
+ if (typeof ei === "string") return ei;
+ const actualStart = si + 1;
+ const actualEnd = ei + 1;
+ const rc = e.replace_lines.range_checksum;
+ if (!rc) throw new Error("range_checksum required for replace_lines. Read the range first via read_file, then pass its checksum. The checksum range must cover start-to-end anchors (inclusive).");
+ if (staleRevision && conflictPolicy === "conservative") {
+ const conflict = ensureRevisionContext(actualStart, actualEnd, si);
+ if (conflict) return conflict;
+ const baseCheck = hasBaseSnapshot ? verifyChecksumAgainstSnapshot(baseSnapshot, rc) : null;
+ if (!baseCheck?.ok) {
+ return conflictIfNeeded(
+ "stale_checksum",
+ si,
+ baseCheck?.actual || null,
+ baseCheck?.actual ? `Provided checksum ${rc} does not match base revision ${opts.baseRevision}.` : `Checksum range from ${rc} is outside the available base revision.`
+ );
+ }
+ } else {
+ const { start: csStart, end: csEnd, hex: csHex } = parseChecksum(rc);
+ if (csStart > actualStart || csEnd < actualEnd) {
+ const snip = buildErrorSnippet(origLines, actualStart - 1);
+ throw new Error(
+ `CHECKSUM_RANGE_GAP: checksum covers lines ${csStart}-${csEnd} but edit spans ${actualStart}-${actualEnd} (inclusive). Checksum range must fully contain the anchor range.
 
  Current content (lines ${snip.start}-${snip.end}):
  ${snip.text}
 
  Tip: Use updated hashes above for retry.`
- );
- }
- const actual = buildRangeChecksum(currentSnapshot, csStart, csEnd);
- const actualHex = actual?.split(":")[1];
- if (!actual || csHex !== actualHex) {
- const details = `CHECKSUM_MISMATCH: expected ${rc}, got ${actual}. File changed \u2014 re-read lines ${csStart}-${csEnd}.`;
- if (conflictPolicy === "conservative") {
- return conflictIfNeeded("stale_checksum", csStart - 1, actual, details);
+ );
  }
- const snip = buildErrorSnippet(origLines, csStart - 1);
- throw new Error(
- `${details}
+ const actual = buildRangeChecksum(currentSnapshot, csStart, csEnd);
+ const actualHex = actual?.split(":")[1];
+ if (!actual || csHex !== actualHex) {
+ const details = `CHECKSUM_MISMATCH: expected ${rc}, got ${actual}. Content at lines ${csStart}-${csEnd} differs from when you read it \u2014 re-read before editing.`;
+ if (conflictPolicy === "conservative") {
+ return conflictIfNeeded("stale_checksum", csStart - 1, actual, details);
+ }
+ const snip = buildErrorSnippet(origLines, csStart - 1);
+ throw new Error(
+ `${details}
 
  Current content (lines ${snip.start}-${snip.end}):
  ${snip.text}
 
  Retry with fresh checksum ${actual}, or use set_line with hashes above.`
- );
+ );
+ }
  }
+ const txt = e.replace_lines.new_text;
+ if (!txt && txt !== 0) {
+ lines.splice(si, ei - si + 1);
+ } else {
+ const origRange = lines.slice(si, ei + 1);
+ let newLines = String(txt).split("\n");
+ if (opts.restoreIndent) newLines = restoreIndent(origRange, newLines);
+ lines.splice(si, ei - si + 1, ...newLines);
+ }
+ continue;
  }
- const txt = e.replace_lines.new_text;
- if (!txt && txt !== 0) {
- lines.splice(si, ei - si + 1);
- } else {
- const origRange = lines.slice(si, ei + 1);
- let newLines = String(txt).split("\n");
- if (opts.restoreIndent) newLines = restoreIndent(origRange, newLines);
- lines.splice(si, ei - si + 1, ...newLines);
- }
- continue;
- }
- if (e.replace_between) {
- const boundaryMode = e.replace_between.boundary_mode || "inclusive";
- if (boundaryMode !== "inclusive" && boundaryMode !== "exclusive") {
- throw new Error(`BAD_INPUT: replace_between boundary_mode must be inclusive or exclusive, got ${boundaryMode}`);
- }
- const s = parseRef(e.replace_between.start_anchor);
- const en = parseRef(e.replace_between.end_anchor);
- const si = locateOrConflict(s);
- if (typeof si === "string") return si;
- const ei = locateOrConflict(en);
- if (typeof ei === "string") return ei;
- if (si > ei) {
- throw new Error(`BAD_INPUT: replace_between start anchor resolves after end anchor (${si + 1} > ${ei + 1})`);
+ if (e.replace_between) {
+ const boundaryMode = e.replace_between.boundary_mode || "inclusive";
+ if (boundaryMode !== "inclusive" && boundaryMode !== "exclusive") {
+ throw new Error(`BAD_INPUT: replace_between boundary_mode must be inclusive or exclusive, got ${boundaryMode}`);
+ }
+ const s = parseRef(e.replace_between.start_anchor);
+ const en = parseRef(e.replace_between.end_anchor);
+ const si = locateOrConflict(s);
+ if (typeof si === "string") return si;
+ const ei = locateOrConflict(en);
+ if (typeof ei === "string") return ei;
+ if (si > ei) {
+ throw new Error(`BAD_INPUT: replace_between start anchor resolves after end anchor (${si + 1} > ${ei + 1})`);
+ }
+ const targetRange = targetRangeForReplaceBetween(si, ei, boundaryMode);
+ const conflict = ensureRevisionContext(targetRange.start, targetRange.end, si);
+ if (conflict) return conflict;
+ const txt = e.replace_between.new_text;
+ let newLines = String(txt ?? "").split("\n");
+ const sliceStart = boundaryMode === "exclusive" ? si + 1 : si;
+ const removeCount = boundaryMode === "exclusive" ? Math.max(0, ei - si - 1) : ei - si + 1;
+ const origRange = lines.slice(sliceStart, sliceStart + removeCount);
+ if (opts.restoreIndent && origRange.length > 0) newLines = restoreIndent(origRange, newLines);
+ if (txt === "" || txt === null) newLines = [];
+ lines.splice(sliceStart, removeCount, ...newLines);
  }
- const targetRange = targetRangeForReplaceBetween(si, ei, boundaryMode);
- const conflict = ensureRevisionContext(targetRange.start, targetRange.end, si);
- if (conflict) return conflict;
- const txt = e.replace_between.new_text;
- let newLines = String(txt ?? "").split("\n");
- const sliceStart = boundaryMode === "exclusive" ? si + 1 : si;
- const removeCount = boundaryMode === "exclusive" ? Math.max(0, ei - si - 1) : ei - si + 1;
- const origRange = lines.slice(sliceStart, sliceStart + removeCount);
- if (opts.restoreIndent && origRange.length > 0) newLines = restoreIndent(origRange, newLines);
- if (txt === "" || txt === null) newLines = [];
- lines.splice(sliceStart, removeCount, ...newLines);
+ } catch (editErr) {
+ if (sorted.length > 1) editErr.message = `Edit ${_ei + 1}/${sorted.length}: ${editErr.message}`;
+ throw editErr;
  }
  }
  let content = lines.join("\n");
@@ -1163,10 +1177,23 @@ Retry with fresh checksum ${actual}, or use set_line with hashes above.`
  if (original === content) {
  throw new Error("NOOP_EDIT: File already contains the desired content. No changes needed.");
  }
- let diff = simpleDiff(origLines, content.split("\n"));
- if (diff && diff.length > 8e4) {
- diff = diff.slice(0, 8e4) + `
- ... (diff truncated, ${diff.length} chars total)`;
+ const fullDiff = simpleDiff(origLines, content.split("\n"));
+ let displayDiff = fullDiff;
+ if (displayDiff && displayDiff.length > MAX_DIFF_CHARS) {
+ displayDiff = displayDiff.slice(0, MAX_DIFF_CHARS) + `
+ ... (diff truncated, ${displayDiff.length} chars total)`;
+ }
+ const newLinesAll = content.split("\n");
+ let minLine = Infinity, maxLine = 0;
+ if (fullDiff) {
+ for (const dl of fullDiff.split("\n")) {
+ const m = dl.match(/^[+-](\d+)\|/);
+ if (m) {
+ const n = +m[1];
+ if (n < minLine) minLine = n;
+ if (n > maxLine) maxLine = n;
+ }
+ }
  }
  if (opts.dryRun) {
  let msg2 = `status: ${autoRebased ? "AUTO_REBASED" : "OK"}
@@ -1175,11 +1202,11 @@ file: ${currentSnapshot.fileChecksum}
  Dry run: ${filePath} would change (${content.split("\n").length} lines)`;
  if (staleRevision && hasBaseSnapshot) msg2 += `
  changed_ranges: ${describeChangedRanges(changedRanges)}`;
- if (diff) msg2 += `
+ if (displayDiff) msg2 += `
 
  Diff:
  \`\`\`diff
- ${diff}
+ ${displayDiff}
  \`\`\``;
  return msg2;
  }
@@ -1195,78 +1222,55 @@ changed_ranges: ${describeChangedRanges(changedRanges)}`;
  }
  msg += `
  Updated ${filePath} (${content.split("\n").length} lines)`;
- if (diff) msg += `
+ if (fullDiff && minLine <= maxLine) {
+ const ctxStart = Math.max(0, minLine - 6);
+ const ctxEnd = Math.min(newLinesAll.length, maxLine + 5);
+ const ctxLines = [];
+ const ctxHashes = [];
+ for (let i = ctxStart; i < ctxEnd; i++) {
+ const h = fnv1a(newLinesAll[i]);
+ ctxHashes.push(h);
+ ctxLines.push(`${lineTag(h)}.${i + 1} ${newLinesAll[i]}`);
+ }
+ const ctxCs = rangeChecksum(ctxHashes, ctxStart + 1, ctxEnd);
+ msg += `
 
- Diff:
- \`\`\`diff
- ${diff}
- \`\`\``;
+ Post-edit (lines ${ctxStart + 1}-${ctxEnd}):
+ ${ctxLines.join("\n")}
+ checksum: ${ctxCs}`;
+ }
  try {
  const db = getGraphDB(real);
  const relFile = db ? getRelativePath(real) : null;
- if (db && relFile && diff) {
- const diffLinesOut = diff.split("\n");
- let minLine = Infinity, maxLine = 0;
- for (const dl of diffLinesOut) {
- const m = dl.match(/^[+-](\d+)\|/);
- if (m) {
- const n = +m[1];
- if (n < minLine) minLine = n;
- if (n > maxLine) maxLine = n;
- }
- }
- if (minLine <= maxLine) {
- const affected = callImpact(db, relFile, minLine, maxLine);
- if (affected.length > 0) {
- const list = affected.map((a) => `${a.name} (${a.file}:${a.line})`).join(", ");
- msg += `
+ if (db && relFile && fullDiff && minLine <= maxLine) {
+ const affected = callImpact(db, relFile, minLine, maxLine);
+ if (affected.length > 0) {
+ const list = affected.map((a) => `${a.name} (${a.file}:${a.line})`).join(", ");
+ msg += `
 
  \u26A0 Call impact: ${affected.length} callers in other files
  ${list}`;
- }
  }
  }
  } catch {
  }
- const newLinesAll = content.split("\n");
- if (diff) {
- const diffArr = diff.split("\n");
- let minLine = Infinity, maxLine = 0;
- for (const dl of diffArr) {
- const m = dl.match(/^[+-](\d+)\|/);
- if (m) {
- const n = +m[1];
- if (n < minLine) minLine = n;
- if (n > maxLine) maxLine = n;
- }
- }
- if (minLine <= maxLine) {
- const ctxStart = Math.max(0, minLine - 6);
- const ctxEnd = Math.min(newLinesAll.length, maxLine + 5);
- const ctxLines = [];
- const ctxHashes = [];
- for (let i = ctxStart; i < ctxEnd; i++) {
- const h = fnv1a(newLinesAll[i]);
- ctxHashes.push(h);
- ctxLines.push(`${lineTag(h)}.${i + 1} ${newLinesAll[i]}`);
- }
- const ctxCs = rangeChecksum(ctxHashes, ctxStart + 1, ctxEnd);
- msg += `
+ if (displayDiff) msg += `
 
- Post-edit (lines ${ctxStart + 1}-${ctxEnd}):
- ${ctxLines.join("\n")}
- checksum: ${ctxCs}`;
- }
- }
+ Diff:
+ \`\`\`diff
+ ${displayDiff}
+ \`\`\``;
  return msg;
  }
 
  // lib/search.mjs
  import { spawn } from "node:child_process";
- import { resolve as resolve2 } from "node:path";
+ import { resolve as resolve2, isAbsolute as isAbsolute2 } from "node:path";
+ import { existsSync as existsSync3 } from "node:fs";
  var rgBin = "rg";
  try {
  rgBin = (await import("@vscode/ripgrep")).rgPath;
+ if (isAbsolute2(rgBin) && !existsSync3(rgBin)) rgBin = "rg";
  } catch {
  }
  var DEFAULT_LIMIT2 = 100;
@@ -1292,7 +1296,13 @@ function spawnRg(args) {
  stderrBuf += chunk.toString("utf-8");
  });
  child.on("error", (err) => {
- reject(new Error(`rg spawn error: ${err.message}`));
+ if (err.code === "ENOENT") {
+ reject(new Error(
+ `ripgrep not available. Reinstall dependencies so @vscode/ripgrep can provide its binary, or install system rg and add it to PATH. Attempted binary: "${rgBin}".`
+ ));
+ } else {
+ reject(new Error(`rg spawn error: ${err.message}`));
+ }
  });
  child.on("close", (code) => {
  resolve_({ stdout, code, stderr: stderrBuf, killed });
@@ -1694,7 +1704,7 @@ file: ${current.fileChecksum}`;
  }
 
  // lib/tree.mjs
- import { readdirSync as readdirSync2, readFileSync as readFileSync3, statSync as statSync6, existsSync as existsSync3 } from "node:fs";
+ import { readdirSync as readdirSync2, readFileSync as readFileSync3, statSync as statSync6, existsSync as existsSync4 } from "node:fs";
  import { resolve as resolve4, basename, join as join4, relative as relative2 } from "node:path";
  import ignore from "ignore";
  var SKIP_DIRS = /* @__PURE__ */ new Set([
@@ -1713,7 +1723,7 @@ function globToRegex(pat) {
  }
  function loadGitignore(rootDir) {
  const gi = join4(rootDir, ".gitignore");
- if (!existsSync3(gi)) return null;
+ if (!existsSync4(gi)) return null;
  try {
  const content = readFileSync3(gi, "utf-8");
  return ignore().add(content);
@@ -1730,7 +1740,7 @@ function findByPattern(dirPath, opts) {
  const filterType = opts.type || "all";
  const maxDepth = opts.max_depth ?? 20;
  const abs = resolve4(normalizePath(dirPath));
- if (!existsSync3(abs)) throw new Error(`DIRECTORY_NOT_FOUND: ${abs}`);
+ if (!existsSync4(abs)) throw new Error(`DIRECTORY_NOT_FOUND: ${abs}`);
  if (!statSync6(abs).isDirectory()) throw new Error(`Not a directory: ${abs}`);
  const ig = opts.gitignore ?? true ? loadGitignore(abs) : null;
  const matches = [];
@@ -1771,7 +1781,7 @@ function directoryTree(dirPath, opts = {}) {
  const compact = opts.format === "compact";
  const maxDepth = compact ? 1 : opts.max_depth ?? 3;
  const abs = resolve4(normalizePath(dirPath));
- if (!existsSync3(abs)) throw new Error(`DIRECTORY_NOT_FOUND: ${abs}. Check path or use directory_tree on parent directory.`);
+ if (!existsSync4(abs)) throw new Error(`DIRECTORY_NOT_FOUND: ${abs}. Check path or use directory_tree on parent directory.`);
  const rootStat = statSync6(abs);
  if (!rootStat.isDirectory()) throw new Error(`Not a directory: ${abs}`);
  const ig = opts.gitignore ?? true ? loadGitignore(abs) : null;
@@ -1868,7 +1878,7 @@ ${lines.join("\n")}`;
 
  // lib/info.mjs
  import { statSync as statSync7, openSync as openSync2, readSync as readSync2, closeSync as closeSync2 } from "node:fs";
- import { resolve as resolve5, isAbsolute as isAbsolute2, extname as extname2, basename as basename2 } from "node:path";
+ import { resolve as resolve5, isAbsolute as isAbsolute3, extname as extname2, basename as basename2 } from "node:path";
  var MAX_LINE_COUNT_SIZE = 10 * 1024 * 1024;
  var EXT_NAMES = {
  ".ts": "TypeScript source",
@@ -1937,7 +1947,7 @@ function detectBinary(filePath, size) {
  function fileInfo(filePath) {
  if (!filePath) throw new Error("Empty file path");
  const normalized = normalizePath(filePath);
- const abs = isAbsolute2(normalized) ? normalized : resolve5(process.cwd(), normalized);
+ const abs = isAbsolute3(normalized) ? normalized : resolve5(process.cwd(), normalized);
  const stat = statSync7(abs);
  if (!stat.isFile()) throw new Error(`Not a regular file: ${abs}`);
  const size = stat.size;
@@ -1961,19 +1971,18 @@ function fileInfo(filePath) {
  }
 
  // lib/setup.mjs
- import { readFileSync as readFileSync4, writeFileSync as writeFileSync2, existsSync as existsSync4, mkdirSync } from "node:fs";
- import { resolve as resolve6, dirname as dirname3 } from "node:path";
+ import { readFileSync as readFileSync4, writeFileSync as writeFileSync2, existsSync as existsSync5, mkdirSync, copyFileSync } from "node:fs";
+ import { resolve as resolve6, dirname as dirname3, join as join5 } from "node:path";
  import { fileURLToPath } from "node:url";
  import { homedir } from "node:os";
+ var STABLE_HOOK_DIR = resolve6(homedir(), ".claude", "hex-line");
+ var STABLE_HOOK_PATH = join5(STABLE_HOOK_DIR, "hook.mjs").replace(/\\/g, "/");
+ var HOOK_COMMAND = `node ${STABLE_HOOK_PATH}`;
  var __filename = fileURLToPath(import.meta.url);
  var __dirname = dirname3(__filename);
- var HOOK_SCRIPT = resolve6(__dirname, "..", "hook.mjs").replace(/\\/g, "/");
- var HOOK_COMMAND = `node ${HOOK_SCRIPT}`;
- var HOOK_SIGNATURE = "hex-line-mcp/hook.mjs";
- var NPX_MARKERS = ["_npx", "npx-cache", ".npm/_npx"];
- function isEphemeralInstall(scriptPath) {
- return NPX_MARKERS.some((m) => scriptPath.includes(m));
- }
+ var SOURCE_HOOK = resolve6(__dirname, "..", "hook.mjs");
+ var DIST_HOOK = resolve6(__dirname, "hook.mjs");
+ var HOOK_SIGNATURE = "hex-line";
  var CLAUDE_HOOKS = {
  SessionStart: {
  matcher: "*",
@@ -1989,7 +1998,7 @@ var CLAUDE_HOOKS = {
  }
  };
  function readJson(filePath) {
- if (!existsSync4(filePath)) return null;
+ if (!existsSync5(filePath)) return null;
  return JSON.parse(readFileSync4(filePath, "utf-8"));
  }
  function writeJson(filePath, data) {
@@ -2035,7 +2044,7 @@ function writeHooksToFile(settingsPath, label) {
  return `Claude (${label}): already configured`;
  }
  writeJson(settingsPath, config);
- return `Claude (${label}): hooks -> ${HOOK_SCRIPT} OK`;
+ return `Claude (${label}): hooks -> ${STABLE_HOOK_PATH} OK`;
  }
  function cleanLocalHooks() {
  const localPath = resolve6(process.cwd(), ".claude/settings.local.json");
@@ -2081,10 +2090,14 @@ function installOutputStyle() {
  return msg;
  }
  function setupClaude() {
- if (isEphemeralInstall(HOOK_SCRIPT)) {
- return "Claude: SKIPPED \u2014 hook.mjs is in npx cache (ephemeral). Install permanently: npm i -g @levnikolaevich/hex-line-mcp, then re-run setup_hooks.";
- }
  const results = [];
+ const hookSource = existsSync5(DIST_HOOK) ? DIST_HOOK : SOURCE_HOOK;
+ if (!existsSync5(hookSource)) {
+ return "Claude: FAILED \u2014 hook.mjs not found. Reinstall @levnikolaevich/hex-line-mcp.";
+ }
+ mkdirSync(STABLE_HOOK_DIR, { recursive: true });
+ copyFileSync(hookSource, STABLE_HOOK_PATH);
+ results.push(`hook.mjs -> ${STABLE_HOOK_PATH}`);
  const globalPath = resolve6(homedir(), ".claude/settings.json");
  results.push(writeHooksToFile(globalPath, "global"));
  results.push(cleanLocalHooks());
@@ -2263,7 +2276,7 @@ Summary: ${summary}`);
  
  // lib/bulk-replace.mjs
  import { writeFileSync as writeFileSync3, readdirSync as readdirSync3 } from "node:fs";
- import { resolve as resolve7, relative as relative3, join as join5 } from "node:path";
+ import { resolve as resolve7, relative as relative3, join as join6 } from "node:path";
  var ignoreMod;
  try {
  ignoreMod = await import("ignore");
@@ -2279,7 +2292,7 @@ function walkFiles(dir, rootDir, ig) {
  }
  for (const e of entries) {
  if (e.name === ".git" || e.name === "node_modules") continue;
- const full = join5(dir, e.name);
+ const full = join6(dir, e.name);
  const rel = relative3(rootDir, full).replace(/\\/g, "/");
  if (ig && ig.ignores(rel)) continue;
  if (e.isDirectory()) {
@@ -2291,21 +2304,21 @@ function walkFiles(dir, rootDir, ig) {
  return results;
  }
  function globMatch(filename, pattern) {
- const re = pattern.replace(/\./g, "\\.").replace(/\*\*/g, "\0").replace(/\*/g, "[^/]*").replace(/\0/g, ".*").replace(/\?/g, ".");
+ const re = pattern.replace(/\./g, "\\.").replace(/\{([^}]+)\}/g, (_, alts) => "(" + alts.split(",").join("|") + ")").replace(/\*\*/g, "\0").replace(/\*/g, "[^/]*").replace(/\0/g, ".*").replace(/\?/g, ".");
  return new RegExp("^" + re + "$").test(filename);
  }
  function loadGitignore2(rootDir) {
  if (!ignoreMod) return null;
  const ig = (ignoreMod.default || ignoreMod)();
  try {
- const content = readText(join5(rootDir, ".gitignore"));
+ const content = readText(join6(rootDir, ".gitignore"));
  ig.add(content);
  } catch {
  }
  return ig;
  }
  function bulkReplace(rootDir, globPattern, replacements, opts = {}) {
- const { dryRun = false, maxFiles = 100 } = opts;
+ const { dryRun = false, maxFiles = 100, format = "compact" } = opts;
  const abs = resolve7(normalizePath(rootDir));
  const ig = loadGitignore2(abs);
  const allFiles = walkFiles(abs, abs, ig);
@@ -2318,52 +2331,85 @@ function bulkReplace(rootDir, globPattern, replacements, opts = {}) {
  return `TOO_MANY_FILES: Found ${files.length} files, max_files is ${maxFiles}. Use more specific glob or increase max_files.`;
  }
  const results = [];
- let changed = 0, skipped = 0, errors = 0;
- const MAX_OUTPUT2 = MAX_OUTPUT_CHARS;
- let totalChars = 0;
+ let changed = 0, skipped = 0, errors = 0, totalReplacements = 0;
  for (const file of files) {
  try {
  const original = readText(file);
  let content = original;
+ let replacementCount = 0;
  for (const { old: oldText, new: newText } of replacements) {
- content = content.split(oldText).join(newText);
+ if (oldText === newText) continue;
+ const parts = content.split(oldText);
+ replacementCount += parts.length - 1;
+ content = parts.join(newText);
  }
  if (content === original) {
  skipped++;
  continue;
  }
- const diff = simpleDiff(original.split("\n"), content.split("\n"));
  if (!dryRun) {
  writeFileSync3(file, content, "utf-8");
  }
- const relPath = file.replace(abs, "").replace(/^[/\\]/, "");
- results.push(`--- ${relPath}
- ${diff || "(no visible diff)"}`);
+ const relPath = relative3(abs, file).replace(/\\/g, "/");
+ totalReplacements += replacementCount;
  changed++;
- totalChars += results[results.length - 1].length;
- if (totalChars > MAX_OUTPUT2) {
- const remaining = files.length - files.indexOf(file) - 1;
- if (remaining > 0) results.push(`OUTPUT_CAPPED: ${remaining} more files not shown. Output exceeded ${MAX_OUTPUT2} chars.`);
- break;
+ if (format === "full") {
+ const diff = simpleDiff(original.split("\n"), content.split("\n"));
+ let diffText = diff || "(no visible diff)";
+ const diffLines3 = diffText.split("\n");
+ if (diffLines3.length > MAX_PER_FILE_DIFF_LINES) {
+ const omitted = diffLines3.length - MAX_PER_FILE_DIFF_LINES;
+ diffText = diffLines3.slice(0, MAX_PER_FILE_DIFF_LINES).join("\n") + `
+ --- ${omitted} lines omitted ---`;
+ }
+ results.push(`--- ${relPath}: ${replacementCount} replacements
+ ${diffText}`);
+ } else {
+ results.push(`--- ${relPath}: ${replacementCount} replacements`);
  }
  } catch (e) {
  results.push(`ERROR: ${file}: ${e.message}`);
  errors++;
  }
  }
- const header = `Bulk replace: ${changed} files changed, ${skipped} skipped, ${errors} errors (dry_run: ${dryRun})`;
- return results.length ? `${header}
+ const header = `Bulk replace: ${changed} files changed (${totalReplacements} replacements), ${skipped} skipped, ${errors} errors (dry_run: ${dryRun})`;
+ let output = results.length ? `${header}
  
  ${results.join("\n\n")}` : header;
+ if (output.length > MAX_BULK_OUTPUT_CHARS) {
+ output = output.slice(0, MAX_BULK_OUTPUT_CHARS) + `
+ OUTPUT_CAPPED: Output exceeded ${MAX_BULK_OUTPUT_CHARS} chars.`;
+ }
+ return output;
  }
  
  // server.mjs
- var version = true ? "1.3.5" : (await null).createRequire(import.meta.url)("./package.json").version;
+ var version = true ? "1.4.0" : (await null).createRequire(import.meta.url)("./package.json").version;
  var { server, StdioServerTransport } = await createServerRuntime({
  name: "hex-line-mcp",
- version,
- installDir: "mcp/hex-line-mcp"
+ version
  });
+ var replacementPairsSchema = z2.array(
+ z2.object({ old: z2.string().min(1), new: z2.string() })
+ ).min(1);
+ function coerceEdit(e) {
+ if (!e || typeof e !== "object" || Array.isArray(e)) return e;
+ if (e.set_line || e.replace_lines || e.insert_after || e.replace_between || e.replace) return e;
+ if (e.anchor && !e.start_anchor && !e.end_anchor && !e.boundary_mode && !e.range_checksum) {
+ const raw = e.new_text ?? e.updated_lines ?? e.content ?? e.line;
+ if (raw !== void 0) {
+ const text = Array.isArray(raw) ? raw.join("\n") : raw;
+ return { set_line: { anchor: e.anchor, new_text: text } };
+ }
+ }
+ if (e.start_anchor && e.end_anchor && e.boundary_mode && e.new_text !== void 0) {
+ return { replace_between: { start_anchor: e.start_anchor, end_anchor: e.end_anchor, new_text: e.new_text, boundary_mode: e.boundary_mode } };
+ }
+ if (e.start_anchor && e.end_anchor && e.new_text !== void 0) {
+ return { replace_lines: { start_anchor: e.start_anchor, end_anchor: e.end_anchor, new_text: e.new_text, ...e.range_checksum ? { range_checksum: e.range_checksum } : {} } };
+ }
+ return e;
+ }
  server.registerTool("read_file", {
  title: "Read File",
  description: "Read a file with hash-annotated lines, range checksums, and current revision. Use offset/limit for targeted reads; use outline first for large code files.",
@@ -2402,8 +2448,8 @@ server.registerTool("edit_file", {
  description: "Apply revision-aware partial edits to one file. Prefer one batched call per file. Supports set_line, replace_lines, insert_after, and replace_between. For text rename/refactor use bulk_replace.",
  inputSchema: z2.object({
  path: z2.string().describe("File to edit"),
- edits: z2.string().describe(
- 'JSON array. Examples:\n{"set_line":{"anchor":"ab.12","new_text":"new"}} \u2014 replace line\n{"replace_lines":{"start_anchor":"ab.10","end_anchor":"cd.15","new_text":"...","range_checksum":"10-15:a1b2c3d4"}} \u2014 range\n{"replace_between":{"start_anchor":"ab.10","end_anchor":"cd.40","new_text":"...","boundary_mode":"inclusive"}} \u2014 block rewrite\n{"insert_after":{"anchor":"ab.20","text":"inserted"}} \u2014 insert below. For text rename use bulk_replace tool.'
+ edits: z2.union([z2.string(), z2.array(z2.any())]).describe(
+ 'JSON array. Types: set_line, replace_lines, insert_after, replace_between.\n[{"set_line":{"anchor":"ab.12","new_text":"x"}}]\n[{"replace_lines":{"start_anchor":"ab.10","end_anchor":"cd.15","new_text":"x","range_checksum":"10-15:a1b2"}}]\n[{"replace_between":{"start_anchor":"ab.10","end_anchor":"cd.40","new_text":"x","boundary_mode":"inclusive"}}]\n[{"insert_after":{"anchor":"ab.20","text":"x"}}]'
  ),
  dry_run: flexBool().describe("Preview changes without writing"),
  restore_indent: flexBool().describe("Auto-fix indentation to match anchor (default: false)"),
@@ -2414,12 +2460,18 @@ server.registerTool("edit_file", {
  }, async (rawParams) => {
  const { path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy } = coerceParams(rawParams);
  try {
- const parsed = JSON.parse(json);
+ let parsed;
+ try {
+ parsed = typeof json === "string" ? JSON.parse(json) : json;
+ } catch {
+ throw new Error('edits: invalid JSON. Expected: [{"set_line":{"anchor":"xx.N","new_text":"..."}}]');
+ }
  if (!Array.isArray(parsed) || !parsed.length) throw new Error("Edits: non-empty JSON array required");
+ const normalized = parsed.map(coerceEdit);
  return {
  content: [{
  type: "text",
- text: editFile(p, parsed, {
+ text: editFile(p, normalized, {
  dryRun: dry_run,
  restoreIndent: restore_indent,
  baseRevision: base_revision,
@@ -2452,7 +2504,7 @@ server.registerTool("write_file", {
  });
  server.registerTool("grep_search", {
  title: "Search Files",
- description: "Search file contents with ripgrep. Returns hash-annotated matches with per-group checksums for direct editing. Output modes: content (default, edit-ready hashes+checksums), files (paths only), count (match counts). For single-line edits: grep -> set_line directly. For range edits: use checksum from grep output. ALWAYS prefer over shell grep/rg/findstr.",
+ description: "Search file contents with ripgrep. Returns hash-annotated matches with checksums. Modes: content (default), files, count. Use checksums with set_line/replace_lines. Prefer over shell grep/rg.",
  inputSchema: z2.object({
  pattern: z2.string().describe("Search pattern (regex by default, literal if literal:true)"),
  path: z2.string().optional().describe("Search dir/file (default: cwd)"),
@@ -2513,7 +2565,7 @@ server.registerTool("grep_search", {
  });
  server.registerTool("outline", {
  title: "File Outline",
- description: "AST-based structural outline: functions, classes, interfaces with line ranges. 10-20 lines instead of 500 \u2014 95% token reduction. Use before reading large code files. NOT for .md/.json/.yaml \u2014 use read_file.",
+ description: "AST-based structural outline: functions, classes, interfaces with line ranges. Use before reading large code files. Not for .md/.json/.yaml.",
  inputSchema: z2.object({
  path: z2.string().describe("Source file path")
  }),
@@ -2548,7 +2600,7 @@ server.registerTool("verify", {
  });
  server.registerTool("directory_tree", {
  title: "Directory Tree",
- description: "Compact directory tree with root .gitignore support (path-based rules, negation). Supports pattern glob to find files/dirs by name (like find -name). Use to understand repo structure or find specific files/dirs. Skips node_modules, .git, dist by default.",
+ description: "Directory tree with .gitignore support. Pattern glob to find files/dirs by name. Skips node_modules, .git, dist.",
  inputSchema: z2.object({
  path: z2.string().describe("Directory path"),
  pattern: z2.string().optional().describe('Glob filter on names (e.g. "*-mcp", "*.mjs"). Returns flat match list instead of tree'),
@@ -2583,7 +2635,7 @@ server.registerTool("get_file_info", {
  });
  server.registerTool("setup_hooks", {
  title: "Setup Hooks",
- description: "Install or uninstall hex-line hooks in CLI agent settings. install: writes hooks to ~/.claude/settings.json, removes old per-project hooks. uninstall: removes hex-line hooks from global settings. Idempotent: re-running produces no changes if already in desired state.",
+ description: "Install or uninstall hex-line hooks in CLI agent settings. Idempotent.",
  inputSchema: z2.object({
  agent: z2.string().optional().describe('Target agent: "claude", "gemini", "codex", or "all" (default: "all")'),
  action: z2.string().optional().describe('"install" (default) or "uninstall"')
@@ -2599,7 +2651,7 @@ server.registerTool("setup_hooks", {
  });
  server.registerTool("changes", {
  title: "Semantic Diff",
- description: "Compare file or directory against git ref (default: HEAD). For files: shows added/removed/modified symbols at AST level. For directories: lists changed files with insertions/deletions stats. Use to understand what changed before committing.",
+ description: "Compare file or directory against git ref (default: HEAD). Shows added/removed/modified symbols or file stats.",
  inputSchema: z2.object({
  path: z2.string().describe("File or directory path"),
  compare_against: z2.string().optional().describe('Git ref to compare against (default: "HEAD")')
@@ -2615,25 +2667,32 @@ server.registerTool("changes", {
  });
  server.registerTool("bulk_replace", {
  title: "Bulk Replace",
- description: "Search-and-replace across multiple files. Finds files by glob, applies ordered text replacements, returns per-file diffs. Use dry_run:true to preview. For single-file rename, set glob to the filename.",
+ description: "Search-and-replace across multiple files. Finds files by glob, applies ordered text replacements. Default format is compact (summary only); use format:'full' for capped diffs. Use dry_run:true to preview. For single-file rename, set glob to the filename.",
  inputSchema: z2.object({
- replacements: z2.string().describe('JSON array of {old, new} pairs: [{"old":"foo","new":"bar"}]'),
+ replacements: z2.union([z2.string(), replacementPairsSchema]).describe('JSON array of {old, new} pairs: [{"old":"foo","new":"bar"}]'),
  glob: z2.string().optional().describe('File glob (default: "**/*.{md,mjs,json,yml,ts,js}")'),
  path: z2.string().optional().describe("Root directory (default: cwd)"),
  dry_run: flexBool().describe("Preview without writing (default: false)"),
- max_files: flexNum().describe("Max files to process (default: 100)")
+ max_files: flexNum().describe("Max files to process (default: 100)"),
+ format: z2.enum(["compact", "full"]).optional().describe('"compact" (default) = summary only, "full" = include capped diffs')
  }),
  annotations: { readOnlyHint: false, destructiveHint: true, idempotentHint: false }
  }, async (rawParams) => {
  try {
  const params = coerceParams(rawParams);
- const replacements = JSON.parse(params.replacements);
- if (!Array.isArray(replacements) || !replacements.length) throw new Error("replacements: non-empty JSON array of {old, new} required");
+ const raw = params.replacements;
+ let replacementsInput;
+ try {
+ replacementsInput = typeof raw === "string" ? JSON.parse(raw) : raw;
+ } catch {
+ throw new Error('replacements: invalid JSON. Expected: [{"old":"text","new":"replacement"}]');
+ }
+ const replacements = replacementPairsSchema.parse(replacementsInput);
  const result = bulkReplace(
  params.path || process.cwd(),
  params.glob || "**/*.{md,mjs,json,yml,ts,js}",
  replacements,
- { dryRun: params.dry_run || false, maxFiles: params.max_files || 100 }
+ { dryRun: params.dry_run || false, maxFiles: params.max_files || 100, format: params.format }
  );
  return { content: [{ type: "text", text: result }] };
  } catch (e) {
package/output-style.md CHANGED
@@ -12,15 +12,17 @@ keep-coding-instructions: true
  |-----------|-----|-----|
  | Read | `mcp__hex-line__read_file` | Hash-annotated, revision-aware |
  | Edit | `mcp__hex-line__edit_file` | Hash-verified anchors + conservative auto-rebase |
- | Write | `mcp__hex-line__write_file` | Consistent workflow |
+ | Write | `mcp__hex-line__write_file` | No prior Read needed |
  | Grep | `mcp__hex-line__grep_search` | Hash-annotated matches |
  | Edit (text rename) | `mcp__hex-line__bulk_replace` | Multi-file text rename/refactor |
+ | Bash `find`/`tree` | `mcp__hex-line__directory_tree` | Pattern search, gitignore-aware |
  
  ## Efficient File Reading
  
  For UNFAMILIAR code files >100 lines, PREFER:
- 1. `outline` first (10-20 lines of structure)
+ 1. `outline` first (code files only — not .md/.json/.yaml)
  2. `read_file` with offset/limit for the specific section you need
+ 3. Batch: `paths` array reads multiple files in one call
  
  Avoid reading a large file in full — outline+targeted read saves 75% tokens.
  
@@ -33,7 +35,7 @@ Prefer:
  1. collect all known hunks for one file
  2. send one `edit_file` call with batched edits
  3. carry `revision` from `read_file` into `base_revision` on follow-up edits
- 4. use `replace_between` for large block rewrites
+ 4. edit types: `set_line` (1 line), `replace_lines` (range + checksum), `insert_after`, `replace_between` (large blocks)
  5. use `verify` before rereading a file after staleness
  
  Avoid:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@levnikolaevich/hex-line-mcp",
- "version": "1.3.5",
+ "version": "1.4.0",
  "mcpName": "io.github.levnikolaevich/hex-line-mcp",
  "type": "module",
  "description": "Hash-verified file editing MCP + token efficiency hook for AI coding agents. 11 tools: read, edit, write, grep, outline, verify, directory_tree, file_info, setup_hooks, changes, bulk_replace.",
@@ -28,7 +28,7 @@
  "_dep_notes": {
  "web-tree-sitter": "Pinned ^0.25.0: v0.26 ABI incompatible with tree-sitter-wasms 0.1.x (built with tree-sitter-cli 0.20.8). Language.load() silently fails.",
  "zod": "Pinned ^3.25.0: zod 4 breaks zod-to-json-schema (used by MCP SDK internally). Tool parameter descriptions not sent to clients. Revisit when MCP SDK switches to z.toJSONSchema().",
- "better-sqlite3": "Optional. Used only by lib/graph-enrich.mjs for readonly access to hex-graph .codegraph/index.db. Graceful fallback if absent."
+ "better-sqlite3": "Optional. Used only by lib/graph-enrich.mjs for readonly access to hex-graph .hex-skills/codegraph/index.db. Graceful fallback if absent."
  },
  "dependencies": {
  "@modelcontextprotocol/sdk": "^1.27.0",