@levnikolaevich/hex-line-mcp 1.4.0 → 1.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -32,10 +32,10 @@ Advanced / occasional:
 
 | Tool | Description | Key Feature |
 |------|-------------|-------------|
-| `read_file` | Read file with hash-annotated lines, checksums, and revision | Partial reads via `offset`/`limit` |
+| `read_file` | Read file with hash-annotated lines, checksums, and revision | Partial reads via `offset`/`limit` or `ranges`, compact output by default |
 | `edit_file` | Revision-aware anchor edits (`set_line`, `replace_lines`, `insert_after`, `replace_between`) | Batched same-file edits + conservative auto-rebase |
 | `write_file` | Create new file or overwrite, auto-creates parent dirs | Path validation, no hash overhead |
-| `grep_search` | Search with ripgrep, 3 output modes, per-group checksums | Edit-ready: grep -> edit directly with checksums |
+| `grep_search` | Search with ripgrep, 3 output modes, per-group checksums | Plain `files`/`count`, compact edit-ready `content` |
 | `outline` | AST-based structural overview via tree-sitter WASM | 95% token reduction (10 lines instead of 500) |
 | `verify` | Check if held checksums / revision are still current | Staleness check without full re-read |
 | `directory_tree` | Compact directory tree with root .gitignore support | Skips node_modules/.git, shows file sizes |
@@ -48,10 +48,10 @@ Advanced / occasional:
 
 | Event | Trigger | Action |
 |-------|---------|--------|
-| **PreToolUse** | Read/Edit/Write/Grep on text files | Blocks built-in, forces hex-line tools |
+| **PreToolUse** | Read/Edit/Write/Grep on text files | Size-aware redirect: cheap small operations may pass, expensive ones are redirected |
 | **PreToolUse** | Bash with dangerous commands | Blocks `rm -rf /`, `git push --force`, etc. Agent must confirm with user |
 | **PostToolUse** | Bash with 50+ lines output | RTK: deduplicates, truncates, shows filtered summary to Claude as feedback |
-| **SessionStart** | Session begins | Injects full tool preference list into agent context |
+| **SessionStart** | Session begins | Injects a short no-discovery workflow for hex-line tools |
 
 
 ### Bash Redirects
@@ -90,17 +90,17 @@ The `setup_hooks` tool automatically installs the output style to `~/.claude/out
 
 ## Benchmarking
 
-Two-tier benchmark architecture:
+Two benchmark layers:
 
-- `/benchmark-compare` — real 1:1 comparison (runs inside Claude Code, calls BOTH built-in and hex-line tools on same files)
-- `npm run benchmark` — hex-line standalone metrics (Node.js, all real library calls, no simulations)
+- `/benchmark-compare` — balanced built-in vs hex-line comparison inside Claude Code, validated by scenario manifests and saved diffs
+- `npm run benchmark` — hex-line standalone workflow metrics (Node.js, all real library calls, no simulations)
 
 ```bash
 npm run benchmark -- --repo /path/to/repo
 npm run benchmark:diagnostic -- --repo /path/to/repo
 ```
 
-Current hex-line workflow metrics on the `hex-line-mcp` repo (all real library calls):
+Current standalone workflow metrics on the `hex-line-mcp` repo (all real library calls):
 
 | # | Workflow | Hex-line output | Ops |
 |---|----------|---------:|----:|
@@ -110,7 +110,7 @@ Current hex-line workflow metrics on the `hex-line-mcp` repo (all real library c
 | W4 | Inspect large smoke test before edit | 2,322 chars | 3 |
 | W5 | Follow-up edit after unrelated line shift | 1,267 chars | 3 |
 
-Workflow total: `6,403` chars across `12` ops. Run `/benchmark-compare` in Claude Code for full built-in vs hex-line comparison with real tool calls on both sides.
+Workflow total: `6,403` chars across `12` ops. Run `/benchmark-compare` for the balanced scenario suite with activation checks and diff-based correctness.
 
 ### Optional Graph Enrichment
 
@@ -152,7 +152,7 @@ Use `bulk_replace` for text rename patterns across one or more files. Returns co
 
 ### read_file
 
-Read a file with FNV-1a hash-annotated lines, range checksums, file checksum, and revision. Supports directory listing.
+Read a file with FNV-1a hash-annotated lines, range checksums, file checksum, and revision. Supports batch reads, multi-range reads, and directory listing.
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
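The FNV-1a line hashing mentioned in this hunk can be sketched standalone. This is a minimal 32-bit FNV-1a in plain JavaScript for illustration only; hex-line-mcp's actual tag derivation from these hashes may differ.

```javascript
// Minimal 32-bit FNV-1a sketch (illustration; not hex-line-mcp's exact tag encoding).
function fnv1a(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    // Multiply by the FNV prime 16777619 via shifts, staying in 32-bit space
    hash = (hash + ((hash << 1) + (hash << 4) + (hash << 7) + (hash << 8) + (hash << 24))) >>> 0;
  }
  return hash >>> 0;
}

console.log(fnv1a("a").toString(16)); // standard FNV-1a test vector: e40c292c
```

Hashing per line gives each line a content-derived tag, which is what lets edits anchor to a line even after unrelated lines shift.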
@@ -160,17 +160,22 @@ Read a file with FNV-1a hash-annotated lines, range checksums, file checksum, an
 | `paths` | string[] | no | Array of file paths to read (batch mode) |
 | `offset` | number | no | Start line, 1-indexed (default: 1) |
 | `limit` | number | no | Max lines to return (default: 2000, 0 = all) |
+| `ranges` | array | no | Explicit line ranges, e.g. `[{ "start": 10, "end": 30 }]` |
+| `include_graph` | boolean | no | Opt in to graph annotations when the graph index exists |
 | `plain` | boolean | no | Omit hashes, output `lineNum\|content` instead |
 
-Output format:
+Default output is compact:
 
 ```
+File: lib/search.mjs
+meta: lines 1-20 of 282
+revision: rev-12-a1b2c3d4
+file: 1-282:beefcafe
+
 ab.1 import { resolve } from "node:path";
 cd.2 import { readFileSync } from "node:fs";
 ...
 checksum: 1-50:f7e2a1b0
-revision: rev-12-a1b2c3d4
-file: 1-120:beefcafe
 ```
 
 ### edit_file
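For illustration, the new `ranges` parameter accepts two spellings in this release. These are hypothetical argument objects only; the string form is parsed server-side (see `parseRangeEntry` in the server.mjs diff below in this same release).

```javascript
// Hypothetical read_file argument shapes for the new `ranges` parameter.
const byObjects = { path: "lib/search.mjs", ranges: [{ start: 10, end: 30 }] };
const byStrings = { path: "lib/search.mjs", ranges: ["10-25", "40-"] }; // "40-" is open-ended

const totalRanges = byObjects.ranges.length + byStrings.ranges.length;
console.log(totalRanges); // 3
```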
@@ -203,6 +208,7 @@ Result footer includes:
 - `revision: ...`
 - `file: ...`
 - `changed_ranges: ...` when relevant
+- `remapped_refs: ...` when stale anchors were uniquely relocated
 - `retry_checksum: ...` on local conflicts
 
 ### write_file
@@ -216,7 +222,7 @@ Create a new file or overwrite an existing one. Creates parent directories autom
 
 ### grep_search
 
-Search file contents using ripgrep. Three output modes: `content` (hash-annotated with checksums), `files` (paths only), `count` (match counts).
+Search file contents using ripgrep. Three output modes: `content` (hash-annotated with checksums), `files` (plain path list), `count` (plain `file:count` list).
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
@@ -236,7 +242,7 @@ Search file contents using ripgrep. Three output modes: `content` (hash-annotate
 | `total_limit` | number | no | Total match events across all files; multiline matches count as 1 (0 = unlimited) |
 | `plain` | boolean | no | Omit hash tags, return `file:line:content` |
 
-**Content mode** returns per-group checksums enabling direct `replace_lines` from grep results without intermediate `read_file`.
+`content` mode returns per-group checksums enabling direct `replace_lines` from grep results without intermediate `read_file`.
 
 ### outline
 
@@ -257,7 +263,7 @@ Check if range checksums from a prior read are still valid, optionally relative
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
 | `path` | string | yes | File path |
-| `checksums` | string | yes | JSON array of checksum strings, e.g. `["1-50:f7e2a1b0"]` |
+| `checksums` | string[] | yes | Array of checksum strings, e.g. `["1-50:f7e2a1b0"]` |
 | `base_revision` | string | no | Prior revision to compare against latest state |
 
 Returns a single-line confirmation or lists changed ranges.
@@ -318,7 +324,7 @@ Configuration constants in `hook.mjs`:
 
 ### SessionStart: Tool Preferences
 
-Injects hex-line tool preference list into agent context at session start.
+Injects a short operational workflow into agent context at session start: no `ToolSearch`, prefer `outline -> read_file -> edit_file -> verify`, and use targeted reads over full-file reads.
 
 ## Architecture
 
package/dist/hook.mjs CHANGED
@@ -54,7 +54,7 @@ function normalizeOutput(text, opts = {}) {
 }
 
 // hook.mjs
-import { readFileSync } from "node:fs";
+import { readFileSync, statSync } from "node:fs";
 import { resolve } from "node:path";
 import { homedir } from "node:os";
 import { fileURLToPath } from "node:url";
@@ -183,10 +183,31 @@ var CMD_PATTERNS = [
 var LINE_THRESHOLD = 50;
 var HEAD_LINES = 15;
 var TAIL_LINES = 15;
+var LARGE_FILE_BYTES = 15 * 1024;
+var LARGE_EDIT_CHARS = 1200;
 function extOf(filePath) {
   const dot = filePath.lastIndexOf(".");
   return dot !== -1 ? filePath.slice(dot).toLowerCase() : "";
 }
+function getFilePath(toolInput) {
+  return toolInput.file_path || toolInput.path || "";
+}
+function resolveToolPath(filePath) {
+  if (!filePath) return "";
+  if (filePath.startsWith("~/")) return resolve(homedir(), filePath.slice(2));
+  return resolve(process.cwd(), filePath);
+}
+function getFileSize(filePath) {
+  if (!filePath) return null;
+  try {
+    return statSync(resolveToolPath(filePath)).size;
+  } catch {
+    return null;
+  }
+}
+function isPartialRead(toolInput) {
+  return [toolInput.offset, toolInput.limit, toolInput.start_line, toolInput.end_line, toolInput.ranges].some((value) => value !== void 0 && value !== null && value !== "");
+}
 function detectCommandType(cmd) {
   for (const [re, type] of CMD_PATTERNS) {
     if (re.test(cmd)) return type;
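The size gate these helpers implement can be demonstrated in isolation. A standalone sketch under stated assumptions: `isPartialRead` and the threshold are copied from the hunk above, while `shouldAllowRead` is a hypothetical helper name that combines them the way the PreToolUse handler later does.

```javascript
// Standalone sketch of the hook's size-aware Read gate (threshold from the diff above).
const LARGE_FILE_BYTES = 15 * 1024;

function isPartialRead(toolInput) {
  return [toolInput.offset, toolInput.limit, toolInput.start_line, toolInput.end_line, toolInput.ranges]
    .some((value) => value !== undefined && value !== null && value !== "");
}

// Hypothetical helper: a built-in Read passes through when it is already partial,
// or when the file is known to be small; otherwise it gets redirected.
function shouldAllowRead(toolInput, fileSize) {
  return isPartialRead(toolInput) || (fileSize !== null && fileSize <= LARGE_FILE_BYTES);
}

console.log(shouldAllowRead({ offset: 10 }, null)); // true: partial read
console.log(shouldAllowRead({}, 4 * 1024));         // true: small file
console.log(shouldAllowRead({}, 64 * 1024));        // false: large full read, redirected
```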
@@ -249,7 +270,8 @@ function handlePreToolUse(data) {
   }
   const hintKey = TOOL_REDIRECT_MAP[toolName];
   if (hintKey) {
-    const filePath = toolInput.file_path || toolInput.path || "";
+    const filePath = getFilePath(toolInput);
+    const fileSize = getFileSize(filePath);
     if (BINARY_EXT.has(extOf(filePath))) {
       process.exit(0);
     }
@@ -272,10 +294,30 @@ function handlePreToolUse(data) {
         process.exit(0);
       }
     }
-    const hint = TOOL_HINTS[hintKey];
-    const toolName2 = hint.split(" (")[0];
-    const pathNote = filePath ? ` with path="${filePath}"` : "";
-    block(`Use ${toolName2}${pathNote}`, hint);
+    if (toolName === "Read") {
+      if (isPartialRead(toolInput) || fileSize !== null && fileSize <= LARGE_FILE_BYTES) {
+        process.exit(0);
+      }
+      const target = filePath ? `Use mcp__hex-line__outline or mcp__hex-line__read_file with path="${filePath}"` : "Use mcp__hex-line__directory_tree or mcp__hex-line__read_file";
+      block(target, "For large or unknown full reads: call outline first, then read_file with offset/limit or ranges. Do not use built-in Read here.");
+    }
+    if (toolName === "Edit") {
+      const oldText = String(toolInput.old_string || "");
+      const isLargeEdit = Boolean(toolInput.replace_all) || oldText.length > LARGE_EDIT_CHARS || fileSize !== null && fileSize > LARGE_FILE_BYTES;
+      if (!isLargeEdit) {
+        process.exit(0);
+      }
+      const target = filePath ? `Use mcp__hex-line__grep_search or mcp__hex-line__read_file, then mcp__hex-line__edit_file with path="${filePath}"` : "Use mcp__hex-line__grep_search or mcp__hex-line__read_file, then mcp__hex-line__edit_file";
+      block(target, "For large or repeated edits: locate anchors/checksums first, then call edit_file once with batched edits.");
+    }
+    if (toolName === "Write") {
+      const pathNote = filePath ? ` with path="${filePath}"` : "";
+      block(`Use mcp__hex-line__write_file${pathNote}`, TOOL_HINTS.Write);
+    }
+    if (toolName === "Grep") {
+      const pathNote = filePath ? ` with path="${filePath}"` : "";
+      block(`Use mcp__hex-line__grep_search${pathNote}`, TOOL_HINTS.Grep);
+    }
   }
   if (toolName === "Bash") {
     const command = (toolInput.command || "").trim();
@@ -381,22 +423,8 @@ function handleSessionStart() {
     } catch {
     }
   }
-  if (styleActive) {
-    process.stdout.write(JSON.stringify({ systemMessage: "hex-line Output Style active." }));
-    process.exit(0);
-  }
-  const seen = /* @__PURE__ */ new Set();
-  const lines = [];
-  for (const hint of Object.values(TOOL_HINTS)) {
-    const tool = hint.split(" ")[0];
-    if (!seen.has(tool)) {
-      seen.add(tool);
-      lines.push(`- ${hint}`);
-    }
-  }
-  lines.push("Exceptions: images, PDFs, notebooks, .claude/settings.json, .claude/settings.local.json \u2192 built-in Read; Glob always OK");
-  lines.push("Bash OK for: npm/node/git/docker/curl, pipes, scripts");
-  const msg = "Hex-line MCP available. Workflow:\n- Discovery: read_file, grep_search, outline, directory_tree\n- Same-file edits: prefer ONE edit_file call per file, carry revision/base_revision\n- Hash edits: edit_file (set_line, replace_lines, insert_after, replace_between)\n- Large rewrites: replace_between instead of reciting old blocks\n- Text rename: bulk_replace (multi-file search-replace)\n- Verify staleness: verify before considering reread\n- Write new: write_file\n" + lines.join("\n");
+  const prefix = styleActive ? "Hex-line MCP available. Output style active.\n" : "Hex-line MCP available.\n";
+  const msg = prefix + "Call hex-line tools directly. Do not use ToolSearch for hex-line tools.\nWorkflow:\n- Discovery: outline for large code files, read_file for targeted reads, grep_search for symbol/text lookup\n- Read cheaply: prefer offset/limit or ranges; avoid full-file Read on large files\n- Edit safely: read/grep first, then one batched edit_file call per file with base_revision when available\n- Verify before reread: use verify to check checksums or revision freshness\n- Multi-file rename/refactor: use bulk_replace\n- New files: use write_file\nExceptions: images, PDFs, notebooks, .claude/settings.json, .claude/settings.local.json use built-in Read. Glob is always OK.";
   process.stdout.write(JSON.stringify({ systemMessage: msg }));
   process.exit(0);
 }
package/dist/server.mjs CHANGED
@@ -669,6 +669,24 @@ function buildRangeChecksum(snapshot, startLine, endLine) {
 
 // lib/read.mjs
 var DEFAULT_LIMIT = 2e3;
+function parseRangeEntry(entry, total) {
+  if (typeof entry === "string") {
+    const match = entry.trim().match(/^(\d+)(?:-(\d*)?)?$/);
+    if (!match) throw new Error(`Invalid range "${entry}". Use "10", "10-25", or "10-"`);
+    const start2 = Number(match[1]);
+    const end2 = match[2] === void 0 || match[2] === "" ? total : Number(match[2]);
+    return { start: start2, end: end2 };
+  }
+  if (!entry || typeof entry !== "object") {
+    throw new Error("ranges entries must be strings or {start,end} objects");
+  }
+  const start = Number(entry.start ?? 1);
+  const end = entry.end === void 0 || entry.end === null ? total : Number(entry.end);
+  if (!Number.isFinite(start) || !Number.isFinite(end)) {
+    throw new Error("ranges entries must contain numeric start/end values");
+  }
+  return { start, end };
+}
 function readFile2(filePath, opts = {}) {
   filePath = normalizePath(filePath);
   const real = validatePath(filePath);
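The accepted range spellings can be exercised with a standalone copy of the parser. The logic below is copied from the hunk above (with `void 0` written as `undefined`); the example calls are illustrative.

```javascript
// Standalone copy of the new range parser, for illustration.
function parseRangeEntry(entry, total) {
  if (typeof entry === "string") {
    const match = entry.trim().match(/^(\d+)(?:-(\d*)?)?$/);
    if (!match) throw new Error(`Invalid range "${entry}". Use "10", "10-25", or "10-"`);
    const start = Number(match[1]);
    const end = match[2] === undefined || match[2] === "" ? total : Number(match[2]);
    return { start, end };
  }
  if (!entry || typeof entry !== "object") {
    throw new Error("ranges entries must be strings or {start,end} objects");
  }
  const start = Number(entry.start ?? 1);
  const end = entry.end === undefined || entry.end === null ? total : Number(entry.end);
  if (!Number.isFinite(start) || !Number.isFinite(end)) {
    throw new Error("ranges entries must contain numeric start/end values");
  }
  return { start, end };
}

console.log(parseRangeEntry("10-25", 100));       // { start: 10, end: 25 }
console.log(parseRangeEntry("10-", 100));         // { start: 10, end: 100 } (open-ended)
console.log(parseRangeEntry({ start: 40 }, 100)); // { start: 40, end: 100 }
```

Note that a missing `end` in either form defaults to the file's total line count, so "10-" means "from line 10 to end of file".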
@@ -686,10 +704,13 @@ ${text}
   const total = lines.length;
   let ranges;
   if (opts.ranges && opts.ranges.length > 0) {
-    ranges = opts.ranges.map((r) => ({
-      start: Math.max(1, r.start || 1),
-      end: Math.min(total, r.end || total)
-    }));
+    ranges = opts.ranges.map((entry) => {
+      const parsed = parseRangeEntry(entry, total);
+      return {
+        start: Math.max(1, parsed.start),
+        end: Math.min(total, parsed.end)
+      };
+    });
   } else {
     const startLine = Math.max(1, opts.offset || 1);
     const maxLines = opts.limit && opts.limit > 0 ? opts.limit : DEFAULT_LIMIT;
@@ -719,24 +740,22 @@ ${text}
     range.end = actualEnd;
     parts.push(formatted.join("\n"));
     const cs = rangeChecksum(lineHashes, range.start, actualEnd);
-    parts.push(`
-checksum: ${cs}`);
+    parts.push(`checksum: ${cs}`);
     if (cappedAtLine) break;
   }
   const sizeKB = (stat.size / 1024).toFixed(1);
-  const mtime = stat.mtime;
-  const ago = relativeTime(mtime);
-  let header = `File: ${filePath} (${total} lines, ${sizeKB}KB, ${ago})`;
+  const ago = relativeTime(stat.mtime);
+  let meta = `${total} lines, ${sizeKB}KB, ${ago}`;
   if (ranges.length === 1) {
     const r = ranges[0];
     if (r.start > 1 || r.end < total) {
-      header += ` [showing ${r.start}-${r.end}]`;
+      meta += `, showing ${r.start}-${r.end}`;
     }
     if (r.end < total) {
-      header += ` (${total - r.end} more below)`;
+      meta += `, ${total - r.end} more below`;
     }
   }
-  const db = getGraphDB(real);
+  const db = opts.includeGraph ? getGraphDB(real) : null;
   const relFile = db ? getRelativePath(real) : null;
   let graphLine = "";
   if (db && relFile) {
@@ -750,18 +769,12 @@ checksum: ${cs}`);
 Graph: ${items.join(" | ")}`;
     }
   }
-  let result = `${header}${graphLine}
+  let result = `File: ${filePath}${graphLine}
+meta: ${meta}
 revision: ${snapshot.revision}
 file: ${snapshot.fileChecksum}
 
-\`\`\`
-${parts.join("\n")}
-\`\`\``;
-  if (total > 200 && (!opts.offset || opts.offset <= 1) && !cappedAtLine) {
-    result += `
-
-\u26A1 Tip: This file has ${total} lines. Use outline first, then read_file with offset/limit for 75% fewer tokens.`;
-  }
+${parts.join("\n\n")}`;
   if (cappedAtLine) {
     result += `
 
@@ -794,6 +807,26 @@ function buildErrorSnippet(lines, centerIdx, radius = 5) {
   }).join("\n");
   return { start: start + 1, end, text };
 }
+function stripAnchorOrDiffPrefix(line) {
+  let next = line;
+  next = next.replace(/^\s*(?:>>| )?[a-z2-7]{2}\.\d+\t/, "");
+  next = next.replace(/^.+:(?:>>| )[a-z2-7]{2}\.\d+\t/, "");
+  next = next.replace(/^[ +-]\d+\|\s?/, "");
+  return next;
+}
+function sanitizeEditText(text) {
+  const original = String(text ?? "");
+  const hadTrailingNewline = original.endsWith("\n");
+  let lines = original.split("\n");
+  const nonEmpty = lines.filter((line) => line.length > 0);
+  if (nonEmpty.length > 0 && nonEmpty.every((line) => /^\+(?!\+)/.test(line))) {
+    lines = lines.map((line) => line.startsWith("+") && !line.startsWith("++") ? line.slice(1) : line);
+  }
+  lines = lines.map(stripAnchorOrDiffPrefix);
+  let cleaned = lines.join("\n");
+  if (hadTrailingNewline && !cleaned.endsWith("\n")) cleaned += "\n";
+  return cleaned;
+}
 function findLine(lines, lineNum, expectedTag, hashIndex) {
   const idx = lineNum - 1;
   if (idx < 0 || idx >= lines.length) {
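The sanitizer's effect can be shown standalone. The functions below are copied from the hunk above; the example inputs (a pasted all-`+` diff snippet and a pasted hash-anchored line) are illustrative.

```javascript
// Standalone copy of the edit-text sanitizer, for illustration.
function stripAnchorOrDiffPrefix(line) {
  let next = line;
  next = next.replace(/^\s*(?:>>| )?[a-z2-7]{2}\.\d+\t/, "");
  next = next.replace(/^.+:(?:>>| )[a-z2-7]{2}\.\d+\t/, "");
  next = next.replace(/^[ +-]\d+\|\s?/, "");
  return next;
}

function sanitizeEditText(text) {
  const original = String(text ?? "");
  const hadTrailingNewline = original.endsWith("\n");
  let lines = original.split("\n");
  const nonEmpty = lines.filter((line) => line.length > 0);
  // Only strip leading "+" when EVERY non-empty line looks like an added diff line.
  if (nonEmpty.length > 0 && nonEmpty.every((line) => /^\+(?!\+)/.test(line))) {
    lines = lines.map((line) => line.startsWith("+") && !line.startsWith("++") ? line.slice(1) : line);
  }
  lines = lines.map(stripAnchorOrDiffPrefix);
  let cleaned = lines.join("\n");
  if (hadTrailingNewline && !cleaned.endsWith("\n")) cleaned += "\n";
  return cleaned;
}

console.log(sanitizeEditText("+const a = 1;\n+const b = 2;")); // diff "+" prefixes removed
console.log(sanitizeEditText("ab.12\treturn x;"));             // hash-anchor prefix removed
```

The all-or-nothing `+` rule matters: real code lines that legitimately start with `+` (e.g. `++i;`) are left untouched unless every line looks diff-formatted.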
@@ -909,6 +942,7 @@ function buildConflictMessage({
   centerIdx,
   changedRanges,
   retryChecksum,
+  remaps,
   details
 }) {
   const safeCenter = Math.max(0, Math.min(lines.length - 1, centerIdx));
@@ -921,6 +955,9 @@ file: ${fileChecksum}`;
 changed_ranges: ${describeChangedRanges(changedRanges)}`;
   if (retryChecksum) msg += `
 retry_checksum: ${retryChecksum}`;
+  if (remaps?.length) msg += `
+remapped_refs:
+${remaps.map(({ from, to }) => `${from} -> ${to}`).join("\n")}`;
   msg += `
 
 ${details}
@@ -955,6 +992,8 @@ function editFile(filePath, edits, opts = {}) {
   const hadTrailingNewline = original.endsWith("\n");
   const hashIndex = currentSnapshot.uniqueTagIndex;
   let autoRebased = false;
+  const remaps = [];
+  const remapKeys = /* @__PURE__ */ new Set();
   const anchored = [];
   for (const e of edits) {
     if (e.set_line || e.replace_lines || e.insert_after || e.replace_between) anchored.push(e);
@@ -1011,12 +1050,24 @@ function editFile(filePath, edits, opts = {}) {
       centerIdx,
       changedRanges: staleRevision && hasBaseSnapshot ? changedRanges : null,
       retryChecksum,
+      remaps,
       details
     });
   };
+  const trackRemap = (ref, idx) => {
+    const actualRef = `${lineTag(fnv1a(lines[idx]))}.${idx + 1}`;
+    const expectedRef = `${ref.tag}.${ref.line}`;
+    if (actualRef === expectedRef) return;
+    const key = `${expectedRef}->${actualRef}`;
+    if (remapKeys.has(key)) return;
+    remapKeys.add(key);
+    remaps.push({ from: expectedRef, to: actualRef });
+  };
   const locateOrConflict = (ref, reason = "stale_anchor") => {
     try {
-      return findLine(lines, ref.line, ref.tag, hashIndex);
+      const idx = findLine(lines, ref.line, ref.tag, hashIndex);
+      trackRemap(ref, idx);
+      return idx;
     } catch (e) {
       if (conflictPolicy !== "conservative" || !staleRevision) throw e;
       const centerIdx = Math.max(0, Math.min(lines.length - 1, ref.line - 1));
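The dedup in `trackRemap` can be shown with a simplified sketch. Assumptions: refs are passed in as preformatted `tag.line` strings, and the `lineTag`/`fnv1a` pipeline is replaced by the caller supplying `actualRef` directly.

```javascript
// Simplified sketch of remap tracking with Set-based dedup (refs as plain strings).
const remaps = [];
const remapKeys = new Set();

function trackRemap(expectedRef, actualRef) {
  if (actualRef === expectedRef) return; // anchor resolved where expected: nothing to report
  const key = `${expectedRef}->${actualRef}`;
  if (remapKeys.has(key)) return;        // report each relocation only once
  remapKeys.add(key);
  remaps.push({ from: expectedRef, to: actualRef });
}

trackRemap("ab.10", "ab.12"); // line moved: recorded
trackRemap("ab.10", "ab.12"); // duplicate: ignored
trackRemap("cd.5", "cd.5");   // unchanged: ignored

console.log(remaps.length); // 1
```

This is what feeds the new `remapped_refs:` footer, so the agent learns where its stale anchors actually landed without a full re-read.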
@@ -1058,7 +1109,7 @@ function editFile(filePath, edits, opts = {}) {
       lines.splice(idx, 1);
     } else {
       const origLine = [lines[idx]];
-      const raw = String(txt).split("\n");
+      const raw = sanitizeEditText(txt).split("\n");
       const newLines = opts.restoreIndent ? restoreIndent(origLine, raw) : raw;
       lines.splice(idx, 1, ...newLines);
     }
@@ -1070,7 +1121,7 @@ function editFile(filePath, edits, opts = {}) {
     if (typeof idx === "string") return idx;
     const conflict = ensureRevisionContext(idx + 1, idx + 1, idx);
     if (conflict) return conflict;
-    let insertLines = e.insert_after.text.split("\n");
+    let insertLines = sanitizeEditText(e.insert_after.text).split("\n");
     if (opts.restoreIndent) insertLines = restoreIndent([lines[idx]], insertLines);
     lines.splice(idx + 1, 0, ...insertLines);
     continue;
@@ -1134,7 +1185,7 @@ Retry with fresh checksum ${actual}, or use set_line with hashes above.`
       lines.splice(si, ei - si + 1);
     } else {
       const origRange = lines.slice(si, ei + 1);
-      let newLines = String(txt).split("\n");
+      let newLines = sanitizeEditText(txt).split("\n");
       if (opts.restoreIndent) newLines = restoreIndent(origRange, newLines);
       lines.splice(si, ei - si + 1, ...newLines);
     }
@@ -1158,7 +1209,7 @@ Retry with fresh checksum ${actual}, or use set_line with hashes above.`
     const conflict = ensureRevisionContext(targetRange.start, targetRange.end, si);
     if (conflict) return conflict;
     const txt = e.replace_between.new_text;
-    let newLines = sanitizeEditText(txt ?? "").split("\n");
+    let newLines = sanitizeEditText(txt ?? "").split("\n");
     const sliceStart = boundaryMode === "exclusive" ? si + 1 : si;
     const removeCount = boundaryMode === "exclusive" ? Math.max(0, ei - si - 1) : ei - si + 1;
     const origRange = lines.slice(sliceStart, sliceStart + removeCount);
@@ -1219,6 +1270,11 @@ file: ${nextSnapshot.fileChecksum}`;
   if (autoRebased && staleRevision && hasBaseSnapshot) {
     msg += `
 changed_ranges: ${describeChangedRanges(changedRanges)}`;
+  }
+  if (remaps.length > 0) {
+    msg += `
+remapped_refs:
+${remaps.map(({ from, to }) => `${from} -> ${to}`).join("\n")}`;
   }
   msg += `
 Updated ${filePath} (${content.split("\n").length} lines)`;
@@ -1334,9 +1390,7 @@ async function filesMode(pattern, target, opts) {
   if (code !== 0 && code !== null) throw new Error(`GREP_ERROR: rg exit ${code} \u2014 ${stderr.trim() || "unknown error"}`);
   const lines = stdout.trimEnd().split("\n").filter(Boolean);
   const normalized = lines.map((l) => l.replace(/\\/g, "/"));
-  return `\`\`\`
-${normalized.join("\n")}
-\`\`\``;
+  return normalized.join("\n");
 }
 async function countMode(pattern, target, opts) {
   const realArgs = ["-c"];
@@ -1353,9 +1407,7 @@ async function countMode(pattern, target, opts) {
   if (code !== 0 && code !== null) throw new Error(`GREP_ERROR: rg exit ${code} \u2014 ${stderr.trim() || "unknown error"}`);
   const lines = stdout.trimEnd().split("\n").filter(Boolean);
   const normalized = lines.map((l) => l.replace(/\\/g, "/"));
-  return `\`\`\`
-${normalized.join("\n")}
-\`\`\``;
+  return normalized.join("\n");
 }
 async function contentMode(pattern, target, opts, plain, totalLimit) {
   const realArgs = ["--json"];
@@ -1462,16 +1514,12 @@ async function contentMode(pattern, target, opts, plain, totalLimit) {
       if (totalLimit > 0 && matchCount >= totalLimit) {
         flushGroup();
         formatted.push(`--- total_limit reached (${totalLimit}) ---`);
-        return `\`\`\`
-${formatted.join("\n")}
-\`\`\``;
+        return formatted.join("\n");
       }
     }
   }
   flushGroup();
-  return `\`\`\`
-${formatted.join("\n")}
-\`\`\``;
+  return formatted.join("\n");
 }
 
 // lib/outline.mjs
@@ -1609,6 +1657,26 @@ function extractOutline(rootNode, config, sourceLines) {
   walk(rootNode, 0);
   return { entries, skippedRanges };
 }
+function fallbackOutline(sourceLines) {
+  const entries = [];
+  for (let index = 0; index < sourceLines.length; index++) {
+    const line = sourceLines[index];
+    const trimmed = line.trim();
+    if (!trimmed) continue;
+    const match = trimmed.match(
+      /^(?:export\s+)?(?:async\s+)?function\s+[\w$]+|^(?:export\s+)?(?:const|let|var)\s+[\w$]+\s*=|^(?:export\s+)?class\s+[\w$]+|^(?:export\s+)?interface\s+[\w$]+|^(?:export\s+)?type\s+[\w$]+\s*=|^(?:export\s+)?enum\s+[\w$]+|^(?:export\s+default\s+)?[\w$]+\s*=>/
+    );
+    if (!match) continue;
+    entries.push({
+      start: index + 1,
+      end: index + 1,
+      depth: 0,
+      text: trimmed.slice(0, 120),
+      name: trimmed.match(/([\w$]+)/)?.[1] || null
+    });
+  }
+  return entries;
+}
 async function outlineFromContent(content, ext) {
   const config = LANG_CONFIGS[ext];
   const grammar = grammarForExtension(ext);
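The declaration heuristic behind `fallbackOutline` can be demonstrated standalone. The regex is copied from the hunk above; the sample source lines are illustrative.

```javascript
// Standalone demo of the declaration heuristic used by fallbackOutline above.
const DECL_RE = /^(?:export\s+)?(?:async\s+)?function\s+[\w$]+|^(?:export\s+)?(?:const|let|var)\s+[\w$]+\s*=|^(?:export\s+)?class\s+[\w$]+|^(?:export\s+)?interface\s+[\w$]+|^(?:export\s+)?type\s+[\w$]+\s*=|^(?:export\s+)?enum\s+[\w$]+|^(?:export\s+default\s+)?[\w$]+\s*=>/;

const source = [
  "export function readFile(path) {",
  "  return null;",
  "}",
  "const limit = 2000;",
  "class Snapshot {}"
];

const symbols = source
  .map((line, i) => ({ line: i + 1, text: line.trim() }))
  .filter(({ text }) => DECL_RE.test(text));

console.log(symbols.map((s) => s.line)); // declaration-like lines: 1, 4, 5
```

Because it is line-by-line and regex-based, this fallback only fires when the tree-sitter parse yields no structural entries, as the note text in the hunk says.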
@@ -1625,8 +1693,9 @@ async function outlineFromContent(content, ext) {
   const tree = parser.parse(content);
   return extractOutline(tree.rootNode, config, sourceLines);
 }
-function formatOutline(entries, skippedRanges, sourceLineCount, db, relFile) {
+function formatOutline(entries, skippedRanges, sourceLineCount, db, relFile, note = "") {
   const lines = [];
+  if (note) lines.push(note, "");
   if (skippedRanges.length > 0) {
     const first = skippedRanges[0].start;
     const last = skippedRanges[skippedRanges.length - 1].end;
@@ -1652,11 +1721,13 @@ async function fileOutline(filePath) {
   }
   const content = readUtf8Normalized(real);
   const result = await outlineFromContent(content, ext);
+  const entries = result.entries.length > 0 ? result.entries : fallbackOutline(content.split("\n"));
+  const note = result.entries.length > 0 || entries.length === 0 ? "" : "Fallback outline: heuristic symbols shown because parser returned no structural entries.";
   const db = getGraphDB(real);
   const relFile = db ? getRelativePath(real) : null;
   return `File: ${filePath}
 
-${formatOutline(result.entries, result.skippedRanges, content.split("\n").length, db, relFile)}`;
+${formatOutline(entries, result.skippedRanges, content.split("\n").length, db, relFile, note)}`;
 }
 
 // lib/verify.mjs
@@ -2275,7 +2346,7 @@ Summary: ${summary}`);
 }
 
 // lib/bulk-replace.mjs
-import { writeFileSync as writeFileSync3, readdirSync as readdirSync3 } from "node:fs";
+import { writeFileSync as writeFileSync3, readdirSync as readdirSync3, renameSync, unlinkSync } from "node:fs";
 import { resolve as resolve7, relative as relative3, join as join6 } from "node:path";
 var ignoreMod;
 try {
@@ -2348,7 +2419,17 @@ function bulkReplace(rootDir, globPattern, replacements, opts = {}) {
       continue;
     }
     if (!dryRun) {
-      writeFileSync3(file, content, "utf-8");
+      const tempPath = `${file}.hexline-tmp-${process.pid}`;
+      try {
+        writeFileSync3(tempPath, content, "utf-8");
+        renameSync(tempPath, file);
+      } catch (error) {
+        try {
+          unlinkSync(tempPath);
+        } catch {
+        }
+        throw error;
+      }
     }
     const relPath = relative3(abs, file).replace(/\\/g, "/");
     totalReplacements += replacementCount;
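The write-to-temp-then-rename pattern introduced above can be sketched in isolation. `atomicWrite` is a hypothetical helper name; the temp-path scheme and cleanup mirror the hunk above.

```javascript
// Sketch of write-to-temp-then-rename, as bulkReplace now does (hypothetical helper name).
import { writeFileSync, renameSync, unlinkSync, readFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

function atomicWrite(file, content) {
  const tempPath = `${file}.hexline-tmp-${process.pid}`;
  try {
    writeFileSync(tempPath, content, "utf-8");
    // rename is atomic on the same filesystem, so readers never observe a half-written file
    renameSync(tempPath, file);
  } catch (error) {
    try { unlinkSync(tempPath); } catch {} // best-effort cleanup of the temp file
    throw error;
  }
}

const dir = mkdtempSync(join(tmpdir(), "hexline-demo-"));
const target = join(dir, "a.txt");
atomicWrite(target, "hello");
console.log(readFileSync(target, "utf-8")); // hello
```

The design choice: a failed replacement run now leaves the original file intact rather than truncated mid-write.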
@@ -2384,7 +2465,7 @@ OUTPUT_CAPPED: Output exceeded ${MAX_BULK_OUTPUT_CHARS} chars.`;
 }
 
 // server.mjs
-var version = true ? "1.4.0" : (await null).createRequire(import.meta.url)("./package.json").version;
+var version = true ? "1.5.0" : (await null).createRequire(import.meta.url)("./package.json").version;
 var { server, StdioServerTransport } = await createServerRuntime({
   name: "hex-line-mcp",
   version
@@ -2392,6 +2473,21 @@ var { server, StdioServerTransport } = await createServerRuntime({
 var replacementPairsSchema = z2.array(
   z2.object({ old: z2.string().min(1), new: z2.string() })
 ).min(1);
+var readRangeSchema = z2.union([
+  z2.string(),
+  z2.object({
+    start: flexNum().optional(),
+    end: flexNum().optional()
+  })
+]);
+function parseReadRanges(rawRanges) {
+  if (!rawRanges) return void 0;
+  const parsed = Array.isArray(rawRanges) ? rawRanges : JSON.parse(rawRanges);
+  if (!Array.isArray(parsed) || parsed.length === 0) {
+    throw new Error("ranges must be a non-empty array");
+  }
+  return parsed;
+}
 function coerceEdit(e) {
   if (!e || typeof e !== "object" || Array.isArray(e)) return e;
   if (e.set_line || e.replace_lines || e.insert_after || e.replace_between || e.replace) return e;
@@ -2412,23 +2508,26 @@ function coerceEdit(e) {
 }
 server.registerTool("read_file", {
   title: "Read File",
-  description: "Read a file with hash-annotated lines, range checksums, and current revision. Use offset/limit for targeted reads; use outline first for large code files.",
+  description: "Read file lines with hashes, checksums, and revision metadata.",
   inputSchema: z2.object({
     path: z2.string().optional().describe("File or directory path"),
     paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
     offset: flexNum().describe("Start line (1-indexed, default: 1)"),
     limit: flexNum().describe("Max lines (default: 2000, 0 = all)"),
+    ranges: z2.union([z2.string(), z2.array(readRangeSchema)]).optional().describe('Line ranges, e.g. ["10-25", {"start":40,"end":55}]'),
+    include_graph: flexBool().describe("Include graph annotations"),
     plain: flexBool().describe("Omit hashes (lineNum|content)")
   }),
   annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true }
 }, async (rawParams) => {
-  const { path: p, paths: multi, offset, limit, plain } = coerceParams(rawParams);
+  const { path: p, paths: multi, offset, limit, ranges: rawRanges, include_graph, plain } = coerceParams(rawParams);
   try {
+    const ranges = parseReadRanges(rawRanges);
     if (multi && multi.length > 0 && !p) {
       const results = [];
       for (const fp of multi) {
         try {
-          results.push(readFile2(fp, { offset, limit, plain }));
+          results.push(readFile2(fp, { offset, limit, ranges, includeGraph: include_graph, plain }));
         } catch (e) {
           results.push(`File: ${fp}
 
@@ -2438,14 +2537,14 @@ ERROR: ${e.message}`);
2438
2537
  return { content: [{ type: "text", text: results.join("\n\n---\n\n") }] };
2439
2538
  }
2440
2539
  if (!p) throw new Error("Either 'path' or 'paths' is required");
2441
- return { content: [{ type: "text", text: readFile2(p, { offset, limit, plain }) }] };
2540
+ return { content: [{ type: "text", text: readFile2(p, { offset, limit, ranges, includeGraph: include_graph, plain }) }] };
2442
2541
  } catch (e) {
2443
2542
  return { content: [{ type: "text", text: e.message }], isError: true };
2444
2543
  }
2445
2544
  });
2446
2545
  server.registerTool("edit_file", {
2447
2546
  title: "Edit File",
2448
- description: "Apply revision-aware partial edits to one file. Prefer one batched call per file. Supports set_line, replace_lines, insert_after, and replace_between. For text rename/refactor use bulk_replace.",
2547
+ description: "Apply verified partial edits to one file.",
2449
2548
  inputSchema: z2.object({
2450
2549
  path: z2.string().describe("File to edit"),
2451
2550
  edits: z2.union([z2.string(), z2.array(z2.any())]).describe(
@@ -2504,7 +2603,7 @@ server.registerTool("write_file", {
2504
2603
  });
2505
2604
  server.registerTool("grep_search", {
2506
2605
  title: "Search Files",
2507
- description: "Search file contents with ripgrep. Returns hash-annotated matches with checksums. Modes: content (default), files, count. Use checksums with set_line/replace_lines. Prefer over shell grep/rg.",
2606
+ description: "Search file contents with ripgrep and return edit-ready matches.",
2508
2607
  inputSchema: z2.object({
2509
2608
  pattern: z2.string().describe("Search pattern (regex by default, literal if literal:true)"),
2510
2609
  path: z2.string().optional().describe("Search dir/file (default: cwd)"),
@@ -2581,19 +2680,20 @@ server.registerTool("outline", {
2581
2680
  });
2582
2681
  server.registerTool("verify", {
2583
2682
  title: "Verify Checksums",
2584
- description: "Check whether held checksums and optional base_revision are still current, without rereading the file.",
2683
+ description: "Verify held checksums without rereading the file.",
2585
2684
  inputSchema: z2.object({
2586
2685
  path: z2.string().describe("File path"),
2587
- checksums: z2.string().describe('JSON array of checksum strings, e.g. ["1-50:f7e2a1b0", "51-100:abcd1234"]'),
2686
+ checksums: z2.array(z2.string()).describe('Checksum strings, e.g. ["1-50:f7e2a1b0", "51-100:abcd1234"]'),
2588
2687
  base_revision: z2.string().optional().describe("Optional prior revision to compare against latest state.")
2589
2688
  }),
2590
2689
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true }
2591
2690
  }, async (rawParams) => {
2592
2691
  const { path: p, checksums, base_revision } = coerceParams(rawParams);
2593
2692
  try {
2594
- const parsed = JSON.parse(checksums);
2595
- if (!Array.isArray(parsed)) throw new Error("checksums must be a JSON array of strings");
2596
- return { content: [{ type: "text", text: verifyChecksums(p, parsed, { baseRevision: base_revision }) }] };
2693
+ if (!Array.isArray(checksums) || checksums.length === 0) {
2694
+ throw new Error("checksums must be a non-empty array of strings");
2695
+ }
2696
+ return { content: [{ type: "text", text: verifyChecksums(p, checksums, { baseRevision: base_revision }) }] };
2597
2697
  } catch (e) {
2598
2698
  return { content: [{ type: "text", text: e.message }], isError: true };
2599
2699
  }
@@ -2667,7 +2767,7 @@ server.registerTool("changes", {
2667
2767
  });
2668
2768
  server.registerTool("bulk_replace", {
2669
2769
  title: "Bulk Replace",
2670
- description: "Search-and-replace across multiple files. Finds files by glob, applies ordered text replacements. Default format is compact (summary only); use format:'full' for capped diffs. Use dry_run:true to preview. For single-file rename, set glob to the filename.",
2770
+ description: "Search-and-replace across multiple files with compact or full diff output.",
2671
2771
  inputSchema: z2.object({
2672
2772
  replacements: z2.union([z2.string(), replacementPairsSchema]).describe('JSON array of {old, new} pairs: [{"old":"foo","new":"bar"}]'),
2673
2773
  glob: z2.string().optional().describe('File glob (default: "**/*.{md,mjs,json,yml,ts,js}")'),
package/output-style.md CHANGED
@@ -1,33 +1,33 @@
 ---
 name: hex-line
-description: hex-line MCP tool preferences + explanatory coding style with insights
+description: hex-line MCP tool preferences with compact coding style
 keep-coding-instructions: true
 ---

 # MCP Tool Preferences

-**PREFER** hex-line MCP for code files — hash-annotated reads enable safe edits:
+**PREFER** hex-line MCP for code files. Hash-annotated reads and verified edits keep context cheap and safe.

 | Instead of | Use | Why |
 |-----------|-----|-----|
 | Read | `mcp__hex-line__read_file` | Hash-annotated, revision-aware |
 | Edit | `mcp__hex-line__edit_file` | Hash-verified anchors + conservative auto-rebase |
 | Write | `mcp__hex-line__write_file` | No prior Read needed |
-| Grep | `mcp__hex-line__grep_search` | Hash-annotated matches |
+| Grep | `mcp__hex-line__grep_search` | Edit-ready matches |
 | Edit (text rename) | `mcp__hex-line__bulk_replace` | Multi-file text rename/refactor |
 | Bash `find`/`tree` | `mcp__hex-line__directory_tree` | Pattern search, gitignore-aware |

 ## Efficient File Reading

-For UNFAMILIAR code files >100 lines, PREFER:
-1. `outline` first (code files only — not .md/.json/.yaml)
-2. `read_file` with offset/limit for the specific section you need
-3. Batch: `paths` array reads multiple files in one call
+For unfamiliar code files >100 lines, prefer:
+1. `outline` first
+2. `read_file` with `offset`/`limit` or `ranges`
+3. `paths` or `ranges` when batching several targets

-Avoid reading a large file in full outline+targeted read saves 75% tokens.
+Avoid reading a large file in full. Prefer compact, targeted reads.

 Bash OK for: npm/node/git/docker/curl, pipes, compound commands.
-**Built-in OK for:** images, PDFs, notebooks, Glob (always), `.claude/settings.json` and `.claude/settings.local.json`.
+**Built-in OK for:** images, PDFs, notebooks, Glob (always), `.claude/settings.json`, `.claude/settings.local.json`.

 ## Edit Workflow

@@ -35,24 +35,24 @@ Prefer:
 1. collect all known hunks for one file
 2. send one `edit_file` call with batched edits
 3. carry `revision` from `read_file` into `base_revision` on follow-up edits
-4. edit types: `set_line` (1 line), `replace_lines` (range + checksum), `insert_after`, `replace_between` (large blocks)
-5. use `verify` before rereading a file after staleness
+4. use `set_line`, `replace_lines`, `insert_after`, `replace_between` based on scope
+5. use `verify` before rereading after staleness

 Avoid:
 - chained same-file `edit_file` calls when all edits are already known
 - full-file rewrites for local changes
 - using `bulk_replace` for structural block rewrites

-# Explanatory Style
+# Response Style

-Provide educational insights about the codebase alongside task completion. When providing insights, you may exceed typical length constraints, but remain focused and relevant.
+Keep responses compact and operational. Explain only what is needed to complete the task or justify a non-obvious decision.

-## Insights
-
-Before and after writing code, provide brief educational explanations about implementation choices using:
-
-"`✶ Insight ─────────────────────────────────────────`
-[2-3 key educational points]
-`─────────────────────────────────────────────────`"
+Prefer:
+- short progress updates
+- direct tool calls without discovery chatter
+- concise summaries of edits and verification

-Focus on insights specific to the codebase or the code just written, not general programming concepts.
+Avoid:
+- mandatory educational blocks
+- long prose around tool usage
+- repeating obvious implementation details
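Note: in the server.mjs changes earlier in this diff, `verify` now takes its checksums as a native array of `"start-end:hash"` strings instead of a JSON-encoded string. The package does not expose a parser for that shape, but a hypothetical one, matching the schema's own example (`"1-50:f7e2a1b0"`), could look like:

```javascript
// Hypothetical parser for the "start-end:hash" checksum strings used by the
// verify tool schema. Illustration only; not part of hex-line-mcp itself.
function parseChecksum(entry) {
  const match = /^(\d+)-(\d+):([0-9a-f]+)$/.exec(entry);
  if (!match) throw new Error(`invalid checksum entry: ${entry}`);
  return { start: Number(match[1]), end: Number(match[2]), hash: match[3] };
}

console.log(parseChecksum("1-50:f7e2a1b0"));
```

Passing the array natively (e.g. `checksums: ["1-50:f7e2a1b0", "51-100:abcd1234"]`) removes the double-encoding step that 1.4.0 required.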
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@levnikolaevich/hex-line-mcp",
-  "version": "1.4.0",
+  "version": "1.5.0",
   "mcpName": "io.github.levnikolaevich/hex-line-mcp",
   "type": "module",
   "description": "Hash-verified file editing MCP + token efficiency hook for AI coding agents. 11 tools: read, edit, write, grep, outline, verify, directory_tree, file_info, setup_hooks, changes, bulk_replace.",