@levnikolaevich/hex-line-mcp 1.12.1 → 1.14.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -7,7 +7,9 @@ Hash-verified file editing MCP + token efficiency hook for AI coding agents.
  [![license](https://img.shields.io/npm/l/@levnikolaevich/hex-line-mcp)](./LICENSE)
  ![node](https://img.shields.io/node/v/@levnikolaevich/hex-line-mcp)

- Every line carries an FNV-1a content hash. Every edit must present those hashes back -- proving the agent is editing what it thinks it's editing. No stale context, no silent corruption.
+ Every line carries an FNV-1a content hash. Every edit must present those hashes back -- proving the agent is editing what it thinks it's editing. No stale context, no silent corruption. Hashing works on normalized logical text; writes preserve the file's existing line endings and trailing-newline shape.
+
+ By default, mutating tools stay inside the current project root. If you intentionally need to edit a temp or external path, pass `allow_external: true` on `edit_file`, `write_file`, or `bulk_replace`.

  ## Features

@@ -126,7 +128,8 @@ If a project already has `.hex-skills/codegraph/index.db`, `hex-line` automatica

  1. Carry `revision` from the earlier `read_file` or `edit_file`
  2. Pass it back as `base_revision`
- 3. Use `verify` before rereading the file
+ 3. Use `verify` before delayed or mixed-tool follow-up edits
+ 4. If the server returns `retry_edit`, `retry_edits`, `retry_checksum`, or `retry_plan`, reuse those directly

  ### Rewrite a long block

@@ -156,9 +159,13 @@ File: lib/search.mjs
  meta: 282 lines, 10.2KB, 2 hours ago
  revision: rev-12-a1b2c3d4
  file: 1-282:beefcafe
+ eol: lf
+ trailing_newline: true

  block: read_range
  span: 1-3
+ eol: lf
+ trailing_newline: true
  ab.1 import { resolve } from "node:path";
  cd.2 import { readFileSync } from "node:fs";
  ef.3 ...
@@ -177,6 +184,7 @@ Edit using revision-aware hash-verified anchors. Prefer one batched call per fil
  | `restore_indent` | boolean | no | Auto-fix indentation to match anchor context (default: false) |
  | `base_revision` | string | no | Prior revision from `read_file` / `edit_file` for same-file follow-up edits |
  | `conflict_policy` | enum | no | `conservative` or `strict` (default: `conservative`) |
+ | `allow_external` | boolean | no | Allow editing a path outside the current project root |

  Edit operations (JSON array):

@@ -193,7 +201,8 @@ Discipline:

  - Never invent `range_checksum`. Copy it from `read_file` or `grep_search(output:"content")`.
  - First mutation in a file: prefer `grep_search` for narrow targets, or `outline -> read_file(ranges)` for structural edits.
- - Prefer 1-2 hunks on the first pass. Once `edit_file` returns a fresh `revision`, continue from that state.
+ - Prefer 1-2 hunks on the first pass. Once `edit_file` returns a fresh `revision`, continue from that state as `base_revision`.
+ - `hex-line` preserves existing file line endings on write; repo-level line-ending cleanup should be a separate deliberate operation, not a side effect of `edit_file`.

  Result footer includes:

@@ -221,6 +230,7 @@ Create a new file or overwrite an existing one. Creates parent directories autom
  |-----------|------|----------|-------------|
  | `path` | string | yes | File path |
  | `content` | string | yes | File content |
+ | `allow_external` | boolean | no | Allow writing a path outside the current project root |

  ### grep_search

@@ -241,7 +251,7 @@ Search file contents using ripgrep. Three output modes: `content` (canonical `se
  | `context_before` | number | no | Context lines BEFORE match (`-B`) |
  | `context_after` | number | no | Context lines AFTER match (`-A`) |
  | `limit` | number | no | Max matches per file (default: 100) |
- | `total_limit` | number | no | Total match events across all files; multiline matches count as 1 (0 = unlimited) |
+ | `total_limit` | number | no | Total match events across all files; multiline matches count as 1 (default: 200 for `content`, 1000 for `files`/`count`, 0 = unlimited) |
  | `plain` | boolean | no | Omit hash tags inside block entries, return `lineNum\|content` |

  `content` mode returns canonical `search_hunk` blocks with per-hunk checksums enabling direct `replace_lines` from grep results without intermediate `read_file`.
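The per-line hashing the README describes is FNV-1a; a minimal 32-bit sketch follows. The byte encoding (UTF-8) and how hex-line truncates the hash into its short two-character tags are assumptions here, not taken from this diff.

```javascript
// Minimal 32-bit FNV-1a over a string's UTF-8 bytes.
// Sketch only: hex-line's exact encoding and tag truncation are assumptions.
function fnv1a(str) {
  const bytes = new TextEncoder().encode(str);
  let hash = 0x811c9dc5; // FNV-1a offset basis
  for (const b of bytes) {
    hash ^= b;
    hash = Math.imul(hash, 0x01000193) >>> 0; // multiply by FNV prime, mod 2^32
  }
  return hash >>> 0;
}
```

Equal lines always hash equal, so an edit presenting a stale tag for a changed line is detected before anything is written.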
package/dist/hook.mjs CHANGED
@@ -111,7 +111,7 @@ var REVERSE_TOOL_HINTS = {
  "mcp__hex-line__grep_search": "Grep (pattern, path)",
  "mcp__hex-line__inspect_path": "Path info / tree / Bash(ls,stat)",
  "mcp__hex-line__outline": "Read with offset/limit",
- "mcp__hex-line__verify": "Read (re-read file to check freshness)",
+ "mcp__hex-line__verify": "Read (check checksum/revision freshness before follow-up edits)",
  "mcp__hex-line__changes": "Bash(git diff)",
  "mcp__hex-line__bulk_replace": "Edit (text rename/refactor across files inside an explicit root path)"
  };
@@ -129,7 +129,7 @@ var TOOL_HINTS = {
  sed: "mcp__hex-line__edit_file for hash edits, or mcp__hex-line__bulk_replace with path=<project root> for text rename (not sed -i)",
  diff: "mcp__hex-line__changes (not diff). Git diff with change symbols",
  outline: "mcp__hex-line__outline (before reading large code files)",
- verify: "mcp__hex-line__verify (staleness / revision check without re-read)",
+ verify: "mcp__hex-line__verify (staleness / revision check without re-read; use before delayed same-file follow-ups)",
  changes: "mcp__hex-line__changes (git diff with change symbols)",
  bulk: "mcp__hex-line__bulk_replace with path=<project root> (multi-file search-replace)"
  };
@@ -439,7 +439,7 @@ function handleSessionStart() {
  } catch {
  }
  }
- const msg = styleActive ? "Hex-line MCP available. Output style active.\n<hex-line_instructions>\n <deferred_loading>If hex-line schemas not loaded, run: ToolSearch('+hex-line read edit')</deferred_loading>\n <note>Follow the active hex-line output style for primary tool choices.</note>\n <exceptions>Built-in tools stay OK for images, PDFs, notebooks, Glob, .claude/settings.json, and .claude/settings.local.json.</exceptions>\n</hex-line_instructions>" : "Hex-line MCP available.\n<hex-line_instructions>\n <deferred_loading>If hex-line schemas not loaded, run: ToolSearch('+hex-line read edit')</deferred_loading>\n <exploration>\n <rule>Use outline for structure (code + markdown), not Read. ~10-20 lines vs hundreds.</rule>\n <rule>Use read_file with offset/limit or ranges for targeted reads.</rule>\n <rule>Use grep_search before editing to get hash anchors.</rule>\n </exploration>\n <editing>\n <path name='surgical'>grep_search \u2192 edit_file (fastest: hash-verified, no full read needed)</path>\n <path name='exploratory'>outline \u2192 read_file (ranges) \u2192 edit_file with base_revision</path>\n <path name='multi-file'>bulk_replace(path=&quot;&lt;project root&gt;&quot;) for text rename/refactor across files</path>\n </editing>\n <tips>\n <tip>Auto-fill path from the active file or project root. Do not leave repo scope implicit.</tip>\n <tip>Never invent range_checksum. Copy it from fresh read_file or grep_search blocks.</tip>\n <tip>Prefer set_line or insert_after for small local changes and replace_between for larger bounded rewrites.</tip>\n <tip>Carry revision from read_file into base_revision on edit_file.</tip>\n <tip>If edit returns CONFLICT, call verify \u2014 only reread when STALE.</tip>\n <tip>Avoid large first-pass edit batches. Start with 1-2 hunks, then continue from the returned revision.</tip>\n <tip>Use write_file for new files (no prior Read needed).</tip>\n </tips>\n <exceptions>Built-in tools stay OK for images, PDFs, notebooks, Glob, .claude/settings.json, and .claude/settings.local.json.</exceptions>\n</hex-line_instructions>";
+ const msg = styleActive ? "Hex-line MCP available. Output style active.\n<hex-line_instructions>\n <deferred_loading>If hex-line schemas not loaded, run: ToolSearch('+hex-line read edit')</deferred_loading>\n <note>Follow the active hex-line output style for primary tool choices.</note>\n <exceptions>Built-in tools stay OK for images, PDFs, notebooks, Glob, .claude/settings.json, and .claude/settings.local.json.</exceptions>\n</hex-line_instructions>" : "Hex-line MCP available.\n<hex-line_instructions>\n <deferred_loading>If hex-line schemas not loaded, run: ToolSearch('+hex-line read edit')</deferred_loading>\n <exploration>\n <rule>Use outline for structure (code + markdown), not Read. ~10-20 lines vs hundreds.</rule>\n <rule>Use read_file with offset/limit or ranges for targeted reads.</rule>\n <rule>Use grep_search before editing to get hash anchors.</rule>\n </exploration>\n <editing>\n <path name='surgical'>grep_search \u2192 edit_file (fastest: hash-verified, no full read needed)</path>\n <path name='exploratory'>outline \u2192 read_file (ranges) \u2192 edit_file with base_revision</path>\n <path name='multi-file'>bulk_replace(path=&quot;&lt;project root&gt;&quot;) for text rename/refactor across files</path>\n </editing>\n <tips>\n <tip>Auto-fill path from the active file or project root. Read-only tools may inspect explicit temp-file paths outside the repo. Mutating tools stay project-scoped unless you intentionally pass allow_external=true.</tip>\n <tip>Never invent range_checksum. Copy it from fresh read_file or grep_search blocks.</tip>\n <tip>Prefer set_line or insert_after for small local changes and replace_between for larger bounded rewrites.</tip>\n <tip>Carry revision from read_file into base_revision on edit_file.</tip>\n <tip>If edit returns CONFLICT, call verify \u2014 only reread when STALE.</tip>\n <tip>Avoid large first-pass edit batches. Start with 1-2 hunks, then continue from the returned revision.</tip>\n <tip>Use write_file for new files (no prior Read needed).</tip>\n </tips>\n <exceptions>Built-in tools stay OK for images, PDFs, notebooks, Glob, .claude/settings.json, and .claude/settings.local.json.</exceptions>\n</hex-line_instructions>";
  safeExit(1, JSON.stringify({ systemMessage: msg }), 0);
  }
  var _norm = (p) => p.replace(/\\/g, "/");
package/dist/server.mjs CHANGED
@@ -107,6 +107,7 @@ import { statSync as statSync4 } from "node:fs";

  // lib/security.mjs
  import { realpathSync, statSync as statSync2, existsSync, openSync, readSync, closeSync } from "node:fs";
+ import { tmpdir as tmpdir2 } from "node:os";
  import { resolve, isAbsolute, dirname } from "node:path";

  // lib/format.mjs
@@ -202,11 +203,43 @@ function readText(filePath) {
  // lib/security.mjs
  var MAX_FILE_SIZE = 10 * 1024 * 1024;
  function normalizePath(p) {
- if (process.platform === "win32" && /^\/[a-zA-Z]\//.test(p)) {
- p = p[1] + ":" + p.slice(2);
+ if (process.platform === "win32") {
+ if (p === "/tmp" || p.startsWith("/tmp/")) {
+ const suffix = p.slice("/tmp".length).replace(/^\/+/, "");
+ p = suffix ? resolve(tmpdir2(), suffix) : tmpdir2();
+ } else if (p === "/var/tmp" || p.startsWith("/var/tmp/")) {
+ const suffix = p.slice("/var/tmp".length).replace(/^\/+/, "");
+ p = suffix ? resolve(tmpdir2(), suffix) : tmpdir2();
+ } else if (/^\/[a-zA-Z]\//.test(p)) {
+ p = p[1] + ":" + p.slice(2);
+ }
  }
  return p.replace(/\\/g, "/");
  }
+ function normalizeScopeValue(value) {
+ const normalized = value.replace(/\\/g, "/");
+ return process.platform === "win32" ? normalized.toLowerCase() : normalized;
+ }
+ function resolveInputPath(filePath) {
+ const normalized = normalizePath(filePath);
+ const abs = isAbsolute(normalized) ? normalized : resolve(process.cwd(), normalized);
+ return abs.replace(/\\/g, "/");
+ }
+ function isWithinRoot(rootPath, targetPath) {
+ const root = normalizeScopeValue(rootPath).replace(/\/+$/, "");
+ const target = normalizeScopeValue(targetPath);
+ return target === root || target.startsWith(`${root}/`);
+ }
+ function assertProjectScopedPath(filePath, { allowExternal = false } = {}) {
+ if (!filePath) throw new Error("Empty file path");
+ const abs = resolveInputPath(filePath);
+ if (allowExternal) return abs;
+ const projectRoot = resolve(process.cwd()).replace(/\\/g, "/");
+ if (isWithinRoot(projectRoot, abs)) return abs;
+ throw new Error(
+ `PATH_OUTSIDE_PROJECT: ${abs}. Editing is restricted to the current project by default. If you intentionally need a temp or external path, retry with allow_external=true.`
+ );
+ }
  function validatePath(filePath) {
  if (!filePath) throw new Error("Empty file path");
  const normalized = normalizePath(filePath);
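The containment test behind `PATH_OUTSIDE_PROJECT` reduces to a prefix check on normalized, slash-separated paths. A standalone sketch of that logic, assuming a non-Windows platform (so no case folding):

```javascript
// Root containment via prefix check on normalized paths.
// Sketch of the logic above; assumes POSIX-style paths (no lowercasing).
function isWithinRoot(rootPath, targetPath) {
  const root = rootPath.replace(/\\/g, "/").replace(/\/+$/, ""); // strip trailing slashes
  const target = targetPath.replace(/\\/g, "/");
  return target === root || target.startsWith(`${root}/`);
}
```

The appended `/` in the prefix test is what keeps `/proj` from matching a sibling like `/projects`.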
@@ -669,6 +702,70 @@ function parseChecksum(cs) {
  return { start: parseInt(m[1], 10), end: parseInt(m[2], 10), hex: m[3] };
  }

+ // ../hex-common/src/text/file-text.mjs
+ import { readFileSync as readFileSync2 } from "node:fs";
+ function classifyEol(counts) {
+ const active = Object.entries(counts).filter(([, count]) => count > 0);
+ if (active.length === 0) return "none";
+ if (active.length === 1) return active[0][0];
+ return "mixed";
+ }
+ function chooseDefaultEol(lineEndings, counts) {
+ const active = Object.entries(counts).filter(([, count]) => count > 0).sort((a, b) => b[1] - a[1]);
+ if (active.length === 0) return "\n";
+ if (active.length === 1 || active[0][1] > active[1][1]) {
+ return active[0][0] === "crlf" ? "\r\n" : active[0][0] === "cr" ? "\r" : "\n";
+ }
+ const firstSeen = lineEndings.find((ending) => ending);
+ return firstSeen || "\n";
+ }
+ function normalizeSourceText(text) {
+ return text.replace(/\r\n/g, "\n").replace(/\r/g, "\n");
+ }
+ function parseUtf8TextWithMetadata(text) {
+ const lines = [];
+ const lineEndings = [];
+ const eolCounts = { lf: 0, crlf: 0, cr: 0 };
+ let start = 0;
+ for (let i = 0; i < text.length; i++) {
+ const ch = text[i];
+ if (ch === "\r") {
+ const isCrlf = text[i + 1] === "\n";
+ lines.push(text.slice(start, i));
+ lineEndings.push(isCrlf ? "\r\n" : "\r");
+ if (isCrlf) {
+ eolCounts.crlf++;
+ i++;
+ } else {
+ eolCounts.cr++;
+ }
+ start = i + 1;
+ continue;
+ }
+ if (ch === "\n") {
+ lines.push(text.slice(start, i));
+ lineEndings.push("\n");
+ eolCounts.lf++;
+ start = i + 1;
+ }
+ }
+ lines.push(text.slice(start));
+ lineEndings.push("");
+ const trailingNewline = text.endsWith("\n") || text.endsWith("\r");
+ return {
+ rawText: text,
+ content: normalizeSourceText(text),
+ lines,
+ lineEndings,
+ trailingNewline,
+ eol: classifyEol(eolCounts),
+ defaultEol: chooseDefaultEol(lineEndings, eolCounts)
+ };
+ }
+ function readUtf8WithMetadata(filePath) {
+ return parseUtf8TextWithMetadata(readFileSync2(filePath, "utf-8"));
+ }
+
  // lib/snapshot.mjs
  var MAX_FILES = 200;
  var MAX_REVISIONS_PER_FILE = 5;
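The EOL bookkeeping introduced above can be exercised in isolation. This restates the classification rule from the hunk (the counts object shape follows the diff): one active ending kind names itself, several give `mixed`, none gives `none`.

```javascript
// "lf" / "crlf" / "cr" when exactly one ending kind appears,
// "mixed" when several appear, "none" for a file with no newlines.
function classifyEol(counts) {
  const active = Object.entries(counts).filter(([, count]) => count > 0);
  if (active.length === 0) return "none";
  if (active.length === 1) return active[0][0];
  return "mixed";
}
```

This is what feeds the new `eol:` line in `read_file` headers and blocks.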
@@ -705,6 +802,10 @@ function pruneExpired(now = Date.now()) {
  }
  function rememberRevisionId(filePath, revision) {
  const ids = fileRevisionIds.get(filePath) || [];
+ if (ids.includes(revision)) {
+ fileRevisionIds.set(filePath, ids);
+ return;
+ }
  ids.push(revision);
  while (ids.length > MAX_REVISIONS_PER_FILE) {
  const removed = ids.shift();
@@ -796,36 +897,42 @@ function describeChangedRanges(ranges) {
  if (!ranges?.length) return "none";
  return ranges.map((r) => `${r.start}-${r.end}${r.kind ? `(${r.kind})` : ""}`).join(", ");
  }
- function createSnapshot(filePath, content, mtimeMs, size, prevSnapshot = null) {
- const lines = content.split("\n");
+ function createSnapshot(filePath, parsed, mtimeMs, size, prevSnapshot = null, revisionOverride = null) {
+ const { content, lines, lineEndings, rawText, eol, defaultEol, trailingNewline } = parsed;
  const lineHashes = lines.map((line) => fnv1a(line));
  const fileChecksum = computeFileChecksum(lineHashes);
- const revision = `rev-${++revisionSeq}-${fileChecksum.split(":")[1]}`;
+ const revision = revisionOverride || `rev-${++revisionSeq}-${fileChecksum.split(":")[1]}`;
  return {
  revision,
  path: filePath,
  content,
+ rawText,
  lines,
+ lineEndings,
  lineHashes,
  fileChecksum,
  uniqueTagIndex: buildUniqueTagIndex(lineHashes),
  changedRangesFromPrev: prevSnapshot ? computeChangedRanges(prevSnapshot.lines, lines) : [],
  prevRevision: prevSnapshot?.revision || null,
+ eol,
+ defaultEol,
+ trailingNewline,
  mtimeMs,
  size,
  createdAt: Date.now()
  };
  }
- function rememberSnapshot(filePath, content, meta = {}) {
+ function rememberSnapshot(filePath, input, meta = {}) {
  pruneExpired();
  const latest = latestByFile.get(filePath);
+ const parsed = typeof input === "string" ? parseUtf8TextWithMetadata(input) : input;
  const mtimeMs = meta.mtimeMs ?? latest?.mtimeMs ?? Date.now();
- const size = meta.size ?? Buffer.byteLength(content, "utf8");
- if (latest && latest.content === content && latest.mtimeMs === mtimeMs && latest.size === size) {
+ const size = meta.size ?? Buffer.byteLength(parsed.rawText, "utf8");
+ if (latest && latest.content === parsed.content && latest.rawText === parsed.rawText && latest.mtimeMs === mtimeMs && latest.size === size) {
  touchFile(filePath);
  return latest;
  }
- const snapshot = createSnapshot(filePath, content, mtimeMs, size, latest || null);
+ const snapshot = latest && latest.content === parsed.content ? createSnapshot(filePath, parsed, mtimeMs, size, latest || null, latest.revision) : createSnapshot(filePath, parsed, mtimeMs, size, latest || null);
  latestByFile.set(filePath, snapshot);
  revisionsById.set(snapshot.revision, snapshot);
  rememberRevisionId(filePath, snapshot.revision);
@@ -841,8 +948,8 @@ function readSnapshot(filePath) {
  touchFile(filePath);
  return latest;
  }
- const content = readText(filePath);
- return rememberSnapshot(filePath, content, { mtimeMs: stat.mtimeMs, size: stat.size });
+ const parsed = readUtf8WithMetadata(filePath);
+ return rememberSnapshot(filePath, parsed, { mtimeMs: stat.mtimeMs, size: stat.size });
  }
  function getSnapshotByRevision(revision) {
  pruneExpired();
@@ -874,6 +981,9 @@ function renderRequestedSpan(block) {
  if (block.requestedStartLine === block.startLine && block.requestedEndLine === block.endLine) return null;
  return `requested_span: ${block.requestedStartLine}-${block.requestedEndLine}`;
  }
+ function renderMetaLines(meta = {}) {
+ return Object.entries(meta).filter(([, value]) => value !== void 0 && value !== null && value !== "").map(([key, value]) => `${key}: ${value}`);
+ }
  function renderBaseEntry(entry, plain = false) {
  return plain ? `${entry.lineNumber}|${entry.text}` : `${entry.tag}.${entry.lineNumber} ${entry.text}`;
  }
@@ -950,6 +1060,7 @@ function serializeReadBlock(block, opts = {}) {
  ];
  const requestedSpan = renderRequestedSpan(block);
  if (requestedSpan) lines.push(requestedSpan);
+ lines.push(...renderMetaLines(block.meta));
  lines.push(...block.entries.map((entry) => serializeReadEntry(entry, opts)));
  lines.push(`checksum: ${block.checksum}`);
  return lines.join("\n");
@@ -967,6 +1078,9 @@ function serializeSearchBlock(block, opts = {}) {
  lines.push(`match_lines: ${block.meta.matchLines.join(",")}`);
  }
  if (block.meta.summary) lines.push(`summary: ${block.meta.summary}`);
+ lines.push(...renderMetaLines(Object.fromEntries(
+ Object.entries(block.meta).filter(([key]) => key !== "matchLines" && key !== "summary")
+ )));
  lines.push(...block.entries.map((entry) => serializeSearchEntry(entry, opts)));
  lines.push(`checksum: ${block.checksum}`);
  return lines.join("\n");
@@ -1060,7 +1174,11 @@ function buildReadBlock(snapshot, range, plain, remainingChars) {
  kind: "read_range",
  entries,
  requestedStartLine: range.requestedStartLine,
- requestedEndLine: range.requestedEndLine
+ requestedEndLine: range.requestedEndLine,
+ meta: {
+ eol: snapshot.eol,
+ trailing_newline: snapshot.trailingNewline
+ }
  }),
  remainingChars: nextBudget,
  cappedAtLine
@@ -1166,6 +1284,8 @@ Graph: ${items.join(" | ")}`;
  meta: ${meta}
  revision: ${snapshot.revision}
  file: ${snapshot.fileChecksum}
+ eol: ${snapshot.eol}
+ trailing_newline: ${snapshot.trailingNewline}

  ${serializedBlocks.join("\n\n")}`.trim();
  }
@@ -1337,6 +1457,41 @@ function sanitizeEditText(text) {
  if (hadTrailingNewline && !cleaned.endsWith("\n")) cleaned += "\n";
  return cleaned;
  }
+ function replaceLogicalRange(lines, lineEndings, startIdx, endIdx, newLines, defaultEol) {
+ const removeCount = endIdx - startIdx + 1;
+ const tailEnding = lineEndings[endIdx] ?? "";
+ const lastIdx = lines.length - 1;
+ if (newLines.length === 0) {
+ lines.splice(startIdx, removeCount);
+ lineEndings.splice(startIdx, removeCount);
+ if (lines.length === 0) {
+ lines.push("");
+ lineEndings.push("");
+ return;
+ }
+ if (endIdx === lastIdx && startIdx > 0) {
+ lineEndings[startIdx - 1] = tailEnding;
+ }
+ return;
+ }
+ const newEndings = newLines.map((_, idx) => idx === newLines.length - 1 ? tailEnding : defaultEol);
+ lines.splice(startIdx, removeCount, ...newLines);
+ lineEndings.splice(startIdx, removeCount, ...newEndings);
+ }
+ function insertLogicalLinesAfter(lines, lineEndings, idx, newLines, defaultEol) {
+ if (newLines.length === 0) return;
+ let lastInsertedEnding = defaultEol;
+ if ((lineEndings[idx] ?? "") === "") {
+ lineEndings[idx] = defaultEol;
+ lastInsertedEnding = "";
+ }
+ const insertedEndings = newLines.map((_, index) => index === newLines.length - 1 ? lastInsertedEnding : defaultEol);
+ lines.splice(idx + 1, 0, ...newLines);
+ lineEndings.splice(idx + 1, 0, ...insertedEndings);
+ }
+ function composeRawText(lines, lineEndings) {
+ return lines.map((line, idx) => `${line}${lineEndings[idx] ?? ""}`).join("");
+ }
  function findLine(lines, lineNum, expectedTag, hashIndex) {
  const idx = lineNum - 1;
  if (idx < 0 || idx >= lines.length) {
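How the paired arrays carry endings through an edit can be seen with a small round trip. This sketch uses the same splice logic as `replaceLogicalRange` and `composeRawText` above, minus the deletion special cases: a replacement's last line inherits the tail ending of the replaced range, so the file's final-line shape survives.

```javascript
// Logical lines and their original endings travel as parallel arrays;
// replacements inherit the tail ending (simplified: no empty-replacement case).
function replaceLogicalRange(lines, lineEndings, startIdx, endIdx, newLines, defaultEol) {
  const removeCount = endIdx - startIdx + 1;
  const tailEnding = lineEndings[endIdx] ?? "";
  const newEndings = newLines.map((_, idx) =>
    idx === newLines.length - 1 ? tailEnding : defaultEol);
  lines.splice(startIdx, removeCount, ...newLines);
  lineEndings.splice(startIdx, removeCount, ...newEndings);
}

// Reassemble the raw file text from lines plus their recorded endings.
function composeRawText(lines, lineEndings) {
  return lines.map((line, idx) => `${line}${lineEndings[idx] ?? ""}`).join("");
}
```

Replacing one CRLF-terminated line with two keeps CRLF endings and the missing trailing newline intact.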
@@ -1994,7 +2149,7 @@ Recovery: read_file path ranges=["${csStart}-${csEnd}"], then retry edit with fr
  return conflicts;
  }
  function applySetLineEdit(edit, ctx) {
- const { lines, opts, locateOrConflict, ensureRevisionContext } = ctx;
+ const { lines, lineEndings, defaultEol, opts, locateOrConflict, ensureRevisionContext } = ctx;
  const { tag, line } = parseRef(edit.set_line.anchor);
  const idx = locateOrConflict({ tag, line }, "stale_anchor", () => buildRetryEdit(edit, lines));
  if (typeof idx === "string") return idx;
@@ -2004,17 +2159,17 @@ function applySetLineEdit(edit, ctx) {
  if (conflict) return conflict;
  const txt = edit.set_line.new_text;
  if (!txt && txt !== 0) {
- lines.splice(idx, 1);
+ replaceLogicalRange(lines, lineEndings, idx, idx, [], defaultEol);
  return null;
  }
  const origLine = [lines[idx]];
  const raw = sanitizeEditText(txt).split("\n");
  const newLines = opts.restoreIndent ? restoreIndent(origLine, raw) : raw;
- lines.splice(idx, 1, ...newLines);
+ replaceLogicalRange(lines, lineEndings, idx, idx, newLines, defaultEol);
  return null;
  }
  function applyInsertAfterEdit(edit, ctx) {
- const { lines, opts, locateOrConflict, ensureRevisionContext } = ctx;
+ const { lines, lineEndings, defaultEol, opts, locateOrConflict, ensureRevisionContext } = ctx;
  const { tag, line } = parseRef(edit.insert_after.anchor);
  const idx = locateOrConflict({ tag, line }, "stale_anchor", () => buildRetryEdit(edit, lines));
  if (typeof idx === "string") return idx;
@@ -2024,7 +2179,7 @@ function applyInsertAfterEdit(edit, ctx) {
  if (conflict) return conflict;
  let insertLines = sanitizeEditText(edit.insert_after.text).split("\n");
  if (opts.restoreIndent) insertLines = restoreIndent([lines[idx]], insertLines);
- lines.splice(idx + 1, 0, ...insertLines);
+ insertLogicalLinesAfter(lines, lineEndings, idx, insertLines, defaultEol);
  return null;
  }
  function applyReplaceLinesEdit(edit, ctx) {
@@ -2035,6 +2190,8 @@ function applyReplaceLinesEdit(edit, ctx) {
  currentSnapshot,
  ensureRevisionContext,
  hasBaseSnapshot,
+ lineEndings,
+ defaultEol,
  lines,
  locateOrConflict,
  opts,
@@ -2124,17 +2281,17 @@ Recovery: read_file path ranges=["${csStart}-${csEnd}"], then retry edit with fr
  }
  const txt = edit.replace_lines.new_text;
  if (!txt && txt !== 0) {
- lines.splice(startIdx, endIdx - startIdx + 1);
+ replaceLogicalRange(lines, lineEndings, startIdx, endIdx, [], defaultEol);
  return null;
  }
  const origRange = lines.slice(startIdx, endIdx + 1);
  let newLines = sanitizeEditText(txt).split("\n");
  if (opts.restoreIndent) newLines = restoreIndent(origRange, newLines);
- lines.splice(startIdx, endIdx - startIdx + 1, ...newLines);
+ replaceLogicalRange(lines, lineEndings, startIdx, endIdx, newLines, defaultEol);
  return null;
  }
  function applyReplaceBetweenEdit(edit, ctx) {
- const { lines, opts, locateOrConflict, ensureRevisionContext } = ctx;
+ const { lines, lineEndings, defaultEol, opts, locateOrConflict, ensureRevisionContext } = ctx;
  const boundaryMode = edit.replace_between.boundary_mode || "inclusive";
  if (boundaryMode !== "inclusive" && boundaryMode !== "exclusive") {
  throw new Error(`BAD_INPUT: replace_between boundary_mode must be inclusive or exclusive, got ${boundaryMode}`);
@@ -2163,7 +2320,11 @@ function applyReplaceBetweenEdit(edit, ctx) {
  const origRange = lines.slice(sliceStart, sliceStart + removeCount);
  if (opts.restoreIndent && origRange.length > 0) newLines = restoreIndent(origRange, newLines);
  if (txt === "" || txt === null) newLines = [];
- lines.splice(sliceStart, removeCount, ...newLines);
+ if (removeCount === 0) {
+ insertLogicalLinesAfter(lines, lineEndings, sliceStart - 1, newLines, defaultEol);
+ return null;
+ }
+ replaceLogicalRange(lines, lineEndings, sliceStart, sliceStart + removeCount - 1, newLines, defaultEol);
  return null;
  }
  function editFile(filePath, edits, opts = {}) {
  function editFile(filePath, edits, opts = {}) {
@@ -2175,11 +2336,12 @@ function editFile(filePath, edits, opts = {}) {
2175
2336
  const staleRevision = !!opts.baseRevision && opts.baseRevision !== currentSnapshot.revision && hasBaseSnapshot;
2176
2337
  const changedRanges = staleRevision && hasBaseSnapshot ? computeChangedRanges(baseSnapshot.lines, currentSnapshot.lines) : [];
2177
2338
  const conflictPolicy = opts.conflictPolicy || "conservative";
2178
- const original = currentSnapshot.content;
2339
+ const originalRaw = currentSnapshot.rawText;
2179
2340
  const lines = [...currentSnapshot.lines];
2341
+ const lineEndings = [...currentSnapshot.lineEndings];
2180
2342
  const origLines = [...currentSnapshot.lines];
2181
- const hadTrailingNewline = original.endsWith("\n");
2182
2343
  const hashIndex = currentSnapshot.uniqueTagIndex;
2344
+ const defaultEol = currentSnapshot.defaultEol || "\n";
2183
2345
  let autoRebased = false;
2184
2346
  const remaps = [];
2185
2347
  const remapKeys = /* @__PURE__ */ new Set();
@@ -2300,6 +2462,8 @@ ${snip.text}`;
  ensureRevisionContext,
  hasBaseSnapshot,
  lines,
+ lineEndings,
+ defaultEol,
  locateOrConflict,
  opts,
  origLines,
@@ -2340,19 +2504,17 @@ ${snip.text}`;
  throw editErr;
  }
  }
- let content = lines.join("\n");
- if (hadTrailingNewline && !content.endsWith("\n")) content += "\n";
- if (!hadTrailingNewline && content.endsWith("\n")) content = content.slice(0, -1);
- if (original === content) {
+ const content = composeRawText(lines, lineEndings);
+ if (originalRaw === content) {
  throw new Error("NOOP_EDIT: File already contains the desired content. No changes needed.");
  }
- const fullDiff = simpleDiff(origLines, content.split("\n"));
+ const fullDiff = simpleDiff(origLines, lines);
  let displayDiff = fullDiff;
  if (displayDiff && displayDiff.length > MAX_DIFF_CHARS) {
  displayDiff = displayDiff.slice(0, MAX_DIFF_CHARS) + `
  ... (diff truncated, ${displayDiff.length} chars total)`;
  }
- const newLinesAll = content.split("\n");
+ const newLinesAll = lines;
  let minLine = Infinity, maxLine = 0;
  if (fullDiff) {
  for (const dl of fullDiff.split("\n")) {
@@ -2369,7 +2531,7 @@ ${snip.text}`;
  reason: ${REASON.DRY_RUN_PREVIEW}
  revision: ${currentSnapshot.revision}
  file: ${currentSnapshot.fileChecksum}
- Dry run: ${filePath} would change (${content.split("\n").length} lines)`;
+ Dry run: ${filePath} would change (${lines.length} lines)`;
  if (staleRevision && hasBaseSnapshot) msg2 += `
  changed_ranges: ${describeChangedRanges(changedRanges)}`;
  if (displayDiff) msg2 += `
@@ -2397,13 +2559,21 @@ remapped_refs:
  ${remaps.map(({ from, to }) => `${from} -> ${to}`).join("\n")}`;
  }
  msg += `
- Updated ${filePath} (${content.split("\n").length} lines)`;
+ Updated ${filePath} (${lines.length} lines)`;
  if (fullDiff && minLine <= maxLine) {
  const ctxStart = Math.max(0, minLine - 6) + 1;
  const ctxEnd = Math.min(newLinesAll.length, maxLine + 5);
  const entries = createSnapshotEntries(nextSnapshot, ctxStart, ctxEnd);
  if (entries.length > 0) {
- const block = buildEditReadyBlock({ path: real, kind: "post_edit", entries });
+ const block = buildEditReadyBlock({
+ path: real,
+ kind: "post_edit",
+ entries,
+ meta: {
+ eol: nextSnapshot.eol,
+ trailing_newline: nextSnapshot.trailingNewline
+ }
+ });
  msg += `

  ${serializeReadBlock(block)}`;
@@ -2473,6 +2643,8 @@ try {
  } catch {
  }
  var DEFAULT_LIMIT2 = 100;
+ var DEFAULT_TOTAL_LIMIT_CONTENT = 200;
+ var DEFAULT_TOTAL_LIMIT_LIST = 1e3;
  var MAX_OUTPUT = 10 * 1024 * 1024;
  var TIMEOUT2 = 3e4;
  var MAX_SEARCH_OUTPUT_CHARS = 8e4;
@@ -2514,12 +2686,19 @@ function grepSearch(pattern, opts = {}) {
  const target = normPath ? resolve2(normPath) : process.cwd();
  const output = opts.output || "content";
  const plain = !!opts.plain;
- const totalLimit = opts.totalLimit && opts.totalLimit > 0 ? opts.totalLimit : 0;
- if (output === "files") return filesMode(pattern, target, opts);
- if (output === "count") return countMode(pattern, target, opts);
+ const defaultTotalLimit = output === "content" ? DEFAULT_TOTAL_LIMIT_CONTENT : DEFAULT_TOTAL_LIMIT_LIST;
+ const totalLimit = opts.totalLimit === 0 ? 0 : opts.totalLimit && opts.totalLimit > 0 ? opts.totalLimit : defaultTotalLimit;
+ if (output === "files") return filesMode(pattern, target, opts, totalLimit);
+ if (output === "count") return countMode(pattern, target, opts, totalLimit);
  return contentMode(pattern, target, opts, plain, totalLimit);
  }
- async function filesMode(pattern, target, opts) {
+ function applyListModeTotalLimit(lines, totalLimit) {
+ if (!totalLimit || totalLimit <= 0 || lines.length <= totalLimit) return lines.join("\n");
+ const visible = lines.slice(0, totalLimit);
+ visible.push(`OUTPUT_CAPPED: ${lines.length - totalLimit} more result line(s) omitted. Narrow with path= or glob=, or raise total_limit.`);
+ return visible.join("\n");
+ }
+ async function filesMode(pattern, target, opts, totalLimit) {
  const realArgs = ["-l"];
  if (opts.caseInsensitive) realArgs.push("-i");
  else if (opts.smartCase) realArgs.push("-S");
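The hunk above introduces default `total_limit` caps and a shared list-mode capping helper. A standalone sketch of that logic (constant names and the `applyListModeTotalLimit` body are copied from the diff; `resolveTotalLimit` is a hypothetical extraction of the inline ternary, not a function in the package):

```javascript
// Per-output-mode defaults added in 1.14.0 (copied from the diff).
const DEFAULT_TOTAL_LIMIT_CONTENT = 200;
const DEFAULT_TOTAL_LIMIT_LIST = 1e3;

// An explicit 0 disables the cap, a positive value wins, anything else
// falls back to the default for the output mode (hypothetical wrapper).
function resolveTotalLimit(opts, output) {
  const defaultTotalLimit = output === "content" ? DEFAULT_TOTAL_LIMIT_CONTENT : DEFAULT_TOTAL_LIMIT_LIST;
  return opts.totalLimit === 0 ? 0 : opts.totalLimit && opts.totalLimit > 0 ? opts.totalLimit : defaultTotalLimit;
}

// Same shape as the files/count capping helper added in this hunk.
function applyListModeTotalLimit(lines, totalLimit) {
  if (!totalLimit || totalLimit <= 0 || lines.length <= totalLimit) return lines.join("\n");
  const visible = lines.slice(0, totalLimit);
  visible.push(`OUTPUT_CAPPED: ${lines.length - totalLimit} more result line(s) omitted. Narrow with path= or glob=, or raise total_limit.`);
  return visible.join("\n");
}
```

Note the asymmetry: `total_limit: 0` now means "unlimited" explicitly, while an omitted value gets a mode-specific cap instead of the old unlimited default.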
@@ -2534,9 +2713,9 @@ async function filesMode(pattern, target, opts) {
  if (code !== 0 && code !== null) throw new Error(`GREP_ERROR: rg exit ${code} \u2014 ${stderr.trim() || "unknown error"}`);
  const lines = stdout.trimEnd().split("\n").filter(Boolean);
  const normalized = lines.map((l) => l.replace(/\\/g, "/"));
- return normalized.join("\n");
+ return applyListModeTotalLimit(normalized, totalLimit);
  }
- async function countMode(pattern, target, opts) {
+ async function countMode(pattern, target, opts, totalLimit) {
  const realArgs = ["-c"];
  if (opts.caseInsensitive) realArgs.push("-i");
  else if (opts.smartCase) realArgs.push("-S");
@@ -2551,7 +2730,7 @@ async function countMode(pattern, target, opts) {
  if (code !== 0 && code !== null) throw new Error(`GREP_ERROR: rg exit ${code} \u2014 ${stderr.trim() || "unknown error"}`);
  const lines = stdout.trimEnd().split("\n").filter(Boolean);
  const normalized = lines.map((l) => l.replace(/\\/g, "/"));
- return normalized.join("\n");
+ return applyListModeTotalLimit(normalized, totalLimit);
  }
  async function contentMode(pattern, target, opts, plain, totalLimit) {
  const realArgs = ["--json"];
@@ -2663,7 +2842,7 @@ async function contentMode(pattern, target, opts, plain, totalLimit) {
  flushGroup();
  blocks.push(buildDiagnosticBlock({
  kind: "total_limit",
- message: `Search stopped after ${totalLimit} match event(s). Narrow the query to continue.`,
+ message: `Search stopped after ${totalLimit} match event(s). Narrow the query, raise total_limit, or pass total_limit=0 to disable the cap.`,
  path: String(target).replace(/\\/g, "/")
  }));
  return blocks.map((block) => block.type === "edit_ready_block" ? serializeSearchBlock(block, { plain }) : serializeDiagnosticBlock(block)).join("\n\n");
@@ -2728,7 +2907,7 @@ function isSupportedExtension(ext) {
  }

  // ../hex-common/src/parser/tree-sitter.mjs
- import { existsSync as existsSync4, readFileSync as readFileSync2 } from "node:fs";
+ import { existsSync as existsSync4, readFileSync as readFileSync3 } from "node:fs";
  import { dirname as dirname3, resolve as resolve3 } from "node:path";
  import { fileURLToPath } from "node:url";
  var parserInstance = null;
@@ -2755,7 +2934,7 @@ function loadArtifactManifest() {
  `Tree-sitter artifact manifest is missing. Checked: ${artifactDirCandidates.join(", ")}. This package now ships first-party grammar WASM artifacts; restore artifacts/tree-sitter or rerun the artifact materialization step.`
  );
  }
- artifactManifest = JSON.parse(readFileSync2(manifestPath, "utf8"));
+ artifactManifest = JSON.parse(readFileSync3(manifestPath, "utf8"));
  return artifactManifest;
  }
  function treeSitterArtifactPath(grammar) {
@@ -3117,7 +3296,7 @@ import { statSync as statSync8 } from "node:fs";
  import { resolve as resolve6 } from "node:path";

  // lib/tree.mjs
- import { readdirSync as readdirSync2, readFileSync as readFileSync3, statSync as statSync6, existsSync as existsSync5 } from "node:fs";
+ import { readdirSync as readdirSync2, readFileSync as readFileSync4, statSync as statSync6, existsSync as existsSync5 } from "node:fs";
  import { resolve as resolve4, basename, join as join4, relative as relative2 } from "node:path";
  import ignore from "ignore";
  var SKIP_DIRS = /* @__PURE__ */ new Set([
@@ -3138,7 +3317,7 @@ function loadGitignore(rootDir) {
  const gi = join4(rootDir, ".gitignore");
  if (!existsSync5(gi)) return null;
  try {
- const content = readFileSync3(gi, "utf-8");
+ const content = readFileSync4(gi, "utf-8");
  return ignore().add(content);
  } catch {
  return null;
@@ -3393,7 +3572,7 @@ function inspectPath(inputPath, opts = {}) {
  }

  // lib/setup.mjs
- import { readFileSync as readFileSync4, writeFileSync as writeFileSync2, existsSync as existsSync6, mkdirSync, copyFileSync } from "node:fs";
+ import { readFileSync as readFileSync5, writeFileSync as writeFileSync2, existsSync as existsSync6, mkdirSync, copyFileSync } from "node:fs";
  import { resolve as resolve7, dirname as dirname4, join as join5 } from "node:path";
  import { fileURLToPath as fileURLToPath2 } from "node:url";
  import { homedir } from "node:os";
@@ -3424,7 +3603,7 @@ var CLAUDE_HOOKS = {
  };
  function readJson(filePath) {
  if (!existsSync6(filePath)) return null;
- return JSON.parse(readFileSync4(filePath, "utf-8"));
+ return JSON.parse(readFileSync5(filePath, "utf-8"));
  }
  function writeJson(filePath, data) {
  mkdirSync(dirname4(filePath), { recursive: true });
@@ -3439,7 +3618,7 @@ function findEntryByCommand(entries) {
  }
  function safeRead(filePath) {
  try {
- return readFileSync4(filePath, "utf-8");
+ return readFileSync5(filePath, "utf-8");
  } catch {
  return null;
  }
@@ -3530,7 +3709,7 @@ import { join as join6 } from "node:path";

  // ../hex-common/src/git/semantic-diff.mjs
  import { execFileSync } from "node:child_process";
- import { existsSync as existsSync7, readFileSync as readFileSync5, statSync as statSync9 } from "node:fs";
+ import { existsSync as existsSync7, readFileSync as readFileSync6, statSync as statSync9 } from "node:fs";
  import { dirname as dirname5, extname as extname3, relative as relative3, resolve as resolve8 } from "node:path";
  function normalizePath2(value) {
  return value.replace(/\\/g, "/");
@@ -3710,7 +3889,7 @@ function compareSymbols(beforeEntries = [], afterEntries = []) {
  function readWorkingTreeFile(repoRoot, relPath) {
  const absPath = resolve8(repoRoot, relPath);
  if (!existsSync7(absPath)) return null;
- return readFileSync5(absPath, "utf8").replace(/\r\n/g, "\n");
+ return readFileSync6(absPath, "utf8").replace(/\r\n/g, "\n");
  }
  function readGitFile(repoRoot, ref, relPath) {
  if (!ref) return null;
@@ -4055,7 +4234,7 @@ OUTPUT_CAPPED: Output exceeded ${MAX_BULK_OUTPUT_CHARS} chars.`;
  }

  // server.mjs
- var version = true ? "1.12.1" : (await null).createRequire(import.meta.url)("./package.json").version;
+ var version = true ? "1.14.0" : (await null).createRequire(import.meta.url)("./package.json").version;
  var { server, StdioServerTransport } = await createServerRuntime({
  name: "hex-line-mcp",
  version
@@ -4080,7 +4259,7 @@ function parseReadRanges(rawRanges) {
  }
  server.registerTool("read_file", {
  title: "Read File",
- description: "Read file with hash-annotated lines, checksums, revision metadata, and automatic graph hints when available. Default: edit-ready output. Use plain:true for non-edit workflows.",
+ description: "Read file with hash-annotated lines, checksums, logical revision metadata, EOL/trailing-newline state, and graph hints when available. Default: edit-ready output.",
  inputSchema: z2.object({
  path: z2.string().optional().describe("File path"),
  paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
@@ -4115,7 +4294,7 @@ ERROR: ${e.message}`);
  });
  server.registerTool("edit_file", {
  title: "Edit File",
- description: "Apply hash-verified partial edits to one file. Batch multiple edits in one call. Carry base_revision from prior read/edit for auto-rebase on concurrent changes. Conservative conflicts return retry_edit/retry_edits, suggested_read_call, and retry_plan when available.",
+ description: "Apply hash-verified partial edits to one file. Carry base_revision on same-file follow-ups. Preserves existing line endings and trailing-newline shape; conservative conflicts return retry helpers.",
  inputSchema: z2.object({
  path: z2.string().describe("File to edit"),
  edits: z2.union([z2.string(), z2.array(z2.any())]).describe(
@@ -4124,12 +4303,14 @@ server.registerTool("edit_file", {
  dry_run: flexBool().describe("Preview changes without writing"),
  restore_indent: flexBool().describe("Auto-fix indentation to match anchor (default: false)"),
  base_revision: z2.string().optional().describe("Prior revision from read_file/edit_file. Enables conservative auto-rebase for same-file follow-up edits."),
- conflict_policy: z2.enum(["strict", "conservative"]).optional().describe('Conflict handling (default: "conservative"). "conservative" returns structured CONFLICT output with recovery_ranges, retry_edit/retry_edits, suggested_read_call, and retry_plan when available.')
+ conflict_policy: z2.enum(["strict", "conservative"]).optional().describe('Conflict handling (default: "conservative"). "conservative" returns structured CONFLICT output with recovery_ranges, retry_edit/retry_edits, suggested_read_call, and retry_plan when available.'),
+ allow_external: flexBool().describe("Allow editing a path outside the current project root. Use only when you intentionally target a temp or external file.")
  }),
  annotations: { readOnlyHint: false, destructiveHint: false, idempotentHint: false }
  }, async (rawParams) => {
- const { path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy } = rawParams ?? {};
+ const { path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy, allow_external } = rawParams ?? {};
  try {
+ assertProjectScopedPath(p, { allowExternal: !!allow_external });
  let parsed;
  try {
  parsed = typeof json === "string" ? JSON.parse(json) : json;
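The handler above accepts `edits` either as a JSON string or as an array (per the union schema). This sketch mirrors that parsing step; the shape of an individual edit object (`op`, `line`, `text`) is an illustrative assumption based on the op names documented in output-style.md, not the package's actual schema:

```javascript
// Mirrors the edit_file handler's flexible parsing of `edits`
// (string JSON vs. already-parsed array).
function parseEdits(json) {
  return typeof json === "string" ? JSON.parse(json) : json;
}

// Hypothetical edit payload: "set_line" comes from output-style.md;
// field names are assumptions for illustration only.
const asArray = parseEdits([{ op: "set_line", line: 42, text: "const limit = 200;" }]);
const asString = parseEdits('[{"op":"set_line","line":42,"text":"const limit = 200;"}]');
```

Both calls produce equivalent arrays, which is why agents can pass `edits` in whichever form is cheaper to emit.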
@@ -4157,12 +4338,14 @@ server.registerTool("write_file", {
  description: "Create a new file or overwrite existing. Creates parent dirs. For existing files prefer edit_file (shows diff, verifies hashes).",
  inputSchema: z2.object({
  path: z2.string().describe("File path"),
- content: z2.string().describe("File content")
+ content: z2.string().describe("File content"),
+ allow_external: flexBool().describe("Allow writing a path outside the current project root. Use only when you intentionally target a temp or external file.")
  }),
  annotations: { readOnlyHint: false, destructiveHint: false, idempotentHint: true }
  }, async (rawParams) => {
- const { path: p, content } = rawParams ?? {};
+ const { path: p, content, allow_external } = rawParams ?? {};
  try {
+ assertProjectScopedPath(p, { allowExternal: !!allow_external });
  const abs = validateWritePath(p);
  mkdirSync2(dirname6(abs), { recursive: true });
  writeFileSync4(abs, content, "utf-8");
@@ -4188,7 +4371,7 @@ server.registerTool("grep_search", {
  context_before: flexNum().describe("Context lines BEFORE match (-B)"),
  context_after: flexNum().describe("Context lines AFTER match (-A)"),
  limit: flexNum().describe("Max matches per file (default: 100)"),
- total_limit: flexNum().describe("Total match events across all files; multiline matches count as 1 (0 = unlimited)"),
+ total_limit: flexNum().describe("Total match events across all files; multiline matches count as 1 (default: 200 for content, 1000 for files/count, 0 = unlimited)"),
  plain: flexBool().describe("Omit hash tags, return file:line:content")
  }),
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true }
@@ -4250,7 +4433,7 @@ server.registerTool("outline", {
  });
  server.registerTool("verify", {
  title: "Verify Checksums",
- description: "Check if held checksums are still valid without rereading. Returns canonical status, next_action, and suggested_read_call when rereading specific ranges is the right recovery.",
+ description: "Check if held checksums are still valid without rereading. Use before delayed or mixed-tool follow-up edits; returns canonical status, next_action, and reread guidance.",
  inputSchema: z2.object({
  path: z2.string().describe("File path"),
  checksums: z2.array(z2.string()).describe('Checksum strings, e.g. ["1-50:f7e2a1b0", "51-100:abcd1234"]'),
@@ -4313,12 +4496,14 @@ server.registerTool("bulk_replace", {
  path: z2.string().describe("Root directory for the replacement scope"),
  dry_run: flexBool().describe("Preview without writing (default: false)"),
  max_files: flexNum().describe("Max files to process (default: 100)"),
- format: z2.enum(["compact", "full"]).optional().describe('"compact" (default) = summary only, "full" = include capped diffs')
+ format: z2.enum(["compact", "full"]).optional().describe('"compact" (default) = summary only, "full" = include capped diffs'),
+ allow_external: flexBool().describe("Allow a replacement root outside the current project root. Use only when you intentionally target a temp or external directory.")
  }),
  annotations: { readOnlyHint: false, destructiveHint: true, idempotentHint: false }
  }, async (rawParams) => {
  try {
  const params = rawParams ?? {};
+ assertProjectScopedPath(params.path, { allowExternal: !!params.allow_external });
  const raw = params.replacements;
  let replacementsInput;
  try {
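The `edit_file`, `write_file`, and `bulk_replace` handlers above all call `assertProjectScopedPath`, whose body is outside this diff. A minimal sketch of the documented contract (reject paths that resolve outside the project root unless `allow_external` is set), assuming a `process.cwd()`-based root; the real implementation may differ:

```javascript
import { resolve, relative, isAbsolute } from "node:path";

// Hypothetical reconstruction: the function name and behavior come from the
// diff, the body is an assumption. Throws when `p` escapes the project root.
function assertProjectScopedPath(p, { allowExternal = false } = {}, root = process.cwd()) {
  if (allowExternal) return;
  const rel = relative(resolve(root), resolve(root, p));
  if (rel.startsWith("..") || isAbsolute(rel)) {
    throw new Error(`EXTERNAL_PATH: ${p} is outside the project root. Pass allow_external: true if intentional.`);
  }
}
```

The `relative(...).startsWith("..")` check is the standard Node idiom for containment; `isAbsolute` covers the cross-drive case on Windows, where `path.relative` returns an absolute path.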
package/output-style.md CHANGED
@@ -27,25 +27,31 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads
  | Path | Flow |
  |------|------|
  | Surgical | `grep_search -> edit_file` |
- | Exploratory | `outline -> read_file (ranges) -> edit_file` |
+ | Exploratory | `outline -> read_file (ranges) -> edit_file(base_revision)` |
  | Multi-file | `bulk_replace(path=<project root>)` |
- | Follow-up after delay | `verify -> reread only if STALE` |
+ | Follow-up after delay | `verify(base_revision) -> reread only if STALE -> retry with returned helpers` |

  ## Scope Discipline

  - Auto-fill `path` instead of leaving scope implicit.
  - For file tools (`read_file`, `edit_file`, `outline`, `changes` on one file), use the target file path.
+ - Read-only file tools may target explicit temp-file paths outside the repo when you intentionally inspect a scratch file.
  - For repo-wide tools (`bulk_replace`, directory `inspect_path`, broad `grep_search`), use the resolved project root or intended directory scope.
+ - Mutating tools stay inside the current project root by default. Add `allow_external=true` only when you intentionally edit a temp or external path.
  - Treat missing or ambiguous scope as an error to fix, not as a reason to guess across repositories.

  ## Edit Discipline

  - Never invent `range_checksum`. Copy it from a fresh `read_file` or `grep_search(output:"content")` block.
  - First mutation in a file: use `grep_search` for narrow targets, or `outline -> read_file(ranges)` for structural edits.
+ - Preserve file conventions mentally: `hex-line` hashes normalized logical text, but `edit_file` preserves the file's existing line endings and trailing-newline shape on write.
  - Prefer `set_line` or `insert_after` for small local changes. Prefer `replace_between` for larger bounded block rewrites.
  - Use `replace_lines` only when you already hold the exact inclusive range checksum for that block.
- - Avoid large first-pass edit batches. Start with 1-2 hunks, then continue from the returned `revision`.
+ - Avoid large first-pass edit batches. Start with 1-2 hunks, then continue from the returned `revision` as `base_revision`.
+ - Before a delayed follow-up edit, a formatter pass, or any mixed-tool workflow on the same file, run `verify` with the last checksums and `base_revision`.
  - If `edit_file` returns `retry_edit`, `retry_edits`, or `retry_plan`, reuse those directly instead of rebuilding anchors/checksums by hand.
+ - Reuse `retry_checksum` when it is returned for the exact same target range.
+ - Once `hex-line` owns a file edit session, avoid mixing built-in `Edit`/`Write` on that file unless you intentionally want a new baseline.
  - Follow `next_action` first. Treat `summary` and `snippet` as the compact local context, not as prose to reinterpret.

  ## Exceptions
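The "follow-up after delay" flow added above can be illustrated with a concrete `verify` payload. The checksum-string format is taken from the `verify` schema in this diff; the path, hash values, and revision are invented for the example:

```javascript
// Illustrative verify arguments (values are made up; parameter names and the
// "<start>-<end>:<hash>" checksum format come from the verify tool schema).
const verifyArgs = {
  path: "lib/search.mjs",
  checksums: ["1-50:f7e2a1b0", "51-100:abcd1234"],
  base_revision: "rev-from-last-edit" // revision returned by the prior read/edit
};

// Each checksum string splits into its inclusive line range and FNV-1a hash.
const [range, hash] = verifyArgs.checksums[0].split(":");
```

If `verify` reports STALE, reread only the reported ranges; otherwise continue editing against the same `base_revision`.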
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@levnikolaevich/hex-line-mcp",
- "version": "1.12.1",
+ "version": "1.14.0",
  "mcpName": "io.github.levnikolaevich/hex-line-mcp",
  "type": "module",
  "description": "Hash-verified file editing MCP + token efficiency hook for AI coding agents. 9 tools: inspect_path, read, edit, write, grep, outline, verify, changes, bulk_replace.",