@levnikolaevich/hex-line-mcp 1.24.1 → 1.25.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -164,8 +164,8 @@ Read a file with progressive disclosure. Default mode is discovery-first: plain
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
-| `path` | string | yes | File path |
-| `paths` | string[] | no | Array of file paths to read (batch mode) |
+| `file_path` | string | yes | File path |
+| `file_paths` | string[] | no | Array of file paths to read (batch mode) |
 | `offset` | number | no | Start line, 1-indexed (default: 1) |
 | `limit` | number | no | Max lines to return (default: 200 for discovery, 2000 for edit-ready, 0 = all) |
 | `ranges` | array | no | Explicit line ranges, e.g. `[{ "start": 10, "end": 30 }]` |
@@ -211,7 +211,7 @@ Edit using revision-aware hash-verified anchors. Prefer one batched call per fil
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
-| `path` | string | yes | File to edit |
+| `file_path` | string | yes | File to edit |
 | `edits` | string | yes | JSON array of edit operations (see below) |
 | `dry_run` | boolean | no | Preview changes without writing |
 | `restore_indent` | boolean | no | Auto-fix indentation to match anchor context (default: false) |
@@ -232,7 +232,7 @@ Edit operations (JSON array):
 
 Discipline:
 
-- Never invent `range_checksum`. Copy it from `read_file` or `grep_search(output:"content")`.
+- Never invent `range_checksum`. Copy it from `read_file` or `grep_search(output_mode:"content")`.
 - First mutation in a file: prefer `grep_search` for narrow targets, or `outline -> read_file(ranges)` for structural edits.
 - Prefer 1-2 hunks on the first pass. Once `edit_file` returns a fresh `revision`, continue from that state as `base_revision`.
 - `hex-line` preserves existing file line endings on write; repo-level line-ending cleanup should be a separate deliberate operation, not a side effect of `edit_file`.
@@ -242,12 +242,10 @@ Result footer includes:
 - `status: OK | AUTO_REBASED | CONFLICT`
 - `reason: ...` as the canonical machine-readable cause for the current status
 - `revision: ...`
-- `file: ...`
 - `summary: ...` with edited line span / diff counts on successful edits
 - `payload_sections: ...` so callers know which detailed sections follow
-- `graph_enrichment: available | unavailable`
-- `semantic_impact_count: ...`, `semantic_fact_count: ...`, and `clone_warning_count: ...` when graph-backed review data is available
-- `provenance_summary: ...` for the edit protocol + optional graph source
+- `graph_enrichment: unavailable` + `graph_fix: ...` when the project graph is missing or out of sync (silent when graph is healthy)
+- `⚠ Semantic impact:` and `⚠ N clone(s):` blocks emitted directly when graph-backed review data is available
 - `changed_ranges: ...` when relevant
 - `recovery_ranges: ...` with the narrowest recommended `read_file` ranges for retry
 - `next_action: ...` as the canonical immediate choice: `apply_retry_edit`, `apply_retry_batch`, or `reread_then_retry`
@@ -266,7 +264,7 @@ Create a new file or overwrite an existing one. Creates parent directories autom
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
-| `path` | string | yes | File path |
+| `file_path` | string | yes | File path |
 | `content` | string | yes | File content |
 | `allow_external` | boolean | no | Allow writing a path outside the current project root |
 
@@ -280,7 +278,7 @@ Search file contents using ripgrep. Default mode is `summary` for discovery. Use
 | `path` | string | no | Directory or file to search (default: cwd) |
 | `glob` | string | no | Glob filter, e.g. `"*.ts"` |
 | `type` | string | no | File type filter, e.g. `"js"`, `"py"` |
-| `output` | enum | no | Output format: `"summary"` (default), `"content"`, `"files"`, `"count"` |
+| `output_mode` | enum | no | Output format: `"summary"` (default), `"content"`, `"files_with_matches"`, `"count"` |
 | `case_insensitive` | boolean | no | Ignore case |
 | `smart_case` | boolean | no | CI when lowercase, CS when uppercase (`-S`) |
 | `literal` | boolean | no | Literal string search, no regex (`-F`) |
@@ -289,7 +287,7 @@ Search file contents using ripgrep. Default mode is `summary` for discovery. Use
 | `context_before` | number | no | Context lines BEFORE match (`-B`) |
 | `context_after` | number | no | Context lines AFTER match (`-A`) |
 | `limit` | number | no | Max matches per file (default: 20 for `summary`, 100 for `content`) |
-| `total_limit` | number | no | Total match events across all files; multiline matches count as 1 (default: 50 for `summary`, 200 for `content`, 1000 for `files`/`count`, 0 = unlimited) |
+| `head_limit` | number | no | Total match events across all files; multiline matches count as 1 (default: 50 for `summary`, 200 for `content`, 1000 for `files_with_matches`/`count`, 0 = unlimited) |
 | `plain` | boolean | no | Omit hash tags inside block entries, return `lineNum\|content` |
 | `edit_ready` | boolean | no | Preserve hash/checksum search hunks in `content` mode |
 | `allow_large_output` | boolean | no | Bypass the default `content`-mode block/char caps when you intentionally need a larger payload |
@@ -305,7 +303,7 @@ AST-based structural outline with hash anchors for direct `edit_file` usage. Sup
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
-| `path` | string | yes | Source file path |
+| `file_path` | string | yes | Source file path |
 
 Supported languages: JavaScript (`.js`, `.mjs`, `.cjs`, `.jsx`), TypeScript (`.ts`, `.tsx`), Python (`.py`), C# (`.cs`), and PHP (`.php`) via tree-sitter WASM.
 
@@ -317,7 +315,7 @@ Check if range checksums from prior read/search blocks are still valid, optional
 
 | Parameter | Type | Required | Description |
 |-----------|------|----------|-------------|
-| `path` | string | yes | File path |
+| `file_path` | string | yes | File path |
 | `checksums` | string[] | yes | Array of checksum strings, e.g. `["1-50:f7e2a1b0"]` |
 | `base_revision` | string | no | Prior revision to compare against latest state |
 
@@ -327,12 +325,11 @@ Example output:
 status: STALE
 reason: checksums_stale
 revision: rev-17-deadbeef
-file: 1-120:abc123ef
 summary: valid=0 stale=1 invalid=0
 next_action: reread_ranges
 base_revision: rev-16-feedcafe
 changed_ranges: 10-12(replace)
-suggested_read_call: {"tool":"mcp__hex-line__read_file","arguments":{"path":"/repo/file.ts","ranges":["10-12"]}}
+suggested_read_call: {"tool":"mcp__hex-line__read_file","arguments":{"file_path":"/repo/file.ts","ranges":["10-12"]}}
 
 entry: 1/1 | status: STALE | span: 10-12 | checksum: 10-12:oldc0de0 | current_checksum: 10-12:newc0de0 | next_action: reread_range | summary: content changed since checksum capture
 ```
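Taken together, the README changes above are a parameter rename pass: `path`/`paths` become `file_path`/`file_paths` on file-oriented tools, while `grep_search` keeps `path` as the search root but renames `output` to `output_mode` (with the `"files"` value becoming `"files_with_matches"`) and `total_limit` to `head_limit`. A minimal caller-side migration sketch; the helper itself is hypothetical and only the rename table comes from this diff:

```javascript
// Illustrative migration helper, not part of the package. The rename table is
// read off the README diff above; tools not shown there are not covered.
const PARAM_RENAMES = {
  read_file: { path: "file_path", paths: "file_paths" },
  edit_file: { path: "file_path" },
  write_file: { path: "file_path" },
  outline: { path: "file_path" },
  grep_search: { output: "output_mode", total_limit: "head_limit" },
};

// grep_search also renames one enum value of the former `output` parameter.
const OUTPUT_MODE_VALUES = { files: "files_with_matches" };

function migrateCall(tool, args) {
  const map = PARAM_RENAMES[tool] ?? {};
  const out = Object.fromEntries(
    Object.entries(args).map(([key, value]) => [map[key] ?? key, value])
  );
  if (tool === "grep_search" && out.output_mode in OUTPUT_MODE_VALUES) {
    out.output_mode = OUTPUT_MODE_VALUES[out.output_mode];
  }
  return out;
}
```

A 1.24.x call such as `{ path: "src/app.ts" }` against `read_file` would come out as `{ file_path: "src/app.ts" }` under this mapping.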
package/dist/hook.mjs CHANGED
@@ -116,16 +116,16 @@ var REVERSE_TOOL_HINTS = {
 };
 var TOOL_HINTS = {
   Read: "mcp__hex-line__read_file (not Read). For structure-first: mcp__hex-line__outline then mcp__hex-line__read_file with ranges",
-  Edit: "mcp__hex-line__edit_file (not Edit). If you need hash anchors: mcp__hex-line__grep_search(output='content', edit_ready=true) first",
+  Edit: "mcp__hex-line__edit_file (not Edit). If you need hash anchors: mcp__hex-line__grep_search(output_mode='content', edit_ready=true) first",
   Write: "mcp__hex-line__write_file (not Write). No prior Read needed",
-  Grep: "mcp__hex-line__grep_search (not Grep). Params: output, literal, context_before, context_after, multiline",
+  Grep: "mcp__hex-line__grep_search (not Grep). Params: output_mode, literal, context_before, context_after, multiline",
   Glob: "mcp__hex-line__inspect_path (not Glob). Use pattern=... with an explicit path for project file discovery and name/path globbing",
   cat: "mcp__hex-line__read_file (not cat/head/tail/less/more/type/Get-Content)",
   head: "mcp__hex-line__read_file with limit param (not head)",
   tail: "mcp__hex-line__read_file with offset param (not tail)",
   ls: "mcp__hex-line__inspect_path for tree or pattern search (not ls/dir/find/tree/Get-ChildItem). E.g. pattern='*-mcp' type='dir'",
   stat: "mcp__hex-line__inspect_path for compact file metadata (not stat/wc/Get-Item/file)",
-  grep: "mcp__hex-line__grep_search (not grep/rg/findstr/Select-String). Params: output, literal, context_before, context_after, multiline",
+  grep: "mcp__hex-line__grep_search (not grep/rg/findstr/Select-String). Params: output_mode, literal, context_before, context_after, multiline",
   sed: "mcp__hex-line__edit_file for hash edits, or mcp__hex-line__bulk_replace with path=<project root> for text rename (not sed -i)",
   diff: "mcp__hex-line__changes (not diff). Git diff with change symbols",
   outline: "mcp__hex-line__outline (before reading large code files)",
@@ -469,15 +469,15 @@ function handlePreToolUse(data) {
   const ext = filePath ? extOf(filePath) : "";
   const rangeHint = isPartialRead(toolInput) ? " Preserve the same offset/limit or ranges." : "";
   const outlineTip = filePath && OUTLINEABLE_EXT.has(ext) ? ` For structure-first discovery: mcp__hex-line__outline then mcp__hex-line__read_file with ranges.` : "";
-  const target = filePath ? `Use mcp__hex-line__read_file(path="${filePath}").${rangeHint}${outlineTip}` : "Use mcp__hex-line__read_file or mcp__hex-line__inspect_path.";
+  const target = filePath ? `Use mcp__hex-line__read_file(file_path="${filePath}").${rangeHint}${outlineTip}` : "Use mcp__hex-line__read_file or mcp__hex-line__inspect_path.";
   redirect(target, DEFERRED_HINT);
 }
 if (toolName === "Edit") {
-  const target = filePath ? `Use mcp__hex-line__edit_file(path="${filePath}"). If you need hash anchors first: mcp__hex-line__grep_search(output="content", edit_ready=true).` : 'Use mcp__hex-line__edit_file. If you need hash anchors first: mcp__hex-line__grep_search(output="content", edit_ready=true).';
+  const target = filePath ? `Use mcp__hex-line__edit_file(file_path="${filePath}"). If you need hash anchors first: mcp__hex-line__grep_search(output_mode="content", edit_ready=true).` : 'Use mcp__hex-line__edit_file. If you need hash anchors first: mcp__hex-line__grep_search(output_mode="content", edit_ready=true).';
   redirect(target, "Hash-verified edits for project text files.\n" + DEFERRED_HINT);
 }
 if (toolName === "Write") {
-  const pathNote = filePath ? ` with path="${filePath}"` : "";
+  const pathNote = filePath ? ` with file_path="${filePath}"` : "";
   redirect(`Use mcp__hex-line__write_file${pathNote}`, TOOL_HINTS.Write + "\n" + DEFERRED_HINT);
 }
 if (toolName === "Grep") {
package/dist/server.mjs CHANGED
@@ -1383,7 +1383,12 @@ function serializeSearchBlock(block, opts = {}) {
   const requestedSpan = renderRequestedSpan(block);
   if (requestedSpan) lines.push(requestedSpan);
   if (Array.isArray(block.meta.matchLines) && block.meta.matchLines.length > 0) {
-    lines.push(`match_lines: ${block.meta.matchLines.join(",")}`);
+    const matchLinesStr = block.meta.matchLines.join(",");
+    const spanStr = `${block.startLine}-${block.endLine}`;
+    const singleSpan = block.startLine === block.endLine ? String(block.startLine) : null;
+    if (matchLinesStr !== spanStr && matchLinesStr !== singleSpan) {
+      lines.push(`match_lines: ${matchLinesStr}`);
+    }
   }
   if (block.meta.summary) lines.push(`summary: ${block.meta.summary}`);
   lines.push(...renderMetaLines(Object.fromEntries(
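The hunk above stops emitting `match_lines` when it would merely restate the block's own span. The suppression rule can be sketched in isolation; the standalone function below is illustrative, with the condition copied from the diff:

```javascript
// Emit `match_lines` only when it carries information beyond the block's own
// start-end span (mirrors the condition added in the hunk above).
function shouldEmitMatchLines(block) {
  const matchLines = block.meta.matchLines;
  if (!Array.isArray(matchLines) || matchLines.length === 0) return false;
  const matchLinesStr = matchLines.join(",");
  const spanStr = `${block.startLine}-${block.endLine}`;
  const singleSpan = block.startLine === block.endLine ? String(block.startLine) : null;
  return matchLinesStr !== spanStr && matchLinesStr !== singleSpan;
}
```

So a single-line block whose only match is that line stays silent, while a multi-line block with sparse matches still reports exactly which lines hit.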
@@ -2007,7 +2012,7 @@ function buildConflictEntryModel({ lines, centerIdx, reason, details, recoveryRa
     snippet: buildSnippetModel(lines, centerIdx)
   };
 }
-function renderConflictEntry(entry) {
+function renderConflictEntry(entry, { skipRetryEdit = false } = {}) {
   let msg = "";
   if (entry.edit) msg += `edit: ${entry.edit}
 `;
@@ -2016,7 +2021,7 @@ function renderConflictEntry(entry) {
 recovery_ranges: ${entry.recovery_ranges.join(", ")}`;
   if (entry.retry_checksum) msg += `
 retry_checksum: ${entry.retry_checksum}`;
-  if (entry.retry_edit) msg += `
+  if (entry.retry_edit && !skipRetryEdit) msg += `
 retry_edit: ${entry.retry_edit}`;
   if (entry.remapped_refs) msg += `
 remapped_refs: ${entry.remapped_refs}`;
@@ -2272,12 +2277,12 @@ next_action: ${recovery.next_action}`;
 retry_edits: ${JSON.stringify(recovery.retry_edits)}`;
   if (recovery.suggested_read_call) msg += `
 suggested_read_call: ${recovery.suggested_read_call}`;
-  if (recovery.retry_plan) msg += `
+  if (recovery.retry_plan && conflicts.length > 1) msg += `
 retry_plan: ${recovery.retry_plan}`;
   for (const entry of entries) {
     msg += `
 
-${renderConflictEntry(entry)}`;
+${renderConflictEntry(entry, { skipRetryEdit: conflicts.length === 1 })}`;
   }
   return msg;
 }
@@ -2998,12 +3003,12 @@ ${serializeReadBlock(block)}`;
     if (impact.counts.sameNameSymbols > 0) totals.push(`${impact.counts.sameNameSymbols} same-name siblings`);
     const headline = totals.length > 0 ? totals.join(", ") : "no downstream graph facts";
     const factLines = impact.facts.slice(0, 6).map((fact) => {
-      if (fact.fact_kind === "public_api") return "public_api: exported symbol";
+      if (fact.fact_kind === "public_api") return null;
       const location = fact.target_file && fact.target_line ? ` (${fact.target_file}:${fact.target_line})` : "";
       const target = fact.target_display_name ? `${fact.target_display_name}${location}` : `${fact.target_file}:${fact.target_line}`;
       const via = fact.path_kind ? ` via ${fact.path_kind}` : "";
       return `${fact.fact_kind}: ${target}${via}`;
-    });
+    }).filter(Boolean);
     return [
       `${impact.symbol}: ${headline}`,
       ...factLines.map((line) => ` ${line}`)
@@ -3118,7 +3123,7 @@ function grepSearch(pattern, opts = {}) {
 function applyListModeTotalLimit(lines, totalLimit) {
   if (!totalLimit || totalLimit <= 0 || lines.length <= totalLimit) return lines.join("\n");
   const visible = lines.slice(0, totalLimit);
-  visible.push(`OUTPUT_CAPPED: ${lines.length - totalLimit} more result line(s) omitted. Narrow with path= or glob=, or raise total_limit.`);
+  visible.push(`OUTPUT_CAPPED: ${lines.length - totalLimit} more result line(s) omitted. Narrow with path= or glob=, or raise head_limit.`);
   return visible.join("\n");
 }
 async function filesMode(pattern, target, opts, totalLimit) {
@@ -3183,15 +3188,17 @@ async function summaryMode(pattern, target, opts, totalLimit) {
   }
   const topFiles = [...fileHits.entries()].sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0])).slice(0, 5);
   const lines = [
-    `summary: ${rawLines.length} match event(s) across ${fileHits.size} file(s)`,
-    topFiles.length ? `top_files: ${topFiles.map(([file, count]) => `${file} (${count})`).join(", ")}` : "top_files: none"
+    `summary: ${rawLines.length} match event(s) across ${fileHits.size} file(s)`
   ];
+  if (fileHits.size > 1 && topFiles.length) {
+    lines.push(`top_files: ${topFiles.map(([file, count]) => `${file} (${count})`).join(", ")}`);
+  }
   if (snippets.length > 0) {
     lines.push("snippets:");
     lines.push(...snippets.map((snippet) => `- ${snippet}`));
   }
   if (totalLimit > 0 && rawLines.length > totalLimit) {
-    lines.push(`continuation_hint: rerun grep_search with a higher total_limit or narrower path/glob to inspect ${rawLines.length - totalLimit} additional match event(s)`);
+    lines.push(`continuation_hint: rerun grep_search with a higher head_limit or narrower path/glob to inspect ${rawLines.length - totalLimit} additional match event(s)`);
   }
   return lines.join("\n");
 }
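The summaryMode hunk drops the `top_files` line when every match sits in one file, since it would only repeat the summary. A self-contained sketch of the new list construction, with the ripgrep plumbing stripped out (the standalone function and its parameter names are illustrative):

```javascript
// Build summary header lines the way the 1.25.0 hunk above does:
// `top_files` appears only when matches span more than one file.
function buildSummaryLines(matchEventCount, fileHits /* Map<string, number> */) {
  const lines = [`summary: ${matchEventCount} match event(s) across ${fileHits.size} file(s)`];
  const topFiles = [...fileHits.entries()]
    .sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0]))
    .slice(0, 5);
  if (fileHits.size > 1 && topFiles.length) {
    lines.push(`top_files: ${topFiles.map(([file, count]) => `${file} (${count})`).join(", ")}`);
  }
  return lines;
}
```

Note this also retires the old `top_files: none` placeholder: a zero-match result now carries no `top_files` line at all.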
@@ -3200,10 +3207,10 @@ function buildSearchRefineCall(target, pattern, opts) {
   if (opts.glob) args.glob = opts.glob;
   if (opts.type) args.type = opts.type;
   return JSON.stringify({
-    tool: "mcp__hex_line__grep_search",
+    tool: "mcp__hex-line__grep_search",
     arguments: {
       ...args,
-      output: "summary"
+      output_mode: "summary"
     }
   });
 }
@@ -3328,7 +3335,7 @@ async function contentMode(pattern, target, opts, plain, editReady, totalLimit,
   if (totalLimit > 0 && matchCount >= totalLimit) {
     flushGroup();
     blocks.push(buildDiagnosticBlock({
-      kind: "total_limit",
+      kind: "head_limit",
       meta: {
         total_matches: matchCount,
         shown_matches: matchCount,
@@ -3338,7 +3345,7 @@ async function contentMode(pattern, target, opts, plain, editReady, totalLimit,
         next_action: "narrow_search_scope",
         suggested_refine_call: buildSearchRefineCall(target, pattern, opts)
       },
-      message: `Search stopped after ${totalLimit} match event(s). Narrow the query, raise total_limit, or pass total_limit=0 to disable the cap.`,
+      message: `Search stopped after ${totalLimit} match event(s). Narrow the query, raise head_limit, or pass head_limit=0 to disable the cap.`,
       path: String(target).replace(/\\/g, "/")
     }));
     return blocks.map((block) => block.type === "edit_ready_block" ? serializeSearchBlock(block, { plain: plainOutput }) : serializeDiagnosticBlock(block)).join("\n\n");
@@ -3613,7 +3620,7 @@ function markdownOutline(sourceLines) {
   }
   return entries;
 }
-function formatOutline(entries, skippedRanges, sourceLineCount, snapshot, db, relFile, note = "") {
+function formatOutline(entries, skippedRanges, _sourceLineCount, snapshot, db, relFile, note = "") {
   const lines = [];
   if (note) lines.push(note, "");
   if (skippedRanges.length > 0) {
@@ -3630,8 +3637,6 @@ function formatOutline(entries, skippedRanges, sourceLineCount, snapshot, db, re
     const prefix = tag ? `${tag}.` : "";
     lines.push(`${indent}${prefix}${e.start}-${e.end}: ${e.text}${suffix}`);
   }
-  lines.push("");
-  lines.push(`(${entries.length} symbols, ${sourceLineCount} source lines)`);
   return lines.join("\n");
 }
 async function fileOutline(filePath) {
@@ -3682,7 +3687,7 @@ function classifyChecksum(currentSnapshot, entry) {
       checksum: entry.raw,
       span: null,
       currentChecksum: null,
-      reason: `invalid checksum format: ${entry.error}`
+      reason: entry.error.replace(/^Bad checksum:\s*/, "format error: ")
     };
   }
   const { start, end } = entry.parsed;
@@ -3863,7 +3868,7 @@ function buildPatternRefineCall(absRoot, pattern, type, groups) {
   const args = { path: bestGroup ? join4(absRoot, bestGroup) : absRoot, pattern };
   if (type && type !== "all") args.type = type;
   return JSON.stringify({
-    tool: "mcp__hex_line__inspect_path",
+    tool: "mcp__hex-line__inspect_path",
     arguments: args
   });
 }
@@ -4111,13 +4116,14 @@ function fileInfo(filePath) {
   const lineCount = !isBinary && size > 0 ? countFileLines(abs, size, MAX_LINE_COUNT_SIZE) : null;
   const sizeStr = lineCount !== null ? `Size: ${formatSize(size)} (${lineCount} lines)` : `Size: ${formatSize(size)}`;
   const timeStr = `Modified: ${mtime.toISOString().replace("T", " ").slice(0, 19)} (${relativeTime(mtime)})`;
-  return [
+  const lines = [
     `File: ${normalized}`,
     sizeStr,
     timeStr,
-    `Type: ${typeName}`,
-    `Binary: ${isBinary ? "yes" : "no"}`
-  ].join("\n");
+    `Type: ${typeName}`
+  ];
+  if (isBinary) lines.push(`Binary: yes`);
+  return lines.join("\n");
 }
 
 // lib/inspect-path.mjs
@@ -4596,7 +4602,6 @@ async function fileChanges(filePath, compareAgainst = "HEAD") {
     `path: ${filePath}`,
     `compare_against: ${compareAgainst}`,
     "scope: directory",
-    "summary: changed_files=0",
     ...graphHint2
   ].join("\n");
 }
@@ -4649,7 +4654,6 @@ async function fileChanges(filePath, compareAgainst = "HEAD") {
     `path: ${filePath}`,
     `compare_against: ${compareAgainst}`,
     "scope: file",
-    "summary: added=0 removed=0 modified=0",
     ...graphHint
   ].join("\n");
 }
@@ -4866,7 +4870,7 @@ function errorResult(code, message, recovery, { large = false, extra = null } =
 }
 
 // server.mjs
-var version = true ? "1.24.1" : (await null).createRequire(import.meta.url)("./package.json").version;
+var version = true ? "1.25.0" : (await null).createRequire(import.meta.url)("./package.json").version;
 var STATUS_ENUM = z2.enum(["OK", "ERROR", "AUTO_REBASED", "CONFLICT", "STALE", "INVALID", "NO_CHANGES", "CHANGED", "UNSUPPORTED"]);
 var ERROR_SHAPE = z2.object({ code: z2.string(), message: z2.string(), recovery: z2.string() }).optional();
 var LINE_REPORT_KEYS = /* @__PURE__ */ new Set([
@@ -4935,8 +4939,8 @@ server.registerTool("read_file", {
   title: "Read File",
   description: "Read file with progressive disclosure. Default: minimal plain partial read for discovery; enable edit-ready metadata explicitly when preparing a verified edit.",
   inputSchema: z2.object({
-    path: z2.string().optional().describe("File path"),
-    paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
+    file_path: z2.string().optional().describe("File path"),
+    file_paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
     offset: flexNum().describe("Start line (1-indexed, default: 1)"),
     limit: flexNum().describe("Max lines (default: 200 for discovery, 2000 for edit-ready, 0 = all)"),
     ranges: z2.union([z2.string(), z2.array(readRangeSchema)]).optional().describe('Line ranges, e.g. ["10-25", {"start":40,"end":55}]'),
@@ -4946,8 +4950,8 @@ server.registerTool("read_file", {
   }),
   outputSchema: z2.object({
     status: STATUS_ENUM,
-    path: z2.string().optional(),
-    paths: z2.array(z2.string()).optional(),
+    file_path: z2.string().optional(),
+    file_paths: z2.array(z2.string()).optional(),
     content: z2.string().optional(),
     edit_ready: z2.boolean().optional(),
     next_action: z2.string().optional(),
@@ -4955,7 +4959,7 @@ server.registerTool("read_file", {
   }),
   annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false }
 }, async (rawParams) => {
-  const { path: p, paths: multi, offset, limit, ranges: rawRanges, plain, verbosity, edit_ready } = rawParams ?? {};
+  const { file_path: p, file_paths: multi, offset, limit, ranges: rawRanges, plain, verbosity, edit_ready } = rawParams ?? {};
   try {
     const ranges = parseReadRanges(rawRanges);
     const readVerbosity = verbosity ?? "minimal";
@@ -4967,7 +4971,7 @@ server.registerTool("read_file", {
       try {
         if (!edit_ready && readVerbosity !== "full") {
           results.push(`${fileInfo(fp)}
-next_hint: read_file path="${fp}" verbosity="compact"`);
+next_hint: read_file file_path="${fp}" verbosity="compact"`);
         } else {
           results.push(readFile2(fp, {
             offset,
@@ -4985,9 +4989,9 @@ ERROR: ${e.message}`);
       }
     }
     const content2 = results.join("\n\n---\n\n");
-    return result({ status: "OK", paths: multi, content: content2, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content2.length > 5e4 });
+    return result({ status: "OK", file_paths: multi, content: content2, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content2.length > 5e4 });
   }
-  if (!p) throw new Error("Either 'path' or 'paths' is required");
+  if (!p) throw new Error("Either 'file_path' or 'file_paths' is required");
   const content = readFile2(p, {
     offset,
     limit: readLimit,
@@ -4996,7 +5000,7 @@ ERROR: ${e.message}`);
     verbosity: readVerbosity,
     editReady: !!edit_ready
   });
-  return result({ status: "OK", path: p, content, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content.length > 5e4 });
+  return result({ status: "OK", file_path: p, content, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content.length > 5e4 });
 } catch (e) {
   return errorResult(e.code || "READ_ERROR", e.message, e.recovery || "Check path and permissions");
 }
@@ -5005,7 +5009,7 @@ server.registerTool("edit_file", {
   title: "Edit File",
   description: "Apply hash-verified partial edits to one file. Carry base_revision on same-file follow-ups. Preserves existing line endings and trailing-newline shape; conservative conflicts return retry helpers.",
   inputSchema: z2.object({
-    path: z2.string().describe("File to edit"),
+    file_path: z2.string().describe("File to edit"),
     edits: z2.union([z2.string(), z2.array(z2.any())]).describe(
       'JSON array of canonical edits.\n[{"set_line":{"anchor":"ab.12","new_text":"x"}}]\n[{"replace_lines":{"start_anchor":"ab.10","end_anchor":"cd.15","new_text":"x","range_checksum":"10-15:a1b2"}}]\n[{"insert_after":{"anchor":"ab.20","text":"x"}}]\n[{"replace_between":{"start_anchor":"ab.10","end_anchor":"cd.40","new_text":"x","boundary_mode":"inclusive"}}]'
     ),
@@ -5015,10 +5019,10 @@ server.registerTool("edit_file", {
     conflict_policy: z2.enum(["strict", "conservative"]).optional().describe('Conflict handling (default: "conservative"). "conservative" returns structured CONFLICT output with recovery_ranges, retry_edit/retry_edits, suggested_read_call, and retry_plan when available.'),
     allow_external: flexBool().describe("Allow editing a path outside the current project root. Use only when you intentionally target a temp or external file.")
   }),
-  outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
+  outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
   annotations: { readOnlyHint: false, destructiveHint: false, idempotentHint: false, openWorldHint: false }
 }, async (rawParams) => {
-  const { path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy, allow_external } = rawParams ?? {};
+  const { file_path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy, allow_external } = rawParams ?? {};
   try {
     assertProjectScopedPath(p, { allowExternal: !!allow_external });
     let parsed;
@@ -5034,7 +5038,7 @@ server.registerTool("edit_file", {
       baseRevision: base_revision,
       conflictPolicy: conflict_policy
     });
-    return lineReportResult({ path: p }, content, { large: content.length > 5e4 });
+    return lineReportResult({ file_path: p }, content, { large: content.length > 5e4 });
   } catch (e) {
     return errorResult(e.code || "EDIT_ERROR", e.message, e.recovery || "Check anchors and checksums");
   }
@@ -5043,21 +5047,21 @@ server.registerTool("write_file", {
   title: "Write File",
   description: "Create a new file or overwrite existing. Creates parent dirs. For existing files prefer edit_file (shows diff, verifies hashes).",
   inputSchema: z2.object({
-    path: z2.string().describe("File path"),
+    file_path: z2.string().describe("File path"),
     content: z2.string().describe("File content"),
     allow_external: flexBool().describe("Allow writing a path outside the current project root. Use only when you intentionally target a temp or external file.")
   }),
-  outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), lines: z2.number().optional(), error: ERROR_SHAPE }),
+  outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), lines: z2.number().optional(), error: ERROR_SHAPE }),
   annotations: { readOnlyHint: false, destructiveHint: false, idempotentHint: true, openWorldHint: false }
 }, async (rawParams) => {
-  const { path: p, content, allow_external } = rawParams ?? {};
+  const { file_path: p, content, allow_external } = rawParams ?? {};
   try {
     assertProjectScopedPath(p, { allowExternal: !!allow_external });
     const abs = validateWritePath(p);
     mkdirSync2(dirname6(abs), { recursive: true });
     writeFileSync4(abs, content, "utf-8");
     const lines = content.split("\n").length;
-    return result({ status: "OK", path: p, lines });
+    return result({ status: "OK", file_path: p, lines });
   } catch (e) {
     return errorResult(e.code || "WRITE_ERROR", e.message, e.recovery || "Check path and permissions");
   }
@@ -5070,7 +5074,7 @@ server.registerTool("grep_search", {
     path: z2.string().optional().describe("Search dir/file (default: cwd)"),
     glob: z2.string().optional().describe('Glob filter (e.g. "*.ts")'),
     type: z2.string().optional().describe('File type (e.g. "js", "py")'),
-    output: z2.enum(["summary", "content", "files", "count"]).optional().describe("Output format (default: summary)"),
+    output_mode: z2.enum(["summary", "content", "files_with_matches", "count"]).optional().describe("Output format (default: summary)"),
     case_insensitive: flexBool().describe("Ignore case (-i)"),
     smart_case: flexBool().describe("CI when pattern is all lowercase, CS if uppercase (-S)"),
     literal: flexBool().describe("Literal string search, no regex (-F)"),
@@ -5079,7 +5083,7 @@ server.registerTool("grep_search", {
  context_before: flexNum().describe("Context lines BEFORE match (-B)"),
  context_after: flexNum().describe("Context lines AFTER match (-A)"),
  limit: flexNum().describe("Max matches per file (default: 20 for summary discovery, 100 for content)"),
- total_limit: flexNum().describe("Total match events across all files; multiline matches count as 1 (default: 50 for summary discovery, 200 for content, 1000 for files/count, 0 = unlimited)"),
+ head_limit: flexNum().describe("Total match events across all files; multiline matches count as 1 (default: 50 for summary discovery, 200 for content, 1000 for files_with_matches/count, 0 = unlimited)"),
  plain: flexBool().describe("Omit hash tags, return file:line:content"),
  edit_ready: flexBool().describe("Preserve hash/checksum search hunks in `content` mode. Default: false."),
  allow_large_output: flexBool().describe("Bypass the default content-mode block/char caps when you intentionally need a larger payload.")
@@ -5092,7 +5096,7 @@ server.registerTool("grep_search", {
  path: p,
  glob,
  type,
- output,
+ output_mode,
  case_insensitive,
  smart_case,
  literal,
@@ -5101,15 +5105,15 @@ server.registerTool("grep_search", {
  context_before,
  context_after,
  limit,
- total_limit,
+ head_limit,
  plain,
  edit_ready,
  allow_large_output
  } = rawParams ?? {};
  try {
- const resolvedOutput = output ?? "summary";
+ const resolvedOutput = output_mode === "files_with_matches" ? "files" : output_mode ?? "summary";
  const resolvedLimit = limit ?? (resolvedOutput === "summary" ? 20 : 100);
- const resolvedTotalLimit = total_limit ?? (resolvedOutput === "summary" ? 50 : void 0);
+ const resolvedTotalLimit = head_limit ?? (resolvedOutput === "summary" ? 50 : void 0);
  const searchResult = await grepSearch(pattern, {
  path: p,
  glob,
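The resolution logic in the hunk above maps the new public `output_mode` value `files_with_matches` back to the internal `files` mode and keeps the old defaults. A minimal standalone sketch of just that mapping (the helper name is ours, not the package's):

```javascript
// Sketch of the output_mode/limit resolution shown in the diff above.
function resolveGrepOptions({ output_mode, limit, head_limit } = {}) {
  // The public "files_with_matches" value maps to the internal "files" mode.
  const output = output_mode === "files_with_matches" ? "files" : output_mode ?? "summary";
  // Defaults: 20 matches per file and 50 total in summary mode, 100 per file otherwise.
  const resolvedLimit = limit ?? (output === "summary" ? 20 : 100);
  const resolvedTotalLimit = head_limit ?? (output === "summary" ? 50 : undefined);
  return { output, resolvedLimit, resolvedTotalLimit };
}

console.log(resolveGrepOptions({}).output);                                    // summary
console.log(resolveGrepOptions({ output_mode: "files_with_matches" }).output); // files
```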
@@ -5137,15 +5141,15 @@ server.registerTool("outline", {
  title: "File Outline",
  description: "AST-based structural outline with hash anchors for direct edit_file usage. Supports JavaScript/TypeScript, Python, C#, and PHP code files plus markdown headings (.md/.mdx, fence-aware).",
  inputSchema: z2.object({
- path: z2.string().describe("Source file path")
+ file_path: z2.string().describe("Source file path")
  }),
- outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
+ outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false }
  }, async (rawParams) => {
- const { path: p } = rawParams ?? {};
+ const { file_path: p } = rawParams ?? {};
  try {
  const content = await fileOutline(p);
- return lineReportResult({ path: p }, content);
+ return lineReportResult({ file_path: p }, content);
  } catch (e) {
  return errorResult(e.code || "OUTLINE_ERROR", e.message, e.recovery || "Check file path and language support");
  }
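For callers, the practical effect of this hunk is that `outline` now takes `file_path` instead of `path`. A hedged sketch of the corresponding MCP `tools/call` payload (the file path is illustrative; only the argument key comes from the diff):

```javascript
// Hypothetical tools/call request body for the renamed outline input (1.25.0 shape).
const request = {
  method: "tools/call",
  params: {
    name: "outline",
    arguments: { file_path: "src/index.ts" }, // was { path: "src/index.ts" } in 1.24.x
  },
};

console.log(request.params.arguments.file_path); // src/index.ts
```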
@@ -5154,20 +5158,20 @@ server.registerTool("verify", {
  title: "Verify Checksums",
  description: "Check if held checksums are still valid without rereading. Use before delayed or mixed-tool follow-up edits; returns canonical status, next_action, and reread guidance.",
  inputSchema: z2.object({
- path: z2.string().describe("File path"),
+ file_path: z2.string().describe("File path"),
  checksums: z2.array(z2.string()).describe('Checksum strings, e.g. ["1-50:f7e2a1b0", "51-100:abcd1234"]'),
  base_revision: z2.string().optional().describe("Optional prior revision to compare against latest state.")
  }),
- outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
+ outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false }
  }, async (rawParams) => {
- const { path: p, checksums, base_revision } = rawParams ?? {};
+ const { file_path: p, checksums, base_revision } = rawParams ?? {};
  try {
  if (!Array.isArray(checksums) || checksums.length === 0) {
  throw new Error("checksums must be a non-empty array of strings");
  }
  const content = verifyChecksums(p, checksums, { baseRevision: base_revision });
- return lineReportResult({ path: p }, content);
+ return lineReportResult({ file_path: p }, content);
  } catch (e) {
  return errorResult(e.code || "VERIFY_ERROR", e.message, e.recovery || "Check checksums format");
  }
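The `checksums` input above documents its own wire format: strings like `"1-50:f7e2a1b0"`, i.e. an inclusive line range plus a short hex hash. A sketch of parsing that shape (parser name and strictness are ours, not the package's):

```javascript
// Parse a checksum string of the form "start-end:hash", e.g. "1-50:f7e2a1b0".
function parseChecksum(s) {
  const m = /^(\d+)-(\d+):([0-9a-f]+)$/.exec(s);
  if (!m) throw new Error(`malformed checksum: ${s}`);
  return { start: Number(m[1]), end: Number(m[2]), hash: m[3] };
}

const c = parseChecksum("51-100:abcd1234");
console.log(c.start, c.end, c.hash); // 51 100 abcd1234
```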
package/output-style.md CHANGED
@@ -27,7 +27,7 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads

  | Path | Flow |
  |------|------|
- | Surgical | `grep_search(output="summary") -> grep_search(output="content", edit_ready=true) if needed -> edit_file` |
+ | Surgical | `grep_search(output_mode="summary") -> grep_search(output_mode="content", edit_ready=true) if needed -> edit_file` |
  | Exploratory | `outline -> read_file (ranges) -> edit_file(base_revision)` |
  | Multi-file | `bulk_replace(path=<project root>)` |
  | Follow-up after delay | `verify(base_revision) -> reread only if STALE -> retry with returned helpers` |
@@ -43,8 +43,8 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads

  ## Edit Discipline

- - Never invent `range_checksum`. Copy it from a fresh `read_file` or `grep_search(output:"content", edit_ready=true)` block.
- - First mutation in a file: use `grep_search(output="summary")` for narrow targets, or `outline -> read_file(ranges)` for structural edits. Escalate to `grep_search(output="content", edit_ready=true)` only when the next edit needs canonical hunks.
+ - Never invent `range_checksum`. Copy it from a fresh `read_file` or `grep_search(output_mode:"content", edit_ready=true)` block.
+ - First mutation in a file: use `grep_search(output_mode="summary")` for narrow targets, or `outline -> read_file(ranges)` for structural edits. Escalate to `grep_search(output_mode="content", edit_ready=true)` only when the next edit needs canonical hunks.
  - Preserve file conventions mentally: `hex-line` hashes normalized logical text, but `edit_file` preserves the file's existing line endings and trailing-newline shape on write.
  - Prefer `set_line` or `insert_after` for small local changes. Prefer `replace_between` for larger bounded block rewrites.
  - Use `replace_lines` only when you already hold the exact inclusive range checksum for that block.
@@ -54,7 +54,7 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads
  - Reuse `retry_checksum` when it is returned for the exact same target range.
  - Once `hex-line` owns a file edit session, avoid mixing built-in `Edit`/`Write` on that file unless you intentionally want a new baseline.
  - Follow `next_action` first. Treat `summary` and `snippet` as the compact local context, not as prose to reinterpret.
- - If broad `grep_search(output="content")` or pattern `inspect_path` truncates, narrow `path`, `glob`, or query shape before retrying. Use `allow_large_output=true` only when you intentionally accept a larger payload.
+ - If broad `grep_search(output_mode="content")` or pattern `inspect_path` truncates, narrow `path`, `glob`, or query shape before retrying. Use `allow_large_output=true` only when you intentionally accept a larger payload.

  ## Exceptions

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@levnikolaevich/hex-line-mcp",
- "version": "1.24.1",
+ "version": "1.25.0",
  "mcpName": "io.github.levnikolaevich/hex-line-mcp",
  "type": "module",
  "description": "Hash-verified file editing MCP + token efficiency hook for AI coding agents. 9 tools: inspect_path, read, edit, write, grep, outline, verify, changes, bulk_replace.",