@levnikolaevich/hex-line-mcp 1.24.0 → 1.25.0

This diff compares publicly released versions of the package as published to a supported registry. It is provided for informational purposes only and reflects the changes between those versions as they appear in the public registry.
package/README.md CHANGED
@@ -164,8 +164,8 @@ Read a file with progressive disclosure. Default mode is discovery-first: plain
 
  | Parameter | Type | Required | Description |
  |-----------|------|----------|-------------|
- | `path` | string | yes | File path |
- | `paths` | string[] | no | Array of file paths to read (batch mode) |
+ | `file_path` | string | yes | File path |
+ | `file_paths` | string[] | no | Array of file paths to read (batch mode) |
  | `offset` | number | no | Start line, 1-indexed (default: 1) |
  | `limit` | number | no | Max lines to return (default: 200 for discovery, 2000 for edit-ready, 0 = all) |
  | `ranges` | array | no | Explicit line ranges, e.g. `[{ "start": 10, "end": 30 }]` |
@@ -211,7 +211,7 @@ Edit using revision-aware hash-verified anchors. Prefer one batched call per fil
 
  | Parameter | Type | Required | Description |
  |-----------|------|----------|-------------|
- | `path` | string | yes | File to edit |
+ | `file_path` | string | yes | File to edit |
  | `edits` | string | yes | JSON array of edit operations (see below) |
  | `dry_run` | boolean | no | Preview changes without writing |
  | `restore_indent` | boolean | no | Auto-fix indentation to match anchor context (default: false) |
@@ -232,7 +232,7 @@ Edit operations (JSON array):
 
  Discipline:
 
- - Never invent `range_checksum`. Copy it from `read_file` or `grep_search(output:"content")`.
+ - Never invent `range_checksum`. Copy it from `read_file` or `grep_search(output_mode:"content")`.
  - First mutation in a file: prefer `grep_search` for narrow targets, or `outline -> read_file(ranges)` for structural edits.
  - Prefer 1-2 hunks on the first pass. Once `edit_file` returns a fresh `revision`, continue from that state as `base_revision`.
  - `hex-line` preserves existing file line endings on write; repo-level line-ending cleanup should be a separate deliberate operation, not a side effect of `edit_file`.
@@ -242,12 +242,10 @@ Result footer includes:
  - `status: OK | AUTO_REBASED | CONFLICT`
  - `reason: ...` as the canonical machine-readable cause for the current status
  - `revision: ...`
- - `file: ...`
  - `summary: ...` with edited line span / diff counts on successful edits
  - `payload_sections: ...` so callers know which detailed sections follow
- - `graph_enrichment: available | unavailable`
- - `semantic_impact_count: ...`, `semantic_fact_count: ...`, and `clone_warning_count: ...` when graph-backed review data is available
- - `provenance_summary: ...` for the edit protocol + optional graph source
+ - `graph_enrichment: unavailable` + `graph_fix: ...` when the project graph is missing or out of sync (silent when graph is healthy)
+ - `⚠ Semantic impact:` and `⚠ N clone(s):` blocks emitted directly when graph-backed review data is available
  - `changed_ranges: ...` when relevant
  - `recovery_ranges: ...` with the narrowest recommended `read_file` ranges for retry
  - `next_action: ...` as the canonical immediate choice: `apply_retry_edit`, `apply_retry_batch`, or `reread_then_retry`
@@ -266,7 +264,7 @@ Create a new file or overwrite an existing one. Creates parent directories autom
 
  | Parameter | Type | Required | Description |
  |-----------|------|----------|-------------|
- | `path` | string | yes | File path |
+ | `file_path` | string | yes | File path |
  | `content` | string | yes | File content |
  | `allow_external` | boolean | no | Allow writing a path outside the current project root |
 
@@ -280,7 +278,7 @@ Search file contents using ripgrep. Default mode is `summary` for discovery. Use
  | `path` | string | no | Directory or file to search (default: cwd) |
  | `glob` | string | no | Glob filter, e.g. `"*.ts"` |
  | `type` | string | no | File type filter, e.g. `"js"`, `"py"` |
- | `output` | enum | no | Output format: `"summary"` (default), `"content"`, `"files"`, `"count"` |
+ | `output_mode` | enum | no | Output format: `"summary"` (default), `"content"`, `"files_with_matches"`, `"count"` |
  | `case_insensitive` | boolean | no | Ignore case |
  | `smart_case` | boolean | no | CI when lowercase, CS when uppercase (`-S`) |
  | `literal` | boolean | no | Literal string search, no regex (`-F`) |
@@ -289,7 +287,7 @@ Search file contents using ripgrep. Default mode is `summary` for discovery. Use
  | `context_before` | number | no | Context lines BEFORE match (`-B`) |
  | `context_after` | number | no | Context lines AFTER match (`-A`) |
  | `limit` | number | no | Max matches per file (default: 20 for `summary`, 100 for `content`) |
- | `total_limit` | number | no | Total match events across all files; multiline matches count as 1 (default: 50 for `summary`, 200 for `content`, 1000 for `files`/`count`, 0 = unlimited) |
+ | `head_limit` | number | no | Total match events across all files; multiline matches count as 1 (default: 50 for `summary`, 200 for `content`, 1000 for `files_with_matches`/`count`, 0 = unlimited) |
  | `plain` | boolean | no | Omit hash tags inside block entries, return `lineNum\|content` |
  | `edit_ready` | boolean | no | Preserve hash/checksum search hunks in `content` mode |
  | `allow_large_output` | boolean | no | Bypass the default `content`-mode block/char caps when you intentionally need a larger payload |
@@ -305,7 +303,7 @@ AST-based structural outline with hash anchors for direct `edit_file` usage. Sup
 
  | Parameter | Type | Required | Description |
  |-----------|------|----------|-------------|
- | `path` | string | yes | Source file path |
+ | `file_path` | string | yes | Source file path |
 
  Supported languages: JavaScript (`.js`, `.mjs`, `.cjs`, `.jsx`), TypeScript (`.ts`, `.tsx`), Python (`.py`), C# (`.cs`), and PHP (`.php`) via tree-sitter WASM.
 
@@ -317,7 +315,7 @@ Check if range checksums from prior read/search blocks are still valid, optional
 
  | Parameter | Type | Required | Description |
  |-----------|------|----------|-------------|
- | `path` | string | yes | File path |
+ | `file_path` | string | yes | File path |
  | `checksums` | string[] | yes | Array of checksum strings, e.g. `["1-50:f7e2a1b0"]` |
  | `base_revision` | string | no | Prior revision to compare against latest state |
 
@@ -327,12 +325,11 @@ Example output:
  status: STALE
  reason: checksums_stale
  revision: rev-17-deadbeef
- file: 1-120:abc123ef
  summary: valid=0 stale=1 invalid=0
  next_action: reread_ranges
  base_revision: rev-16-feedcafe
  changed_ranges: 10-12(replace)
- suggested_read_call: {"tool":"mcp__hex-line__read_file","arguments":{"path":"/repo/file.ts","ranges":["10-12"]}}
+ suggested_read_call: {"tool":"mcp__hex-line__read_file","arguments":{"file_path":"/repo/file.ts","ranges":["10-12"]}}
 
  entry: 1/1 | status: STALE | span: 10-12 | checksum: 10-12:oldc0de0 | current_checksum: 10-12:newc0de0 | next_action: reread_range | summary: content changed since checksum capture
  ```
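The README hunks above are dominated by parameter renames: `path` → `file_path` and `paths` → `file_paths` on the file-targeting tools, while `grep_search` keeps `path` (its search directory) and instead renames `output` → `output_mode` and `total_limit` → `head_limit`, with the `files` enum value becoming `files_with_matches`. A minimal caller-side migration sketch, assuming plain-object argument payloads (the `migrateArgs` helper and `RENAMES` table are hypothetical, not part of the package):

```javascript
// Hypothetical helper: rewrite 1.24.0-era tool arguments into the 1.25.0
// names from the README diff above. Renames are per tool, because
// grep_search deliberately keeps `path` as its search directory.
const RENAMES = {
  read_file: { path: "file_path", paths: "file_paths" },
  grep_search: { output: "output_mode", total_limit: "head_limit" },
};

function migrateArgs(tool, args) {
  const map = RENAMES[tool] ?? {};
  const out = {};
  for (const [key, value] of Object.entries(args)) {
    const newKey = map[key] ?? key;
    // The "files" output mode was also renamed to "files_with_matches".
    out[newKey] =
      tool === "grep_search" && newKey === "output_mode" && value === "files"
        ? "files_with_matches"
        : value;
  }
  return out;
}

console.log(migrateArgs("read_file", { path: "/repo/file.ts", ranges: ["10-12"] }));
// → { file_path: "/repo/file.ts", ranges: ["10-12"] }
console.log(migrateArgs("grep_search", { pattern: "TODO", output: "files", total_limit: 100 }));
// → { pattern: "TODO", output_mode: "files_with_matches", head_limit: 100 }
```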
package/dist/hook.mjs CHANGED
@@ -116,16 +116,16 @@ var REVERSE_TOOL_HINTS = {
  };
  var TOOL_HINTS = {
  Read: "mcp__hex-line__read_file (not Read). For structure-first: mcp__hex-line__outline then mcp__hex-line__read_file with ranges",
- Edit: "mcp__hex-line__edit_file (not Edit). If you need hash anchors: mcp__hex-line__grep_search(output='content', edit_ready=true) first",
+ Edit: "mcp__hex-line__edit_file (not Edit). If you need hash anchors: mcp__hex-line__grep_search(output_mode='content', edit_ready=true) first",
  Write: "mcp__hex-line__write_file (not Write). No prior Read needed",
- Grep: "mcp__hex-line__grep_search (not Grep). Params: output, literal, context_before, context_after, multiline",
+ Grep: "mcp__hex-line__grep_search (not Grep). Params: output_mode, literal, context_before, context_after, multiline",
  Glob: "mcp__hex-line__inspect_path (not Glob). Use pattern=... with an explicit path for project file discovery and name/path globbing",
  cat: "mcp__hex-line__read_file (not cat/head/tail/less/more/type/Get-Content)",
  head: "mcp__hex-line__read_file with limit param (not head)",
  tail: "mcp__hex-line__read_file with offset param (not tail)",
  ls: "mcp__hex-line__inspect_path for tree or pattern search (not ls/dir/find/tree/Get-ChildItem). E.g. pattern='*-mcp' type='dir'",
  stat: "mcp__hex-line__inspect_path for compact file metadata (not stat/wc/Get-Item/file)",
- grep: "mcp__hex-line__grep_search (not grep/rg/findstr/Select-String). Params: output, literal, context_before, context_after, multiline",
+ grep: "mcp__hex-line__grep_search (not grep/rg/findstr/Select-String). Params: output_mode, literal, context_before, context_after, multiline",
  sed: "mcp__hex-line__edit_file for hash edits, or mcp__hex-line__bulk_replace with path=<project root> for text rename (not sed -i)",
  diff: "mcp__hex-line__changes (not diff). Git diff with change symbols",
  outline: "mcp__hex-line__outline (before reading large code files)",
@@ -469,15 +469,15 @@ function handlePreToolUse(data) {
  const ext = filePath ? extOf(filePath) : "";
  const rangeHint = isPartialRead(toolInput) ? " Preserve the same offset/limit or ranges." : "";
  const outlineTip = filePath && OUTLINEABLE_EXT.has(ext) ? ` For structure-first discovery: mcp__hex-line__outline then mcp__hex-line__read_file with ranges.` : "";
- const target = filePath ? `Use mcp__hex-line__read_file(path="${filePath}").${rangeHint}${outlineTip}` : "Use mcp__hex-line__read_file or mcp__hex-line__inspect_path.";
+ const target = filePath ? `Use mcp__hex-line__read_file(file_path="${filePath}").${rangeHint}${outlineTip}` : "Use mcp__hex-line__read_file or mcp__hex-line__inspect_path.";
  redirect(target, DEFERRED_HINT);
  }
  if (toolName === "Edit") {
- const target = filePath ? `Use mcp__hex-line__edit_file(path="${filePath}"). If you need hash anchors first: mcp__hex-line__grep_search(output="content", edit_ready=true).` : 'Use mcp__hex-line__edit_file. If you need hash anchors first: mcp__hex-line__grep_search(output="content", edit_ready=true).';
+ const target = filePath ? `Use mcp__hex-line__edit_file(file_path="${filePath}"). If you need hash anchors first: mcp__hex-line__grep_search(output_mode="content", edit_ready=true).` : 'Use mcp__hex-line__edit_file. If you need hash anchors first: mcp__hex-line__grep_search(output_mode="content", edit_ready=true).';
  redirect(target, "Hash-verified edits for project text files.\n" + DEFERRED_HINT);
  }
  if (toolName === "Write") {
- const pathNote = filePath ? ` with path="${filePath}"` : "";
+ const pathNote = filePath ? ` with file_path="${filePath}"` : "";
  redirect(`Use mcp__hex-line__write_file${pathNote}`, TOOL_HINTS.Write + "\n" + DEFERRED_HINT);
  }
  if (toolName === "Grep") {
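The hook hunks above only touch hint strings; the redirect mechanism itself is unchanged. As a simplified standalone sketch of that mechanism (the `TOOL_TARGETS` table and `redirectHint` helper are illustrative names, not the hook's real internals, which also thread range hints and deferred-mode notes):

```javascript
// Simplified sketch of the PreToolUse redirect in dist/hook.mjs: built-in
// tool names map to hex-line replacements, and file-targeting hints now
// use the renamed file_path parameter.
const TOOL_TARGETS = {
  Read: "mcp__hex-line__read_file",
  Edit: "mcp__hex-line__edit_file",
  Write: "mcp__hex-line__write_file",
};

function redirectHint(toolName, filePath) {
  const target = TOOL_TARGETS[toolName];
  if (!target) return null; // unknown tools pass through untouched
  return filePath
    ? `Use ${target}(file_path="${filePath}").`
    : `Use ${target}.`;
}

console.log(redirectHint("Read", "/repo/a.ts"));
// → Use mcp__hex-line__read_file(file_path="/repo/a.ts").
```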
package/dist/server.mjs CHANGED
@@ -439,9 +439,11 @@ function diagnoseGraph(filePath) {
  return { reason: "driver_missing" };
  }
  }
- function diagnoseGraphForProject(projectRoot) {
+ function diagnoseGraphForProject(directoryPath) {
  if (_driverUnavailable) return { reason: "driver_missing" };
  try {
+ if (!directoryPath) return { reason: "no_project_root" };
+ const projectRoot = findProjectRoot(join3(directoryPath, "__hex-line_probe__"));
  if (!projectRoot) return { reason: "no_project_root" };
  const dbPath = join3(projectRoot, ".hex-skills/codegraph", "index.db");
  if (!existsSync2(dbPath)) return { reason: "index_missing", projectRoot };
@@ -462,9 +464,9 @@ function diagnoseGraphForProject(projectRoot) {
  return { reason: "driver_missing" };
  }
  }
- function getGraphDBForProject(projectRoot) {
- const { reason } = diagnoseGraphForProject(projectRoot);
- if (reason !== "ok") return null;
+ function getGraphDBForProject(directoryPath) {
+ const { reason, projectRoot } = diagnoseGraphForProject(directoryPath);
+ if (reason !== "ok" || !projectRoot) return null;
  const dbPath = join3(projectRoot, ".hex-skills/codegraph", "index.db");
  return _dbs.get(dbPath) || null;
  }
@@ -1381,7 +1383,12 @@ function serializeSearchBlock(block, opts = {}) {
  const requestedSpan = renderRequestedSpan(block);
  if (requestedSpan) lines.push(requestedSpan);
  if (Array.isArray(block.meta.matchLines) && block.meta.matchLines.length > 0) {
- lines.push(`match_lines: ${block.meta.matchLines.join(",")}`);
+ const matchLinesStr = block.meta.matchLines.join(",");
+ const spanStr = `${block.startLine}-${block.endLine}`;
+ const singleSpan = block.startLine === block.endLine ? String(block.startLine) : null;
+ if (matchLinesStr !== spanStr && matchLinesStr !== singleSpan) {
+ lines.push(`match_lines: ${matchLinesStr}`);
+ }
  }
  if (block.meta.summary) lines.push(`summary: ${block.meta.summary}`);
  lines.push(...renderMetaLines(Object.fromEntries(
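The guard added in this hunk suppresses a `match_lines` footer that would merely restate the block's own span. Restated as a standalone sketch (the `matchLinesLine` helper is a hypothetical extraction mirroring the inlined logic above):

```javascript
// Standalone restatement of the redundancy check above: only emit a
// match_lines footer when it says more than the block's start-end span.
function matchLinesLine(matchLines, startLine, endLine) {
  const matchLinesStr = matchLines.join(",");
  const spanStr = `${startLine}-${endLine}`;
  const singleSpan = startLine === endLine ? String(startLine) : null;
  if (matchLinesStr === spanStr || matchLinesStr === singleSpan) return null;
  return `match_lines: ${matchLinesStr}`;
}

console.log(matchLinesLine([12, 14], 10, 30)); // → match_lines: 12,14
console.log(matchLinesLine([10], 10, 10));     // → null (redundant with span)
```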
@@ -2005,7 +2012,7 @@ function buildConflictEntryModel({ lines, centerIdx, reason, details, recoveryRa
  snippet: buildSnippetModel(lines, centerIdx)
  };
  }
- function renderConflictEntry(entry) {
+ function renderConflictEntry(entry, { skipRetryEdit = false } = {}) {
  let msg = "";
  if (entry.edit) msg += `edit: ${entry.edit}
  `;
@@ -2014,7 +2021,7 @@ function renderConflictEntry(entry) {
  recovery_ranges: ${entry.recovery_ranges.join(", ")}`;
  if (entry.retry_checksum) msg += `
  retry_checksum: ${entry.retry_checksum}`;
- if (entry.retry_edit) msg += `
+ if (entry.retry_edit && !skipRetryEdit) msg += `
  retry_edit: ${entry.retry_edit}`;
  if (entry.remapped_refs) msg += `
  remapped_refs: ${entry.remapped_refs}`;
@@ -2270,12 +2277,12 @@ next_action: ${recovery.next_action}`;
  retry_edits: ${JSON.stringify(recovery.retry_edits)}`;
  if (recovery.suggested_read_call) msg += `
  suggested_read_call: ${recovery.suggested_read_call}`;
- if (recovery.retry_plan) msg += `
+ if (recovery.retry_plan && conflicts.length > 1) msg += `
  retry_plan: ${recovery.retry_plan}`;
  for (const entry of entries) {
  msg += `
 
- ${renderConflictEntry(entry)}`;
+ ${renderConflictEntry(entry, { skipRetryEdit: conflicts.length === 1 })}`;
  }
  return msg;
  }
@@ -2996,12 +3003,12 @@ ${serializeReadBlock(block)}`;
  if (impact.counts.sameNameSymbols > 0) totals.push(`${impact.counts.sameNameSymbols} same-name siblings`);
  const headline = totals.length > 0 ? totals.join(", ") : "no downstream graph facts";
  const factLines = impact.facts.slice(0, 6).map((fact) => {
- if (fact.fact_kind === "public_api") return "public_api: exported symbol";
+ if (fact.fact_kind === "public_api") return null;
  const location = fact.target_file && fact.target_line ? ` (${fact.target_file}:${fact.target_line})` : "";
  const target = fact.target_display_name ? `${fact.target_display_name}${location}` : `${fact.target_file}:${fact.target_line}`;
  const via = fact.path_kind ? ` via ${fact.path_kind}` : "";
  return `${fact.fact_kind}: ${target}${via}`;
- });
+ }).filter(Boolean);
  return [
  `${impact.symbol}: ${headline}`,
  ...factLines.map((line) => `  ${line}`)
@@ -3116,7 +3123,7 @@ function grepSearch(pattern, opts = {}) {
  function applyListModeTotalLimit(lines, totalLimit) {
  if (!totalLimit || totalLimit <= 0 || lines.length <= totalLimit) return lines.join("\n");
  const visible = lines.slice(0, totalLimit);
- visible.push(`OUTPUT_CAPPED: ${lines.length - totalLimit} more result line(s) omitted. Narrow with path= or glob=, or raise total_limit.`);
+ visible.push(`OUTPUT_CAPPED: ${lines.length - totalLimit} more result line(s) omitted. Narrow with path= or glob=, or raise head_limit.`);
  return visible.join("\n");
  }
  async function filesMode(pattern, target, opts, totalLimit) {
@@ -3181,15 +3188,17 @@ async function summaryMode(pattern, target, opts, totalLimit) {
  }
  const topFiles = [...fileHits.entries()].sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0])).slice(0, 5);
  const lines = [
- `summary: ${rawLines.length} match event(s) across ${fileHits.size} file(s)`,
- topFiles.length ? `top_files: ${topFiles.map(([file, count]) => `${file} (${count})`).join(", ")}` : "top_files: none"
+ `summary: ${rawLines.length} match event(s) across ${fileHits.size} file(s)`
  ];
+ if (fileHits.size > 1 && topFiles.length) {
+ lines.push(`top_files: ${topFiles.map(([file, count]) => `${file} (${count})`).join(", ")}`);
+ }
  if (snippets.length > 0) {
  lines.push("snippets:");
  lines.push(...snippets.map((snippet) => `- ${snippet}`));
  }
  if (totalLimit > 0 && rawLines.length > totalLimit) {
- lines.push(`continuation_hint: rerun grep_search with a higher total_limit or narrower path/glob to inspect ${rawLines.length - totalLimit} additional match event(s)`);
+ lines.push(`continuation_hint: rerun grep_search with a higher head_limit or narrower path/glob to inspect ${rawLines.length - totalLimit} additional match event(s)`);
  }
  return lines.join("\n");
  }
@@ -3198,10 +3207,10 @@ function buildSearchRefineCall(target, pattern, opts) {
  if (opts.glob) args.glob = opts.glob;
  if (opts.type) args.type = opts.type;
  return JSON.stringify({
- tool: "mcp__hex_line__grep_search",
+ tool: "mcp__hex-line__grep_search",
  arguments: {
  ...args,
- output: "summary"
+ output_mode: "summary"
  }
  });
  }
@@ -3326,7 +3335,7 @@ async function contentMode(pattern, target, opts, plain, editReady, totalLimit,
  if (totalLimit > 0 && matchCount >= totalLimit) {
  flushGroup();
  blocks.push(buildDiagnosticBlock({
- kind: "total_limit",
+ kind: "head_limit",
  meta: {
  total_matches: matchCount,
  shown_matches: matchCount,
@@ -3336,7 +3345,7 @@ async function contentMode(pattern, target, opts, plain, editReady, totalLimit,
  next_action: "narrow_search_scope",
  suggested_refine_call: buildSearchRefineCall(target, pattern, opts)
  },
- message: `Search stopped after ${totalLimit} match event(s). Narrow the query, raise total_limit, or pass total_limit=0 to disable the cap.`,
+ message: `Search stopped after ${totalLimit} match event(s). Narrow the query, raise head_limit, or pass head_limit=0 to disable the cap.`,
  path: String(target).replace(/\\/g, "/")
  }));
  return blocks.map((block) => block.type === "edit_ready_block" ? serializeSearchBlock(block, { plain: plainOutput }) : serializeDiagnosticBlock(block)).join("\n\n");
@@ -3611,7 +3620,7 @@ function markdownOutline(sourceLines) {
  }
  return entries;
  }
- function formatOutline(entries, skippedRanges, sourceLineCount, snapshot, db, relFile, note = "") {
+ function formatOutline(entries, skippedRanges, _sourceLineCount, snapshot, db, relFile, note = "") {
  const lines = [];
  if (note) lines.push(note, "");
  if (skippedRanges.length > 0) {
@@ -3628,8 +3637,6 @@ function formatOutline(entries, skippedRanges, sourceLineCount, snapshot, db, re
  const prefix = tag ? `${tag}.` : "";
  lines.push(`${indent}${prefix}${e.start}-${e.end}: ${e.text}${suffix}`);
  }
- lines.push("");
- lines.push(`(${entries.length} symbols, ${sourceLineCount} source lines)`);
  return lines.join("\n");
  }
  async function fileOutline(filePath) {
@@ -3680,7 +3687,7 @@ function classifyChecksum(currentSnapshot, entry) {
  checksum: entry.raw,
  span: null,
  currentChecksum: null,
- reason: `invalid checksum format: ${entry.error}`
+ reason: entry.error.replace(/^Bad checksum:\s*/, "format error: ")
  };
  }
  const { start, end } = entry.parsed;
@@ -3861,7 +3868,7 @@ function buildPatternRefineCall(absRoot, pattern, type, groups) {
  const args = { path: bestGroup ? join4(absRoot, bestGroup) : absRoot, pattern };
  if (type && type !== "all") args.type = type;
  return JSON.stringify({
- tool: "mcp__hex_line__inspect_path",
+ tool: "mcp__hex-line__inspect_path",
  arguments: args
  });
  }
@@ -4109,13 +4116,14 @@ function fileInfo(filePath) {
  const lineCount = !isBinary && size > 0 ? countFileLines(abs, size, MAX_LINE_COUNT_SIZE) : null;
  const sizeStr = lineCount !== null ? `Size: ${formatSize(size)} (${lineCount} lines)` : `Size: ${formatSize(size)}`;
  const timeStr = `Modified: ${mtime.toISOString().replace("T", " ").slice(0, 19)} (${relativeTime(mtime)})`;
- return [
+ const lines = [
  `File: ${normalized}`,
  sizeStr,
  timeStr,
- `Type: ${typeName}`,
- `Binary: ${isBinary ? "yes" : "no"}`
- ].join("\n");
+ `Type: ${typeName}`
+ ];
+ if (isBinary) lines.push(`Binary: yes`);
+ return lines.join("\n");
  }
 
  // lib/inspect-path.mjs
@@ -4594,7 +4602,6 @@ async function fileChanges(filePath, compareAgainst = "HEAD") {
  `path: ${filePath}`,
  `compare_against: ${compareAgainst}`,
  "scope: directory",
- "summary: changed_files=0",
  ...graphHint2
  ].join("\n");
  }
@@ -4647,7 +4654,6 @@ async function fileChanges(filePath, compareAgainst = "HEAD") {
  `path: ${filePath}`,
  `compare_against: ${compareAgainst}`,
  "scope: file",
- "summary: added=0 removed=0 modified=0",
  ...graphHint
  ].join("\n");
  }
@@ -4864,7 +4870,7 @@ function errorResult(code, message, recovery, { large = false, extra = null } =
  }
 
  // server.mjs
- var version = true ? "1.24.0" : (await null).createRequire(import.meta.url)("./package.json").version;
+ var version = true ? "1.25.0" : (await null).createRequire(import.meta.url)("./package.json").version;
  var STATUS_ENUM = z2.enum(["OK", "ERROR", "AUTO_REBASED", "CONFLICT", "STALE", "INVALID", "NO_CHANGES", "CHANGED", "UNSUPPORTED"]);
  var ERROR_SHAPE = z2.object({ code: z2.string(), message: z2.string(), recovery: z2.string() }).optional();
  var LINE_REPORT_KEYS = /* @__PURE__ */ new Set([
@@ -4933,8 +4939,8 @@ server.registerTool("read_file", {
  title: "Read File",
  description: "Read file with progressive disclosure. Default: minimal plain partial read for discovery; enable edit-ready metadata explicitly when preparing a verified edit.",
  inputSchema: z2.object({
- path: z2.string().optional().describe("File path"),
- paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
+ file_path: z2.string().optional().describe("File path"),
+ file_paths: z2.array(z2.string()).optional().describe("Array of file paths to read (batch mode)"),
  offset: flexNum().describe("Start line (1-indexed, default: 1)"),
  limit: flexNum().describe("Max lines (default: 200 for discovery, 2000 for edit-ready, 0 = all)"),
  ranges: z2.union([z2.string(), z2.array(readRangeSchema)]).optional().describe('Line ranges, e.g. ["10-25", {"start":40,"end":55}]'),
@@ -4944,8 +4950,8 @@ server.registerTool("read_file", {
  }),
  outputSchema: z2.object({
  status: STATUS_ENUM,
- path: z2.string().optional(),
- paths: z2.array(z2.string()).optional(),
+ file_path: z2.string().optional(),
+ file_paths: z2.array(z2.string()).optional(),
  content: z2.string().optional(),
  edit_ready: z2.boolean().optional(),
  next_action: z2.string().optional(),
@@ -4953,7 +4959,7 @@ server.registerTool("read_file", {
  }),
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false }
  }, async (rawParams) => {
- const { path: p, paths: multi, offset, limit, ranges: rawRanges, plain, verbosity, edit_ready } = rawParams ?? {};
+ const { file_path: p, file_paths: multi, offset, limit, ranges: rawRanges, plain, verbosity, edit_ready } = rawParams ?? {};
  try {
  const ranges = parseReadRanges(rawRanges);
  const readVerbosity = verbosity ?? "minimal";
@@ -4965,7 +4971,7 @@ server.registerTool("read_file", {
  try {
  if (!edit_ready && readVerbosity !== "full") {
  results.push(`${fileInfo(fp)}
- next_hint: read_file path="${fp}" verbosity="compact"`);
+ next_hint: read_file file_path="${fp}" verbosity="compact"`);
  } else {
  results.push(readFile2(fp, {
  offset,
@@ -4983,9 +4989,9 @@ ERROR: ${e.message}`);
  }
  }
  const content2 = results.join("\n\n---\n\n");
- return result({ status: "OK", paths: multi, content: content2, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content2.length > 5e4 });
+ return result({ status: "OK", file_paths: multi, content: content2, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content2.length > 5e4 });
  }
- if (!p) throw new Error("Either 'path' or 'paths' is required");
+ if (!p) throw new Error("Either 'file_path' or 'file_paths' is required");
  const content = readFile2(p, {
  offset,
  limit: readLimit,
@@ -4994,7 +5000,7 @@ ERROR: ${e.message}`);
  verbosity: readVerbosity,
  editReady: !!edit_ready
  });
- return result({ status: "OK", path: p, content, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content.length > 5e4 });
+ return result({ status: "OK", file_path: p, content, edit_ready: !!edit_ready }, { large: !!edit_ready || readVerbosity === "full" || content.length > 5e4 });
  } catch (e) {
  return errorResult(e.code || "READ_ERROR", e.message, e.recovery || "Check path and permissions");
  }
@@ -5003,7 +5009,7 @@ server.registerTool("edit_file", {
  title: "Edit File",
  description: "Apply hash-verified partial edits to one file. Carry base_revision on same-file follow-ups. Preserves existing line endings and trailing-newline shape; conservative conflicts return retry helpers.",
  inputSchema: z2.object({
- path: z2.string().describe("File to edit"),
+ file_path: z2.string().describe("File to edit"),
  edits: z2.union([z2.string(), z2.array(z2.any())]).describe(
  'JSON array of canonical edits.\n[{"set_line":{"anchor":"ab.12","new_text":"x"}}]\n[{"replace_lines":{"start_anchor":"ab.10","end_anchor":"cd.15","new_text":"x","range_checksum":"10-15:a1b2"}}]\n[{"insert_after":{"anchor":"ab.20","text":"x"}}]\n[{"replace_between":{"start_anchor":"ab.10","end_anchor":"cd.40","new_text":"x","boundary_mode":"inclusive"}}]'
  ),
@@ -5013,10 +5019,10 @@ server.registerTool("edit_file", {
  conflict_policy: z2.enum(["strict", "conservative"]).optional().describe('Conflict handling (default: "conservative"). "conservative" returns structured CONFLICT output with recovery_ranges, retry_edit/retry_edits, suggested_read_call, and retry_plan when available.'),
  allow_external: flexBool().describe("Allow editing a path outside the current project root. Use only when you intentionally target a temp or external file.")
  }),
- outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
+ outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
  annotations: { readOnlyHint: false, destructiveHint: false, idempotentHint: false, openWorldHint: false }
  }, async (rawParams) => {
- const { path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy, allow_external } = rawParams ?? {};
+ const { file_path: p, edits: json, dry_run, restore_indent, base_revision, conflict_policy, allow_external } = rawParams ?? {};
  try {
  assertProjectScopedPath(p, { allowExternal: !!allow_external });
  let parsed;
@@ -5032,7 +5038,7 @@ server.registerTool("edit_file", {
  baseRevision: base_revision,
  conflictPolicy: conflict_policy
  });
- return lineReportResult({ path: p }, content, { large: content.length > 5e4 });
+ return lineReportResult({ file_path: p }, content, { large: content.length > 5e4 });
  } catch (e) {
  return errorResult(e.code || "EDIT_ERROR", e.message, e.recovery || "Check anchors and checksums");
  }
@@ -5041,21 +5047,21 @@ server.registerTool("write_file", {
  title: "Write File",
  description: "Create a new file or overwrite existing. Creates parent dirs. For existing files prefer edit_file (shows diff, verifies hashes).",
  inputSchema: z2.object({
- path: z2.string().describe("File path"),
+ file_path: z2.string().describe("File path"),
  content: z2.string().describe("File content"),
  allow_external: flexBool().describe("Allow writing a path outside the current project root. Use only when you intentionally target a temp or external file.")
  }),
- outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), lines: z2.number().optional(), error: ERROR_SHAPE }),
+ outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), lines: z2.number().optional(), error: ERROR_SHAPE }),
  annotations: { readOnlyHint: false, destructiveHint: false, idempotentHint: true, openWorldHint: false }
  }, async (rawParams) => {
- const { path: p, content, allow_external } = rawParams ?? {};
+ const { file_path: p, content, allow_external } = rawParams ?? {};
  try {
  assertProjectScopedPath(p, { allowExternal: !!allow_external });
  const abs = validateWritePath(p);
  mkdirSync2(dirname6(abs), { recursive: true });
  writeFileSync4(abs, content, "utf-8");
  const lines = content.split("\n").length;
- return result({ status: "OK", path: p, lines });
+ return result({ status: "OK", file_path: p, lines });
  } catch (e) {
  return errorResult(e.code || "WRITE_ERROR", e.message, e.recovery || "Check path and permissions");
  }
@@ -5068,7 +5074,7 @@ server.registerTool("grep_search", {
5068
5074
  path: z2.string().optional().describe("Search dir/file (default: cwd)"),
5069
5075
  glob: z2.string().optional().describe('Glob filter (e.g. "*.ts")'),
5070
5076
  type: z2.string().optional().describe('File type (e.g. "js", "py")'),
5071
- output: z2.enum(["summary", "content", "files", "count"]).optional().describe("Output format (default: summary)"),
5077
+ output_mode: z2.enum(["summary", "content", "files_with_matches", "count"]).optional().describe("Output format (default: summary)"),
5072
5078
  case_insensitive: flexBool().describe("Ignore case (-i)"),
5073
5079
  smart_case: flexBool().describe("CI when pattern is all lowercase, CS if uppercase (-S)"),
5074
5080
  literal: flexBool().describe("Literal string search, no regex (-F)"),
@@ -5077,7 +5083,7 @@ server.registerTool("grep_search", {
5077
5083
  context_before: flexNum().describe("Context lines BEFORE match (-B)"),
5078
5084
  context_after: flexNum().describe("Context lines AFTER match (-A)"),
5079
5085
  limit: flexNum().describe("Max matches per file (default: 20 for summary discovery, 100 for content)"),
5080
- total_limit: flexNum().describe("Total match events across all files; multiline matches count as 1 (default: 50 for summary discovery, 200 for content, 1000 for files/count, 0 = unlimited)"),
5086
+ head_limit: flexNum().describe("Total match events across all files; multiline matches count as 1 (default: 50 for summary discovery, 200 for content, 1000 for files_with_matches/count, 0 = unlimited)"),
5081
5087
  plain: flexBool().describe("Omit hash tags, return file:line:content"),
5082
5088
  edit_ready: flexBool().describe("Preserve hash/checksum search hunks in `content` mode. Default: false."),
5083
5089
  allow_large_output: flexBool().describe("Bypass the default content-mode block/char caps when you intentionally need a larger payload.")
@@ -5090,7 +5096,7 @@ server.registerTool("grep_search", {
5090
5096
  path: p,
5091
5097
  glob,
5092
5098
  type,
5093
- output,
5099
+ output_mode,
5094
5100
  case_insensitive,
5095
5101
  smart_case,
5096
5102
  literal,
@@ -5099,15 +5105,15 @@ server.registerTool("grep_search", {
5099
5105
  context_before,
5100
5106
  context_after,
5101
5107
  limit,
5102
- total_limit,
5108
+ head_limit,
5103
5109
  plain,
5104
5110
  edit_ready,
5105
5111
  allow_large_output
5106
5112
  } = rawParams ?? {};
5107
5113
  try {
5108
- const resolvedOutput = output ?? "summary";
5114
+ const resolvedOutput = output_mode === "files_with_matches" ? "files" : output_mode ?? "summary";
5109
5115
  const resolvedLimit = limit ?? (resolvedOutput === "summary" ? 20 : 100);
5110
- const resolvedTotalLimit = total_limit ?? (resolvedOutput === "summary" ? 50 : void 0);
5116
+ const resolvedTotalLimit = head_limit ?? (resolvedOutput === "summary" ? 50 : void 0);
5111
5117
  const searchResult = await grepSearch(pattern, {
5112
5118
  path: p,
5113
5119
  glob,
@@ -5135,15 +5141,15 @@ server.registerTool("outline", {
5135
5141
  title: "File Outline",
5136
5142
  description: "AST-based structural outline with hash anchors for direct edit_file usage. Supports JavaScript/TypeScript, Python, C#, and PHP code files plus markdown headings (.md/.mdx, fence-aware).",
5137
5143
  inputSchema: z2.object({
5138
- path: z2.string().describe("Source file path")
5144
+ file_path: z2.string().describe("Source file path")
5139
5145
  }),
5140
- outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
5146
+ outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
5141
5147
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false }
5142
5148
  }, async (rawParams) => {
5143
- const { path: p } = rawParams ?? {};
5149
+ const { file_path: p } = rawParams ?? {};
5144
5150
  try {
5145
5151
  const content = await fileOutline(p);
5146
- return lineReportResult({ path: p }, content);
5152
+ return lineReportResult({ file_path: p }, content);
5147
5153
  } catch (e) {
5148
5154
  return errorResult(e.code || "OUTLINE_ERROR", e.message, e.recovery || "Check file path and language support");
5149
5155
  }
@@ -5152,20 +5158,20 @@ server.registerTool("verify", {
5152
5158
  title: "Verify Checksums",
5153
5159
  description: "Check if held checksums are still valid without rereading. Use before delayed or mixed-tool follow-up edits; returns canonical status, next_action, and reread guidance.",
5154
5160
  inputSchema: z2.object({
5155
- path: z2.string().describe("File path"),
5161
+ file_path: z2.string().describe("File path"),
5156
5162
  checksums: z2.array(z2.string()).describe('Checksum strings, e.g. ["1-50:f7e2a1b0", "51-100:abcd1234"]'),
5157
5163
  base_revision: z2.string().optional().describe("Optional prior revision to compare against latest state.")
5158
5164
  }),
5159
- outputSchema: z2.object({ status: STATUS_ENUM, path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
5165
+ outputSchema: z2.object({ status: STATUS_ENUM, file_path: z2.string().optional(), content: z2.string().optional(), reason: z2.string().optional(), summary: z2.string().optional(), next_action: z2.string().optional(), error: ERROR_SHAPE }),
5160
5166
  annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false }
5161
5167
  }, async (rawParams) => {
5162
- const { path: p, checksums, base_revision } = rawParams ?? {};
5168
+ const { file_path: p, checksums, base_revision } = rawParams ?? {};
5163
5169
  try {
5164
5170
  if (!Array.isArray(checksums) || checksums.length === 0) {
5165
5171
  throw new Error("checksums must be a non-empty array of strings");
5166
5172
  }
5167
5173
  const content = verifyChecksums(p, checksums, { baseRevision: base_revision });
5168
- return lineReportResult({ path: p }, content);
5174
+ return lineReportResult({ file_path: p }, content);
5169
5175
  } catch (e) {
5170
5176
  return errorResult(e.code || "VERIFY_ERROR", e.message, e.recovery || "Check checksums format");
5171
5177
  }
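The `grep_search` hunks above rename the public parameter to `output_mode` and its `files` value to `files_with_matches`, while the handler still maps back to the internal `files` format. A minimal sketch of that mapping, lifted into a standalone helper (the function name is illustrative; in the package this is an inline expression in the tool handler):

```javascript
// Sketch of the 1.25.0 output_mode resolution: the new public value
// "files_with_matches" maps to the internal "files" format, and an
// omitted output_mode falls back to "summary".
function resolveOutput(output_mode) {
  // ?? binds tighter than ?:, so this parses as
  // output_mode === "files_with_matches" ? "files" : (output_mode ?? "summary")
  return output_mode === "files_with_matches" ? "files" : output_mode ?? "summary";
}

console.log(resolveOutput("files_with_matches")); // "files"
console.log(resolveOutput(undefined));            // "summary"
console.log(resolveOutput("count"));              // "count"
```

Because the mapping happens at the edge of the handler, the internal search pipeline is unchanged; only the public schema enum and parameter names differ in this release.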
package/output-style.md CHANGED
@@ -27,7 +27,7 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads

  | Path | Flow |
  |------|------|
- | Surgical | `grep_search(output="summary") -> grep_search(output="content", edit_ready=true) if needed -> edit_file` |
+ | Surgical | `grep_search(output_mode="summary") -> grep_search(output_mode="content", edit_ready=true) if needed -> edit_file` |
  | Exploratory | `outline -> read_file (ranges) -> edit_file(base_revision)` |
  | Multi-file | `bulk_replace(path=<project root>)` |
  | Follow-up after delay | `verify(base_revision) -> reread only if STALE -> retry with returned helpers` |
@@ -43,8 +43,8 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads

  ## Edit Discipline

- - Never invent `range_checksum`. Copy it from a fresh `read_file` or `grep_search(output:"content", edit_ready=true)` block.
- - First mutation in a file: use `grep_search(output="summary")` for narrow targets, or `outline -> read_file(ranges)` for structural edits. Escalate to `grep_search(output="content", edit_ready=true)` only when the next edit needs canonical hunks.
+ - Never invent `range_checksum`. Copy it from a fresh `read_file` or `grep_search(output_mode:"content", edit_ready=true)` block.
+ - First mutation in a file: use `grep_search(output_mode="summary")` for narrow targets, or `outline -> read_file(ranges)` for structural edits. Escalate to `grep_search(output_mode="content", edit_ready=true)` only when the next edit needs canonical hunks.
  - Preserve file conventions mentally: `hex-line` hashes normalized logical text, but `edit_file` preserves the file's existing line endings and trailing-newline shape on write.
  - Prefer `set_line` or `insert_after` for small local changes. Prefer `replace_between` for larger bounded block rewrites.
  - Use `replace_lines` only when you already hold the exact inclusive range checksum for that block.
@@ -54,7 +54,7 @@ Prefer `hex-line` for text files you may inspect or modify. Hash-annotated reads
  - Reuse `retry_checksum` when it is returned for the exact same target range.
  - Once `hex-line` owns a file edit session, avoid mixing built-in `Edit`/`Write` on that file unless you intentionally want a new baseline.
  - Follow `next_action` first. Treat `summary` and `snippet` as the compact local context, not as prose to reinterpret.
- - If broad `grep_search(output="content")` or pattern `inspect_path` truncates, narrow `path`, `glob`, or query shape before retrying. Use `allow_large_output=true` only when you intentionally accept a larger payload.
+ - If broad `grep_search(output_mode="content")` or pattern `inspect_path` truncates, narrow `path`, `glob`, or query shape before retrying. Use `allow_large_output=true` only when you intentionally accept a larger payload.

  ## Exceptions

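The output-style changes above boil down to three renamed parameters: `output` → `output_mode`, `path` → `file_path` (on file tools), and `total_limit` → `head_limit`. A hedged sketch of what the updated tool arguments look like; the parameter names come from this diff, but the wrapping object shape and the example file path/checksum values are hypothetical, not a real client API:

```javascript
// Surgical flow, step 1: renamed search parameters (output_mode, head_limit).
const surgicalSearch = {
  tool: "grep_search",
  args: { pattern: "range_checksum", output_mode: "content", edit_ready: true, head_limit: 200 }
};

// Delayed follow-up: verify now takes file_path; checksum strings keep the
// "start-end:hash" shape shown in the tool schema.
const verifyCall = {
  tool: "verify",
  args: { file_path: "src/app.ts", checksums: ["1-50:f7e2a1b0"] }
};
```

Callers still sending the 1.24.0 names (`output`, `total_limit`, `path` on file tools) need to update their payloads to match the 1.25.0 schemas.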
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@levnikolaevich/hex-line-mcp",
- "version": "1.24.0",
+ "version": "1.25.0",
  "mcpName": "io.github.levnikolaevich/hex-line-mcp",
  "type": "module",
  "description": "Hash-verified file editing MCP + token efficiency hook for AI coding agents. 9 tools: inspect_path, read, edit, write, grep, outline, verify, changes, bulk_replace.",