token-pilot 0.17.0 → 0.19.0

This diff shows the changes between publicly released versions of the package as they appear in their public registry, and is provided for informational purposes only.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,36 @@ All notable changes to Token Pilot will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [0.19.0] - 2026-04-15
+
+ ### Added
+ - **`session_snapshot` tool** — capture current session state (goal, confirmed facts, files, blockers, next step) as a compact markdown block (<200 tokens). Call before context compaction or when switching direction in long sessions.
+ - **`max_tokens` parameter** on `smart_read` and `smart_read_many` — token budget per read. Output auto-downgrades through three levels: full content → structural outline → compact (symbol names + line ranges only). Enables context-constrained sessions.
+ - **Session compaction advisory** — policy engine now tracks total tool calls and tokens returned. Advises calling `session_snapshot()` when thresholds are reached (default: every 15 calls or after 8,000 tokens). Configurable via `compactionCallThreshold` and `compactionTokenThreshold`.
+ - **"Why This Approach Works"** section in README explaining the 3-level optimization strategy.
+
+ ### Changed
+ - **21 tools** (was 20) — added `session_snapshot`.
+ - **MCP instructions** updated with `session_snapshot` workflow and `max_tokens` guidance.
+ - Benchmark numbers updated: 55 files, 102K raw → 9K outline tokens (91% savings).
+
+ ## [0.18.0] - 2026-04-05
+
+ ### Added
+ - **`read_section` tool** — read a specific section from Markdown, YAML, JSON, or CSV files. Markdown: by heading name. YAML/JSON: by top-level key. CSV: by row range (`rows:1-50`). Much cheaper than reading the whole file.
+ - **`read_for_edit` section parameter** — prepare edit context for non-code file sections. Works with all 4 formats.
+ - **Markdown outline with line ranges** — `smart_read` on `.md` files now shows `[L5-20]` ranges and hints for `read_section`.
+ - **YAML/JSON section ranges** — `smart_read` on `.yaml`/`.json` shows top-level key ranges.
+ - **CSV smart_read** — shows columns, row count, sample rows, and hints for row-range reading.
+ - **4 section parsers** — `markdown-sections.ts`, `yaml-sections.ts`, `json-sections.ts`, `csv-sections.ts`.
+
+ ### Changed
+ - **20 tools** (was 19) — added `read_section`.
+ - **492 tests** (was 441).
+
+ ### Fixed
+ - `npm audit` — resolved brace-expansion, path-to-regexp, picomatch vulnerabilities.
+
  ## [0.17.0] - 2026-04-02

  ### Added
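The `session_snapshot` bullet above lists the fields the tool captures. As a rough illustration only (the tool's actual output format is not shown in this diff; field names follow the changelog bullet, everything else is a guess), a formatter for such a block might look like:

```javascript
// Hypothetical sketch of a session_snapshot-style markdown block.
// Field names come from the 0.19.0 changelog entry; the layout and
// function name are illustrative, not the package's real implementation.
function formatSnapshot(state) {
  return [
    '## Session Snapshot',
    `**Goal:** ${state.goal}`,
    '**Confirmed facts:**',
    ...state.facts.map((f) => `- ${f}`),
    `**Files:** ${state.files.join(', ')}`,
    `**Blockers:** ${state.blockers.length ? state.blockers.join('; ') : 'none'}`,
    `**Next step:** ${state.nextStep}`,
  ].join('\n');
}

const block = formatSnapshot({
  goal: 'Add CSV support to read_section',
  facts: ['parser lives in csv-sections.ts', 'rows are 1-indexed'],
  files: ['src/csv-sections.ts'],
  blockers: [],
  nextStep: 'wire up row-range hints',
});
```

A block like this stays well under the 200-token budget the changelog mentions while preserving enough state to resume after compaction.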
package/README.md CHANGED
@@ -21,16 +21,28 @@ Measured on public open-source repos using the regex fallback parser (no ast-ind

  | Repo | Files | Raw Tokens | Outline Tokens | Savings |
  |------|------:|----------:|--------------:|--------:|
- | [token-pilot](https://github.com/Digital-Threads/token-pilot) (TS) | 48 | 88,920 | 8,123 | **91%** |
+ | [token-pilot](https://github.com/Digital-Threads/token-pilot) (TS) | 55 | 102,086 | 8,992 | **91%** |
  | [express](https://github.com/expressjs/express) (JS) | 6 | 14,421 | 193 | **99%** |
  | [fastify](https://github.com/fastify/fastify) (JS) | 23 | 50,000 | 3,161 | **94%** |
  | [flask](https://github.com/pallets/flask) (Python) | 20 | 78,236 | 7,418 | **91%** |
- | **Total** | **97** | **231,577** | **18,895** | **92%** |
+ | **Total** | **104** | **244,743** | **19,764** | **92%** |

  > This measures `smart_read` structural outline savings only. Real sessions also benefit from session cache, dedup reminders, `read_symbol` targeted loading, and `read_for_edit` minimal context.
  >
  > Run the benchmark yourself: `npx tsx scripts/benchmark.ts`

+ ## Why This Approach Works
+
+ The biggest source of token waste in AI coding sessions isn't verbose prompts — it's **redundant context**. Every time a model re-reads a file, re-sends conversation history, or loads code it doesn't need, you pay for tokens that add no value.
+
+ Token Pilot attacks this at three levels:
+
+ 1. **Symbol-first reading** — load outlines instead of full files, drill into specific functions on demand. This alone saves 60-90% on most reads.
+ 2. **Context budget control** — `max_tokens` parameter on `smart_read` auto-downgrades output (full → outline → compact) to fit within a token budget per step.
+ 3. **Session state management** — `session_snapshot` captures session state as a compact markdown block (<200 tokens), enabling clean context compaction without losing track of what you're doing.
+
+ These aren't theoretical gains. In real sessions, the combination of structural reading + targeted symbol access + session snapshots consistently reduces token usage by 80-90% compared to raw file reads.
+
  ## Installation

  ### Quick Start (recommended)
@@ -152,6 +164,7 @@ WHEN TO USE TOKEN PILOT (saves up to 80% tokens):
  • Reading code files → smart_read (returns structure, not raw content)
  • Need one function/class → read_symbol (loads only that symbol)
  • Exploring a directory → outline (all symbols in one call)
+ • Long session? → session_snapshot (capture state before compaction)
  ...
  WHEN TO USE DEFAULT TOOLS (Token Pilot adds no value):
  • Regex/pattern search → use Grep/ripgrep, NOT find_usages
@@ -168,13 +181,13 @@ For more control, you can add rules to your project:
  - **Cursor** → `.cursorrules` in project root
  - **Codex** → `AGENTS.md` in project root

- ## MCP Tools (19)
+ ## MCP Tools (21)

  ### Core Reading

  | Tool | Instead of | Description |
  |------|-----------|-------------|
- | `smart_read` | `Read` | AST structural overview: classes, functions, methods with signatures. Up to 90% savings on large files. Framework-aware: shows HTTP routes, column types, validation rules. |
+ | `smart_read` | `Read` | AST structural overview: classes, functions, methods with signatures. Up to 90% savings on large files. Framework-aware: shows HTTP routes, column types, validation rules. `max_tokens` param for budget-constrained sessions. |
  | `read_symbol` | `Read` + scroll | Load source of a specific symbol. Supports `Class.method`. `show` param: full/head/tail/outline. |
  | `read_symbols` | N x `read_symbol` | Batch read multiple symbols from one file in a single call (max 10). One round-trip instead of N. |
  | `read_for_edit` | `Read` before `Edit` | Minimal RAW code around a symbol — copy directly as `old_string` for Edit tool. Batch mode: pass `symbols` array for multiple edit contexts. |
@@ -198,10 +211,11 @@ For more control, you can add rules to your project:
  | `smart_log` | raw `git log` | Structured commit history with category detection (feat/fix/refactor/docs), file stats, author breakdown. Filters by path and ref. |
  | `test_summary` | raw test output | Run tests and get structured summary: total/passed/failed + failure details. Supports vitest, jest, pytest, phpunit, go, cargo, rspec, mocha. |

- ### Analytics
+ ### Session & Analytics

  | Tool | Description |
  |------|-------------|
+ | `session_snapshot` | Capture session state as a compact markdown block (<200 tokens): goal, confirmed facts, relevant files, blockers, next step. Call before compaction or when switching direction. |
  | `session_analytics` | Token savings report: total saved, per-tool breakdown, top files, per-intent breakdown, decision insights, policy advisories. |

  ## CLI Commands
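The full → outline → compact downgrade chain described in the README's "Why This Approach Works" section can be sketched as a simple budget check. This is an illustrative sketch only, with stand-in token estimates; the real `smart_read` selection logic is not part of this diff:

```javascript
// Sketch of the three-level downgrade: pick the richest output level
// that still fits the per-read token budget. Level names come from the
// changelog; `pickLevel` and the estimate shape are illustrative.
function pickLevel(levels, maxTokens) {
  // levels: estimated token cost of each output form
  if (maxTokens === undefined || levels.full <= maxTokens) return 'full';
  if (levels.outline <= maxTokens) return 'outline';
  return 'compact'; // symbol names + line ranges only
}

// Stand-in estimates for one large file.
const est = { full: 5000, outline: 800, compact: 120 };
```

With no budget the read stays full; a 1,000-token budget drops it to the structural outline; a very tight budget falls back to the compact listing.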
@@ -59,6 +59,8 @@ export const DEFAULT_CONFIG = {
      maxFullFileReads: 10,
      warnOnLargeReads: true,
      largeReadThreshold: 2000,
+     compactionCallThreshold: 15,
+     compactionTokenThreshold: 8000,
  },
  ignore: [
      'node_modules/**',
@@ -16,6 +16,10 @@ export interface PolicyConfig {
      warnOnLargeReads: boolean;
      /** Token threshold for large read warning */
      largeReadThreshold: number;
+     /** Suggest compaction after N total tool calls (0 = disabled) */
+     compactionCallThreshold: number;
+     /** Suggest compaction after N total tokens returned (0 = disabled) */
+     compactionTokenThreshold: number;
  }
  export declare const DEFAULT_POLICIES: PolicyConfig;
  export interface PolicyCheckContext {
@@ -23,6 +27,8 @@ export interface PolicyCheckContext {
      tokensReturned: number;
      readForEditCalled?: Set<string>;
      editTargetPath?: string;
+     totalCallCount?: number;
+     totalTokensReturned?: number;
  }
  export interface PolicyAdvisory {
      level: 'info' | 'warn';
@@ -10,6 +10,8 @@ export const DEFAULT_POLICIES = {
      maxFullFileReads: 10,
      warnOnLargeReads: true,
      largeReadThreshold: 2000,
+     compactionCallThreshold: 15,
+     compactionTokenThreshold: 8000,
  };
  /** Full-file read tools that count toward maxFullFileReads */
  const FULL_READ_TOOLS = new Set([
@@ -64,6 +66,28 @@ export function checkPolicy(policy, tool, context) {
              message: `POLICY: Consider using read_for_edit("${context.editTargetPath}") before editing to get precise edit context.`,
          };
      }
+     // 5. Session compaction advisory — by call count
+     if (policy.compactionCallThreshold > 0 &&
+         context.totalCallCount !== undefined &&
+         context.totalCallCount > 0 &&
+         context.totalCallCount % policy.compactionCallThreshold === 0) {
+         return {
+             level: 'info',
+             message: `COMPACTION: ${context.totalCallCount} tool calls this session. Consider calling session_snapshot() to capture state, then compact context.`,
+         };
+     }
+     // 6. Session compaction advisory — by total tokens
+     if (policy.compactionTokenThreshold > 0 &&
+         context.totalTokensReturned !== undefined &&
+         context.totalTokensReturned > policy.compactionTokenThreshold &&
+         context.totalCallCount !== undefined &&
+         context.totalCallCount % 5 === 0 // don't spam every call, check every 5th
+     ) {
+         return {
+             level: 'info',
+             message: `COMPACTION: ~${context.totalTokensReturned} tokens returned this session. Consider calling session_snapshot() to capture state, then compact context.`,
+         };
+     }
      return null;
  }
  /**
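The two new advisory checks above can be exercised in isolation. This sketch reruns the same threshold logic with the default policy values (15 calls, 8,000 tokens); the `checkCompaction` wrapper name is ours, but the conditions are copied from the hunk:

```javascript
// Standalone rerun of checks 5 and 6 from checkPolicy above, using the
// defaults added to DEFAULT_POLICIES in this release.
const policy = { compactionCallThreshold: 15, compactionTokenThreshold: 8000 };

function checkCompaction(policy, context) {
  // 5. By call count: fires on every Nth call.
  if (policy.compactionCallThreshold > 0 &&
      context.totalCallCount !== undefined &&
      context.totalCallCount > 0 &&
      context.totalCallCount % policy.compactionCallThreshold === 0) {
    return { level: 'info', reason: 'calls' };
  }
  // 6. By total tokens: fires once the budget is exceeded, but only on
  // every 5th call so it doesn't repeat on every subsequent call.
  if (policy.compactionTokenThreshold > 0 &&
      context.totalTokensReturned !== undefined &&
      context.totalTokensReturned > policy.compactionTokenThreshold &&
      context.totalCallCount !== undefined &&
      context.totalCallCount % 5 === 0) {
    return { level: 'info', reason: 'tokens' };
  }
  return null;
}
```

So call 14 with 500 tokens is quiet, call 15 trips the call-count check, and call 20 with 9,000 tokens returned trips the token check (20 is not a multiple of 15, but is a multiple of 5).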
@@ -12,6 +12,7 @@ export declare function validateSmartReadArgs(args: unknown): {
      show_docs?: boolean;
      show_references?: boolean;
      depth?: number;
+     max_tokens?: number;
  };
  /**
   * Validate read_symbol arguments.
@@ -67,6 +68,7 @@ export declare function validateFindUsagesArgs(args: unknown): FindUsagesArgs;
   */
  export declare function validateSmartReadManyArgs(args: unknown): {
      paths: string[];
+     max_tokens?: number;
  };
  /**
   * Validate read_for_edit arguments.
@@ -80,6 +82,7 @@ export declare function validateReadForEditArgs(args: unknown): {
      include_callers?: boolean;
      include_tests?: boolean;
      include_changes?: boolean;
+     section?: string;
  };
  /**
   * Validate related_files arguments.
@@ -155,6 +158,10 @@ export interface TestSummaryArgs {
      timeout?: number;
  }
  export declare function validateTestSummaryArgs(args: unknown): TestSummaryArgs;
+ export declare function validateReadSectionArgs(args: unknown): {
+     path: string;
+     heading: string;
+ };
  /** Detect roots that would cause ast-index to scan the entire filesystem */
  export declare function isDangerousRoot(root: string): boolean;
  //# sourceMappingURL=validation.d.ts.map
@@ -28,6 +28,7 @@ export function validateSmartReadArgs(args) {
          show_docs: optionalBool(a.show_docs, 'show_docs'),
          show_references: optionalBool(a.show_references, 'show_references'),
          depth: optionalNumber(a.depth, 'depth'),
+         max_tokens: optionalNumber(a.max_tokens, 'max_tokens'),
      };
  }
  /**
@@ -194,7 +195,7 @@ export function validateSmartReadManyArgs(args) {
              throw new Error('Each path in "paths" must be a non-empty string.');
          }
      }
-     return { paths: a.paths };
+     return { paths: a.paths, max_tokens: optionalNumber(a.max_tokens, 'max_tokens') };
  }
  function optionalString(val, name) {
      if (val === undefined || val === null)
@@ -228,8 +229,8 @@ export function validateReadForEditArgs(args) {
      if (typeof a.path !== 'string' || a.path.length === 0) {
          throw new Error('Required parameter "path" must be a non-empty string.');
      }
-     if (!a.symbol && !a.line && (!Array.isArray(a.symbols) || a.symbols.length === 0)) {
-         throw new Error('Either "symbol", "symbols", or "line" must be provided.');
+     if (!a.symbol && !a.line && (!Array.isArray(a.symbols) || a.symbols.length === 0) && !a.section) {
+         throw new Error('Either "symbol", "symbols", "line", or "section" must be provided.');
      }
      // Validate symbols array (batch mode)
      let symbols;
@@ -256,6 +257,7 @@ export function validateReadForEditArgs(args) {
          include_callers: optionalBool(a.include_callers, 'include_callers'),
          include_tests: optionalBool(a.include_tests, 'include_tests'),
          include_changes: optionalBool(a.include_changes, 'include_changes'),
+         section: optionalString(a.section, 'section'),
      };
  }
  /**
@@ -453,6 +455,19 @@ export function validateTestSummaryArgs(args) {
      }
      return { command: a.command, runner, timeout };
  }
+ export function validateReadSectionArgs(args) {
+     if (!args || typeof args !== 'object') {
+         throw new Error('Arguments must be an object.');
+     }
+     const a = args;
+     if (typeof a.path !== 'string' || a.path.length === 0) {
+         throw new Error('Required parameter "path" must be a non-empty string.');
+     }
+     if (typeof a.heading !== 'string' || a.heading.length === 0) {
+         throw new Error('Required parameter "heading" must be a non-empty string.');
+     }
+     return { path: a.path, heading: a.heading };
+ }
  /** Detect roots that would cause ast-index to scan the entire filesystem */
  export function isDangerousRoot(root) {
      const normalized = root.replace(/\/+$/, '') || '/';
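The new validator's accept/reject behavior is easy to verify. Here it is copied from the hunk above (with `export` dropped so it runs as a plain script) alongside a quick check:

```javascript
// validateReadSectionArgs copied from the validation.js hunk above.
function validateReadSectionArgs(args) {
  if (!args || typeof args !== 'object') {
    throw new Error('Arguments must be an object.');
  }
  const a = args;
  if (typeof a.path !== 'string' || a.path.length === 0) {
    throw new Error('Required parameter "path" must be a non-empty string.');
  }
  if (typeof a.heading !== 'string' || a.heading.length === 0) {
    throw new Error('Required parameter "heading" must be a non-empty string.');
  }
  return { path: a.path, heading: a.heading };
}

// Both parameters are required, unlike read_for_edit where "section"
// is one of several alternatives.
const ok = validateReadSectionArgs({ path: 'README.md', heading: 'Installation' });

let rejected = false;
try {
  validateReadSectionArgs({ path: 'README.md' }); // missing heading
} catch {
  rejected = true;
}
```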
@@ -0,0 +1,35 @@
+ /**
+  * CSV parser — column-aware reading with row/column subsetting.
+  */
+ export interface CsvOutline {
+     columns: string[];
+     rowCount: number;
+     sampleRows: string[][];
+ }
+ export interface CsvSection {
+     heading: string;
+     startLine: number;
+     endLine: number;
+     lineCount: number;
+ }
+ /**
+  * Parse CSV into an outline: columns, row count, sample.
+  * Simple parser — handles quoted fields with commas.
+  */
+ export declare function parseCsvOutline(content: string): CsvOutline;
+ /**
+  * Parse a row range specification into a CsvSection.
+  * Supported formats:
+  *   "rows:1-50" — row range (1-indexed, refers to data rows, not header)
+  *   "rows:1-50" with column filter isn't supported at section level
+  */
+ export declare function parseCsvSectionSpec(heading: string, totalDataRows: number): CsvSection | null;
+ /**
+  * Extract CSV rows for a section. Returns header + requested rows.
+  */
+ export declare function extractCsvSectionContent(lines: string[], section: CsvSection): string;
+ /**
+  * Format CSV outline for smart_read output.
+  */
+ export declare function formatCsvOutline(filePath: string, outline: CsvOutline, lineCount: number): string;
+ //# sourceMappingURL=csv-sections.d.ts.map
@@ -0,0 +1,129 @@
+ /**
+  * CSV parser — column-aware reading with row/column subsetting.
+  */
+ /**
+  * Parse CSV into an outline: columns, row count, sample.
+  * Simple parser — handles quoted fields with commas.
+  */
+ export function parseCsvOutline(content) {
+     const lines = content.split('\n').filter(l => l.trim());
+     if (lines.length === 0)
+         return { columns: [], rowCount: 0, sampleRows: [] };
+     const columns = parseCsvRow(lines[0]);
+     const dataLines = lines.slice(1);
+     const sampleRows = dataLines.slice(0, 5).map(parseCsvRow);
+     return {
+         columns,
+         rowCount: dataLines.length,
+         sampleRows,
+     };
+ }
+ /**
+  * Parse a row range specification into a CsvSection.
+  * Supported formats:
+  *   "rows:1-50" — row range (1-indexed, refers to data rows, not header)
+  *   "rows:1-50" with column filter isn't supported at section level
+  */
+ export function parseCsvSectionSpec(heading, totalDataRows) {
+     // rows:N-M format
+     const rowMatch = heading.match(/^rows?:\s*(\d+)\s*-\s*(\d+)$/i);
+     if (rowMatch) {
+         const start = Math.max(1, parseInt(rowMatch[1], 10));
+         const end = Math.min(totalDataRows, parseInt(rowMatch[2], 10));
+         if (start > end || start > totalDataRows)
+             return null;
+         // +1 for header line offset
+         return {
+             heading: `rows ${start}-${end}`,
+             startLine: start + 1, // +1 because line 1 is header
+             endLine: end + 1,
+             lineCount: end - start + 1,
+         };
+     }
+     // Single row number
+     const singleMatch = heading.match(/^rows?:\s*(\d+)$/i);
+     if (singleMatch) {
+         const row = parseInt(singleMatch[1], 10);
+         if (row < 1 || row > totalDataRows)
+             return null;
+         return {
+             heading: `row ${row}`,
+             startLine: row + 1,
+             endLine: row + 1,
+             lineCount: 1,
+         };
+     }
+     return null;
+ }
+ /**
+  * Extract CSV rows for a section. Returns header + requested rows.
+  */
+ export function extractCsvSectionContent(lines, section) {
+     const header = lines[0]; // always include header
+     const dataRows = lines.slice(section.startLine - 1, section.endLine);
+     return [header, ...dataRows].join('\n');
+ }
+ /**
+  * Parse a single CSV row handling quoted fields.
+  */
+ function parseCsvRow(line) {
+     const fields = [];
+     let current = '';
+     let inQuote = false;
+     for (let i = 0; i < line.length; i++) {
+         const ch = line[i];
+         if (inQuote) {
+             if (ch === '"') {
+                 if (i + 1 < line.length && line[i + 1] === '"') {
+                     current += '"';
+                     i++; // skip escaped quote
+                 }
+                 else {
+                     inQuote = false;
+                 }
+             }
+             else {
+                 current += ch;
+             }
+         }
+         else {
+             if (ch === '"') {
+                 inQuote = true;
+             }
+             else if (ch === ',') {
+                 fields.push(current.trim());
+                 current = '';
+             }
+             else {
+                 current += ch;
+             }
+         }
+     }
+     fields.push(current.trim());
+     return fields;
+ }
+ /**
+  * Format CSV outline for smart_read output.
+  */
+ export function formatCsvOutline(filePath, outline, lineCount) {
+     const lines = [
+         `FILE: ${filePath} (${lineCount} lines, CSV)`,
+         '',
+         `COLUMNS (${outline.columns.length}): ${outline.columns.join(', ')}`,
+         `ROWS: ${outline.rowCount}`,
+         '',
+     ];
+     if (outline.sampleRows.length > 0) {
+         lines.push(`SAMPLE (first ${outline.sampleRows.length} rows):`);
+         for (const row of outline.sampleRows) {
+             // Format as: col1=val1, col2=val2, ...
+             const pairs = outline.columns.map((col, i) => `${col}=${row[i] ?? ''}`);
+             lines.push(`  ${pairs.join(', ')}`);
+         }
+     }
+     lines.push('');
+     lines.push(`HINT: Use read_section("${filePath}", heading="rows:1-50") to load specific rows.`);
+     lines.push(`      Use read_section("${filePath}", heading="rows:${outline.rowCount}") for last row.`);
+     return lines.join('\n');
+ }
+ //# sourceMappingURL=csv-sections.js.map
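The quoted-field handling in `parseCsvRow` is the subtle part of this file: commas inside quotes must not split fields, and doubled quotes are the CSV escape for a literal quote. Copied standalone (it is a private helper, so there is no `export` to drop), it behaves as follows:

```javascript
// parseCsvRow copied from the csv-sections.js hunk above.
function parseCsvRow(line) {
  const fields = [];
  let current = '';
  let inQuote = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuote) {
      if (ch === '"') {
        if (i + 1 < line.length && line[i + 1] === '"') {
          current += '"';
          i++; // skip escaped quote
        } else {
          inQuote = false;
        }
      } else {
        current += ch; // commas inside quotes are kept
      }
    } else {
      if (ch === '"') {
        inQuote = true;
      } else if (ch === ',') {
        fields.push(current.trim());
        current = '';
      } else {
        current += ch;
      }
    }
  }
  fields.push(current.trim());
  return fields;
}

const row = parseCsvRow('id,"Doe, Jane","said ""hi"""');
// row → ['id', 'Doe, Jane', 'said "hi"']
```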
@@ -0,0 +1,17 @@
+ /**
+  * JSON section parser — parses top-level keys with line ranges.
+  */
+ export interface JsonSection {
+     heading: string;
+     startLine: number;
+     endLine: number;
+     lineCount: number;
+ }
+ /**
+  * Parse JSON into sections based on top-level keys.
+  * Works with formatted JSON (pretty-printed). For minified JSON, returns empty.
+  */
+ export declare function parseJsonSections(content: string): JsonSection[];
+ export declare function findJsonSection(sections: JsonSection[], heading: string): JsonSection | undefined;
+ export declare function extractJsonSectionContent(lines: string[], section: JsonSection): string;
+ //# sourceMappingURL=json-sections.d.ts.map
@@ -0,0 +1,66 @@
+ /**
+  * JSON section parser — parses top-level keys with line ranges.
+  */
+ /**
+  * Parse JSON into sections based on top-level keys.
+  * Works with formatted JSON (pretty-printed). For minified JSON, returns empty.
+  */
+ export function parseJsonSections(content) {
+     if (!content.trim())
+         return [];
+     const lines = content.split('\n');
+     if (lines.length < 3)
+         return []; // minified or trivial
+     // Find top-level keys: lines matching /^\s{0,2}"key":/ (0-2 spaces indent = top level)
+     const topKeys = [];
+     // Track brace depth to identify top-level
+     let depth = 0;
+     let inString = false;
+     let lineIdx = 0;
+     for (lineIdx = 0; lineIdx < lines.length; lineIdx++) {
+         const line = lines[lineIdx];
+         // Simple top-level key detection: at depth 1 (inside root object)
+         // Match: "key": or "key" : at the beginning of a line (with indent)
+         if (depth === 1) {
+             const keyMatch = line.match(/^\s*"([^"]+)"\s*:/);
+             if (keyMatch) {
+                 topKeys.push({ key: keyMatch[1], line: lineIdx + 1 });
+             }
+         }
+         // Track depth (simplified — doesn't handle strings perfectly but good enough for formatted JSON)
+         for (let ci = 0; ci < line.length; ci++) {
+             const ch = line[ci];
+             if (ch === '"' && (ci === 0 || line[ci - 1] !== '\\')) {
+                 inString = !inString;
+             }
+             if (!inString) {
+                 if (ch === '{' || ch === '[')
+                     depth++;
+                 if (ch === '}' || ch === ']')
+                     depth--;
+             }
+         }
+     }
+     if (topKeys.length === 0)
+         return [];
+     const sections = [];
+     for (let i = 0; i < topKeys.length; i++) {
+         const start = topKeys[i].line;
+         const end = i + 1 < topKeys.length ? topKeys[i + 1].line - 1 : lines.length - 1; // -1 to exclude closing }
+         sections.push({
+             heading: topKeys[i].key,
+             startLine: start,
+             endLine: end,
+             lineCount: end - start + 1,
+         });
+     }
+     return sections;
+ }
+ export function findJsonSection(sections, heading) {
+     const normalized = heading.trim().toLowerCase();
+     return sections.find(s => s.heading.toLowerCase() === normalized);
+ }
+ export function extractJsonSectionContent(lines, section) {
+     return lines.slice(section.startLine - 1, section.endLine).join('\n');
+ }
+ //# sourceMappingURL=json-sections.js.map
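As a quick check of the depth-tracking approach, here is `parseJsonSections` copied from the hunk above (with `export` dropped so it runs as a plain script), applied to a small pretty-printed object:

```javascript
// parseJsonSections copied from the json-sections.js hunk above.
// Top-level keys sit at brace depth 1; braces inside strings are skipped.
function parseJsonSections(content) {
  if (!content.trim()) return [];
  const lines = content.split('\n');
  if (lines.length < 3) return []; // minified or trivial
  const topKeys = [];
  let depth = 0;
  let inString = false;
  for (let lineIdx = 0; lineIdx < lines.length; lineIdx++) {
    const line = lines[lineIdx];
    if (depth === 1) {
      const keyMatch = line.match(/^\s*"([^"]+)"\s*:/);
      if (keyMatch) topKeys.push({ key: keyMatch[1], line: lineIdx + 1 });
    }
    for (let ci = 0; ci < line.length; ci++) {
      const ch = line[ci];
      if (ch === '"' && (ci === 0 || line[ci - 1] !== '\\')) inString = !inString;
      if (!inString) {
        if (ch === '{' || ch === '[') depth++;
        if (ch === '}' || ch === ']') depth--;
      }
    }
  }
  if (topKeys.length === 0) return [];
  const sections = [];
  for (let i = 0; i < topKeys.length; i++) {
    const start = topKeys[i].line;
    // Last key runs to the line before the closing brace of the root object.
    const end = i + 1 < topKeys.length ? topKeys[i + 1].line - 1 : lines.length - 1;
    sections.push({ heading: topKeys[i].key, startLine: start, endLine: end, lineCount: end - start + 1 });
  }
  return sections;
}

const content = [
  '{',
  '  "name": "demo",',
  '  "dependencies": {',
  '    "left-pad": "^1.0.0"',
  '  }',
  '}',
].join('\n');
const sections = parseJsonSections(content);
// sections → name covers line 2; dependencies covers lines 3-5
```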
@@ -0,0 +1,15 @@
+ /**
+  * Markdown section parser — shared helper for section-aware tools.
+  * Parses heading structure with line ranges for targeted reading.
+  */
+ export interface MarkdownSection {
+     heading: string;
+     level: number;
+     startLine: number;
+     endLine: number;
+     lineCount: number;
+ }
+ export declare function parseMarkdownSections(content: string): MarkdownSection[];
+ export declare function findSection(sections: MarkdownSection[], heading: string): MarkdownSection | undefined;
+ export declare function extractSectionContent(lines: string[], section: MarkdownSection): string;
+ //# sourceMappingURL=markdown-sections.d.ts.map
@@ -0,0 +1,49 @@
+ /**
+  * Markdown section parser — shared helper for section-aware tools.
+  * Parses heading structure with line ranges for targeted reading.
+  */
+ export function parseMarkdownSections(content) {
+     if (!content.trim())
+         return [];
+     const lines = content.split('\n');
+     const headings = [];
+     for (let i = 0; i < lines.length; i++) {
+         const match = lines[i].match(/^(#{1,6})\s+(.+)/);
+         if (match) {
+             headings.push({
+                 heading: match[2].trim(),
+                 level: match[1].length,
+                 line: i + 1,
+             });
+         }
+     }
+     if (headings.length === 0)
+         return [];
+     const sections = [];
+     for (let i = 0; i < headings.length; i++) {
+         const current = headings[i];
+         let endLine = lines.length;
+         for (let j = i + 1; j < headings.length; j++) {
+             if (headings[j].level <= current.level) {
+                 endLine = headings[j].line - 1;
+                 break;
+             }
+         }
+         sections.push({
+             heading: current.heading,
+             level: current.level,
+             startLine: current.line,
+             endLine,
+             lineCount: endLine - current.line + 1,
+         });
+     }
+     return sections;
+ }
+ export function findSection(sections, heading) {
+     const normalized = heading.replace(/^#+\s*/, '').trim().toLowerCase();
+     return sections.find(s => s.heading.toLowerCase() === normalized);
+ }
+ export function extractSectionContent(lines, section) {
+     return lines.slice(section.startLine - 1, section.endLine).join('\n');
+ }
+ //# sourceMappingURL=markdown-sections.js.map
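The markdown parser above drives the `read_section`-by-heading flow: a section runs from its heading to the line before the next heading of the same or higher level. Copied standalone (with `export` dropped so it runs as a plain script):

```javascript
// parseMarkdownSections, findSection, and extractSectionContent copied
// from the markdown-sections.js hunk above.
function parseMarkdownSections(content) {
  if (!content.trim()) return [];
  const lines = content.split('\n');
  const headings = [];
  for (let i = 0; i < lines.length; i++) {
    const match = lines[i].match(/^(#{1,6})\s+(.+)/);
    if (match) {
      headings.push({ heading: match[2].trim(), level: match[1].length, line: i + 1 });
    }
  }
  if (headings.length === 0) return [];
  const sections = [];
  for (let i = 0; i < headings.length; i++) {
    const current = headings[i];
    let endLine = lines.length;
    for (let j = i + 1; j < headings.length; j++) {
      // A section ends where the next same-or-higher-level heading starts.
      if (headings[j].level <= current.level) {
        endLine = headings[j].line - 1;
        break;
      }
    }
    sections.push({
      heading: current.heading,
      level: current.level,
      startLine: current.line,
      endLine,
      lineCount: endLine - current.line + 1,
    });
  }
  return sections;
}
function findSection(sections, heading) {
  // Leading #s are stripped, so "## Setup" and "setup" both match.
  const normalized = heading.replace(/^#+\s*/, '').trim().toLowerCase();
  return sections.find(s => s.heading.toLowerCase() === normalized);
}
function extractSectionContent(lines, section) {
  return lines.slice(section.startLine - 1, section.endLine).join('\n');
}

const doc = ['# Title', 'intro', '## Setup', 'step one', '## Usage', 'run it'].join('\n');
const sections = parseMarkdownSections(doc);
const setup = findSection(sections, '## Setup');
const body = extractSectionContent(doc.split('\n'), setup);
// body → '## Setup\nstep one'
```

Note that nested headings stay inside their parent's range: `# Title` here spans the whole document because no later heading is level 1 or higher.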