token-pilot 0.23.4 → 0.23.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/plugin.json +1 -1
- package/CHANGELOG.md +59 -0
- package/dist/agents/tp-audit-scanner.md +1 -1
- package/dist/agents/tp-commit-writer.md +3 -2
- package/dist/agents/tp-dead-code-finder.md +1 -1
- package/dist/agents/tp-debugger.md +1 -1
- package/dist/agents/tp-history-explorer.md +1 -1
- package/dist/agents/tp-impact-analyzer.md +1 -1
- package/dist/agents/tp-migration-scout.md +1 -1
- package/dist/agents/tp-onboard.md +2 -1
- package/dist/agents/tp-pr-reviewer.md +1 -1
- package/dist/agents/tp-refactor-planner.md +1 -1
- package/dist/agents/tp-run.md +1 -1
- package/dist/agents/tp-session-restorer.md +2 -1
- package/dist/agents/tp-test-triage.md +1 -1
- package/dist/agents/tp-test-writer.md +1 -1
- package/dist/ast-index/binary-manager.d.ts +1 -1
- package/dist/ast-index/binary-manager.js +128 -66
- package/dist/handlers/read-symbols.d.ts +6 -6
- package/dist/handlers/read-symbols.js +68 -38
- package/dist/server/tool-definitions.js +1 -1
- package/dist/server.js +8 -2
- package/package.json +4 -9
- package/scripts/postinstall.mjs +78 -0
package/.claude-plugin/plugin.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "token-pilot",
-  "version": "0.23.4",
+  "version": "0.23.7",
   "description": "Enforcement layer for token-efficient AI coding: MCP-first hook with structural denial summaries, SessionStart reminder, bless-agents CLI, and six tp-* subagents — works for every agent including those without MCP access.",
   "author": "token-pilot",
   "license": "MIT",
package/CHANGELOG.md
CHANGED
@@ -5,6 +5,65 @@ All notable changes to Token Pilot will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.23.7] - 2026-04-18
+
+### Changed — per-agent `model:` selection for cheap, format-bound work
+
+Claude Code allows each subagent to declare its own model in frontmatter (or `inherit` it from the main agent). We had been relying on the user's global `CLAUDE_CODE_SUBAGENT_MODEL` env var as a blunt switch — a poor fit, because some `tp-*` agents need real reasoning (debugger, impact analyzer, refactor planner) while others do pure format work. Moved three agents to **haiku-4.5** explicitly:
+
+- **`tp-commit-writer`** — classifies the diff into a Conventional Commit type and drafts a short message. Context-bound, no architectural decisions.
+- **`tp-session-restorer`** — parses `latest.md` plus git status, emits a fixed-shape briefing. Pure transformation.
+- **`tp-onboard`** — pulls project_overview and retells it as an orientation map. Format-bound.
+
+The other 11 agents keep `inherit` — they do enough reasoning (intent, risk classification, call-tree traversal) that haiku would regress them. `tp-dead-code-finder` and `tp-audit-scanner` stay on `inherit` for now; we'll revisit once real-world usage shows whether cross-check accuracy holds on haiku.
+
+**The user is NOT asked to set `CLAUDE_CODE_SUBAGENT_MODEL`.** The selection is per-agent and ships with the template — predictable and rollback-friendly (one line per agent).
+
+### Planned
+
+- **TP-z64** (v0.28 backlog) — expanded tp-* roster with combo-agents that pair novel MCP-tool combinations for niche workflows (review-impact, test-coverage-gapper, api-surface-tracker, dep-health, incident-timeline). Must be brainstormed with names + triggers before implementation; deferred until the v0.24 onboarding wizard ships and the baseline stabilises.
+- **v0.24.0** — onboarding wizard (doctor warnings → applied in one step): writes `MAX_THINKING_TOKENS=10000` + `CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=50` to `~/.claude/settings.json`, generates `.claudeignore` if missing. Does NOT set `CLAUDE_CODE_SUBAGENT_MODEL` — per-agent model selection now handles that.
+
+### Numbers
+- 910 tests green, `tsc --noEmit` clean, 14 agents built.
+
+## [0.23.6] - 2026-04-18
+
+### Fixed — five findings from a live user audit
+
+A real-world QA pass on a large Nuxt repo surfaced five issues. All are addressed.
+
+**1. `read_symbols` regression (−16% tokens saved).** When the caller requested nearly every symbol in a file, the sum of bodies plus N × per-symbol metadata exceeded a raw Read of the whole file — the batch tool was worse than no batch. Two fixes:
+- The handler now includes an anti-pattern guard: if the request covers ≥70% of the file's lines AND ≥3 symbols, it refuses with a short advisory pointing at `smart_read` / `read_for_edit` / bounded `Read`.
+- Server-side `tokensWouldBe` for `read_symbols` is corrected to reflect reality: the baseline is "N individual `read_symbol` calls", not "one raw Read of the whole file". Saved now shows the real win — deduped headers plus a shared file open — instead of a misleading figure that flipped negative in the edge case.
+
+**2. Tool description updated.** `read_symbols` now says *"BEST FIT: 3–8 symbols in one file … if you'd request ≥70% of the file's symbols, the handler refuses and points you to smart_read"*. The prior docstring gave agents no decision rule, so they used the tool reflexively.
+
+**3. `tp-commit-writer` trivial-diff guard.** The agent's `description:` was unconditional — reviewers triggered it on a whitespace-only docs diff (a 239 s subagent spawn for a one-line message). It now explicitly says *"Do NOT use for docs-only, whitespace-only, or < 20-line diffs — the user can write those manually faster than a subagent spawn"*.
+
+**4. `docs/token-pilot-dir.md` — side-files layout reference.** Users saw `hook-events.jsonl` and `hook-denied.jsonl` appear but no snapshots/context-registries/docs directories and wondered whether features were broken. They're lazy-created: each sub-path appears only when the triggering feature fires. The new doc lists every path, who writes it, when, and whether to commit it. A recommended `.gitignore` stanza is included.
+
+**5. `tp-migration-scout` context-mode "fallback" — false alarm.** The audit reported that the agent announced a fallback from an unavailable `context-mode` tool. Verified: `tp-migration-scout.md` does not advertise `context-mode` anywhere; the agent self-reported a fallback it invented. No code change needed; noted for future behavioural-harness work (TP-q33b).
+
+### Numbers
+- 910 tests green (+3 regression tests for the `read_symbols` guard), `tsc --noEmit` clean.
+
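The corrected `tokensWouldBe` baseline can be sketched in a few lines. This is a minimal sketch, not the package's actual server code (which is not shown in this diff): the function names, the ~4-characters-per-token estimate, and the per-call overhead constant are all assumptions for illustration.

```javascript
// Hypothetical sketch of the v0.23.6 read_symbols baseline fix. Names and
// the overhead constant are assumptions; the real code lives in
// package/dist/server.js, which this diff does not show.

// Rough token estimate: ~4 characters per token.
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Assumed fixed cost of one read_symbol response (header, file path,
// confidence footer).
const PER_CALL_OVERHEAD_TOKENS = 80;

// Old, misleading baseline: one raw Read of the whole file. When the
// requested bodies cover most of the file, "saved" flips negative.
function tokensWouldBeOld(fileText) {
  return estimateTokens(fileText);
}

// Corrected baseline: N individual read_symbol calls, each paying its
// body plus its own fixed header overhead. The batch's real win is the
// deduped headers and the shared file open.
function tokensWouldBeNew(symbolBodies) {
  return symbolBodies.reduce(
    (sum, body) => sum + estimateTokens(body) + PER_CALL_OVERHEAD_TOKENS,
    0
  );
}

const bodies = [
  "function a() {}".repeat(20),
  "function b() {}".repeat(30),
  "function c() {}".repeat(10),
];
// One batched response: all bodies plus a single shared header.
const batchCost =
  bodies.reduce((s, b) => s + estimateTokens(b), 0) + PER_CALL_OVERHEAD_TOKENS;
const saved = tokensWouldBeNew(bodies) - batchCost;
// Against this baseline the batch always saves (N - 1) × overhead, so
// "saved" can no longer go negative in the near-whole-file case.
```

Against the old baseline, `saved` would be `estimateTokens(fileText) - batchCost`, which is exactly the quantity that went negative in the audit's edge case.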
+## [0.23.5] - 2026-04-18
+
+### Changed — ast-index is now a hard npm dependency
+
+Until now `ast-index` was auto-downloaded from GitHub on first MCP-server start. That worked, but it had weak spots: exotic architectures, corporate proxies, the ZIP-only Windows path — any of them left the user with a token-pilot that couldn't do structural reads until they manually ran `install-ast-index`. Users also rightly expected *"I just `npm install`ed the package — it should just work"*.
+
+- **`@ast-index/cli@^3.38.0` moved from implicit auto-install to `dependencies`.** A regular `npm install token-pilot` now pulls the main package plus the correct platform-specific native binary (`@ast-index/cli-<platform>-<arch>`) as a transitive dep — the same pattern Rollup, esbuild, and swc use. The old `peerDependencies: ast-index` stub is removed; it was confusing and never served a purpose.
+- **The new `findViaBundledDep()` is first in the binary resolution order** (after the config override, before system PATH). It walks up from our own module to `node_modules/@ast-index/cli/bin/ast-index` and works whether or not npm created `.bin/ast-index` symlinks.
+- **`BinaryStatus.source` gains `"bundled"`** to distinguish the new path from `system` / `npm` / `managed` / `none`. `doctor` honours it.
+- **`scripts/postinstall.mjs` is a safety net** — it runs after `npm install` and checks the `findBinary()` result; if nothing is found, it fires the GitHub download fallback. It **never fails the install** — any error ends in a single stderr warning and exit 0. Respects `TOKEN_PILOT_SKIP_POSTINSTALL=1` and `CI=true` for sandboxed builds.
+
+Result: a fresh `npm install token-pilot` gives a ready-to-work binary on macOS (arm64 + x64), Linux (arm64 + x64), and Windows x64 — no first-run download step, no stderr noise about "ast-index not found, downloading…".
+
+### Numbers
+- 907 tests green, `tsc --noEmit` clean. `npm install @ast-index/cli` verified end-to-end against the actual npm registry.
+
 ## [0.23.4] - 2026-04-18
 
 ### Fixed
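The 78-line `scripts/postinstall.mjs` itself does not appear in this diff. A minimal sketch of the never-fail shape the 0.23.5 notes describe — the import path and the exact export names are assumptions inferred from the binary-manager changes below:

```javascript
// Hypothetical sketch of a never-fail postinstall safety net, per the
// 0.23.5 changelog. The ./dist/ast-index/binary-manager.js path and its
// exports are assumptions; only the behaviour is taken from the notes.

// Skip in sandboxed or opted-out environments.
function shouldSkipPostinstall(env) {
  return env.TOKEN_PILOT_SKIP_POSTINSTALL === "1" || env.CI === "true";
}

async function main() {
  if (shouldSkipPostinstall(process.env)) return;
  const { findBinary, installBinary } = await import(
    "./dist/ast-index/binary-manager.js"
  );
  const status = await findBinary();
  if (status.available) return; // bundled/system/npm/managed binary found
  // Nothing resolvable: fire the GitHub download fallback.
  await installBinary(() => {});
}

main().catch((err) => {
  // Never fail the install: one stderr warning, exit 0.
  console.error(
    `token-pilot postinstall: ${err.message} (continuing without ast-index)`
  );
  process.exitCode = 0;
});
```

The key property is that every failure path, including a missing network, collapses to a warning plus exit 0, so `npm install` itself can never break because of the binary.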
package/dist/agents/tp-commit-writer.md
CHANGED
@@ -1,13 +1,14 @@
 ---
 name: tp-commit-writer
-
+model: claude-haiku-4-5-20251001
+description: PROACTIVELY use this when the user is about to commit a NON-TRIVIAL change (new feature, fix, refactor) and asks "write a commit message". Reads staged diff, verifies tests pass, drafts Conventional Commit. Refuses mixed diffs (asks to split), failing tests, or empty stage. Do NOT use for docs-only, whitespace-only, or < 20-line diffs — the user can write those manually faster than a subagent spawn. Do NOT use to explain already-made commits.
 tools:
 - mcp__token-pilot__smart_diff
 - mcp__token-pilot__smart_log
 - mcp__token-pilot__test_summary
 - mcp__token-pilot__outline
 - Bash
-token_pilot_version: "0.23.4"
+token_pilot_version: "0.23.7"
 token_pilot_body_hash: 559a0b61d20974bf33e35bc4c80dcf1b41d10d4df46cf9d05d3d5620713cd46f
 ---
 
package/dist/agents/tp-onboard.md
CHANGED
@@ -1,5 +1,6 @@
 ---
 name: tp-onboard
+model: claude-haiku-4-5-20251001
 description: PROACTIVELY use this when the user is exploring an unfamiliar codebase — asks "how is this organised", "what does this project do", "where do I start reading", or starts any conversation in a repo the main agent doesn't know. Orientation map only (layout, entry points, modules); does NOT drill into implementation.
 tools:
 - mcp__token-pilot__project_overview
@@ -9,7 +10,7 @@ tools:
 - mcp__token-pilot__smart_read
 - mcp__token-pilot__smart_read_many
 - mcp__token-pilot__read_section
-token_pilot_version: "0.23.4"
+token_pilot_version: "0.23.7"
 token_pilot_body_hash: ae0b86eaffaf34bf283b94b5572481fa8c2d6a2a25193f1173b70bef0fbe1919
 ---
 
@@ -7,7 +7,7 @@ tools:
 - mcp__token-pilot__read_diff
 - mcp__token-pilot__outline
 - mcp__token-pilot__read_symbol
-token_pilot_version: "0.23.4"
+token_pilot_version: "0.23.7"
 token_pilot_body_hash: a058518619fd6e2def0c9226f6c70438a5e0a80efe680c935414ecd7e1b14a4f
 ---
 
package/dist/agents/tp-session-restorer.md
CHANGED
@@ -1,5 +1,6 @@
 ---
 name: tp-session-restorer
+model: claude-haiku-4-5-20251001
 description: PROACTIVELY use this as the FIRST step after /clear, compaction, or a fresh window when a recent session_snapshot exists on disk. Reads snapshot + git status + saved docs, returns a ≤200-token briefing. Do NOT use mid-task.
 tools:
 - mcp__token-pilot__smart_read
@@ -8,7 +9,7 @@ tools:
 - mcp__token-pilot__session_budget
 - Bash
 - Read
-token_pilot_version: "0.23.4"
+token_pilot_version: "0.23.7"
 token_pilot_body_hash: 35b7f333a28c94e7dc89fcc3171703c4b466225f55cd5c701b7592f4f6486440
 ---
 
@@ -7,7 +7,7 @@ tools:
 - mcp__token-pilot__read_range
 - mcp__token-pilot__find_usages
 - mcp__token-pilot__read_symbol
-token_pilot_version: "0.23.4"
+token_pilot_version: "0.23.7"
 token_pilot_body_hash: 255912c47661d203c8f9a735237bc419f97e937f788a01811bbe126ee3dd5878
 ---
 
package/dist/ast-index/binary-manager.d.ts
CHANGED
@@ -2,7 +2,7 @@ export interface BinaryStatus {
   available: boolean;
   path: string;
   version: string | null;
-  source: "system" | "npm" | "managed" | "none";
+  source: "system" | "npm" | "managed" | "bundled" | "none";
 }
 /**
  * Find ast-index binary: config → system PATH → npm global → managed install.
package/dist/ast-index/binary-manager.js
CHANGED
@@ -1,16 +1,17 @@
-import { execFile } from
-import { createWriteStream } from
-import { chmod, mkdir, access, rm } from
-import { resolve } from
-import {
-import {
-import {
-import {
-import { get as
-import {
-
-const
-const
+import { execFile } from "node:child_process";
+import { createWriteStream } from "node:fs";
+import { chmod, mkdir, access, rm } from "node:fs/promises";
+import { resolve, dirname } from "node:path";
+import { fileURLToPath } from "node:url";
+import { pipeline } from "node:stream/promises";
+import { createGunzip } from "node:zlib";
+import { homedir, platform, arch } from "node:os";
+import { get as httpsGet } from "node:https";
+import { get as httpGet } from "node:http";
+import { tarExtract } from "./tar-extract.js";
+const REPO = "defendend/Claude-ast-index-search";
+const BINARY_NAME = platform() === "win32" ? "ast-index.exe" : "ast-index";
+const INSTALL_DIR = resolve(homedir(), ".token-pilot", "bin");
 /**
  * Find ast-index binary: config → system PATH → npm global → managed install.
  */
@@ -19,30 +20,42 @@ export async function findBinary(configPath) {
   if (configPath) {
     const version = await getBinaryVersion(configPath);
     if (version) {
-      return { available: true, path: configPath, version, source:
+      return { available: true, path: configPath, version, source: "system" };
     }
   }
-  // 2.
+  // 2. Bundled npm dep — @ast-index/cli alongside our own install. This is
+  // the default path when the user ran `npm install token-pilot`: npm
+  // resolves per-platform binary (@ast-index/cli-linux-x64 etc.) as an
+  // optional dep of @ast-index/cli, symlinks ast-index into
+  // node_modules/.bin alongside our own bin/, and everything "just works".
+  const bundledPath = await findViaBundledDep();
+  if (bundledPath) {
+    const version = await getBinaryVersion(bundledPath);
+    if (version) {
+      return { available: true, path: bundledPath, version, source: "bundled" };
+    }
+  }
+  // 3. System PATH
   const systemPath = await findInPath();
   if (systemPath) {
     const version = await getBinaryVersion(systemPath);
-    return { available: true, path: systemPath, version, source:
+    return { available: true, path: systemPath, version, source: "system" };
   }
   // 3. npm global install (@ast-index/cli)
   const npmPath = await findViaNpmBin();
   if (npmPath) {
     const version = await getBinaryVersion(npmPath);
     if (version) {
-      return { available: true, path: npmPath, version, source:
+      return { available: true, path: npmPath, version, source: "npm" };
     }
   }
   // 4. Managed install (GitHub download)
   const managedPath = resolve(INSTALL_DIR, BINARY_NAME);
   const version = await getBinaryVersion(managedPath);
   if (version) {
-    return { available: true, path: managedPath, version, source:
+    return { available: true, path: managedPath, version, source: "managed" };
   }
-  return { available: false, path:
+  return { available: false, path: "", version: null, source: "none" };
 }
 /**
  * Install ast-index: tries npm global first (all platforms), falls back to GitHub download.
@@ -60,9 +73,9 @@ export async function installBinary(onProgress) {
   return installViaNpmFallback(log);
 }
 async function installViaNpm(onProgress) {
-  onProgress(
+  onProgress("Installing @ast-index/cli via npm...");
   await new Promise((resolve, reject) => {
-    execFile(
+    execFile("npm", ["install", "-g", "@ast-index/cli"], { timeout: 120_000 }, (err, _stdout, stderr) => {
       if (err)
         reject(new Error(stderr.trim() || err.message));
       else
@@ -71,11 +84,11 @@ async function installViaNpm(onProgress) {
   });
   const binPath = await findViaNpmBin();
   if (!binPath) {
-    throw new Error(
+    throw new Error("@ast-index/cli installed but binary not found in npm prefix");
   }
   const version = await getBinaryVersion(binPath);
   onProgress(`Installed ast-index ${version} via npm at ${binPath}`);
-  return { available: true, path: binPath, version, source:
+  return { available: true, path: binPath, version, source: "npm" };
 }
 async function installViaNpmFallback(onProgress) {
   // Determine platform/arch
@@ -84,29 +97,29 @@ async function installViaNpmFallback(onProgress) {
   if (!plat || !ar) {
     throw new Error(`Unsupported platform: ${platform()} ${arch()}`);
   }
-  onProgress(
+  onProgress("Fetching latest release info...");
   const release = await fetchLatestRelease();
   const assetName = buildAssetName(release.tag, plat, ar);
-  const asset = release.assets.find(a => a.name === assetName);
+  const asset = release.assets.find((a) => a.name === assetName);
   if (!asset) {
-    throw new Error(`No binary found for ${plat}-${ar}. Available: ${release.assets.map(a => a.name).join(
+    throw new Error(`No binary found for ${plat}-${ar}. Available: ${release.assets.map((a) => a.name).join(", ")}`);
   }
   onProgress(`Downloading ${asset.name} (${(asset.size / 1024 / 1024).toFixed(1)}MB)...`);
   await mkdir(INSTALL_DIR, { recursive: true });
   const tmpPath = resolve(INSTALL_DIR, `${BINARY_NAME}.tmp`);
   const finalPath = resolve(INSTALL_DIR, BINARY_NAME);
   try {
-    if (assetName.endsWith(
+    if (assetName.endsWith(".tar.gz")) {
       await downloadAndExtractTarGz(asset.url, INSTALL_DIR, BINARY_NAME);
     }
     else {
       await downloadFile(asset.url, tmpPath);
-      throw new Error(
+      throw new Error("ZIP extraction not yet supported. Please use: npm install -g @ast-index/cli");
     }
     await chmod(finalPath, 0o755);
     const version = await getBinaryVersion(finalPath);
     onProgress(`Installed ast-index ${version} to ${finalPath}`);
-    return { available: true, path: finalPath, version, source:
+    return { available: true, path: finalPath, version, source: "managed" };
   }
   catch (err) {
     try {
@@ -133,7 +146,7 @@ export async function checkBinaryUpdate(currentPath) {
     getBinaryVersion(currentPath),
     fetchLatestRelease(),
   ]);
-  const latest = release.tag.replace(/^v/,
+  const latest = release.tag.replace(/^v/, "");
   if (!current) {
     return { current: null, latest, updateAvailable: false };
   }
@@ -151,8 +164,8 @@ export async function checkBinaryUpdate(currentPath) {
  * Compare two semver strings. Returns true if `latest` is newer than `current`.
  */
 export function isNewerVersion(current, latest) {
-  const c = current.replace(/^v/,
-  const l = latest.replace(/^v/,
+  const c = current.replace(/^v/, "").split(".").map(Number);
+  const l = latest.replace(/^v/, "").split(".").map(Number);
   for (let i = 0; i < Math.max(c.length, l.length); i++) {
     const cv = c[i] ?? 0;
     const lv = l[i] ?? 0;
@@ -164,6 +177,48 @@ export function isNewerVersion(current, latest) {
   return false;
 }
 // --- Internal helpers ---
+/**
+ * Find ast-index bundled as a direct dependency of token-pilot. Walks up
+ * from this module (dist/ast-index/binary-manager.js → node_modules/…/
+ * @ast-index/cli/bin/ast-index) looking for the standard npm layout.
+ *
+ * Three locations are tried, in order of how npm installs usually resolve:
+ * - peer dir   : node_modules/.bin/ast-index (our own node_modules)
+ * - parent dir : ../../.bin/ast-index (hoisted install)
+ * - bin script : ../@ast-index/cli/bin/ast-index (platform-specific
+ *   sub-package delegates to this JS shim)
+ *
+ * Returns null on any filesystem error — auto-install downstream still
+ * works; we only lose the "prefer local" optimisation.
+ */
+async function findViaBundledDep() {
+  let here;
+  try {
+    here = dirname(fileURLToPath(import.meta.url));
+  }
+  catch {
+    return null;
+  }
+  const candidates = [
+    // .../node_modules/token-pilot/dist/ast-index → up to .../node_modules/.bin
+    resolve(here, "..", "..", "..", ".bin", BINARY_NAME),
+    // Hoisted npm layout (same but one level deeper)
+    resolve(here, "..", "..", "..", "..", ".bin", BINARY_NAME),
+    // Direct module bin script (platform-agnostic JS shim in @ast-index/cli)
+    resolve(here, "..", "..", "..", "@ast-index", "cli", "bin", BINARY_NAME),
+    resolve(here, "..", "..", "..", "..", "@ast-index", "cli", "bin", BINARY_NAME),
+  ];
+  for (const candidate of candidates) {
+    try {
+      await access(candidate);
+      return candidate;
+    }
+    catch {
+      /* try next */
+    }
+  }
+  return null;
+}
 /**
  * Find ast-index binary installed via `npm install -g @ast-index/cli`.
  * Checks the npm global prefix bin directory.
@@ -171,7 +226,7 @@ export function isNewerVersion(current, latest) {
 async function findViaNpmBin() {
   try {
     const prefix = await new Promise((resolve, reject) => {
-      execFile(
+      execFile("npm", ["config", "get", "prefix"], { timeout: 3000 }, (err, stdout) => {
         if (err)
           reject(err);
         else
@@ -179,9 +234,9 @@ async function findViaNpmBin() {
       });
     });
     // Unix: <prefix>/bin/ast-index | Windows: <prefix>\ast-index.exe or <prefix>\bin\ast-index.exe
-    const candidates = platform() ===
-      ? [resolve(prefix, BINARY_NAME), resolve(prefix,
-      : [resolve(prefix,
+    const candidates = platform() === "win32"
+      ? [resolve(prefix, BINARY_NAME), resolve(prefix, "bin", BINARY_NAME)]
+      : [resolve(prefix, "bin", BINARY_NAME)];
     for (const candidate of candidates) {
       try {
        await access(candidate);
@@ -195,30 +250,37 @@ async function findViaNpmBin() {
 }
 function getPlatform() {
   switch (platform()) {
-    case
-
-    case
-
+    case "darwin":
+      return "darwin";
+    case "linux":
+      return "linux";
+    case "win32":
+      return "windows";
+    default:
+      return null;
   }
 }
 function getArch() {
   switch (arch()) {
-    case
-
-
+    case "arm64":
+      return "arm64";
+    case "x64":
+      return "x86_64";
+    default:
+      return null;
   }
 }
 function buildAssetName(tag, plat, ar) {
-  const ext = plat ===
+  const ext = plat === "windows" ? ".zip" : ".tar.gz";
   return `ast-index-${tag}-${plat}-${ar}${ext}`;
 }
 async function findInPath() {
-  return new Promise(resolve => {
-    const cmd = platform() ===
-    execFile(cmd, [
+  return new Promise((resolve) => {
+    const cmd = platform() === "win32" ? "where" : "which";
+    execFile(cmd, ["ast-index"], (err, stdout) => {
      if (err)
        return resolve(null);
-      const path = stdout.trim().split(
+      const path = stdout.trim().split("\n")[0];
      resolve(path || null);
    });
  });
@@ -230,8 +292,8 @@ async function getBinaryVersion(binaryPath) {
   catch {
     return null;
   }
-  return new Promise(resolve => {
-    execFile(binaryPath, [
+  return new Promise((resolve) => {
+    execFile(binaryPath, ["--version"], { timeout: 5000 }, (err, stdout) => {
       if (err)
         return resolve(null);
       // Parse "ast-index v3.24.0" or "ast-index 3.24.0"
@@ -254,14 +316,14 @@ async function fetchLatestRelease() {
 function fetchJson(url) {
   return new Promise((resolve, reject) => {
     const options = {
-      headers: {
+      headers: { "User-Agent": "token-pilot" },
     };
     httpsGet(url, options, (res) => {
       // Handle redirects
       if (res.statusCode === 301 || res.statusCode === 302) {
         const location = res.headers.location;
         if (!location)
-          return reject(new Error(
+          return reject(new Error("Redirect without location"));
         fetchJson(location).then(resolve).catch(reject);
         res.resume();
         return;
@@ -270,10 +332,10 @@ function fetchJson(url) {
         res.resume();
         return reject(new Error(`HTTP ${res.statusCode} from ${url}`));
       }
-      let body =
-      res.setEncoding(
-      res.on(
-      res.on(
+      let body = "";
+      res.setEncoding("utf-8");
+      res.on("data", (chunk) => (body += chunk));
+      res.on("end", () => {
        try {
          resolve(JSON.parse(body));
        }
@@ -281,18 +343,18 @@ function fetchJson(url) {
          reject(new Error(`Invalid JSON from ${url}`));
        }
      });
-      res.on(
-    }).on(
+      res.on("error", reject);
+    }).on("error", reject);
   });
 }
 function followRedirects(url) {
   return new Promise((resolve, reject) => {
-    const getter = url.startsWith(
-    getter(url, { headers: {
+    const getter = url.startsWith("https") ? httpsGet : httpGet;
+    getter(url, { headers: { "User-Agent": "token-pilot" } }, (res) => {
      if (res.statusCode === 301 || res.statusCode === 302) {
        const location = res.headers.location;
        if (!location)
-          return reject(new Error(
+          return reject(new Error("Redirect without location"));
        res.resume();
        followRedirects(location).then(resolve).catch(reject);
        return;
@@ -302,7 +364,7 @@ function followRedirects(url) {
        return reject(new Error(`HTTP ${res.statusCode} downloading from ${url}`));
      }
      resolve(res);
-    }).on(
+    }).on("error", reject);
   });
 }
 async function downloadAndExtractTarGz(url, destDir, binaryName) {
@@ -312,10 +374,10 @@ async function downloadAndExtractTarGz(url, destDir, binaryName) {
   const chunks = [];
   res.pipe(gunzip);
   await new Promise((resolve, reject) => {
-    gunzip.on(
-    gunzip.on(
-    gunzip.on(
-    res.on(
+    gunzip.on("data", (chunk) => chunks.push(chunk));
+    gunzip.on("end", resolve);
+    gunzip.on("error", reject);
+    res.on("error", reject);
   });
   const tarData = Buffer.concat(chunks);
   await tarExtract(tarData, destDir, binaryName);
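The resolution order `findBinary` implements (config override → bundled dep → system PATH → npm global → managed install) is a first-match cascade: probe each source in order and stop at the first path that also yields a working version. A sketch of that shape — the helper, the stub resolvers, and the paths here are illustrative, not part of the package:

```javascript
// Illustrative distillation of findBinary's cascade. The real code awaits
// async probes (fs access, execFile --version); this sketch uses
// synchronous stubs so the shape stays visible. All data is hypothetical.
function firstAvailable(resolvers, getVersion) {
  for (const { source, find } of resolvers) {
    const path = find();
    if (!path) continue; // source has nothing; fall through to the next
    const version = getVersion(path);
    if (version) return { available: true, path, version, source };
  }
  return { available: false, path: "", version: null, source: "none" };
}

// Stub environment: no config override, bundled dep present.
const resolvers = [
  { source: "system", find: () => null }, // config override absent
  { source: "bundled", find: () => "/app/node_modules/.bin/ast-index" },
  { source: "npm", find: () => "/usr/local/bin/ast-index" }, // never probed
];
const getVersion = (p) => (p.includes("node_modules") ? "3.38.0" : null);

const status = firstAvailable(resolvers, getVersion);
// status.source === "bundled", status.version === "3.38.0"
```

Note the cascade skips a source whose path exists but whose version check fails, which is exactly why a broken system binary no longer blocks the bundled one.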
package/dist/handlers/read-symbols.d.ts
CHANGED
@@ -1,17 +1,17 @@
-import type { AstIndexClient } from
-import type { SymbolResolver } from
-import type { FileCache } from
-import type { ContextRegistry } from
+import type { AstIndexClient } from "../ast-index/client.js";
+import type { SymbolResolver } from "../core/symbol-resolver.js";
+import type { FileCache } from "../core/file-cache.js";
+import type { ContextRegistry } from "../core/context-registry.js";
 export interface ReadSymbolsArgs {
   path: string;
   symbols: string[];
   context_before?: number;
   context_after?: number;
-  show?:
+  show?: "full" | "head" | "tail" | "outline";
 }
 export declare function handleReadSymbols(args: ReadSymbolsArgs, projectRoot: string, symbolResolver: SymbolResolver, fileCache: FileCache, contextRegistry: ContextRegistry, astIndex?: AstIndexClient, advisoryReminders?: boolean): Promise<{
   content: Array<{
-    type:
+    type: "text";
     text: string;
   }>;
 }>;
@@ -1,7 +1,7 @@
|
|
|
1
|
-
import { readFile } from
|
|
2
|
-
import { estimateTokens } from
|
|
3
|
-
import { resolveSafePath } from
|
|
4
|
-
import { assessConfidence, formatConfidence } from
|
|
1
|
+
import { readFile } from "node:fs/promises";
|
|
2
|
+
import { estimateTokens } from "../core/token-estimator.js";
|
|
3
|
+
import { resolveSafePath } from "../core/validation.js";
|
|
4
|
+
import { assessConfidence, formatConfidence } from "../core/confidence.js";
|
|
5
5
|
export async function handleReadSymbols(args, projectRoot, symbolResolver, fileCache, contextRegistry, astIndex, advisoryReminders = true) {
|
|
6
6
|
const absPath = resolveSafePath(projectRoot, args.path);
|
|
7
7
|
// Get file content ONCE
|
|
@@ -11,16 +11,41 @@ export async function handleReadSymbols(args, projectRoot, symbolResolver, fileC
         lines = cached.lines;
     }
     else {
-        const content = await readFile(absPath,
-        lines = content.split(
+        const content = await readFile(absPath, "utf-8");
+        lines = content.split("\n");
     }
     // Get AST structure ONCE
     let structure = cached?.structure;
     if (!structure && astIndex) {
-        structure = await astIndex.outline(absPath) ?? undefined;
+        structure = (await astIndex.outline(absPath)) ?? undefined;
     }
     const N = args.symbols.length;
     const sections = [];
+    // v0.23.6 — anti-pattern guard. When the caller requests nearly every
+    // symbol in the file, the sum of bodies + N × per-symbol metadata
+    // exceeds a single raw Read. That's worse than what smart_read +
+    // read_for_edit would do. Refuse the request and tell the caller.
+    if (structure && structure.symbols && structure.symbols.length > 0) {
+        const uniqueRequested = new Set(args.symbols.map((s) => s.split(".")[0]));
+        const matchedTopLevel = structure.symbols.filter((s) => uniqueRequested.has(s.name));
+        const totalTopLevelLines = structure.symbols.reduce((sum, s) => sum + (s.location.lineCount ?? 0), 0);
+        const requestedLines = matchedTopLevel.reduce((sum, s) => sum + (s.location.lineCount ?? 0), 0);
+        if (totalTopLevelLines > 0 &&
+            requestedLines / totalTopLevelLines >= 0.7 &&
+            matchedTopLevel.length >= 3) {
+            const text = `FILE: ${args.path} | SYMBOLS: ${N} requested\n\n` +
+                `ADVISORY: You requested ${matchedTopLevel.length} symbols covering ` +
+                `≥70% of this file (${requestedLines}/${totalTopLevelLines} lines). ` +
+                `A batch read here costs more than reading the whole file once.\n\n` +
+                `Cheaper alternatives:\n` +
+                ` - smart_read("${args.path}") for a structural overview\n` +
+                ` - read_for_edit("${args.path}", "<symbol>") when you need exact edit context\n` +
+                ` - Raw Read with offset/limit for a specific range\n\n` +
+                `If you truly need every body, call read_symbols with a narrower list ` +
+                `or use raw Read (bounded).`;
+            return { content: [{ type: "text", text }] };
+        }
+    }
     // Show mode constants (same as read_symbol.ts)
     const MAX_SYMBOL_LINES = 300;
     const MAX_FULL_LINES = 500;
@@ -47,75 +72,76 @@ export async function handleReadSymbols(args, projectRoot, symbolResolver, fileC
         const loc = `[L${resolved.startLine}-${resolved.endLine}]`;
         const lineCount = resolved.endLine - resolved.startLine + 1;
         // Determine effective show mode
-        const showMode = args.show ?? (lineCount > MAX_SYMBOL_LINES ?
+        const showMode = args.show ?? (lineCount > MAX_SYMBOL_LINES ? "outline" : "full");
         let displaySource = source;
         let truncated = false;
-        if (showMode ===
+        if (showMode === "full") {
             if (lineCount > MAX_FULL_LINES) {
-                const sourceLines = source.split(
-                displaySource = sourceLines.slice(0, MAX_FULL_LINES).join(
+                const sourceLines = source.split("\n");
+                displaySource = sourceLines.slice(0, MAX_FULL_LINES).join("\n");
                 displaySource += `\n\n ... truncated at ${MAX_FULL_LINES} lines (${lineCount - MAX_FULL_LINES} more). Use show="head"/"tail" for targeted view.`;
                 truncated = true;
             }
         }
-        else if (showMode ===
-            const sourceLines = source.split(
-            displaySource = sourceLines.slice(0, HEAD).join(
+        else if (showMode === "head") {
+            const sourceLines = source.split("\n");
+            displaySource = sourceLines.slice(0, HEAD).join("\n");
             if (lineCount > HEAD) {
                 displaySource += `\n\n ... ${lineCount - HEAD} more lines. Use show="tail" or read_symbol("${args.path}", "MethodName") for specific parts.`;
                 truncated = true;
             }
         }
-        else if (showMode ===
-            const sourceLines = source.split(
-            displaySource = sourceLines.slice(-TAIL).join(
+        else if (showMode === "tail") {
+            const sourceLines = source.split("\n");
+            displaySource = sourceLines.slice(-TAIL).join("\n");
             if (lineCount > TAIL) {
-                displaySource =
+                displaySource =
+                    ` ... ${lineCount - TAIL} lines above ...\n\n` + displaySource;
                 truncated = true;
             }
         }
         else {
             // 'outline' mode: head + method list + tail
             if (lineCount > HEAD + TAIL) {
-                const sourceLines = source.split(
-                const head = sourceLines.slice(0, HEAD).join(
-                const tail = sourceLines.slice(-TAIL).join(
+                const sourceLines = source.split("\n");
+                const head = sourceLines.slice(0, HEAD).join("\n");
+                const tail = sourceLines.slice(-TAIL).join("\n");
                 const omitted = sourceLines.length - HEAD - TAIL;
-                let methodOutline =
+                let methodOutline = "";
                 if (resolved.symbol.children && resolved.symbol.children.length > 0) {
-                    const methodLines = resolved.symbol.children.map(c => {
+                    const methodLines = resolved.symbol.children.map((c) => {
                         const mLoc = `[L${c.location.startLine}-${c.location.endLine}]`;
-                        return ` ${c.visibility ===
+                        return ` ${c.visibility === "private" ? "🔒 " : ""}${c.name}${c.kind === "method" || c.kind === "function" ? "()" : ""} ${mLoc} (${c.location.lineCount} lines)`;
                     });
-                    methodOutline = `\nMETHODS (${resolved.symbol.children.length}):\n${methodLines.join(
+                    methodOutline = `\nMETHODS (${resolved.symbol.children.length}):\n${methodLines.join("\n")}\n`;
                 }
                 displaySource = [
                     head,
-
+                    "",
                     ` ... ${omitted} lines omitted — use read_symbol("${args.path}", "MethodName") to read specific methods ...`,
                     methodOutline,
                     tail,
-                ].join(
+                ].join("\n");
                 truncated = true;
             }
         }
         if (truncated)
             anyTruncated = true;
         const symbolLines = [
-            `SYMBOL ${idx}/${N}: ${symbolName} (${resolved.symbol.kind}) ${loc} (${lineCount} lines${truncated ? `, show=${showMode}` :
-
+            `SYMBOL ${idx}/${N}: ${symbolName} (${resolved.symbol.kind}) ${loc} (${lineCount} lines${truncated ? `, show=${showMode}` : ""})`,
+            "",
             displaySource,
         ];
         if (resolved.symbol.references.length > 0) {
-            symbolLines.push(
-            symbolLines.push(`REFERENCES: ${resolved.symbol.references.join(
+            symbolLines.push("");
+            symbolLines.push(`REFERENCES: ${resolved.symbol.references.join(", ")}`);
         }
-        sections.push(symbolLines.join(
+        sections.push(symbolLines.join("\n"));
         // Track each symbol
-        const sectionTokens = estimateTokens(symbolLines.join(
+        const sectionTokens = estimateTokens(symbolLines.join("\n"));
         totalTokens += sectionTokens;
         contextRegistry.trackLoad(absPath, {
-            type:
+            type: "symbol",
             symbolName,
             startLine: resolved.startLine,
             endLine: resolved.endLine,
@@ -126,9 +152,9 @@ export async function handleReadSymbols(args, projectRoot, symbolResolver, fileC
         contextRegistry.setContentHash(absPath, cached.hash);
     }
     const header = `FILE: ${args.path} | SYMBOLS: ${N} requested`;
-    const body = sections.join(
-    const footer =
-    const output = [header,
+    const body = sections.join("\n\n---\n\n");
+    const footer = "CONTEXT TRACKED: These symbols are now in your context.";
+    const output = [header, "", body, "", footer].join("\n");
     // Confidence metadata (aggregate)
     const confidenceMeta = assessConfidence({
         symbolResolved: anyResolved,
@@ -137,6 +163,10 @@ export async function handleReadSymbols(args, projectRoot, symbolResolver, fileC
         hasCallers: false,
         astAvailable: !!structure,
     });
-    return {
+    return {
+        content: [
+            { type: "text", text: output + formatConfidence(confidenceMeta) },
+        ],
+    };
 }
 //# sourceMappingURL=read-symbols.js.map
package/dist/server/tool-definitions.js
CHANGED
@@ -129,7 +129,7 @@ export const TOOL_DEFINITIONS = [
     },
     {
         name: "read_symbols",
-        description: "Batch read MULTIPLE symbols from ONE file
+        description: "Batch read MULTIPLE symbols from ONE file — saves N-1 round-trips vs calling read_symbol N times. BEST FIT: 3–8 symbols in one file when you need their bodies. For 1–2 symbols use read_symbol (simpler). If you'd request ≥70% of the file's symbols, the handler refuses and points you to smart_read — that's cheaper than a large batch. For edit preparation use read_for_edit.",
        inputSchema: {
            type: "object",
            properties: {
package/dist/server.js
CHANGED
@@ -368,12 +368,18 @@ export async function createServer(projectRoot, options) {
     const rsResult = await handleReadSymbols(rsArgs, projectRoot, symbolResolver, fileCache, contextRegistry, astIndex, config.smartRead.advisoryReminders);
     const rsText = rsResult.content[0]?.text ?? "";
     const rsTokens = estimateTokens(rsText);
-
+    // v0.23.6 — baseline is "N individual read_symbol calls", not
+    // "one raw Read of the whole file". read_symbols replaces the
+    // former, not the latter. Each read_symbol call carries its own
+    // header/confidence overhead (~60 tokens); we dedupe that into
+    // one shared file header, so batch saves roughly N-1 headers.
+    const perSymbolOverhead = 60;
+    const baselineRs = rsTokens + (rsArgs.symbols.length - 1) * perSymbolOverhead;
     recordWithTrace({
         tool: "read_symbols",
         path: rsArgs.path,
         tokensReturned: rsTokens,
-        tokensWouldBe:
+        tokensWouldBe: baselineRs,
         timestamp: Date.now(),
         savingsCategory: "compression",
         absPath: resolve(projectRoot, rsArgs.path),
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "token-pilot",
-  "version": "0.23.
+  "version": "0.23.7",
   "description": "Save up to 80% tokens when AI reads code — MCP server for token-efficient code navigation, AST-aware structural reading instead of dumping full files into context window",
   "type": "module",
   "main": "dist/index.js",
@@ -11,6 +11,7 @@
     "dist/**/*.js",
     "dist/**/*.d.ts",
     "dist/agents/*.md",
+    "scripts/postinstall.mjs",
     "start.sh",
     ".claude-plugin/",
     ".mcp.json",
@@ -27,6 +28,7 @@
     "test:coverage": "vitest run --coverage",
     "test:watch": "vitest",
     "bench:hook": "node scripts/bench-hook.mjs",
+    "postinstall": "node scripts/postinstall.mjs",
     "lint": "tsc --noEmit",
     "prepublishOnly": "npm run build && node --input-type=module -e \"import { chmod } from 'node:fs/promises'; await chmod('dist/index.js', 0o755);\""
   },
@@ -64,6 +66,7 @@
   "license": "MIT",
   "dependencies": {
     "@modelcontextprotocol/sdk": "^1.12.0",
+    "@ast-index/cli": "^3.38.0",
     "chokidar": "^4.0.3"
   },
   "devDependencies": {
@@ -75,14 +78,6 @@
   "engines": {
     "node": ">=18.0.0"
   },
-  "peerDependencies": {
-    "ast-index": ">=0.1.0"
-  },
-  "peerDependenciesMeta": {
-    "ast-index": {
-      "optional": true
-    }
-  },
   "optionalDependencies": {
     "@ast-grep/cli": "^0.41.0"
   }
package/scripts/postinstall.mjs
ADDED
@@ -0,0 +1,78 @@
+#!/usr/bin/env node
+/**
+ * postinstall — verify ast-index is usable, fall back to GitHub download.
+ *
+ * Runs after `npm install token-pilot` completes. npm has already pulled
+ * @ast-index/cli + the platform-specific sub-package as a transitive dep;
+ * this script only fires the GitHub fallback when that standard path
+ * didn't land a usable binary (exotic arch, corporate proxy, etc.).
+ *
+ * NEVER fails npm install. On any error we print a single warning and
+ * exit 0 — the `doctor` CLI still tells the user how to recover.
+ */
+
+import { access, constants } from "node:fs/promises";
+import { dirname, resolve } from "node:path";
+import { fileURLToPath } from "node:url";
+
+const here = dirname(fileURLToPath(import.meta.url));
+const pkgRoot = resolve(here, "..");
+
+// Opt-out hatch for CI, sandbox builds, etc.
+if (
+    process.env.TOKEN_PILOT_SKIP_POSTINSTALL === "1" ||
+    process.env.CI === "true"
+) {
+    process.exit(0);
+}
+
+// dist/ must exist or auto-install falls through — for fresh source
+// installs (cloned repo, pre-build), this is a no-op.
+const binaryManagerPath = resolve(
+    pkgRoot,
+    "dist",
+    "ast-index",
+    "binary-manager.js",
+);
+
+try {
+    await access(binaryManagerPath, constants.R_OK);
+} catch {
+    // Source checkout without dist/ — nothing for us to do.
+    process.exit(0);
+}
+
+let BM;
+try {
+    BM = await import(binaryManagerPath);
+} catch {
+    process.exit(0);
+}
+
+try {
+    const status = await BM.findBinary(null);
+    if (status && status.available) {
+        // Already good — the npm resolver handled everything.
+        process.exit(0);
+    }
+} catch {
+    /* fall through to install attempt */
+}
+
+// Try the explicit install path — logs to stderr, exit 0 regardless.
+try {
+    await BM.installBinary((msg) => {
+        process.stderr.write(`[token-pilot postinstall] ${msg}\n`);
+    });
+    process.stderr.write("[token-pilot postinstall] ast-index ready.\n");
+} catch (err) {
+    const msg = err && err.message ? err.message : String(err);
+    process.stderr.write(
+        `[token-pilot postinstall] Could not auto-install ast-index (${msg}). ` +
+        "Run \`npx token-pilot install-ast-index\` manually when ready; " +
+        "token-pilot will still start but some tools degrade until the " +
+        "binary is available.\n",
+    );
+}
+
+process.exit(0);