token-pilot 0.28.2 → 0.29.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +2 -2
- package/.claude-plugin/plugin.json +1 -1
- package/CHANGELOG.md +67 -0
- package/agents/tp-api-surface-tracker.md +4 -2
- package/agents/tp-audit-scanner.md +4 -2
- package/agents/tp-commit-writer.md +4 -2
- package/agents/tp-context-engineer.md +4 -2
- package/agents/tp-dead-code-finder.md +4 -2
- package/agents/tp-debugger.md +4 -2
- package/agents/tp-dep-health.md +4 -2
- package/agents/tp-doc-writer.md +4 -2
- package/agents/tp-history-explorer.md +4 -2
- package/agents/tp-impact-analyzer.md +4 -2
- package/agents/tp-incident-timeline.md +4 -2
- package/agents/tp-incremental-builder.md +4 -2
- package/agents/tp-migration-scout.md +4 -2
- package/agents/tp-onboard.md +4 -2
- package/agents/tp-performance-profiler.md +4 -2
- package/agents/tp-pr-reviewer.md +4 -2
- package/agents/tp-refactor-planner.md +4 -2
- package/agents/tp-review-impact.md +4 -2
- package/agents/tp-run.md +4 -2
- package/agents/tp-session-restorer.md +4 -2
- package/agents/tp-ship-coordinator.md +4 -2
- package/agents/tp-spec-writer.md +4 -2
- package/agents/tp-test-coverage-gapper.md +4 -2
- package/agents/tp-test-triage.md +4 -2
- package/agents/tp-test-writer.md +4 -2
- package/dist/handlers/explore-area.d.ts +3 -3
- package/dist/handlers/explore-area.js +86 -62
- package/dist/hooks/pre-bash.d.ts +11 -0
- package/dist/hooks/pre-bash.js +53 -0
- package/dist/server/tool-definitions.js +2 -2
- package/package.json +1 -1
package/.claude-plugin/marketplace.json
CHANGED

@@ -6,14 +6,14 @@
   },
   "metadata": {
     "description": "Token Pilot \u2014 save 60-90% tokens when AI reads code",
-    "version": "0.
+    "version": "0.29.0"
   },
   "plugins": [
     {
       "name": "token-pilot",
       "source": "./",
       "description": "Reduces token consumption by 60-90% via AST-aware lazy file reading, structural symbol navigation, and cross-session tool-usage analytics. 22 MCP tools + 19 subagents + budget watchdog hooks.",
-      "version": "0.
+      "version": "0.29.0",
       "author": {
         "name": "Digital-Threads"
       },
package/.claude-plugin/plugin.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "token-pilot",
-  "version": "0.
+  "version": "0.29.0",
   "description": "Saves 60-90% tokens when AI reads code. AST-aware lazy reading, symbol navigation, cross-session tool-usage analytics, 22 subagents (haiku/sonnet/opus-tiered) with budget watchdog.",
   "author": {
     "name": "Digital-Threads",
package/CHANGELOG.md
CHANGED
@@ -5,6 +5,73 @@ All notable changes to Token Pilot will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+## [0.29.0] - 2026-04-19
+
+Consolidation release based on the Sonnet 4.6 + Opus 4.7 verification findings. Closes the short-tail issues that came out of the two live runs before the weekly-quota window reopens.
+
+### Added — context-mode partnership in shared preamble
+
+Both verification runs showed the same asymmetry: `token-pilot` saves on delegated (subagent) code reads; `context-mode` saves on main-thread Bash/command execution. Opus 4.7 literally wrote: "For all the remaining work I used `ctx_batch_execute` instead of raw Bash — that is context-mode adoption, not token-pilot". That's the right behaviour — we shouldn't fight it, we should formalise it.
+
+All 25 tp-* agents now carry an instruction in the shared preamble: *for heavy Bash (tests, builds, recursive searches, network calls), prefer `mcp__context-mode__execute` / `ctx_batch_execute` when available — it runs in a sandbox, and only the result enters context (95% reduction vs raw stdout)*. This is complementary, not redundant: token-pilot owns code reading, context-mode owns command execution.
+
+### Fixed — composite Bash escape patterns (from the Opus 4.7 v0.28.2 report)
+
+Opus's verification noted that quoted / wrapped heavy commands slipped past our `PreToolUse:Bash` hook:
+
+- `bash -c "cat src/foo.ts"` → slipped
+- `sh -c "grep -r foo ."` → slipped
+- `eval "cat src/foo.ts"` → slipped
+- `for f in *.ts; do cat $f; done` → slipped
+- `while read f; do git log; done` → slipped
+
+Added `extractWrappedCommands()` in `src/hooks/pre-bash.ts` — it unwraps `bash/sh/zsh -c "..."`, `eval "..."`, and `for/while/until ... do BODY done` — and re-runs the heavy-pattern check on each inner body. The first deny wins. Adds 7 regression tests covering both deny (heavy inside a wrapper) and allow (benign inside a wrapper — `bash -c "ls"`, `eval "echo hello"`).
+
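The unwrapping step is easy to picture with a short sketch. This is a hypothetical minimal version, not the shipped `src/hooks/pre-bash.ts` — the exported name `extractWrappedCommands` comes from the changelog entry above; the regexes and return shape are assumptions:

```typescript
// Minimal sketch: pull inner command bodies out of common shell wrappers so a
// heavy-pattern check can be re-run on each one. Assumption: single-level
// wrapping with simple quoting, which covers the patterns listed above.
export function extractWrappedCommands(command: string): string[] {
  const bodies: string[] = [];

  const wrappers = [
    /\b(?:bash|sh|zsh)\s+-c\s+(["'])([\s\S]*?)\1/g, // bash -c "...", sh -c '...'
    /\beval\s+(["'])([\s\S]*?)\1/g,                 // eval "..."
  ];
  for (const re of wrappers) {
    let m: RegExpExecArray | null;
    while ((m = re.exec(command)) !== null) bodies.push(m[2]);
  }

  // for/while/until ... do BODY done — the loop body is the part to re-check
  const loop = /\b(?:for|while|until)\b[\s\S]*?\bdo\b([\s\S]*?)\bdone\b/g;
  let m: RegExpExecArray | null;
  while ((m = loop.exec(command)) !== null) bodies.push(m[1].trim());

  return bodies;
}
```

A real implementation would need a small shell tokenizer for nested quoting and escapes, which is exactly why the 0.28.3 notes below flagged this as more than a same-day patch.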
+### Changed — honest tool descriptions for weak performers
+
+- `smart_log` description now carries a heads-up: "two verification runs measured this tool at ~39% token reduction (borderline). Cumulative data is being gathered — the tool may be dropped or redesigned in v0.30.0 if the numbers don't improve". The description already advised scoping with `path` or `count`; kept.
+- `session_budget` re-framed as **META / info-only** — it doesn't save tokens itself, it's purely diagnostic. This matches the META_TOOLS grouping in profiles (shipped in v0.28.1) and stops users thinking it's an optimisation tool.
+
+### Changed — composed-agent line budget 60 → 65
+
+The shared preamble now carries the context-mode paragraph — 3 extra lines flow into every composed agent file. Three agents (tp-context-engineer, tp-dead-code-finder, tp-doc-writer) ticked over the 60-line cap by 1-3 lines. Raised the hard limit to 65 to accommodate the new content without trimming per-agent instructions. The 25 agents currently sit in the 38-63 line range.
+
+### Deferred to v0.30.0
+
+- **Stop-hook output watchdog** — cap main-thread response size. Needs an experiment against the Claude Code API first; too much new surface for a same-day patch.
+- **Automatic MCP response buffer** — intercept 3rd-party MCP (GitHub / Jira / Slack) responses via `updatedMCPToolOutput`. The biggest potential lever in the ecosystem, but a full feature, not a patch.
+- **`smart_log` final decision** — keep, redesign, or drop based on cumulative `tool-audit` data after a week of use.
+- **`explore_area` self-sizing** — v0.28.3 tightened the caps (20/500 → 10/200); the next step is to compare the predicted output to the `estimateExploreAreaWorkflowTokens` baseline and trim when it is exceeded.
+
+1026 tests passing (+7 new on composite Bash escape).
+
+## [0.28.3] - 2026-04-19
+
+### Fixed — `explore_area` output size (was −31% savings)
+
+Two independent live verification runs — Sonnet 4.6 on v0.28.1 and Opus 4.7 on v0.28.2, both on `docker-local-env` — measured `explore_area` at exactly **−31% savings**: 5,722 tokens returned against a 4,360-token baseline of reading the scanned files raw. That's the opposite of the tool's stated purpose. Root cause: imports analysis + tests listing + git-log tail accumulated on top of the directory outline, pushing the response above what the individual file reads would have cost.
+
+Tightened two caps in `src/handlers/explore-area.ts`:
+
+| Constant | Before | After | Effect |
+|---|---:|---:|---|
+| `MAX_IMPORT_FILES` | 20 | **10** | imports panel scans half as many files |
+| `MAX_OUTPUT_LINES` | 500 | **200** | global response cap drops 60% |
+
+The structural overview survives; the tail (detailed per-file imports past the top 10, git log beyond the first screen) drops. A per-call smoke test in the dev harness lands around +40–60% savings, matching what the tool was supposed to deliver.
+
+Self-sizing (compare the predicted output against the `estimateExploreAreaWorkflowTokens` baseline and trim if exceeded) is deferred to v0.29.0 — it needs handler + server coordination.
+
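The −31% figure and the new caps are simple to reproduce. A sketch under stated assumptions — the two constant names and their values come from the table above; `savings` and `capOutput` are illustrative helpers, not the real handler API:

```typescript
// Constants from the changelog table (illustrative module, not the shipped handler).
export const MAX_IMPORT_FILES = 10;  // was 20
export const MAX_OUTPUT_LINES = 200; // was 500

// Savings metric used in the verification runs:
// savings = 1 - (tokens the tool returned / tokens of reading the same files raw).
// Negative means the tool was MORE expensive than just reading the files.
export function savings(toolTokens: number, baselineTokens: number): number {
  return 1 - toolTokens / baselineTokens;
}

// Global response cap: keep the head (the structural overview), drop the tail.
export function capOutput(lines: string[]): string[] {
  return lines.slice(0, MAX_OUTPUT_LINES);
}
```

5,722 tokens against a 4,360-token baseline gives `savings(5722, 4360) ≈ -0.31`, i.e. the −31% both runs measured.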
+### Noted for v0.29.0 (not this release)
+
+Composite Bash escape in the `PreToolUse:Bash` hook:
+
+- `;` `&&` `||` `|` + newline separators → detected correctly (verified)
+- `bash -c "cat src/foo.ts"`, `eval "..."`, `for f in *.ts; do cat $f; done` → slip through (quoted / wrapped commands are not lexed)
+
+Not shipping today because all three escape patterns require advanced shell knowledge and are rare in agent-generated commands. Opus 4.7's v0.28.2 verification confirmed 5/6 TP-blocked on realistic patterns. Fixing `bash -c` properly needs a small shell tokenizer; worth a focused design pass, not a same-day patch.
+
+1019 tests still passing.
+
 ## [0.28.2] - 2026-04-19

 ### Fixed — plugin hooks were never actually reaching Claude Code
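The "first deny wins" re-check from the 0.29.0 notes composes naturally with unwrapping: run the heavy-pattern check on the outer command, then on every inner body. A hypothetical sketch — the pattern list and every name here are illustrative, not the shipped hook:

```typescript
// Hypothetical hook decision flow. The verdict shape and patterns are
// assumptions for illustration; only the "first deny wins" rule is from
// the changelog.
type Verdict = { decision: "allow" } | { decision: "deny"; reason: string };

const HEAVY_PATTERNS: RegExp[] = [
  /\bcat\s+\S+\.(ts|js|py|go|rs)\b/, // raw source-file dump
  /\bgrep\s+-r\b/,                   // recursive search
  /\bgit\s+log\b/,                   // unbounded history
];

function checkHeavy(cmd: string): Verdict {
  for (const p of HEAVY_PATTERNS) {
    if (p.test(cmd)) return { decision: "deny", reason: `heavy pattern: ${p}` };
  }
  return { decision: "allow" };
}

// Check the outer command and every unwrapped inner body; first deny wins.
export function preBashVerdict(command: string, innerBodies: string[]): Verdict {
  for (const cmd of [command, ...innerBodies]) {
    const v = checkHeavy(cmd);
    if (v.decision === "deny") return v;
  }
  return { decision: "allow" };
}
```

This shape keeps benign wrappers like `bash -c "ls"` allowed while catching the wrapped heavy reads listed in the 0.29.0 fix.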
package/agents/tp-api-surface-tracker.md
CHANGED

@@ -9,8 +9,8 @@ tools:
 - mcp__token-pilot__read_symbol
 - Bash
 model: haiku
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: c9d33476fdf70c8a7a493ec8720f54792eda2f81585996246e94c130ff3ec356
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -19,6 +19,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: public-API diff with semver classification.
package/agents/tp-audit-scanner.md
CHANGED

@@ -11,8 +11,8 @@ tools:
 - Grep
 - Read
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 7095ffab66aca2e424f00875933e3f63bc10651eef2fde6a59f08bbbdbf86f7c
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -21,6 +21,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: audit scanner — surfaces risks, never fixes.
package/agents/tp-commit-writer.md
CHANGED

@@ -8,8 +8,8 @@ tools:
 - mcp__token-pilot__test_summary
 - mcp__token-pilot__outline
 - Bash
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: b6831f11c61a9b255c2b6ffa04837130242fd02843463a7d30f109c1a06b3e3f
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -18,6 +18,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: commit-message authoring.
package/agents/tp-context-engineer.md
CHANGED

@@ -13,8 +13,8 @@ tools:
 - Edit
 - Glob
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 43f9364ce722ff76daf0f8720ddaf9f77e18d4c4ed8bee3e15f12d207798e778
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -23,6 +23,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: curate what AI agents see so output quality stays high.
package/agents/tp-dead-code-finder.md
CHANGED

@@ -11,8 +11,8 @@ tools:
 - Grep
 - Read
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 386760aed26df6c3595d3267954605565fad08afa8761e016079ae60c19887a8
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -21,6 +21,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: safe dead-code detection.
package/agents/tp-debugger.md
CHANGED

@@ -12,8 +12,8 @@ tools:
 - Read
 - Bash
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 71738830d025e86c70988e046a2f7f30b4590f3d284291a18609ed5fdd732321
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -22,6 +22,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: bug diagnosis via systematic triage.
package/agents/tp-dep-health.md
CHANGED

@@ -9,8 +9,8 @@ tools:
 - Bash
 - Read
 model: haiku
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 12634cd28889d0a0ef1b4a6b994ba978353e14f3cb349011c393076e7e2b5c96
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -19,6 +19,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: dependency health audit.
package/agents/tp-doc-writer.md
CHANGED

@@ -13,8 +13,8 @@ tools:
 - Edit
 - Glob
 model: haiku
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 8e29d07dd8f58adeb9530ec477a59a6e42de6c624f322d2c6cfa8da66456b46a
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -23,6 +23,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: documentation author — decisions, ADRs, READMEs, API docs.
package/agents/tp-history-explorer.md
CHANGED

@@ -10,8 +10,8 @@ tools:
 - Bash
 - Read
 model: haiku
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 260197bc31531352f5eda3b70cf114c7c57bb7e9373f68ca76161dd68a804b0d
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -20,6 +20,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: git-history archaeology — why, when, by whom.
package/agents/tp-impact-analyzer.md
CHANGED

@@ -12,8 +12,8 @@ tools:
 - mcp__token-pilot__read_symbols
 - Read
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 1da6936cc117a7627640fae3cc85bf13a17f0b0b0d0d533423dfb4b7c0b4b1c2
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -22,6 +22,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: impact analysis.
package/agents/tp-incident-timeline.md
CHANGED

@@ -8,8 +8,8 @@ tools:
 - mcp__token-pilot__read_symbol
 - Bash
 model: inherit
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 213746bab7acb6730a6edb16e1ff7b2c56572c3adf4f94990799f1c168cfa2ad
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -18,6 +18,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: incident post-mortem timeline builder.
package/agents/tp-incremental-builder.md
CHANGED

@@ -13,8 +13,8 @@ tools:
 - Edit
 - Bash
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 14c9adcabfb772c77a467a5fbfa682abbd5adc87e22d7fbe5d1329ffd790dde5
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -23,6 +23,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: incremental feature implementation with slice-by-slice discipline.
package/agents/tp-migration-scout.md
CHANGED

@@ -11,8 +11,8 @@ tools:
 - Grep
 - Glob
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 62893e448e943d0e1b928a670823ec3e152de395e487564862f145bd82161fcb
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -21,6 +21,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: migration impact mapping.
package/agents/tp-onboard.md
CHANGED

@@ -10,8 +10,8 @@ tools:
 - mcp__token-pilot__smart_read
 - mcp__token-pilot__smart_read_many
 - mcp__token-pilot__read_section
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 4e82f7b3c6446663e958fb6bf5eb5348bbdf33389269c888ce0dab766e50561f
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -20,6 +20,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: repository onboarding.
package/agents/tp-performance-profiler.md
CHANGED

@@ -11,8 +11,8 @@ tools:
 - Bash
 - Read
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 8b9f454a47e57e3761668de788850ef97d5d6f127b059cf8e0cef03deaca3f98
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -21,6 +21,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: performance diagnosis and targeted optimization.
package/agents/tp-pr-reviewer.md
CHANGED

@@ -11,8 +11,8 @@ tools:
 - mcp__token-pilot__read_for_edit
 - Read
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 91003b244472c4e65d840b55474a86ce04fba379859d588cc0fa54850b0e1e4f
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -21,6 +21,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: PR / diff review across five axes.
package/agents/tp-refactor-planner.md
CHANGED

@@ -8,8 +8,8 @@ tools:
 - mcp__token-pilot__outline
 - mcp__token-pilot__read_symbol
 model: sonnet
-token_pilot_version: "0.
-token_pilot_body_hash:
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 45f972c6b36929491a529322bac3c34fd44872f7be4a974d25c7e27cb12e9dc3
 ---

 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -18,6 +18,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools

 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.

+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.

 Role: refactor planning with behaviour-preservation discipline.
|
|
package/agents/tp-review-impact.md
CHANGED

@@ -9,8 +9,8 @@ tools:
   - mcp__token-pilot__module_info
   - Bash
 model: sonnet
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 3c1c66f952ac63a5936bec86fefda8c842fb9713bca81e48ca5bb568ccb5f367
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -19,6 +19,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: pre-merge blast-radius review.
package/agents/tp-run.md
CHANGED

@@ -16,8 +16,8 @@ tools:
   - Glob
   - Bash
 model: haiku
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: de342efe1e3ee265df1773ebde1241555750ab17de249190a5c1c200f1f8f51a
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -26,6 +26,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: general-purpose token-pilot workhorse.
package/agents/tp-session-restorer.md
CHANGED

@@ -9,8 +9,8 @@ tools:
   - mcp__token-pilot__session_budget
   - Bash
   - Read
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: d031f30e9cc4ea454aa256427659ed27249d820b75dc8b9b99c81ba7635230a7
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -19,6 +19,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: session-state rehydration.
package/agents/tp-ship-coordinator.md
CHANGED

@@ -11,8 +11,8 @@ tools:
   - Read
   - Grep
 model: sonnet
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 6b1c27b3dc4fad622cebff7c49e079fc764ca0ae57ef5bc4e61b563d8321092d
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -21,6 +21,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: pre-production readiness coordinator.
package/agents/tp-spec-writer.md
CHANGED

@@ -9,8 +9,8 @@ tools:
   - Read
   - Write
 model: sonnet
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 4ae44482db80a8a3a43794c6ecb665ec0b5385a274e1e5b2e3a404956075be88
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -19,6 +19,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: pre-code specification author.
package/agents/tp-test-coverage-gapper.md
CHANGED

@@ -10,8 +10,8 @@ tools:
   - mcp__token-pilot__test_summary
   - Glob
   - Grep
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 6d862d1bcaeda3fb13099f51e40faaaf45d16d7d41d1b938609500192aa606f2
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -20,6 +20,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: test coverage gap finder.
package/agents/tp-test-triage.md
CHANGED

@@ -8,8 +8,8 @@ tools:
   - mcp__token-pilot__find_usages
   - mcp__token-pilot__read_symbol
 model: sonnet
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: f4e0dcbd2b4e8648efcafc9d53101a66bf394d7c90e97df7581ac47fcfbff5cb
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -18,6 +18,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: test-failure triage.
package/agents/tp-test-writer.md
CHANGED

@@ -13,8 +13,8 @@ tools:
   - Edit
   - Bash
 model: sonnet
-token_pilot_version: "0.28.2"
-token_pilot_body_hash: …
+token_pilot_version: "0.29.0"
+token_pilot_body_hash: 960fe9e907e9c7d13b14dcc22af99e8cc7e7335f99791fa808df76ac21e1f5e9
 ---
 
 You are a token-pilot agent (`tp-<name>`). Your defining contract:

@@ -23,6 +23,8 @@ For every file in a programming language, you MUST use the token-pilot MCP tools
 
 If any MCP tool fails, fall back sensibly (another MCP tool → bounded Read → pass-through) and note the fallback in your output. Never silently abandon the contract.
 
+For heavy Bash operations (test runs, builds, recursive searches, network calls, any command with potentially large stdout): when `mcp__context-mode__execute` or `ctx_batch_execute` is available, use it instead of raw Bash. Context-mode runs commands in a sandbox and only the result enters your context — typically 95% token reduction vs raw stdout dump. This is complementary to token-pilot: we own code reading, context-mode owns command execution.
+
 Your specific role is defined below.
 
 Role: targeted test authoring with TDD discipline.
package/dist/handlers/explore-area.d.ts
CHANGED

@@ -1,5 +1,5 @@
-import type { AstIndexClient } from …
-import type { ExploreAreaArgs } from …
+import type { AstIndexClient } from "../ast-index/client.js";
+import type { ExploreAreaArgs } from "../core/validation.js";
 export interface ExploreAreaMeta {
     dir: string;
     codeFiles: string[];

@@ -11,7 +11,7 @@ export interface ExploreAreaMeta {
 }
 export declare function handleExploreArea(args: ExploreAreaArgs, projectRoot: string, astIndex: AstIndexClient): Promise<{
     content: Array<{
-        type: …
+        type: "text";
         text: string;
     }>;
     meta: ExploreAreaMeta;
package/dist/handlers/explore-area.js
CHANGED

@@ -1,15 +1,22 @@
-import { execFile } from …
-import { promisify } from …
-import { readdir, stat } from …
-import { resolve, relative, basename, dirname } from …
-import { resolveSafePath } from …
-import { outlineDir, CODE_EXTENSIONS } from …
+import { execFile } from "node:child_process";
+import { promisify } from "node:util";
+import { readdir, stat } from "node:fs/promises";
+import { resolve, relative, basename, dirname } from "node:path";
+import { resolveSafePath } from "../core/validation.js";
+import { outlineDir, CODE_EXTENSIONS } from "./outline.js";
 const execFileAsync = promisify(execFile);
 // ──────────────────────────────────────────────
 // Constants
 // ──────────────────────────────────────────────
-const MAX_IMPORT_FILES = 20;
-const MAX_OUTPUT_LINES = 500;
+// v0.28.3 — tightened from 20/500. Two independent verification runs
+// (Sonnet 4.6 + Opus 4.7 on docker-local-env) measured explore_area at
+// -31% savings — output was larger than reading scanned files raw.
+// Root cause: imports + tests + git log accumulated on top of the
+// directory outline. Halving both caps keeps the structural overview
+// while dropping the tail nobody actually reads. Self-sizing (compare
+// against baseline and trim if exceeded) deferred to v0.29.0.
+const MAX_IMPORT_FILES = 10;
+const MAX_OUTPUT_LINES = 200;
 // ──────────────────────────────────────────────
 // Handler
 // ──────────────────────────────────────────────

@@ -19,7 +26,7 @@ export async function handleExploreArea(args, projectRoot, astIndex) {
     const pathStat = await stat(absPath).catch(() => null);
     if (!pathStat) {
         return {
-            content: [{ type: …
+            content: [{ type: "text", text: `Path "${args.path}" not found.` }],
             meta: {
                 dir: args.path,
                 codeFiles: [],

@@ -34,28 +41,36 @@ export async function handleExploreArea(args, projectRoot, astIndex) {
     if (!pathStat.isDirectory()) {
         absPath = dirname(absPath);
     }
-    const relDir = relative(projectRoot, absPath) || …
-    const include = args.include ?? [ …
+    const relDir = relative(projectRoot, absPath) || ".";
+    const include = args.include ?? ["outline", "imports", "tests", "changes"];
     // Collect code files for import/test analysis
     const codeFiles = await listCodeFiles(absPath);
     // Run all sections in parallel
     const [outlineSection, importsSection, testsSection, changesSection] = await Promise.allSettled([
-        include.includes( …
-        …
-        …
-        include.includes( …
+        include.includes("outline")
+            ? buildOutlineSection(absPath, projectRoot, astIndex)
+            : Promise.resolve(null),
+        include.includes("imports")
+            ? buildImportsSection(codeFiles, absPath, projectRoot, astIndex)
+            : Promise.resolve(null),
+        include.includes("tests")
+            ? buildTestsSection(codeFiles, absPath, projectRoot)
+            : Promise.resolve(null),
+        include.includes("changes")
+            ? buildChangesSection(relDir, projectRoot)
+            : Promise.resolve(null),
     ]);
     // Assemble output
     const lines = [];
     const subdirCount = await countSubdirs(absPath);
-    lines.push(`AREA: ${relDir}/ (${codeFiles.length} code files${subdirCount > 0 ? `, ${subdirCount} subdirs` : …
-    lines.push( …
+    lines.push(`AREA: ${relDir}/ (${codeFiles.length} code files${subdirCount > 0 ? `, ${subdirCount} subdirs` : ""})`);
+    lines.push("");
     // Outline
     const outlineLines = extractResult(outlineSection);
     if (outlineLines) {
-        lines.push( …
+        lines.push("STRUCTURE:");
         lines.push(...outlineLines);
-        lines.push( …
+        lines.push("");
     }
     // Imports
     const importLines = extractResult(importsSection)?.lines ?? null;

@@ -75,11 +90,11 @@ export async function handleExploreArea(args, projectRoot, astIndex) {
     // Truncate if needed
     if (lines.length > MAX_OUTPUT_LINES) {
         lines.length = MAX_OUTPUT_LINES;
-        lines.push( …
+        lines.push("... truncated. Use outline() on specific subdirectories for details.");
     }
-    lines.push( …
+    lines.push("HINT: Use smart_read(file) for details, read_symbol(path, symbol) for source code, find_usages(symbol) for references.");
     return {
-        content: [{ type: …
+        content: [{ type: "text", text: lines.join("\n") }],
         meta: {
             dir: relDir,
             codeFiles: codeFiles.map((file) => relative(projectRoot, file)).sort(),

@@ -103,43 +118,47 @@ async function buildOutlineSection(absPath, projectRoot, astIndex) {
 // Imports section — aggregate external deps + who imports this area
 // ──────────────────────────────────────────────
 async function buildImportsSection(codeFiles, absPath, projectRoot, astIndex) {
-    if (!astIndex.isAvailable() || …
+    if (!astIndex.isAvailable() ||
+        astIndex.isDisabled() ||
+        astIndex.isOversized()) {
         return { lines: [], internalDeps: [], importedBy: [], externalDeps: [] };
     }
     const filesToAnalyze = codeFiles.slice(0, MAX_IMPORT_FILES);
     const externalDeps = new Set();
     const internalDeps = new Set();
-    const relDir = relative(projectRoot, absPath) || …
+    const relDir = relative(projectRoot, absPath) || ".";
     // Get imports for each file
-    const importResults = await Promise.allSettled(filesToAnalyze.map(f => astIndex.fileImports(f)));
+    const importResults = await Promise.allSettled(filesToAnalyze.map((f) => astIndex.fileImports(f)));
     for (const result of importResults) {
-        if (result.status !== …
+        if (result.status !== "fulfilled" || !result.value)
             continue;
         for (const imp of result.value) {
            const source = imp.source;
            if (!source)
                continue;
-            if (source.startsWith( …
+            if (source.startsWith(".") || source.startsWith("/")) {
                // Internal import — track if it's outside this area
                const resolved = resolve(absPath, source);
-                if (!resolved.startsWith(absPath + …
-                const relImport = relative(projectRoot, resolved).replace(/\.[^.]+$/, …
+                if (!resolved.startsWith(absPath + "/") && resolved !== absPath) {
+                    const relImport = relative(projectRoot, resolved).replace(/\.[^.]+$/, "");
                    internalDeps.add(relImport);
                }
            }
            else {
                // External package
-                const pkg = source.startsWith( …
+                const pkg = source.startsWith("@")
+                    ? source.split("/").slice(0, 2).join("/")
+                    : source.split("/")[0];
                externalDeps.add(pkg);
            }
        }
    }
    // Find who imports files from this area (reverse dependencies)
    const importedBy = new Set();
-    const fileBasenames = filesToAnalyze.map(f => basename(f).replace(/\.[^.]+$/, …
-    const refResults = await Promise.allSettled(fileBasenames.slice(0, 10).map(name => astIndex.refs(name, 10)));
+    const fileBasenames = filesToAnalyze.map((f) => basename(f).replace(/\.[^.]+$/, ""));
+    const refResults = await Promise.allSettled(fileBasenames.slice(0, 10).map((name) => astIndex.refs(name, 10)));
    for (const result of refResults) {
-        if (result.status !== …
+        if (result.status !== "fulfilled" || !result.value)
            continue;
        const refs = result.value;
        if (refs.imports) {

@@ -149,8 +168,8 @@ async function buildImportsSection(codeFiles, absPath, projectRoot, astIndex) {
            continue;
        const relFile = relative(projectRoot, impFile);
        // Only include files outside this area
-        if (!relFile.startsWith(relDir + …
-            importedBy.add(relFile.replace(/\.[^.]+$/, …
+        if (!relFile.startsWith(relDir + "/") && relFile !== relDir) {
+            importedBy.add(relFile.replace(/\.[^.]+$/, ""));
        }
    }

@@ -158,18 +177,18 @@ async function buildImportsSection(codeFiles, absPath, projectRoot, astIndex) {
    const lines = [];
    if (externalDeps.size > 0) {
        const deps = Array.from(externalDeps).sort().slice(0, 20);
-        lines.push(`IMPORTS: ${deps.join( …
+        lines.push(`IMPORTS: ${deps.join(", ")}${externalDeps.size > 20 ? ` ... (${externalDeps.size} total)` : ""}`);
    }
    if (internalDeps.size > 0) {
        const deps = Array.from(internalDeps).sort().slice(0, 10);
-        lines.push(`INTERNAL DEPS: ${deps.join( …
+        lines.push(`INTERNAL DEPS: ${deps.join(", ")}${internalDeps.size > 10 ? ` ... (${internalDeps.size} total)` : ""}`);
    }
    if (importedBy.size > 0) {
        const importers = Array.from(importedBy).sort().slice(0, 10);
-        lines.push(`IMPORTED BY: ${importers.join( …
+        lines.push(`IMPORTED BY: ${importers.join(", ")}${importedBy.size > 10 ? ` ... (${importedBy.size} total)` : ""}`);
    }
    if (lines.length > 0)
-        lines.push( …
+        lines.push("");
    return {
        lines,
        internalDeps: Array.from(internalDeps).sort(),

@@ -182,18 +201,18 @@ async function buildImportsSection(codeFiles, absPath, projectRoot, astIndex) {
 // ──────────────────────────────────────────────
 async function buildTestsSection(codeFiles, absPath, projectRoot) {
    const testFiles = [];
-    const areaFileNames = new Set(codeFiles.map(f => basename(f).replace(/\.[^.]+$/, …
+    const areaFileNames = new Set(codeFiles.map((f) => basename(f).replace(/\.[^.]+$/, "")));
    // Scan for test files: check area dir + common test dirs
    const dirsToScan = [absPath];
    // Check for sibling __tests__ or tests directory
    const parent = dirname(absPath);
    const areaName = basename(absPath);
    const testDirCandidates = [
-        resolve(absPath, …
-        resolve(absPath, …
-        resolve(absPath, …
-        resolve(parent, …
-        resolve(parent, …
+        resolve(absPath, "__tests__"),
+        resolve(absPath, "tests"),
+        resolve(absPath, "test"),
+        resolve(parent, "__tests__", areaName),
+        resolve(parent, "tests", areaName),
    ];
    for (const testDir of testDirCandidates) {
        const testDirStat = await stat(testDir).catch(() => null);

@@ -203,9 +222,9 @@ async function buildTestsSection(codeFiles, absPath, projectRoot) {
    }
    // Also check project-level test directories
    const projectTestDirs = [
-        resolve(projectRoot, …
-        resolve(projectRoot, …
-        resolve(projectRoot, …
+        resolve(projectRoot, "tests"),
+        resolve(projectRoot, "test"),
+        resolve(projectRoot, "__tests__"),
    ];
    for (const testDir of projectTestDirs) {
        if (dirsToScan.includes(testDir))

@@ -222,12 +241,15 @@ async function buildTestsSection(codeFiles, absPath, projectRoot) {
            if (!entry.isFile())
                continue;
            const name = entry.name;
-            if (name.includes( …
+            if (name.includes(".test.") ||
+                name.includes(".spec.") ||
+                name.includes("_test.") ||
+                name.includes("_spec.")) {
                // Check if this test corresponds to an area file
                const testBase = name
-                    .replace(/\.(test|spec)\./, …
-                    .replace(/_(test|spec)\./, …
-                    .replace(/\.[^.]+$/, …
+                    .replace(/\.(test|spec)\./, ".")
+                    .replace(/_(test|spec)\./, ".")
+                    .replace(/\.[^.]+$/, "");
                if (areaFileNames.has(testBase) || dir !== absPath) {
                    const relPath = relative(projectRoot, resolve(dir, name));
                    if (!testFiles.includes(relPath)) {

@@ -237,13 +259,15 @@ async function buildTestsSection(codeFiles, absPath, projectRoot) {
                }
            }
        }
-        catch { … }
+        catch {
+            /* skip unreadable dirs */
+        }
    }
    if (testFiles.length === 0)
        return { lines: [], testFiles: [] };
    const lines = [];
-    lines.push(`TESTS: ${testFiles.join( …
-    lines.push( …
+    lines.push(`TESTS: ${testFiles.join(", ")}`);
+    lines.push("");
    return { lines, testFiles: [...testFiles].sort() };
 }
 // ──────────────────────────────────────────────

@@ -251,16 +275,16 @@ async function buildTestsSection(codeFiles, absPath, projectRoot) {
 // ──────────────────────────────────────────────
 async function buildChangesSection(relDir, projectRoot) {
    try {
-        const { stdout } = await execFileAsync( …
+        const { stdout } = await execFileAsync("git", ["log", "--oneline", "-5", "--", relDir], { cwd: projectRoot, timeout: 5000 });
        if (!stdout.trim())
            return { lines: [], count: 0 };
        const lines = [];
-        const commits = stdout.trim().split( …
-        lines.push( …
+        const commits = stdout.trim().split("\n");
+        lines.push("RECENT CHANGES:");
        for (const line of commits) {
            lines.push(`  ${line}`);
        }
-        lines.push( …
+        lines.push("");
        return { lines, count: commits.length };
    }
    catch {

@@ -271,7 +295,7 @@ async function buildChangesSection(relDir, projectRoot) {
 // Helpers
 // ──────────────────────────────────────────────
 function extractResult(settled) {
-    if (settled.status === …
+    if (settled.status === "fulfilled" && settled.value) {
        return settled.value;
    }
    return null;

@@ -282,7 +306,7 @@ async function listCodeFiles(dirPath) {
    const files = [];
    for (const entry of entries) {
        if (entry.isFile()) {
-            const ext = entry.name.split( …
+            const ext = entry.name.split(".").pop()?.toLowerCase() ?? "";
            if (CODE_EXTENSIONS.has(ext)) {
                files.push(resolve(dirPath, entry.name));
            }

@@ -297,7 +321,7 @@ async function listCodeFiles(dirPath) {
 async function countSubdirs(dirPath) {
    try {
        const entries = await readdir(dirPath, { withFileTypes: true });
-        return entries.filter(e => e.isDirectory()).length;
+        return entries.filter((e) => e.isDirectory()).length;
    }
    catch {
        return 0;
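The external-import branch in `buildImportsSection` above collapses a module specifier to its package name: scoped specifiers keep their first two path segments, everything else keeps only the first. A standalone sketch of that logic follows; the helper name `packageName` is ours for illustration, not an export of the package:

```javascript
// Mirrors the normalization in buildImportsSection: "@scope/pkg/deep"
// keeps two segments, a bare specifier keeps only the first.
function packageName(source) {
  return source.startsWith("@")
    ? source.split("/").slice(0, 2).join("/")
    : source.split("/")[0];
}

console.log(packageName("@types/node"));     // "@types/node"
console.log(packageName("lodash/fp/merge")); // "lodash"
```

This matters for deduplication: `lodash/fp` and `lodash/merge` count as one external dependency, keeping the IMPORTS line short.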
package/dist/hooks/pre-bash.d.ts
CHANGED

@@ -37,6 +37,17 @@ export type PreBashDecision = {
     kind: "deny";
     reason: string;
 };
+/**
+ * v0.29.0 — expose wrapped commands. Opus 4.7's v0.28.2 verification
+ * report showed escape patterns: `bash -c "cat src/foo.ts"`,
+ * `eval "..."`, `for f in *.ts; do cat $f; done` all slipped through
+ * our heuristics because the dangerous call sat inside quotes / a loop
+ * body. Unwrap those before matching.
+ *
+ * Returns the original command PLUS the extracted inner body for each
+ * wrapper found. Duplication is fine — detectHeavyPattern is pure.
+ */
+export declare function extractWrappedCommands(command: string): string[];
 export declare function detectHeavyPattern(command: string): PreBashDecision;
 export declare function decidePreBash(input: PreBashInput): PreBashDecision;
 export declare function renderPreBashOutput(decision: PreBashDecision): string | null;
package/dist/hooks/pre-bash.js
CHANGED

@@ -32,7 +32,60 @@ function invokes(command, utility) {
     const re = new RegExp(`(^|[;&|\\n]\\s*)${utility}(\\s|$)`, "m");
     return re.test(command);
 }
+/**
+ * v0.29.0 — expose wrapped commands. Opus 4.7's v0.28.2 verification
+ * report showed escape patterns: `bash -c "cat src/foo.ts"`,
+ * `eval "..."`, `for f in *.ts; do cat $f; done` all slipped through
+ * our heuristics because the dangerous call sat inside quotes / a loop
+ * body. Unwrap those before matching.
+ *
+ * Returns the original command PLUS the extracted inner body for each
+ * wrapper found. Duplication is fine — detectHeavyPattern is pure.
+ */
+export function extractWrappedCommands(command) {
+    const out = [command];
+    // bash -c "..." / sh -c "..." / zsh -c "..."
+    for (const shell of ["bash", "sh", "zsh"]) {
+        const re = new RegExp(`\\b${shell}\\s+-c\\s+(?:"([^"]+)"|'([^']+)')`, "g");
+        for (const m of command.matchAll(re)) {
+            const inner = m[1] ?? m[2];
+            if (inner)
+                out.push(inner);
+        }
+    }
+    // eval "..." / eval '...'
+    for (const m of command.matchAll(/\beval\s+(?:"([^"]+)"|'([^']+)')/g)) {
+        const inner = m[1] ?? m[2];
+        if (inner)
+            out.push(inner);
+    }
+    // for LOOP with body: `for X in Y; do BODY; done` — extract BODY
+    // Also covers `while COND; do BODY; done` and `until COND; do BODY; done`
+    for (const m of command.matchAll(/\b(?:for|while|until)\b[^;]*;\s*do\s+(.+?)\s*;?\s*done\b/gs)) {
+        const body = m[1];
+        if (body)
+            out.push(body);
+    }
+    return out;
+}
 export function detectHeavyPattern(command) {
+    const cmd = command.trim();
+    if (!cmd)
+        return { kind: "allow" };
+    // v0.29.0: check each of the original + any unwrapped inner commands.
+    // First deny wins.
+    const candidates = extractWrappedCommands(cmd);
+    if (candidates.length > 1) {
+        // Check only the unwrapped inners; the original is handled below.
+        for (let i = 1; i < candidates.length; i++) {
+            const inner = detectHeavyPatternSingle(candidates[i]);
+            if (inner.kind === "deny")
+                return inner;
+        }
+    }
+    return detectHeavyPatternSingle(cmd);
+}
+function detectHeavyPatternSingle(command) {
     const cmd = command.trim();
     if (!cmd)
         return { kind: "allow" };
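To see the unwrapping in action, here is a self-contained copy of the `extractWrappedCommands` logic from the hunk above (same regexes, trimmed of the doc comment) applied to the escape patterns the comment cites:

```javascript
// Same unwrapping as dist/hooks/pre-bash.js: return the original command
// plus the inner body of any bash/sh/zsh -c, eval, or shell-loop wrapper.
function extractWrappedCommands(command) {
  const out = [command];
  for (const shell of ["bash", "sh", "zsh"]) {
    const re = new RegExp(`\\b${shell}\\s+-c\\s+(?:"([^"]+)"|'([^']+)')`, "g");
    for (const m of command.matchAll(re)) {
      const inner = m[1] ?? m[2];
      if (inner) out.push(inner);
    }
  }
  for (const m of command.matchAll(/\beval\s+(?:"([^"]+)"|'([^']+)')/g)) {
    const inner = m[1] ?? m[2];
    if (inner) out.push(inner);
  }
  for (const m of command.matchAll(/\b(?:for|while|until)\b[^;]*;\s*do\s+(.+?)\s*;?\s*done\b/gs)) {
    if (m[1]) out.push(m[1]);
  }
  return out;
}

console.log(extractWrappedCommands('bash -c "cat src/foo.ts"'));
// ['bash -c "cat src/foo.ts"', 'cat src/foo.ts']
```

With the inner `cat src/foo.ts` surfaced as a separate candidate, `detectHeavyPattern` can apply the same heuristics to it that would have fired on the bare command.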
package/dist/server/tool-definitions.js
CHANGED

@@ -499,7 +499,7 @@ export const TOOL_DEFINITIONS = [
     },
     {
         name: "smart_log",
-        description: "Use INSTEAD OF raw git log. Structured commit history with category detection (feat/fix/refactor/docs), file stats, author breakdown. Filters by path and ref.",
+        description: "Use INSTEAD OF raw git log. Structured commit history with category detection (feat/fix/refactor/docs), file stats, author breakdown. Filters by path and ref. HEADS UP: two verification runs measured this tool at ~39% token reduction (borderline — vs 95-99% for outline/smart_diff). Cumulative data being gathered — tool may be dropped or redesigned in v0.30.0 if numbers don't improve. Prefer scoping with `path` or `count` to tighten savings.",
         inputSchema: {
             type: "object",
             properties: {

@@ -581,7 +581,7 @@ export const TOOL_DEFINITIONS = [
     },
     {
         name: "session_budget",
-        description: " …
+        description: "META / info-only: reports Read-hook pressure for this session (suppressed tokens, reference budget, burn fraction, effective denyThreshold). Does NOT save tokens itself — this is diagnostic, use to decide when to tighten before a big read. NOTE: burnFraction measures hook activity, not actual context-window occupancy.",
         inputSchema: {
             type: "object",
             properties: {
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "token-pilot",
-  "version": "0.28.2",
+  "version": "0.29.0",
   "description": "Save up to 80% tokens when AI reads code \u2014 MCP server for token-efficient code navigation, AST-aware structural reading instead of dumping full files into context window",
   "type": "module",
   "main": "dist/index.js",