@ngommans/codefocus 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of @ngommans/codefocus might be problematic.
- package/README.md +124 -0
- package/dist/benchmark-43DOYNYR.js +465 -0
- package/dist/benchmark-43DOYNYR.js.map +1 -0
- package/dist/chunk-6XH2ZLP6.js +127 -0
- package/dist/chunk-6XH2ZLP6.js.map +1 -0
- package/dist/chunk-7RYHZOYF.js +27 -0
- package/dist/chunk-7RYHZOYF.js.map +1 -0
- package/dist/chunk-ITVAEU6K.js +250 -0
- package/dist/chunk-ITVAEU6K.js.map +1 -0
- package/dist/chunk-Q6DOBQ4F.js +231 -0
- package/dist/chunk-Q6DOBQ4F.js.map +1 -0
- package/dist/chunk-X7DRJUEX.js +543 -0
- package/dist/chunk-X7DRJUEX.js.map +1 -0
- package/dist/cli.js +111 -0
- package/dist/cli.js.map +1 -0
- package/dist/commands-ICBN54MT.js +64 -0
- package/dist/commands-ICBN54MT.js.map +1 -0
- package/dist/config-OCBWYENF.js +12 -0
- package/dist/config-OCBWYENF.js.map +1 -0
- package/dist/extended-benchmark-5RUXDG3D.js +323 -0
- package/dist/extended-benchmark-5RUXDG3D.js.map +1 -0
- package/dist/find-W5UDE4US.js +63 -0
- package/dist/find-W5UDE4US.js.map +1 -0
- package/dist/graph-DZNBEATA.js +189 -0
- package/dist/graph-DZNBEATA.js.map +1 -0
- package/dist/map-6WOMDLCP.js +131 -0
- package/dist/map-6WOMDLCP.js.map +1 -0
- package/dist/mcp-7WYTXIQS.js +354 -0
- package/dist/mcp-7WYTXIQS.js.map +1 -0
- package/dist/mcp-server.js +369 -0
- package/dist/mcp-server.js.map +1 -0
- package/dist/query-DJNWYYJD.js +427 -0
- package/dist/query-DJNWYYJD.js.map +1 -0
- package/dist/query-PS6QVPXP.js +538 -0
- package/dist/query-PS6QVPXP.js.map +1 -0
- package/dist/root-ODTOXM2J.js +10 -0
- package/dist/root-ODTOXM2J.js.map +1 -0
- package/dist/watcher-LFBZAM5E.js +73 -0
- package/dist/watcher-LFBZAM5E.js.map +1 -0
- package/package.json +61 -0
package/README.md
ADDED
|
@@ -0,0 +1,124 @@
# codefocus

Smart code context aggregator — AST-powered search that returns structured, ranked code context for LLM agents.

Tree-sitter parses your codebase into a SQLite graph database. Queries combine full-text search, symbol-aware graph traversal, and PageRank scoring to return the most relevant code sections within a token budget. One call replaces multiple grep/read round-trips.

## Install

```bash
npm install -g codefocus
# or use directly
npx codefocus
```

Requires Node 18+.

## Quick start

```bash
# Index your project (creates .codefocus/index.db)
npx codefocus index --root .

# Search for relevant code context
npx codefocus query "handleSync" --budget 8000

# Get a high-level codebase overview
npx codefocus map --budget 2000
```

## Commands

| Command | Description |
|---------|-------------|
| `index` | Parse the codebase with tree-sitter and store symbols in SQLite |
| `query` | Search and return ranked, budget-constrained code context |
| `find` | Quick symbol lookup by name and optional kind filter |
| `graph` | Show dependency graph for a file or symbol |
| `map` | High-level codebase overview (Aider-style repo map) |
| `serve` | Start MCP server (stdio transport, keeps DB warm) |
| `benchmark` | Benchmark codefocus vs. Explore agent pattern |

## Claude Code integration

### MCP server (recommended)

Add a `.mcp.json` file to your project root:

```json
{
  "mcpServers": {
    "codefocus": {
      "command": "npx",
      "args": ["codefocus", "serve", "--root", "."]
    }
  }
}
```

Claude Code auto-discovers the MCP server and gains `query`, `find`, `graph`, and `map` as tools. The server stays warm with sub-10ms response times after the first call.

### Auto-indexing via SessionStart hook

Add `.claude/hooks.json` to keep the index fresh on every session:

```json
{
  "hooks": {
    "SessionStart": [{
      "command": "npx codefocus index --root .",
      "timeout": 30000
    }]
  }
}
```

## How it works

1. **Index** — tree-sitter parses TypeScript/JavaScript files, extracting symbols (functions, classes, interfaces, types), imports, and cross-file references. Everything is stored in a SQLite database with FTS5 full-text search.

2. **Query** — searches combine direct symbol-name matching, FTS5 content search, and graph expansion (BFS along import/reference edges). Results are ranked by match strength, term frequency, symbol proximity, hub dampening, and PageRank. Relevance thresholds (score floor, elbow detection, marginal value) prune noise.

3. **Output** — ranked code sections with YAML front matter (confidence, score distribution), constrained to a token budget. One query replaces 3-7 grep/read round-trips with 50x fewer tokens.

## Scoring configuration

Tuning parameters can be customized per project via `.codefocus/config.json`:

```json
{
  "scoring": {
    "scoreFloorRatio": 0.20,
    "elbowDropRatio": 0.60,
    "minMarginalValue": 0.00003,
    "symbolProximityBoost": 1.5,
    "importEdgeWeight": 0.4,
    "typeRefWeight": 0.2,
    "defaultBudget": 8000,
    "defaultDepth": 2
  }
}
```

Use `codefocus benchmark --emit-config` to print the active configuration.
Use `codefocus benchmark --extended` to validate parameters against dependency codebases.
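To illustrate how the `scoreFloorRatio` and `elbowDropRatio` thresholds could interact, here is a minimal sketch of score-floor plus elbow-detection pruning. The function name `pruneByRelevance` and the exact cutoff rules are illustrative assumptions; only the config keys and their default values come from the configuration above, and the shipped implementation may differ.

```javascript
// Hypothetical sketch of score-floor + elbow-detection pruning.
// Assumptions: pruneByRelevance and its cutoff rules are illustrative;
// only the config keys (scoreFloorRatio, elbowDropRatio) appear above.
function pruneByRelevance(scores, { scoreFloorRatio = 0.2, elbowDropRatio = 0.6 } = {}) {
  if (scores.length === 0) return [];
  const sorted = [...scores].sort((a, b) => b - a);
  const floor = sorted[0] * scoreFloorRatio;
  const kept = [];
  for (let i = 0; i < sorted.length; i++) {
    if (sorted[i] < floor) break; // score floor: drop results far below the top hit
    if (i > 0 && sorted[i] < sorted[i - 1] * elbowDropRatio) break; // elbow: stop at a sharp drop
    kept.push(sorted[i]);
  }
  return kept;
}

// With the defaults, a sharp drop after the third score prunes the tail:
console.log(pruneByRelevance([10, 9, 8, 2, 1])); // [ 10, 9, 8 ]
```

Raising `elbowDropRatio` toward 1.0 makes the elbow cut more aggressive; lowering `scoreFloorRatio` keeps more of the long tail.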
## License

MIT
package/dist/benchmark-43DOYNYR.js
ADDED
@@ -0,0 +1,465 @@
#!/usr/bin/env node

// src/benchmark.ts
import { createRequire } from "module";
import { execFileSync } from "child_process";
import { readFileSync } from "fs";
import { resolve, relative } from "path";
var require2 = createRequire(import.meta.url);
var { getEncoding } = require2("js-tiktoken");
var TASKS = [
  {
    id: "cli-routing",
    question: "How does the CLI route commands to their handlers?",
    searchTerms: ["parseArgs", "handlers", "command"],
    queryTerm: "parseArgs",
    expectedFiles: ["src/cli.ts"],
    essentialFiles: ["src/cli.ts"],
    bonusFiles: [],
    expectedSymbols: ["parseArgs", "main", "ParsedArgs"]
  },
  {
    id: "callers-of-runIndex",
    question: "Find all callers of runIndex and understand the indexing entry point",
    searchTerms: ["runIndex"],
    queryTerm: "runIndex",
    // B1: Added src/watcher.ts — legitimate caller added in Spike 7 Phase C
    expectedFiles: ["src/cli.ts", "src/commands/index.ts", "src/indexer.ts", "src/watcher.ts"],
    essentialFiles: ["src/commands/index.ts", "src/indexer.ts"],
    bonusFiles: ["src/cli.ts", "src/watcher.ts"],
    expectedSymbols: ["runIndex", "indexProject"]
  },
  {
    id: "token-budget",
    question: "How does the token budget system work in the query command?",
    searchTerms: ["budget", "tokenCount", "getEncoding"],
    queryTerm: "budget",
    // B1: src/mcp.ts re-exports query logic with budget param — valid but not essential
    expectedFiles: ["src/commands/query.ts", "src/mcp.ts"],
    essentialFiles: ["src/commands/query.ts"],
    bonusFiles: ["src/mcp.ts"],
    expectedSymbols: ["runQuery", "ScoredSymbol", "FileSection"]
  },
  {
    id: "db-schema",
    question: "What is the database schema and what operations does it support?",
    searchTerms: ["IndexDatabase", "CREATE TABLE"],
    queryTerm: "IndexDatabase",
    expectedFiles: ["src/db.ts"],
    essentialFiles: ["src/db.ts"],
    bonusFiles: [],
    expectedSymbols: [
      "IndexDatabase",
      "SymbolRow",
      "FileRow",
      "ImportRow",
      "ReferenceRow"
    ]
  },
  {
    id: "incremental-indexing",
    question: "How does incremental indexing work? How are unchanged files skipped?",
    searchTerms: ["hashContent", "existingHash", "filesSkipped"],
    queryTerm: "hashContent",
    expectedFiles: ["src/indexer.ts"],
    essentialFiles: ["src/indexer.ts"],
    bonusFiles: [],
    expectedSymbols: ["hashContent", "indexProject"]
  },
  {
    id: "cross-file-refs",
    question: "How are cross-file symbol references created during indexing?",
    searchTerms: ["insertReference", "referencesCreated"],
    queryTerm: "insertReference",
    expectedFiles: ["src/indexer.ts", "src/db.ts"],
    essentialFiles: ["src/indexer.ts", "src/db.ts"],
    bonusFiles: [],
    expectedSymbols: ["insertReference", "indexProject"]
  },
  {
    id: "parser-node-types",
    question: "What tree-sitter node types does the parser handle?",
    searchTerms: ["parseSource", "extractDeclaration", "node_type"],
    queryTerm: "parseSource",
    expectedFiles: ["src/parser.ts"],
    essentialFiles: ["src/parser.ts"],
    bonusFiles: [],
    expectedSymbols: ["parseSource", "extractDeclaration"]
  },
  {
    id: "graph-rendering",
    question: "How does the graph command render dependency trees?",
    searchTerms: ["runGraph", "renderTree"],
    queryTerm: "runGraph",
    expectedFiles: ["src/commands/graph.ts"],
    essentialFiles: ["src/commands/graph.ts"],
    bonusFiles: [],
    expectedSymbols: ["runGraph"]
  }
];
function simulateExplore(rootDir, task) {
  const enc = getEncoding("cl100k_base");
  const matchedFiles = /* @__PURE__ */ new Set();
  let roundTrips = 0;
  for (const term of task.searchTerms) {
    roundTrips++;
    try {
      const grepResult = execFileSync(
        "grep",
        ["-rl", "--include=*.ts", term, resolve(rootDir, "src")],
        { encoding: "utf-8", timeout: 1e4 }
      );
      for (const line of grepResult.trim().split("\n")) {
        if (line) {
          matchedFiles.add(relative(rootDir, line));
        }
      }
    } catch {
    }
  }
  let totalTokens = 0;
  let totalBytes = 0;
  const filesRead = [];
  for (const filePath of matchedFiles) {
    roundTrips++;
    try {
      const content = readFileSync(resolve(rootDir, filePath), "utf-8");
      totalBytes += Buffer.byteLength(content, "utf-8");
      totalTokens += enc.encode(content).length;
      filesRead.push(filePath);
    } catch {
    }
  }
  return {
    tokensConsumed: totalTokens,
    roundTrips,
    filesRead,
    bytesRead: totalBytes
  };
}
function runCodefocusQuery(rootDir, cliPath, task) {
  let stdout;
  try {
    stdout = execFileSync(
      "node",
      [cliPath, "query", task.queryTerm, "--root", rootDir, "--budget", "8000"],
      { encoding: "utf-8", timeout: 3e4 }
    );
  } catch (err) {
    stdout = (err.stdout || "") + (err.stderr || "");
  }
  const tokenMatch = stdout.match(/~(\d+) tokens/);
  const tokensOutput = tokenMatch ? parseInt(tokenMatch[1], 10) : 0;
  const fileMatches = stdout.matchAll(/── ([\w/.]+\.ts):\d+-\d+/g);
  const filesReturned = [...fileMatches].map((m) => m[1]);
  return {
    tokensOutput,
    roundTrips: 1,
    filesReturned,
    stdout
  };
}
function computeCompleteness(returnedFiles, expectedFiles) {
  if (expectedFiles.length === 0) return 1;
  const returned = new Set(returnedFiles);
  const found = expectedFiles.filter((f) => returned.has(f)).length;
  return found / expectedFiles.length;
}
function computePrecision(returnedFiles, expectedFiles) {
  if (returnedFiles.length === 0) return 0;
  const expected = new Set(expectedFiles);
  const relevant = returnedFiles.filter((f) => expected.has(f)).length;
  return relevant / returnedFiles.length;
}
function runBenchmark(rootDir, cliPath, exploreAgentTokens) {
  const results = [];
  for (const task of TASKS) {
    const explore = simulateExplore(rootDir, task);
    const codefocus = runCodefocusQuery(rootDir, cliPath, task);
    const exploreCompleteness = computeCompleteness(
      explore.filesRead,
      task.expectedFiles
    );
    const codefocusCompleteness = computeCompleteness(
      codefocus.filesReturned,
      task.expectedFiles
    );
    const exploreEssentialCompleteness = computeCompleteness(
      explore.filesRead,
      task.essentialFiles
    );
    const codefocusEssentialCompleteness = computeCompleteness(
      codefocus.filesReturned,
      task.essentialFiles
    );
    const explorePrecision = computePrecision(
      explore.filesRead,
      task.expectedFiles
    );
    const codefocusPrecision = computePrecision(
      codefocus.filesReturned,
      task.expectedFiles
    );
    const tokenSavingsRatio = codefocus.tokensOutput > 0 ? explore.tokensConsumed / codefocus.tokensOutput : 0;
    results.push({
      task,
      explore,
      codefocus,
      exploreCompleteness,
      codefocusCompleteness,
      exploreEssentialCompleteness,
      codefocusEssentialCompleteness,
      explorePrecision,
      codefocusPrecision,
      tokenSavingsRatio
    });
  }
  const avg = (arr) => arr.length > 0 ? arr.reduce((a, b) => a + b, 0) / arr.length : 0;
  return {
    tasks: results,
    summary: {
      avgExploreTokens: avg(results.map((r) => r.explore.tokensConsumed)),
      avgCodefocusTokens: avg(results.map((r) => r.codefocus.tokensOutput)),
      avgTokenSavingsRatio: avg(results.map((r) => r.tokenSavingsRatio)),
      avgExploreRoundTrips: avg(results.map((r) => r.explore.roundTrips)),
      avgCodefocusRoundTrips: avg(
        results.map((r) => r.codefocus.roundTrips)
      ),
      avgExploreCompleteness: avg(results.map((r) => r.exploreCompleteness)),
      avgCodefocusCompleteness: avg(
        results.map((r) => r.codefocusCompleteness)
      ),
      avgExploreEssentialCompleteness: avg(
        results.map((r) => r.exploreEssentialCompleteness)
      ),
      avgCodefocusEssentialCompleteness: avg(
        results.map((r) => r.codefocusEssentialCompleteness)
      ),
      avgExplorePrecision: avg(results.map((r) => r.explorePrecision)),
      avgCodefocusPrecision: avg(results.map((r) => r.codefocusPrecision))
    },
    exploreAgentTokens: exploreAgentTokens ?? {}
  };
}
function formatResults(results) {
  const lines = [];
  lines.push("# codefocus Benchmark \u2014 Spike 8");
  lines.push("");
  lines.push(
    "Comparison of codefocus query vs. Explore agent pattern (grep/glob/read)"
  );
  lines.push(`for ${results.tasks.length} representative code comprehension tasks.`);
  lines.push("");
  lines.push(`Benchmark run: ${(/* @__PURE__ */ new Date()).toISOString()}`);
  lines.push("");
  lines.push("## Per-task results");
  lines.push("");
  lines.push(
    "| Task | Explore tokens | CF tokens | Savings | CF compl. | CF essential | CF prec. |"
  );
  lines.push(
    "|------|---------------:|----------:|--------:|----------:|-------------:|---------:|"
  );
  for (const r of results.tasks) {
    const savings = r.tokenSavingsRatio.toFixed(1) + "x";
    lines.push(
      `| ${r.task.id} | ${r.explore.tokensConsumed} | ${r.codefocus.tokensOutput} | ${savings} | ${pct(r.codefocusCompleteness)} | ${pct(r.codefocusEssentialCompleteness)} | ${pct(r.codefocusPrecision)} |`
    );
  }
  lines.push("");
  lines.push("## Summary");
  lines.push("");
  const s = results.summary;
  lines.push(
    "| Metric | Explore (grep/glob/read) | codefocus query |"
  );
  lines.push("|--------|------------------------:|----------------:|");
  lines.push(
    `| Avg tokens consumed | ${Math.round(s.avgExploreTokens)} | ${Math.round(s.avgCodefocusTokens)} |`
  );
  lines.push(
    `| Avg token savings | \u2014 | ${s.avgTokenSavingsRatio.toFixed(1)}x less |`
  );
  lines.push(
    `| Avg round-trips | ${s.avgExploreRoundTrips.toFixed(1)} | ${s.avgCodefocusRoundTrips.toFixed(1)} |`
  );
  lines.push(
    `| Avg completeness (total) | ${pct(s.avgExploreCompleteness)} | ${pct(s.avgCodefocusCompleteness)} |`
  );
  lines.push(
    `| Avg completeness (essential) | ${pct(s.avgExploreEssentialCompleteness)} | ${pct(s.avgCodefocusEssentialCompleteness)} |`
  );
  lines.push(
    `| Avg precision | ${pct(s.avgExplorePrecision)} | ${pct(s.avgCodefocusPrecision)} |`
  );
  lines.push("");
  if (Object.keys(results.exploreAgentTokens).length > 0) {
    lines.push("## Real Explore agent token counts");
    lines.push("");
    lines.push(
      "These are actual token counts from running the Explore subagent"
    );
    lines.push("(Claude Code Task tool, subagent_type=Explore).");
    lines.push("");
    lines.push("| Task | Explore agent tokens | codefocus tokens | Savings |");
    lines.push("|------|---------------------:|-----------------:|--------:|");
    for (const r of results.tasks) {
      const agentTokens = results.exploreAgentTokens[r.task.id];
      if (agentTokens !== void 0) {
        const cfTokens = r.codefocus.tokensOutput;
        const ratio = cfTokens > 0 ? (agentTokens / cfTokens).toFixed(1) + "x" : "\u2014";
        lines.push(
          `| ${r.task.id} | ${agentTokens} | ${cfTokens} | ${ratio} |`
        );
      }
    }
    lines.push("");
  }
  lines.push("## Per-task detail");
  lines.push("");
  for (const r of results.tasks) {
    lines.push(`### ${r.task.id}`);
    lines.push("");
    lines.push(`**Question:** ${r.task.question}`);
    lines.push("");
    lines.push(
      `**Search terms:** ${r.task.searchTerms.map((t) => `\`${t}\``).join(", ")}`
    );
    lines.push(`**Query term:** \`${r.task.queryTerm}\``);
    lines.push("");
    lines.push("**Explore pattern:**");
    lines.push(`- Files read: ${r.explore.filesRead.join(", ") || "(none)"}`);
    lines.push(`- Tokens consumed: ${r.explore.tokensConsumed}`);
    lines.push(`- Round-trips: ${r.explore.roundTrips}`);
    lines.push("");
    lines.push("**codefocus query:**");
    lines.push(
      `- Files returned: ${r.codefocus.filesReturned.join(", ") || "(none)"}`
    );
    lines.push(`- Tokens output: ${r.codefocus.tokensOutput}`);
    lines.push(`- Round-trips: ${r.codefocus.roundTrips}`);
    lines.push("");
    lines.push(
      `**Essential files:** ${r.task.essentialFiles.join(", ")}`
    );
    if (r.task.bonusFiles.length > 0) {
      lines.push(
        `**Bonus files:** ${r.task.bonusFiles.join(", ")}`
      );
    }
    lines.push(
      `**Expected symbols:** ${r.task.expectedSymbols.map((s2) => `\`${s2}\``).join(", ")}`
    );
    lines.push("");
  }
  lines.push("## Methodology");
  lines.push("");
  lines.push("### Explore agent simulation");
  lines.push("");
  lines.push("The Explore agent simulation models a **conservative lower bound**");
  lines.push("of what a real grep/glob/read agent would consume:");
  lines.push("");
  lines.push("1. For each search term, run `grep -rl` in `src/` (1 round-trip per grep)");
  lines.push("2. Read each matching file in full (1 round-trip per file)");
  lines.push("3. Count total tokens of all file content with cl100k_base");
  lines.push("");
  lines.push("Real agents typically consume **2-5x more** tokens because they:");
  lines.push("- Read files speculatively that turn out to be irrelevant");
  lines.push("- Re-read files across multiple turns");
  lines.push("- Read test files, configs, and docs for context");
  lines.push("- Include directory listing and file search overhead");
  lines.push("- Receive tool-call framing tokens (prompts, system messages)");
  lines.push("");
  lines.push("### codefocus query");
  lines.push("");
  lines.push("A single `codefocus query <term> --budget 8000` call.");
  lines.push("The output is pre-ranked, budget-constrained, and includes only");
  lines.push("the relevant symbol ranges (not entire files).");
  lines.push("");
  lines.push("### Metrics");
  lines.push("");
  lines.push("- **Tokens:** cl100k_base token count of content consumed/output");
  lines.push("- **Round-trips:** Number of tool calls (grep/read for explore, 1 for codefocus)");
  lines.push("- **Completeness:** Fraction of expected files present in results");
  lines.push("- **Precision:** Fraction of returned files that are in the expected set");
  lines.push("- **Token savings:** Explore tokens / codefocus tokens");
  return lines.join("\n");
}
function pct(n) {
  return `${Math.round(n * 100)}%`;
}
async function runBenchmarkCommand(positional, flags) {
  if (flags.help) {
    console.log(`codefocus benchmark \u2014 Compare codefocus query vs. Explore agent pattern

Usage: codefocus benchmark [options]

Options:
  --root <path>    Root directory of indexed project (default: auto-detect)
  --json           Output raw JSON instead of markdown
  --extended       Run extended cross-codebase benchmarks against dependencies
  --emit-config    Print the active scoring config as JSON and exit
  --help           Show this help message`);
    return;
  }
  const { resolveRoot } = await import("./root-ODTOXM2J.js");
  const root = resolveRoot(flags.root);
  const cliPath = resolve(
    new URL(".", import.meta.url).pathname,
    "../dist/cli.js"
  );
  if (flags["emit-config"]) {
    const { loadScoringConfig, serializeConfig } = await import("./config-OCBWYENF.js");
    const config = loadScoringConfig(root);
    console.log(serializeConfig(config));
    return;
  }
  if (flags.extended) {
    const {
      discoverSuites,
      runExtendedBenchmark,
      formatExtendedResults
    } = await import("./extended-benchmark-5RUXDG3D.js");
    const suites = discoverSuites(root);
    if (suites.length === 0) {
      console.error(
        "[benchmark] No dependency libraries found for extended benchmarks."
      );
      console.error(
        "[benchmark] Run 'npm install' first to install dependencies."
      );
      process.exitCode = 1;
      return;
    }
    console.log(
      `[benchmark] Running extended benchmarks against ${suites.length} libraries ...`
    );
    for (const s of suites) {
      console.log(`[benchmark] ${s.name} (${s.rootDir})`);
    }
    console.log(`[benchmark] CLI: ${cliPath}`);
    console.log("");
    const results2 = runExtendedBenchmark(root, cliPath, suites);
    if (flags.json) {
      console.log(JSON.stringify(results2, null, 2));
    } else {
      console.log(formatExtendedResults(results2));
    }
    return;
  }
  console.log(`[benchmark] Running ${TASKS.length} tasks against ${root} ...`);
  console.log(`[benchmark] CLI: ${cliPath}`);
  console.log("");
  const results = runBenchmark(root, cliPath);
  if (flags.json) {
    console.log(JSON.stringify(results, null, 2));
  } else {
    console.log(formatResults(results));
  }
}
export {
  formatResults,
  runBenchmark,
  runBenchmarkCommand
};
//# sourceMappingURL=benchmark-43DOYNYR.js.map
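The completeness and precision metrics in the bundle above are plain set ratios. As a self-contained illustration, the snippet below copies the two helpers verbatim and scores one made-up query result for the `token-budget` task; the `returned` list is a hypothetical example, not actual codefocus output.

```javascript
// Standalone copies of the bundle's scoring helpers, applied to the
// token-budget task's file sets. The `returned` list is hypothetical.
function computeCompleteness(returnedFiles, expectedFiles) {
  if (expectedFiles.length === 0) return 1;
  const returned = new Set(returnedFiles);
  return expectedFiles.filter((f) => returned.has(f)).length / expectedFiles.length;
}
function computePrecision(returnedFiles, expectedFiles) {
  if (returnedFiles.length === 0) return 0;
  const expected = new Set(expectedFiles);
  return returnedFiles.filter((f) => expected.has(f)).length / returnedFiles.length;
}

const expected = ["src/commands/query.ts", "src/mcp.ts"];
const returned = ["src/commands/query.ts", "src/parser.ts"]; // hypothetical query result
console.log(computeCompleteness(returned, expected)); // 0.5: found 1 of 2 expected files
console.log(computePrecision(returned, expected)); // 0.5: 1 of 2 returned files was expected
```

Note the asymmetric edge cases: an empty expected set counts as fully complete (1), while an empty returned set counts as zero precision.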
package/dist/benchmark-43DOYNYR.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"sources":["../src/benchmark.ts"],"sourcesContent":["import { createRequire } from \"node:module\";\nimport { execFileSync } from \"node:child_process\";\nimport { readFileSync } from \"node:fs\";\nimport { resolve, relative } from \"node:path\";\n\nconst require = createRequire(import.meta.url);\nconst { getEncoding } = require(\"js-tiktoken\");\n\n// ── types ───────────────────────────────────────────────────────────────\n\ninterface BenchmarkTask {\n /** Short human-readable identifier */\n id: string;\n /** Full question an agent would ask */\n question: string;\n /** Search terms for grep (explore) and codefocus query */\n searchTerms: string[];\n /** codefocus query term (first searchTerm by default) */\n queryTerm: string;\n /** Files that contain the \"correct\" answer (for completeness/precision) */\n expectedFiles: string[];\n /** B2: Core files that must be present (subset of expectedFiles) */\n essentialFiles: string[];\n /** B2: Additional files that provide useful context (nice to have) */\n bonusFiles: string[];\n /** Key symbols the answer must include */\n expectedSymbols: string[];\n}\n\ninterface ExploreResult {\n /** Total tokens of file content consumed */\n tokensConsumed: number;\n /** Number of simulated tool calls (grep + read) */\n roundTrips: number;\n /** Files that were read */\n filesRead: string[];\n /** Total bytes read */\n bytesRead: number;\n}\n\ninterface CodefocusResult {\n /** Tokens reported in the query footer */\n tokensOutput: number;\n /** Always 1 (single codefocus query call) */\n roundTrips: number;\n /** Files included in the output */\n filesReturned: string[];\n /** Raw stdout */\n stdout: string;\n}\n\ninterface TaskResult {\n task: BenchmarkTask;\n explore: ExploreResult;\n codefocus: CodefocusResult;\n /** What fraction of expectedFiles are in the result? (0-1) */\n exploreCompleteness: number;\n codefocusCompleteness: number;\n /** B2: What fraction of essentialFiles are in the result? 
(0-1) */\n exploreEssentialCompleteness: number;\n codefocusEssentialCompleteness: number;\n /** What fraction of returned files are in expectedFiles? (0-1) */\n explorePrecision: number;\n codefocusPrecision: number;\n /** Token savings ratio: explore tokens / codefocus tokens */\n tokenSavingsRatio: number;\n}\n\n// ── task definitions ────────────────────────────────────────────────────\n\nconst TASKS: BenchmarkTask[] = [\n {\n id: \"cli-routing\",\n question:\n \"How does the CLI route commands to their handlers?\",\n searchTerms: [\"parseArgs\", \"handlers\", \"command\"],\n queryTerm: \"parseArgs\",\n expectedFiles: [\"src/cli.ts\"],\n essentialFiles: [\"src/cli.ts\"],\n bonusFiles: [],\n expectedSymbols: [\"parseArgs\", \"main\", \"ParsedArgs\"],\n },\n {\n id: \"callers-of-runIndex\",\n question: \"Find all callers of runIndex and understand the indexing entry point\",\n searchTerms: [\"runIndex\"],\n queryTerm: \"runIndex\",\n // B1: Added src/watcher.ts — legitimate caller added in Spike 7 Phase C\n expectedFiles: [\"src/cli.ts\", \"src/commands/index.ts\", \"src/indexer.ts\", \"src/watcher.ts\"],\n essentialFiles: [\"src/commands/index.ts\", \"src/indexer.ts\"],\n bonusFiles: [\"src/cli.ts\", \"src/watcher.ts\"],\n expectedSymbols: [\"runIndex\", \"indexProject\"],\n },\n {\n id: \"token-budget\",\n question:\n \"How does the token budget system work in the query command?\",\n searchTerms: [\"budget\", \"tokenCount\", \"getEncoding\"],\n queryTerm: \"budget\",\n // B1: src/mcp.ts re-exports query logic with budget param — valid but not essential\n expectedFiles: [\"src/commands/query.ts\", \"src/mcp.ts\"],\n essentialFiles: [\"src/commands/query.ts\"],\n bonusFiles: [\"src/mcp.ts\"],\n expectedSymbols: [\"runQuery\", \"ScoredSymbol\", \"FileSection\"],\n },\n {\n id: \"db-schema\",\n question: \"What is the database schema and what operations does it support?\",\n searchTerms: [\"IndexDatabase\", \"CREATE TABLE\"],\n queryTerm: \"IndexDatabase\",\n 
expectedFiles: [\"src/db.ts\"],\n essentialFiles: [\"src/db.ts\"],\n bonusFiles: [],\n expectedSymbols: [\n \"IndexDatabase\",\n \"SymbolRow\",\n \"FileRow\",\n \"ImportRow\",\n \"ReferenceRow\",\n ],\n },\n {\n id: \"incremental-indexing\",\n question:\n \"How does incremental indexing work? How are unchanged files skipped?\",\n searchTerms: [\"hashContent\", \"existingHash\", \"filesSkipped\"],\n queryTerm: \"hashContent\",\n expectedFiles: [\"src/indexer.ts\"],\n essentialFiles: [\"src/indexer.ts\"],\n bonusFiles: [],\n expectedSymbols: [\"hashContent\", \"indexProject\"],\n },\n {\n id: \"cross-file-refs\",\n question:\n \"How are cross-file symbol references created during indexing?\",\n searchTerms: [\"insertReference\", \"referencesCreated\"],\n queryTerm: \"insertReference\",\n expectedFiles: [\"src/indexer.ts\", \"src/db.ts\"],\n essentialFiles: [\"src/indexer.ts\", \"src/db.ts\"],\n bonusFiles: [],\n expectedSymbols: [\"insertReference\", \"indexProject\"],\n },\n {\n id: \"parser-node-types\",\n question:\n \"What tree-sitter node types does the parser handle?\",\n searchTerms: [\"parseSource\", \"extractDeclaration\", \"node_type\"],\n queryTerm: \"parseSource\",\n expectedFiles: [\"src/parser.ts\"],\n essentialFiles: [\"src/parser.ts\"],\n bonusFiles: [],\n expectedSymbols: [\"parseSource\", \"extractDeclaration\"],\n },\n {\n id: \"graph-rendering\",\n question:\n \"How does the graph command render dependency trees?\",\n searchTerms: [\"runGraph\", \"renderTree\"],\n queryTerm: \"runGraph\",\n expectedFiles: [\"src/commands/graph.ts\"],\n essentialFiles: [\"src/commands/graph.ts\"],\n bonusFiles: [],\n expectedSymbols: [\"runGraph\"],\n },\n];\n\n// ── explore simulation ──────────────────────────────────────────────────\n\n/**\n * Simulate what an Explore agent (grep/glob/read pattern) would do\n * to answer a question. This models a realistic lower-bound:\n *\n * 1. For each search term, grep the src/ directory for matching files\n * 2. 
Read each matching file in full\n * 3. Count total tokens consumed\n *\n * Real agents typically consume MORE tokens because they:\n * - Read files speculatively that turn out to be irrelevant\n * - Re-read files across multiple turns\n * - Read test files, configs, and docs for context\n * - Include file listing / directory exploration overhead\n */\nfunction simulateExplore(\n rootDir: string,\n task: BenchmarkTask,\n): ExploreResult {\n const enc = getEncoding(\"cl100k_base\");\n const matchedFiles = new Set<string>();\n let roundTrips = 0;\n\n // Step 1: Grep for each search term (each grep = 1 round-trip)\n for (const term of task.searchTerms) {\n roundTrips++;\n try {\n const grepResult = execFileSync(\n \"grep\",\n [\"-rl\", \"--include=*.ts\", term, resolve(rootDir, \"src\")],\n { encoding: \"utf-8\", timeout: 10_000 },\n );\n for (const line of grepResult.trim().split(\"\\n\")) {\n if (line) {\n matchedFiles.add(relative(rootDir, line));\n }\n }\n } catch {\n // grep returns exit 1 when no matches — that's fine\n }\n }\n\n // Step 2: Read each matched file (each read = 1 round-trip)\n let totalTokens = 0;\n let totalBytes = 0;\n const filesRead: string[] = [];\n\n for (const filePath of matchedFiles) {\n roundTrips++;\n try {\n const content = readFileSync(resolve(rootDir, filePath), \"utf-8\");\n totalBytes += Buffer.byteLength(content, \"utf-8\");\n totalTokens += enc.encode(content).length;\n filesRead.push(filePath);\n } catch {\n // skip unreadable\n }\n }\n\n return {\n tokensConsumed: totalTokens,\n roundTrips,\n filesRead,\n bytesRead: totalBytes,\n };\n}\n\n// ── codefocus query ─────────────────────────────────────────────────────\n\nfunction runCodefocusQuery(\n rootDir: string,\n cliPath: string,\n task: BenchmarkTask,\n): CodefocusResult {\n let stdout: string;\n try {\n stdout = execFileSync(\n \"node\",\n [cliPath, \"query\", task.queryTerm, \"--root\", rootDir, \"--budget\", \"8000\"],\n { encoding: \"utf-8\", timeout: 30_000 },\n );\n } 
catch (err: any) {\n stdout = (err.stdout || \"\") + (err.stderr || \"\");\n }\n\n // Parse token count from footer: ~N tokens\n const tokenMatch = stdout.match(/~(\\d+) tokens/);\n const tokensOutput = tokenMatch ? parseInt(tokenMatch[1], 10) : 0;\n\n // Parse files from section headers: ── file:line-line (...) ──\n const fileMatches = stdout.matchAll(/── ([\\w/.]+\\.ts):\\d+-\\d+/g);\n const filesReturned = [...fileMatches].map((m) => m[1]);\n\n return {\n tokensOutput,\n roundTrips: 1,\n filesReturned,\n stdout,\n };\n}\n\n// ── scoring ─────────────────────────────────────────────────────────────\n\nfunction computeCompleteness(\n returnedFiles: string[],\n expectedFiles: string[],\n): number {\n if (expectedFiles.length === 0) return 1;\n const returned = new Set(returnedFiles);\n const found = expectedFiles.filter((f) => returned.has(f)).length;\n return found / expectedFiles.length;\n}\n\nfunction computePrecision(\n returnedFiles: string[],\n expectedFiles: string[],\n): number {\n if (returnedFiles.length === 0) return 0;\n const expected = new Set(expectedFiles);\n const relevant = returnedFiles.filter((f) => expected.has(f)).length;\n return relevant / returnedFiles.length;\n}\n\n// ── main ────────────────────────────────────────────────────────────────\n\nexport interface BenchmarkResults {\n tasks: TaskResult[];\n summary: {\n avgExploreTokens: number;\n avgCodefocusTokens: number;\n avgTokenSavingsRatio: number;\n avgExploreRoundTrips: number;\n avgCodefocusRoundTrips: number;\n avgExploreCompleteness: number;\n avgCodefocusCompleteness: number;\n /** B2: Essential completeness — must-have files only */\n avgExploreEssentialCompleteness: number;\n avgCodefocusEssentialCompleteness: number;\n avgExplorePrecision: number;\n avgCodefocusPrecision: number;\n };\n /** Optional: real Explore agent token counts recorded during benchmark */\n exploreAgentTokens: Record<string, number>;\n}\n\nexport function runBenchmark(\n rootDir: string,\n cliPath: 
string,\n exploreAgentTokens?: Record<string, number>,\n): BenchmarkResults {\n const results: TaskResult[] = [];\n\n for (const task of TASKS) {\n const explore = simulateExplore(rootDir, task);\n const codefocus = runCodefocusQuery(rootDir, cliPath, task);\n\n const exploreCompleteness = computeCompleteness(\n explore.filesRead,\n task.expectedFiles,\n );\n const codefocusCompleteness = computeCompleteness(\n codefocus.filesReturned,\n task.expectedFiles,\n );\n\n // B2: Essential completeness — only essential files\n const exploreEssentialCompleteness = computeCompleteness(\n explore.filesRead,\n task.essentialFiles,\n );\n const codefocusEssentialCompleteness = computeCompleteness(\n codefocus.filesReturned,\n task.essentialFiles,\n );\n\n const explorePrecision = computePrecision(\n explore.filesRead,\n task.expectedFiles,\n );\n const codefocusPrecision = computePrecision(\n codefocus.filesReturned,\n task.expectedFiles,\n );\n\n const tokenSavingsRatio =\n codefocus.tokensOutput > 0\n ? explore.tokensConsumed / codefocus.tokensOutput\n : 0;\n\n results.push({\n task,\n explore,\n codefocus,\n exploreCompleteness,\n codefocusCompleteness,\n exploreEssentialCompleteness,\n codefocusEssentialCompleteness,\n explorePrecision,\n codefocusPrecision,\n tokenSavingsRatio,\n });\n }\n\n const avg = (arr: number[]) =>\n arr.length > 0 ? 
arr.reduce((a, b) => a + b, 0) / arr.length : 0;\n\n return {\n tasks: results,\n summary: {\n avgExploreTokens: avg(results.map((r) => r.explore.tokensConsumed)),\n avgCodefocusTokens: avg(results.map((r) => r.codefocus.tokensOutput)),\n avgTokenSavingsRatio: avg(results.map((r) => r.tokenSavingsRatio)),\n avgExploreRoundTrips: avg(results.map((r) => r.explore.roundTrips)),\n avgCodefocusRoundTrips: avg(\n results.map((r) => r.codefocus.roundTrips),\n ),\n avgExploreCompleteness: avg(results.map((r) => r.exploreCompleteness)),\n avgCodefocusCompleteness: avg(\n results.map((r) => r.codefocusCompleteness),\n ),\n avgExploreEssentialCompleteness: avg(\n results.map((r) => r.exploreEssentialCompleteness),\n ),\n avgCodefocusEssentialCompleteness: avg(\n results.map((r) => r.codefocusEssentialCompleteness),\n ),\n avgExplorePrecision: avg(results.map((r) => r.explorePrecision)),\n avgCodefocusPrecision: avg(results.map((r) => r.codefocusPrecision)),\n },\n exploreAgentTokens: exploreAgentTokens ?? {},\n };\n}\n\n// ── CLI entry point ─────────────────────────────────────────────────────\n\nexport function formatResults(results: BenchmarkResults): string {\n const lines: string[] = [];\n\n lines.push(\"# codefocus Benchmark — Spike 8\");\n lines.push(\"\");\n lines.push(\n \"Comparison of codefocus query vs. Explore agent pattern (grep/glob/read)\",\n );\n lines.push(`for ${results.tasks.length} representative code comprehension tasks.`);\n lines.push(\"\");\n lines.push(`Benchmark run: ${new Date().toISOString()}`);\n lines.push(\"\");\n\n // ── per-task table ──────────────────────────────────────────────────\n\n lines.push(\"## Per-task results\");\n lines.push(\"\");\n lines.push(\n \"| Task | Explore tokens | CF tokens | Savings | CF compl. | CF essential | CF prec. 
|\",\n );\n lines.push(\n \"|------|---------------:|----------:|--------:|----------:|-------------:|---------:|\",\n );\n\n for (const r of results.tasks) {\n const savings = r.tokenSavingsRatio.toFixed(1) + \"x\";\n lines.push(\n `| ${r.task.id} | ${r.explore.tokensConsumed} | ${r.codefocus.tokensOutput} | ${savings} | ${pct(r.codefocusCompleteness)} | ${pct(r.codefocusEssentialCompleteness)} | ${pct(r.codefocusPrecision)} |`,\n );\n }\n\n lines.push(\"\");\n\n // ── summary ─────────────────────────────────────────────────────────\n\n lines.push(\"## Summary\");\n lines.push(\"\");\n\n const s = results.summary;\n lines.push(\n \"| Metric | Explore (grep/glob/read) | codefocus query |\",\n );\n lines.push(\"|--------|------------------------:|----------------:|\");\n lines.push(\n `| Avg tokens consumed | ${Math.round(s.avgExploreTokens)} | ${Math.round(s.avgCodefocusTokens)} |`,\n );\n lines.push(\n `| Avg token savings | — | ${s.avgTokenSavingsRatio.toFixed(1)}x less |`,\n );\n lines.push(\n `| Avg round-trips | ${s.avgExploreRoundTrips.toFixed(1)} | ${s.avgCodefocusRoundTrips.toFixed(1)} |`,\n );\n lines.push(\n `| Avg completeness (total) | ${pct(s.avgExploreCompleteness)} | ${pct(s.avgCodefocusCompleteness)} |`,\n );\n lines.push(\n `| Avg completeness (essential) | ${pct(s.avgExploreEssentialCompleteness)} | ${pct(s.avgCodefocusEssentialCompleteness)} |`,\n );\n lines.push(\n `| Avg precision | ${pct(s.avgExplorePrecision)} | ${pct(s.avgCodefocusPrecision)} |`,\n );\n\n lines.push(\"\");\n\n // ── real explore agent tokens ───────────────────────────────────────\n\n if (Object.keys(results.exploreAgentTokens).length > 0) {\n lines.push(\"## Real Explore agent token counts\");\n lines.push(\"\");\n lines.push(\n \"These are actual token counts from running the Explore subagent\",\n );\n lines.push(\"(Claude Code Task tool, subagent_type=Explore).\");\n lines.push(\"\");\n lines.push(\"| Task | Explore agent tokens | codefocus tokens | Savings |\");\n 
lines.push(\"|------|---------------------:|-----------------:|--------:|\");\n\n for (const r of results.tasks) {\n const agentTokens = results.exploreAgentTokens[r.task.id];\n if (agentTokens !== undefined) {\n const cfTokens = r.codefocus.tokensOutput;\n const ratio =\n cfTokens > 0 ? (agentTokens / cfTokens).toFixed(1) + \"x\" : \"—\";\n lines.push(\n `| ${r.task.id} | ${agentTokens} | ${cfTokens} | ${ratio} |`,\n );\n }\n }\n\n lines.push(\"\");\n }\n\n // ── per-task detail ─────────────────────────────────────────────────\n\n lines.push(\"## Per-task detail\");\n lines.push(\"\");\n\n for (const r of results.tasks) {\n lines.push(`### ${r.task.id}`);\n lines.push(\"\");\n lines.push(`**Question:** ${r.task.question}`);\n lines.push(\"\");\n lines.push(\n `**Search terms:** ${r.task.searchTerms.map((t) => `\\`${t}\\``).join(\", \")}`,\n );\n lines.push(`**Query term:** \\`${r.task.queryTerm}\\``);\n lines.push(\"\");\n lines.push(\"**Explore pattern:**\");\n lines.push(`- Files read: ${r.explore.filesRead.join(\", \") || \"(none)\"}`);\n lines.push(`- Tokens consumed: ${r.explore.tokensConsumed}`);\n lines.push(`- Round-trips: ${r.explore.roundTrips}`);\n lines.push(\"\");\n lines.push(\"**codefocus query:**\");\n lines.push(\n `- Files returned: ${r.codefocus.filesReturned.join(\", \") || \"(none)\"}`,\n );\n lines.push(`- Tokens output: ${r.codefocus.tokensOutput}`);\n lines.push(`- Round-trips: ${r.codefocus.roundTrips}`);\n lines.push(\"\");\n lines.push(\n `**Essential files:** ${r.task.essentialFiles.join(\", \")}`,\n );\n if (r.task.bonusFiles.length > 0) {\n lines.push(\n `**Bonus files:** ${r.task.bonusFiles.join(\", \")}`,\n );\n }\n lines.push(\n `**Expected symbols:** ${r.task.expectedSymbols.map((s) => `\\`${s}\\``).join(\", \")}`,\n );\n lines.push(\"\");\n }\n\n // ── methodology ─────────────────────────────────────────────────────\n\n lines.push(\"## Methodology\");\n lines.push(\"\");\n lines.push(\"### Explore agent simulation\");\n 
lines.push(\"\");\n lines.push(\"The Explore agent simulation models a **conservative lower bound**\");\n lines.push(\"of what a real grep/glob/read agent would consume:\");\n lines.push(\"\");\n lines.push(\"1. For each search term, run `grep -rl` in `src/` (1 round-trip per grep)\");\n lines.push(\"2. Read each matching file in full (1 round-trip per file)\");\n lines.push(\"3. Count total tokens of all file content with cl100k_base\");\n lines.push(\"\");\n lines.push(\"Real agents typically consume **2-5x more** tokens because they:\");\n lines.push(\"- Read files speculatively that turn out to be irrelevant\");\n lines.push(\"- Re-read files across multiple turns\");\n lines.push(\"- Read test files, configs, and docs for context\");\n lines.push(\"- Include directory listing and file search overhead\");\n lines.push(\"- Receive tool-call framing tokens (prompts, system messages)\");\n lines.push(\"\");\n lines.push(\"### codefocus query\");\n lines.push(\"\");\n lines.push(\"A single `codefocus query <term> --budget 8000` call.\");\n lines.push(\"The output is pre-ranked, budget-constrained, and includes only\");\n lines.push(\"the relevant symbol ranges (not entire files).\");\n lines.push(\"\");\n lines.push(\"### Metrics\");\n lines.push(\"\");\n lines.push(\"- **Tokens:** cl100k_base token count of content consumed/output\");\n lines.push(\"- **Round-trips:** Number of tool calls (grep/read for explore, 1 for codefocus)\");\n lines.push(\"- **Completeness:** Fraction of expected files present in results\");\n lines.push(\"- **Precision:** Fraction of returned files that are in the expected set\");\n lines.push(\"- **Token savings:** Explore tokens / codefocus tokens\");\n\n return lines.join(\"\\n\");\n}\n\nfunction pct(n: number): string {\n return `${Math.round(n * 100)}%`;\n}\n\nexport async function runBenchmarkCommand(\n positional: string[],\n flags: Record<string, string | boolean>,\n): Promise<void> {\n if (flags.help) {\n console.log(`codefocus 
benchmark — Compare codefocus query vs. Explore agent pattern\n\nUsage: codefocus benchmark [options]\n\nOptions:\n --root <path> Root directory of indexed project (default: auto-detect)\n --json Output raw JSON instead of markdown\n --extended Run extended cross-codebase benchmarks against dependencies\n --emit-config Print the active scoring config as JSON and exit\n --help Show this help message`);\n return;\n }\n\n const { resolveRoot } = await import(\"./root.js\");\n const root = resolveRoot(flags.root);\n const cliPath = resolve(\n new URL(\".\", import.meta.url).pathname,\n \"../dist/cli.js\",\n );\n\n // --emit-config: print active scoring config and exit\n if (flags[\"emit-config\"]) {\n const { loadScoringConfig, serializeConfig } = await import(\"./config.js\");\n const config = loadScoringConfig(root);\n console.log(serializeConfig(config));\n return;\n }\n\n // --extended: run cross-codebase benchmarks\n if (flags.extended) {\n const {\n discoverSuites,\n runExtendedBenchmark,\n formatExtendedResults,\n } = await import(\"./extended-benchmark.js\");\n\n const suites = discoverSuites(root);\n if (suites.length === 0) {\n console.error(\n \"[benchmark] No dependency libraries found for extended benchmarks.\",\n );\n console.error(\n \"[benchmark] Run 'npm install' first to install dependencies.\",\n );\n process.exitCode = 1;\n return;\n }\n\n console.log(\n `[benchmark] Running extended benchmarks against ${suites.length} libraries ...`,\n );\n for (const s of suites) {\n console.log(`[benchmark] ${s.name} (${s.rootDir})`);\n }\n console.log(`[benchmark] CLI: ${cliPath}`);\n console.log(\"\");\n\n const results = runExtendedBenchmark(root, cliPath, suites);\n\n if (flags.json) {\n console.log(JSON.stringify(results, null, 2));\n } else {\n console.log(formatExtendedResults(results));\n }\n return;\n }\n\n // Default: run standard benchmarks against the project itself\n console.log(`[benchmark] Running ${TASKS.length} tasks against ${root} ...`);\n 
console.log(`[benchmark] CLI: ${cliPath}`);\n console.log(\"\");\n\n const results = runBenchmark(root, cliPath);\n\n if (flags.json) {\n console.log(JSON.stringify(results, null, 2));\n } else {\n console.log(formatResults(results));\n }\n}\n"],"mappings":";;;AAAA,SAAS,qBAAqB;AAC9B,SAAS,oBAAoB;AAC7B,SAAS,oBAAoB;AAC7B,SAAS,SAAS,gBAAgB;AAElC,IAAMA,WAAU,cAAc,YAAY,GAAG;AAC7C,IAAM,EAAE,YAAY,IAAIA,SAAQ,aAAa;AAgE7C,IAAM,QAAyB;AAAA,EAC7B;AAAA,IACE,IAAI;AAAA,IACJ,UACE;AAAA,IACF,aAAa,CAAC,aAAa,YAAY,SAAS;AAAA,IAChD,WAAW;AAAA,IACX,eAAe,CAAC,YAAY;AAAA,IAC5B,gBAAgB,CAAC,YAAY;AAAA,IAC7B,YAAY,CAAC;AAAA,IACb,iBAAiB,CAAC,aAAa,QAAQ,YAAY;AAAA,EACrD;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UAAU;AAAA,IACV,aAAa,CAAC,UAAU;AAAA,IACxB,WAAW;AAAA;AAAA,IAEX,eAAe,CAAC,cAAc,yBAAyB,kBAAkB,gBAAgB;AAAA,IACzF,gBAAgB,CAAC,yBAAyB,gBAAgB;AAAA,IAC1D,YAAY,CAAC,cAAc,gBAAgB;AAAA,IAC3C,iBAAiB,CAAC,YAAY,cAAc;AAAA,EAC9C;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UACE;AAAA,IACF,aAAa,CAAC,UAAU,cAAc,aAAa;AAAA,IACnD,WAAW;AAAA;AAAA,IAEX,eAAe,CAAC,yBAAyB,YAAY;AAAA,IACrD,gBAAgB,CAAC,uBAAuB;AAAA,IACxC,YAAY,CAAC,YAAY;AAAA,IACzB,iBAAiB,CAAC,YAAY,gBAAgB,aAAa;AAAA,EAC7D;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UAAU;AAAA,IACV,aAAa,CAAC,iBAAiB,cAAc;AAAA,IAC7C,WAAW;AAAA,IACX,eAAe,CAAC,WAAW;AAAA,IAC3B,gBAAgB,CAAC,WAAW;AAAA,IAC5B,YAAY,CAAC;AAAA,IACb,iBAAiB;AAAA,MACf;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,IACF;AAAA,EACF;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UACE;AAAA,IACF,aAAa,CAAC,eAAe,gBAAgB,cAAc;AAAA,IAC3D,WAAW;AAAA,IACX,eAAe,CAAC,gBAAgB;AAAA,IAChC,gBAAgB,CAAC,gBAAgB;AAAA,IACjC,YAAY,CAAC;AAAA,IACb,iBAAiB,CAAC,eAAe,cAAc;AAAA,EACjD;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UACE;AAAA,IACF,aAAa,CAAC,mBAAmB,mBAAmB;AAAA,IACpD,WAAW;AAAA,IACX,eAAe,CAAC,kBAAkB,WAAW;AAAA,IAC7C,gBAAgB,CAAC,kBAAkB,WAAW;AAAA,IAC9C,YAAY,CAAC;AAAA,IACb,iBAAiB,CAAC,mBAAmB,cAAc;AAAA,EACrD;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UACE;AAAA,IACF,aAAa,CAAC,eAAe,sBAAsB,WAAW;AAAA,IAC9D,WAAW;AAAA,IACX,eAAe,CAAC,eAAe;AAAA,IAC/B,gBAAgB,CAAC,eAAe;AAAA,IAChC,YAAY,CAAC;AAAA,IACb,iBAAiB,CAAC,eAAe,o
BAAoB;AAAA,EACvD;AAAA,EACA;AAAA,IACE,IAAI;AAAA,IACJ,UACE;AAAA,IACF,aAAa,CAAC,YAAY,YAAY;AAAA,IACtC,WAAW;AAAA,IACX,eAAe,CAAC,uBAAuB;AAAA,IACvC,gBAAgB,CAAC,uBAAuB;AAAA,IACxC,YAAY,CAAC;AAAA,IACb,iBAAiB,CAAC,UAAU;AAAA,EAC9B;AACF;AAkBA,SAAS,gBACP,SACA,MACe;AACf,QAAM,MAAM,YAAY,aAAa;AACrC,QAAM,eAAe,oBAAI,IAAY;AACrC,MAAI,aAAa;AAGjB,aAAW,QAAQ,KAAK,aAAa;AACnC;AACA,QAAI;AACF,YAAM,aAAa;AAAA,QACjB;AAAA,QACA,CAAC,OAAO,kBAAkB,MAAM,QAAQ,SAAS,KAAK,CAAC;AAAA,QACvD,EAAE,UAAU,SAAS,SAAS,IAAO;AAAA,MACvC;AACA,iBAAW,QAAQ,WAAW,KAAK,EAAE,MAAM,IAAI,GAAG;AAChD,YAAI,MAAM;AACR,uBAAa,IAAI,SAAS,SAAS,IAAI,CAAC;AAAA,QAC1C;AAAA,MACF;AAAA,IACF,QAAQ;AAAA,IAER;AAAA,EACF;AAGA,MAAI,cAAc;AAClB,MAAI,aAAa;AACjB,QAAM,YAAsB,CAAC;AAE7B,aAAW,YAAY,cAAc;AACnC;AACA,QAAI;AACF,YAAM,UAAU,aAAa,QAAQ,SAAS,QAAQ,GAAG,OAAO;AAChE,oBAAc,OAAO,WAAW,SAAS,OAAO;AAChD,qBAAe,IAAI,OAAO,OAAO,EAAE;AACnC,gBAAU,KAAK,QAAQ;AAAA,IACzB,QAAQ;AAAA,IAER;AAAA,EACF;AAEA,SAAO;AAAA,IACL,gBAAgB;AAAA,IAChB;AAAA,IACA;AAAA,IACA,WAAW;AAAA,EACb;AACF;AAIA,SAAS,kBACP,SACA,SACA,MACiB;AACjB,MAAI;AACJ,MAAI;AACF,aAAS;AAAA,MACP;AAAA,MACA,CAAC,SAAS,SAAS,KAAK,WAAW,UAAU,SAAS,YAAY,MAAM;AAAA,MACxE,EAAE,UAAU,SAAS,SAAS,IAAO;AAAA,IACvC;AAAA,EACF,SAAS,KAAU;AACjB,cAAU,IAAI,UAAU,OAAO,IAAI,UAAU;AAAA,EAC/C;AAGA,QAAM,aAAa,OAAO,MAAM,eAAe;AAC/C,QAAM,eAAe,aAAa,SAAS,WAAW,CAAC,GAAG,EAAE,IAAI;AAGhE,QAAM,cAAc,OAAO,SAAS,2BAA2B;AAC/D,QAAM,gBAAgB,CAAC,GAAG,WAAW,EAAE,IAAI,CAAC,MAAM,EAAE,CAAC,CAAC;AAEtD,SAAO;AAAA,IACL;AAAA,IACA,YAAY;AAAA,IACZ;AAAA,IACA;AAAA,EACF;AACF;AAIA,SAAS,oBACP,eACA,eACQ;AACR,MAAI,cAAc,WAAW,EAAG,QAAO;AACvC,QAAM,WAAW,IAAI,IAAI,aAAa;AACtC,QAAM,QAAQ,cAAc,OAAO,CAAC,MAAM,SAAS,IAAI,CAAC,CAAC,EAAE;AAC3D,SAAO,QAAQ,cAAc;AAC/B;AAEA,SAAS,iBACP,eACA,eACQ;AACR,MAAI,cAAc,WAAW,EAAG,QAAO;AACvC,QAAM,WAAW,IAAI,IAAI,aAAa;AACtC,QAAM,WAAW,cAAc,OAAO,CAAC,MAAM,SAAS,IAAI,CAAC,CAAC,EAAE;AAC9D,SAAO,WAAW,cAAc;AAClC;AAwBO,SAAS,aACd,SACA,SACA,oBACkB;AAClB,QAAM,UAAwB,CAAC;AAE/B,aAAW,QAAQ,OAAO;AACxB,UAAM,UAAU,gBAAgB,SAAS,IAAI;AAC7C,UAAM,YAAY,kBAAkB,SAAS,SAAS,IAAI;AAE1D,UAAM,sBAAsB;AAAA,MAC1B,QAAQ
;AAAA,MACR,KAAK;AAAA,IACP;AACA,UAAM,wBAAwB;AAAA,MAC5B,UAAU;AAAA,MACV,KAAK;AAAA,IACP;AAGA,UAAM,+BAA+B;AAAA,MACnC,QAAQ;AAAA,MACR,KAAK;AAAA,IACP;AACA,UAAM,iCAAiC;AAAA,MACrC,UAAU;AAAA,MACV,KAAK;AAAA,IACP;AAEA,UAAM,mBAAmB;AAAA,MACvB,QAAQ;AAAA,MACR,KAAK;AAAA,IACP;AACA,UAAM,qBAAqB;AAAA,MACzB,UAAU;AAAA,MACV,KAAK;AAAA,IACP;AAEA,UAAM,oBACJ,UAAU,eAAe,IACrB,QAAQ,iBAAiB,UAAU,eACnC;AAEN,YAAQ,KAAK;AAAA,MACX;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,MACA;AAAA,IACF,CAAC;AAAA,EACH;AAEA,QAAM,MAAM,CAAC,QACX,IAAI,SAAS,IAAI,IAAI,OAAO,CAAC,GAAG,MAAM,IAAI,GAAG,CAAC,IAAI,IAAI,SAAS;AAEjE,SAAO;AAAA,IACL,OAAO;AAAA,IACP,SAAS;AAAA,MACP,kBAAkB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,QAAQ,cAAc,CAAC;AAAA,MAClE,oBAAoB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,UAAU,YAAY,CAAC;AAAA,MACpE,sBAAsB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,iBAAiB,CAAC;AAAA,MACjE,sBAAsB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,QAAQ,UAAU,CAAC;AAAA,MAClE,wBAAwB;AAAA,QACtB,QAAQ,IAAI,CAAC,MAAM,EAAE,UAAU,UAAU;AAAA,MAC3C;AAAA,MACA,wBAAwB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,mBAAmB,CAAC;AAAA,MACrE,0BAA0B;AAAA,QACxB,QAAQ,IAAI,CAAC,MAAM,EAAE,qBAAqB;AAAA,MAC5C;AAAA,MACA,iCAAiC;AAAA,QAC/B,QAAQ,IAAI,CAAC,MAAM,EAAE,4BAA4B;AAAA,MACnD;AAAA,MACA,mCAAmC;AAAA,QACjC,QAAQ,IAAI,CAAC,MAAM,EAAE,8BAA8B;AAAA,MACrD;AAAA,MACA,qBAAqB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,gBAAgB,CAAC;AAAA,MAC/D,uBAAuB,IAAI,QAAQ,IAAI,CAAC,MAAM,EAAE,kBAAkB,CAAC;AAAA,IACrE;AAAA,IACA,oBAAoB,sBAAsB,CAAC;AAAA,EAC7C;AACF;AAIO,SAAS,cAAc,SAAmC;AAC/D,QAAM,QAAkB,CAAC;AAEzB,QAAM,KAAK,sCAAiC;AAC5C,QAAM,KAAK,EAAE;AACb,QAAM;AAAA,IACJ;AAAA,EACF;AACA,QAAM,KAAK,OAAO,QAAQ,MAAM,MAAM,2CAA2C;AACjF,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,mBAAkB,oBAAI,KAAK,GAAE,YAAY,CAAC,EAAE;AACvD,QAAM,KAAK,EAAE;AAIb,QAAM,KAAK,qBAAqB;AAChC,QAAM,KAAK,EAAE;AACb,QAAM;AAAA,IACJ;AAAA,EACF;AACA,QAAM;AAAA,IACJ;AAAA,EACF;AAEA,aAAW,KAAK,QAAQ,OAAO;AAC7B,UAAM,UAAU,EAAE,kBAAkB,QAAQ,CAAC,IAAI;AACjD,UAAM;AAAA,MACJ,KAAK,EAAE,KAAK,EAAE,MAAM,EAAE,QAAQ,cAAc,MAAM,EAAE,UAAU,YAAY,MAAM,OAAO,MAAM,IAAI,EAAE,qBAAqB,CAAC,MAAM,IAAI,EAAE,8BAA8B,CAAC,MAAM,IAAI,EA
AE,kBAAkB,CAAC;AAAA,IACrM;AAAA,EACF;AAEA,QAAM,KAAK,EAAE;AAIb,QAAM,KAAK,YAAY;AACvB,QAAM,KAAK,EAAE;AAEb,QAAM,IAAI,QAAQ;AAClB,QAAM;AAAA,IACJ;AAAA,EACF;AACA,QAAM,KAAK,wDAAwD;AACnE,QAAM;AAAA,IACJ,2BAA2B,KAAK,MAAM,EAAE,gBAAgB,CAAC,MAAM,KAAK,MAAM,EAAE,kBAAkB,CAAC;AAAA,EACjG;AACA,QAAM;AAAA,IACJ,kCAA6B,EAAE,qBAAqB,QAAQ,CAAC,CAAC;AAAA,EAChE;AACA,QAAM;AAAA,IACJ,uBAAuB,EAAE,qBAAqB,QAAQ,CAAC,CAAC,MAAM,EAAE,uBAAuB,QAAQ,CAAC,CAAC;AAAA,EACnG;AACA,QAAM;AAAA,IACJ,gCAAgC,IAAI,EAAE,sBAAsB,CAAC,MAAM,IAAI,EAAE,wBAAwB,CAAC;AAAA,EACpG;AACA,QAAM;AAAA,IACJ,oCAAoC,IAAI,EAAE,+BAA+B,CAAC,MAAM,IAAI,EAAE,iCAAiC,CAAC;AAAA,EAC1H;AACA,QAAM;AAAA,IACJ,qBAAqB,IAAI,EAAE,mBAAmB,CAAC,MAAM,IAAI,EAAE,qBAAqB,CAAC;AAAA,EACnF;AAEA,QAAM,KAAK,EAAE;AAIb,MAAI,OAAO,KAAK,QAAQ,kBAAkB,EAAE,SAAS,GAAG;AACtD,UAAM,KAAK,oCAAoC;AAC/C,UAAM,KAAK,EAAE;AACb,UAAM;AAAA,MACJ;AAAA,IACF;AACA,UAAM,KAAK,iDAAiD;AAC5D,UAAM,KAAK,EAAE;AACb,UAAM,KAAK,8DAA8D;AACzE,UAAM,KAAK,8DAA8D;AAEzE,eAAW,KAAK,QAAQ,OAAO;AAC7B,YAAM,cAAc,QAAQ,mBAAmB,EAAE,KAAK,EAAE;AACxD,UAAI,gBAAgB,QAAW;AAC7B,cAAM,WAAW,EAAE,UAAU;AAC7B,cAAM,QACJ,WAAW,KAAK,cAAc,UAAU,QAAQ,CAAC,IAAI,MAAM;AAC7D,cAAM;AAAA,UACJ,KAAK,EAAE,KAAK,EAAE,MAAM,WAAW,MAAM,QAAQ,MAAM,KAAK;AAAA,QAC1D;AAAA,MACF;AAAA,IACF;AAEA,UAAM,KAAK,EAAE;AAAA,EACf;AAIA,QAAM,KAAK,oBAAoB;AAC/B,QAAM,KAAK,EAAE;AAEb,aAAW,KAAK,QAAQ,OAAO;AAC7B,UAAM,KAAK,OAAO,EAAE,KAAK,EAAE,EAAE;AAC7B,UAAM,KAAK,EAAE;AACb,UAAM,KAAK,iBAAiB,EAAE,KAAK,QAAQ,EAAE;AAC7C,UAAM,KAAK,EAAE;AACb,UAAM;AAAA,MACJ,qBAAqB,EAAE,KAAK,YAAY,IAAI,CAAC,MAAM,KAAK,CAAC,IAAI,EAAE,KAAK,IAAI,CAAC;AAAA,IAC3E;AACA,UAAM,KAAK,qBAAqB,EAAE,KAAK,SAAS,IAAI;AACpD,UAAM,KAAK,EAAE;AACb,UAAM,KAAK,sBAAsB;AACjC,UAAM,KAAK,iBAAiB,EAAE,QAAQ,UAAU,KAAK,IAAI,KAAK,QAAQ,EAAE;AACxE,UAAM,KAAK,sBAAsB,EAAE,QAAQ,cAAc,EAAE;AAC3D,UAAM,KAAK,kBAAkB,EAAE,QAAQ,UAAU,EAAE;AACnD,UAAM,KAAK,EAAE;AACb,UAAM,KAAK,sBAAsB;AACjC,UAAM;AAAA,MACJ,qBAAqB,EAAE,UAAU,cAAc,KAAK,IAAI,KAAK,QAAQ;AAAA,IACvE;AACA,UAAM,KAAK,oBAAoB,EAAE,UAAU,YAAY,EAAE;AACzD,UAAM,KAAK,kBAAkB,EAAE,UAAU,UAAU,EAAE;AACrD,UAAM,KAAK,EAAE;AACb,UAAM;AAAA,MACJ,w
BAAwB,EAAE,KAAK,eAAe,KAAK,IAAI,CAAC;AAAA,IAC1D;AACA,QAAI,EAAE,KAAK,WAAW,SAAS,GAAG;AAChC,YAAM;AAAA,QACJ,oBAAoB,EAAE,KAAK,WAAW,KAAK,IAAI,CAAC;AAAA,MAClD;AAAA,IACF;AACA,UAAM;AAAA,MACJ,yBAAyB,EAAE,KAAK,gBAAgB,IAAI,CAACC,OAAM,KAAKA,EAAC,IAAI,EAAE,KAAK,IAAI,CAAC;AAAA,IACnF;AACA,UAAM,KAAK,EAAE;AAAA,EACf;AAIA,QAAM,KAAK,gBAAgB;AAC3B,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,8BAA8B;AACzC,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,oEAAoE;AAC/E,QAAM,KAAK,oDAAoD;AAC/D,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,2EAA2E;AACtF,QAAM,KAAK,4DAA4D;AACvE,QAAM,KAAK,4DAA4D;AACvE,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,kEAAkE;AAC7E,QAAM,KAAK,2DAA2D;AACtE,QAAM,KAAK,uCAAuC;AAClD,QAAM,KAAK,kDAAkD;AAC7D,QAAM,KAAK,sDAAsD;AACjE,QAAM,KAAK,+DAA+D;AAC1E,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,qBAAqB;AAChC,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,uDAAuD;AAClE,QAAM,KAAK,iEAAiE;AAC5E,QAAM,KAAK,gDAAgD;AAC3D,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,aAAa;AACxB,QAAM,KAAK,EAAE;AACb,QAAM,KAAK,kEAAkE;AAC7E,QAAM,KAAK,kFAAkF;AAC7F,QAAM,KAAK,mEAAmE;AAC9E,QAAM,KAAK,0EAA0E;AACrF,QAAM,KAAK,wDAAwD;AAEnE,SAAO,MAAM,KAAK,IAAI;AACxB;AAEA,SAAS,IAAI,GAAmB;AAC9B,SAAO,GAAG,KAAK,MAAM,IAAI,GAAG,CAAC;AAC/B;AAEA,eAAsB,oBACpB,YACA,OACe;AACf,MAAI,MAAM,MAAM;AACd,YAAQ,IAAI;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,2CAS2B;AACvC;AAAA,EACF;AAEA,QAAM,EAAE,YAAY,IAAI,MAAM,OAAO,oBAAW;AAChD,QAAM,OAAO,YAAY,MAAM,IAAI;AACnC,QAAM,UAAU;AAAA,IACd,IAAI,IAAI,KAAK,YAAY,GAAG,EAAE;AAAA,IAC9B;AAAA,EACF;AAGA,MAAI,MAAM,aAAa,GAAG;AACxB,UAAM,EAAE,mBAAmB,gBAAgB,IAAI,MAAM,OAAO,sBAAa;AACzE,UAAM,SAAS,kBAAkB,IAAI;AACrC,YAAQ,IAAI,gBAAgB,MAAM,CAAC;AACnC;AAAA,EACF;AAGA,MAAI,MAAM,UAAU;AAClB,UAAM;AAAA,MACJ;AAAA,MACA;AAAA,MACA;AAAA,IACF,IAAI,MAAM,OAAO,kCAAyB;AAE1C,UAAM,SAAS,eAAe,IAAI;AAClC,QAAI,OAAO,WAAW,GAAG;AACvB,cAAQ;AAAA,QACN;AAAA,MACF;AACA,cAAQ;AAAA,QACN;AAAA,MACF;AACA,cAAQ,WAAW;AACnB;AAAA,IACF;AAEA,YAAQ;AAAA,MACN,mDAAmD,OAAO,MAAM;AAAA,IAClE;AACA,eAAW,KAAK,QAAQ;AACtB,cAAQ,IAAI,iBAAiB,EAAE,IAAI,KAAK,EAAE,OAAO,GAAG;AAAA,IACtD;AACA,YAAQ,IAAI,oBAAoB,OAAO,EAAE;AACzC,YAAQ,IAAI,EAAE;AAEd,UAAMC,WAAU,qBAAqB,MAAM,SAAS,MAAM;AAE1D,QAAI,MAAM,MAAM;AA
Cd,cAAQ,IAAI,KAAK,UAAUA,UAAS,MAAM,CAAC,CAAC;AAAA,IAC9C,OAAO;AACL,cAAQ,IAAI,sBAAsBA,QAAO,CAAC;AAAA,IAC5C;AACA;AAAA,EACF;AAGA,UAAQ,IAAI,uBAAuB,MAAM,MAAM,kBAAkB,IAAI,MAAM;AAC3E,UAAQ,IAAI,oBAAoB,OAAO,EAAE;AACzC,UAAQ,IAAI,EAAE;AAEd,QAAM,UAAU,aAAa,MAAM,OAAO;AAE1C,MAAI,MAAM,MAAM;AACd,YAAQ,IAAI,KAAK,UAAU,SAAS,MAAM,CAAC,CAAC;AAAA,EAC9C,OAAO;AACL,YAAQ,IAAI,cAAc,OAAO,CAAC;AAAA,EACpC;AACF;","names":["require","s","results"]}
|