e2e-ai 1.2.0 → 1.3.0

package/README.md CHANGED
@@ -2,6 +2,8 @@
 
  AI-powered E2E test automation pipeline. Takes you from a manual browser recording all the way to a stable, documented Playwright test — with optional Zephyr integration.
 
+ Includes a **codebase scanner** that builds a QA map of your application's features, workflows, and test scenarios using AI analysis.
+
  ## Quick Start
 
  ```bash
@@ -17,26 +19,40 @@ npx e2e-ai --help
  # Full pipeline for an issue
  npx e2e-ai run --key PROJ-101
 
- # Individual commands
- npx e2e-ai scenario --key PROJ-101
- npx e2e-ai generate --key PROJ-101
- npx e2e-ai qa --key PROJ-101
+ # Scan your codebase and generate a QA map
+ npx e2e-ai scan
+ npx e2e-ai analyze
  ```
 
  ## Architecture
 
+ e2e-ai has two main workflows:
+
+ ### Test Pipeline
+
  ```
- AI Agents (LLM)
- |
- record -> transcribe -> scenario -> generate -> refine -> test -> heal -> qa
- | | | | | | | |
- codegen whisper transcript scenario refactor playwright self- QA doc
- + audio + merge + scenario agent agent runner healing + export
- agent agent agent agent
+ record -> transcribe -> scenario -> generate -> refine -> test -> heal -> qa
+ | | | | | | | |
+ codegen whisper transcript scenario refactor playwright self- QA doc
+ + audio + merge + scenario agent agent runner healing + export
+ agent agent agent agent
  ```
 
  Each step produces an artifact that feeds the next. You can run the full pipeline or any step individually.
 
+ ### Scanner Pipeline
+
+ ```
+ scan -> analyze -> push
+ | | |
+ AST feature remote
+ scan analyzer API
+ + scenario upload
+ planner
+ ```
+
+ Scans your codebase structure (routes, components, hooks), uses AI to identify features and workflows, generates test scenarios, and optionally pushes the QA map to a remote API.
+
  ## Commands
 
  ### `init` - Project Setup
@@ -50,6 +66,8 @@ npx e2e-ai init
  npx e2e-ai init --non-interactive
  ```
 
+ ---
+
  ### `record` - Browser Recording
 
  Launches Playwright codegen with optional voice recording and trace capture.
@@ -198,6 +216,115 @@ npx e2e-ai run --key PROJ-101 --from scenario --skip test,heal
 
  Each step's output feeds the next via `PipelineContext`. If a step fails, the pipeline stops (unless the step is marked `nonBlocking`).
 
+ ---
+
+ ### `scan` - Codebase AST Scanner
+
+ Scans your codebase and extracts structural information: routes, components, hooks, imports, and dependencies.
+
+ ```bash
+ # Scan using config defaults (scans src/)
+ npx e2e-ai scan
+
+ # Scan a specific directory
+ npx e2e-ai scan --scan-dir app
+
+ # Write output to a custom path
+ npx e2e-ai scan --output my-scan.json
+
+ # Disable file-level caching
+ npx e2e-ai scan --no-cache
+ ```
+
+ **What it does**:
+ 1. Collects all `.ts`, `.tsx`, `.js`, `.jsx` files matching configured include/exclude patterns
+ 2. Parses each file with a regex-based TypeScript parser to extract imports, exports, components, and hooks
+ 3. Extracts routes from Next.js App Router (`app/`) and Pages Router (`pages/`) conventions
+ 4. Caches results per-file (by content hash) for fast incremental re-scans
+
+ **Output**: `ASTScanResult` JSON containing:
+ - `files` — all scanned files with imports, exports, line counts
+ - `routes` — extracted routes with paths, methods, dynamic segments, layout files
+ - `components` — React components with props, hook calls, export status
+ - `hooks` — custom hook definitions with dependencies
+ - `dependencies` — import graph edges between files
+
+ Default output location: `.e2e-ai/ast-scan.json`
+
+ **Options**:
+
+ | Flag | Description | Default |
+ |------|-------------|---------|
+ | `--output <file>` | Write output to specific file | `.e2e-ai/ast-scan.json` |
+ | `--scan-dir <dir>` | Directory to scan | from config (`src`) |
+ | `--no-cache` | Disable file-level caching | cache enabled |
+
+ ### `analyze` - AI QA Map Generation
+
+ Analyzes an AST scan result using AI to identify features, workflows, components, and test scenarios.
+
+ ```bash
+ # Analyze the default scan output
+ npx e2e-ai analyze
+
+ # Analyze a specific scan file
+ npx e2e-ai analyze path/to/ast-scan.json
+
+ # Only extract features (skip scenario generation)
+ npx e2e-ai analyze --skip-scenarios
+
+ # Write output to a custom path
+ npx e2e-ai analyze --output my-qa-map.json
+ ```
+
+ **What it does** — two-stage AI pipeline:
+
+ 1. **Stage 2 — Feature Analysis** (`feature-analyzer-agent`): Receives the AST scan and groups routes, components, and hooks into logical features and user-facing workflows. Identifies component roles (form, display, navigation, modal, layout, feedback) and maps API routes to workflow steps.
+
+ 2. **Stage 3 — Scenario Planning** (`scenario-planner-agent`): Receives the feature map and generates test scenarios for each workflow. Covers happy paths, validation, error handling, edge cases, and permission checks. Assigns priorities (critical/high/medium/low) and links scenarios to specific workflow steps.
+
+ **Output**: `QAMapV2Payload` JSON containing:
+ - `features` — logical application features with routes and source files
+ - `workflows` — user journeys with ordered steps, API calls, conditional branches
+ - `components` — UI components with types, props, workflow references
+ - `scenarios` — test scenarios with steps, preconditions, expected outcomes, priorities
+
+ Default output location: `.e2e-ai/qa-map.json`
+
+ **Options**:
+
+ | Flag | Description | Default |
+ |------|-------------|---------|
+ | `--output <file>` | Write QA map to specific file | `.e2e-ai/qa-map.json` |
+ | `--skip-scenarios` | Only run feature analysis | both stages |
+
+ ### `push` - Push QA Map to API
+
+ Pushes a generated QA map to a remote API endpoint.
+
+ ```bash
+ # Push the default QA map
+ npx e2e-ai push
+
+ # Push a specific file
+ npx e2e-ai push path/to/qa-map.json
+
+ # Associate with a git commit
+ npx e2e-ai push --commit-sha abc123
+ ```
+
+ **What it does**: Sends the `QAMapV2Payload` JSON as a POST request to the configured API endpoint with Bearer token authentication. The API responds with version info and auto-linking statistics.
+
+ **Requires** `push.apiUrl` and `push.apiKey` in config (or `E2E_AI_API_URL` / `E2E_AI_API_KEY` env vars).
+
+ **Options**:
+
+ | Flag | Description | Default |
+ |------|-------------|---------|
+ | `--commit-sha <sha>` | Git commit SHA to associate | - |
+
+ ---
+
  ## Typical Workflows
 
  ### Workflow 1: Full Recording Pipeline
@@ -254,6 +381,32 @@ npx e2e-ai scenario --key PROJ-101
  npx e2e-ai run --key PROJ-101 --from generate
  ```
 
+ ### Workflow 5: Codebase Scanner Pipeline
+
+ Generate a full QA map from your codebase structure:
+
+ ```bash
+ # Step 1: Scan the codebase AST
+ npx e2e-ai scan
+
+ # Step 2: AI analysis → features, workflows, scenarios
+ npx e2e-ai analyze
+
+ # Step 3: Push to remote API (optional)
+ npx e2e-ai push
+ ```
+
+ Or run stages selectively:
+
+ ```bash
+ # Just extract features (no scenarios)
+ npx e2e-ai analyze --skip-scenarios
+
+ # Re-scan after code changes (cached, fast)
+ npx e2e-ai scan
+ npx e2e-ai analyze
+ ```
+
  ## Configuration
 
  Run `npx e2e-ai init` to generate `e2e-ai.config.ts`:
@@ -262,11 +415,56 @@ Run `npx e2e-ai init` to generate `e2e-ai.config.ts`:
  import { defineConfig } from 'e2e-ai/config';
 
  export default defineConfig({
+   // --- General ---
    inputSource: 'none', // 'jira' | 'linear' | 'none'
    outputTarget: 'markdown', // 'zephyr' | 'markdown' | 'both'
+
+   // --- LLM ---
+   llm: {
+     provider: 'openai', // 'openai' | 'anthropic'
+     model: null, // null = use agent defaults (gpt-4o)
+     agentModels: {}, // per-agent overrides: { 'scenario-agent': 'gpt-4o-mini' }
+   },
+
+   // --- Paths ---
+   paths: {
+     tests: 'e2e/tests',
+     scenarios: 'e2e/scenarios',
+     recordings: 'e2e/recordings',
+     transcripts: 'e2e/transcripts',
+     traces: 'e2e/traces',
+     workingDir: '.e2e-ai',
+     qaOutput: 'qa',
+   },
+
+   // --- Scanner ---
+   scanner: {
+     scanDir: 'src', // root directory to scan
+     include: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'],
+     exclude: [
+       '**/node_modules/**', '**/dist/**', '**/build/**',
+       '**/.next/**', '**/*.test.*', '**/*.spec.*',
+       '**/__tests__/**', '**/*.d.ts',
+     ],
+     cacheDir: '.e2e-ai/scan-cache', // file-level parse cache
+   },
+
+   // --- Push (remote QA map API) ---
+   push: {
+     apiUrl: null, // or set E2E_AI_API_URL env var
+     apiKey: null, // or set E2E_AI_API_KEY env var
+   },
+
+   // --- Recording ---
    voice: { enabled: true },
-   llm: { provider: 'openai' },
-   contextFile: 'e2e-ai.context.md',
+   playwright: {
+     browser: 'chromium',
+     timeout: 120_000,
+     retries: 0,
+     traceMode: 'on',
+   },
+
+   contextFile: '.e2e-ai/context.md',
  });
  ```
 
@@ -298,11 +496,13 @@ AI_PROVIDER=openai # openai | anthropic
  AI_MODEL=gpt-4o # Model override
  ANTHROPIC_API_KEY=sk-ant-... # Required if AI_PROVIDER=anthropic
  BASE_URL=https://... # Your application URL
+ E2E_AI_API_URL=https://... # Remote API for push command
+ E2E_AI_API_KEY=key-... # API key for push command
  ```
 
  ## AI Agents
 
- Six specialized agents live in `agents/*.md`. Each has:
+ Eight specialized agents live in `agents/*.md`. Each has:
  - **YAML frontmatter**: model, max_tokens, temperature
  - **System prompt**: role + context
  - **Input/Output schemas**: what the agent receives and must produce
@@ -317,8 +517,10 @@ Six specialized agents live in `agents/*.md`. Each has:
  | `refactor-agent` | test + project context | Improved test file | `refine` |
  | `self-healing-agent` | failing test + error output | Diagnosis + patched test | `heal` |
  | `qa-testcase-agent` | test + scenario + issue data | QA markdown + test case JSON | `qa` |
+ | `feature-analyzer-agent` | AST scan result | Features, workflows, components JSON | `analyze` |
+ | `scenario-planner-agent` | Features + workflows | Complete QA map with scenarios JSON | `analyze` |
 
- You can customize agent behavior by editing the `.md` files directly. The frontmatter `model` field is the default model for that agent (overridable via `--model`).
+ You can customize agent behavior by editing the `.md` files directly. The frontmatter `model` field is the default model for that agent (overridable via `--model` or `config.llm.agentModels`).
 
  ## Output Directory Structure
 
@@ -331,7 +533,30 @@ e2e/
 
  qa/ # QA documentation .md files
 
- .e2e-ai/<KEY>/ # per-issue working dir: codegen, recordings/, intermediate files
+ .e2e-ai/
+   <KEY>/ # per-issue working dir: codegen, recordings/, intermediate files
+   ast-scan.json # scan command output
+   qa-map.json # analyze command output
+   scan-cache/ # file-level parse cache (gitignored)
+ ```
+
+ ## Library API
+
+ e2e-ai also exports types and config helpers for programmatic use:
+
+ ```typescript
+ import { defineConfig, loadConfig, getProjectRoot } from 'e2e-ai';
+ import type {
+   E2eAiConfig,
+   ResolvedConfig,
+   ASTScanResult,
+   QAMapV2Payload,
+   FeatureV2,
+   WorkflowV2,
+   ScenarioV2,
+   ComponentV2,
+   PushResult,
+ } from 'e2e-ai';
  ```
 
  ## License
package/agents/feature-analyzer-agent.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ agent: feature-analyzer-agent
+ version: "1.0"
+ model: gpt-4o
+ max_tokens: 8192
+ temperature: 0.1
+ ---
+
+ # System Prompt
+
+ You are a QA feature analyst. You receive an AST scan result (routes, components, hooks, dependencies) of a web application and identify the logical features, user-facing workflows, and reusable components.
+
+ Your job is to transform raw structural data into a high-level QA map that captures what the application does from a user's perspective.
+
+ ## Input Schema
+
+ You receive a JSON object with:
+ - `ast`: object - The ASTScanResult from a codebase scan (files, routes, components, hooks, dependencies)
+ - `existingMap`: object (optional) - A previous QAMapV2 to update incrementally
+
+ ## Output Schema
+
+ Respond with JSON only (no markdown fences, no extra text):
+
+ ```json
+ {
+   "features": [
+     {
+       "id": "feat:<kebab-case>",
+       "name": "Human-readable feature name",
+       "description": "What this feature does from user perspective",
+       "routes": ["/path1", "/path2"],
+       "workflowIds": ["wf:<kebab>"],
+       "sourceFiles": ["src/path/file.ts"]
+     }
+   ],
+   "workflows": [
+     {
+       "id": "wf:<kebab-case>",
+       "name": "Human-readable workflow name",
+       "featureId": "feat:<parent>",
+       "type": "navigation|crud|multi-step|configuration|search-filter",
+       "preconditions": ["User is authenticated"],
+       "steps": [
+         {
+           "id": "step:<workflow>:<index>",
+           "order": 1,
+           "description": "What the user does",
+           "componentIds": ["comp:<kebab>"],
+           "apiCalls": ["POST /api/endpoint"],
+           "conditionalBranches": []
+         }
+       ],
+       "componentIds": ["comp:<kebab>"]
+     }
+   ],
+   "components": [
+     {
+       "id": "comp:<kebab-case>",
+       "name": "ComponentName",
+       "type": "form|display|navigation|modal|layout|feedback",
+       "sourceFiles": ["src/path/Component.tsx"],
+       "props": ["prop1", "prop2"],
+       "referencedByWorkflows": ["wf:<kebab>"]
+     }
+   ]
+ }
+ ```
+
+ ## Rules
+
+ 1. Group routes and components into logical features based on shared URL paths, layouts, and data dependencies
+ 2. A feature represents a user-facing capability (e.g., "User Management", "Dashboard", "Settings")
+ 3. A workflow represents a specific user journey within a feature (e.g., "Create new user", "Filter dashboard by date")
+ 4. Workflow type must be one of: navigation, crud, multi-step, configuration, search-filter
+ 5. Identify components by their role: form (inputs), display (data rendering), navigation, modal, layout, feedback (toasts/alerts)
+ 6. Link components to workflows based on which route/page uses them (infer from imports and hook usage)
+ 7. API routes should be mapped as apiCalls within workflow steps
+ 8. Dynamic routes (containing `[param]`) indicate CRUD or detail-view workflows
+ 9. Prefer fewer, well-defined features over many granular ones. Aim for 3-15 features per app.
+ 10. Output valid JSON only, no markdown code fences or surrounding text
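The agent's output schema above implies some invariants a consumer can check mechanically: `feat:`/`wf:`/`comp:` id prefixes, and feature `workflowIds` that resolve to actual workflows. A hypothetical consumer-side check (`validateQaMap` is illustrative, not part of the package) might look like:

```javascript
// Hypothetical validation of feature-analyzer-agent output conventions:
// id prefixes plus feature -> workflow cross-references. Not e2e-ai's API.
function validateQaMap(map) {
  const errors = [];
  const wfIds = new Set(map.workflows.map((w) => w.id));
  for (const f of map.features) {
    if (!f.id.startsWith("feat:")) errors.push(`bad feature id: ${f.id}`);
    for (const wid of f.workflowIds) {
      if (!wid.startsWith("wf:")) errors.push(`bad workflow id: ${wid}`);
      else if (!wfIds.has(wid)) errors.push(`dangling workflow ref: ${wid}`);
    }
  }
  for (const c of map.components) {
    if (!c.id.startsWith("comp:")) errors.push(`bad component id: ${c.id}`);
  }
  return errors; // empty array means the map is consistent
}

const sample = {
  features: [{ id: "feat:user-management", workflowIds: ["wf:create-user"] }],
  workflows: [{ id: "wf:create-user" }],
  components: [{ id: "comp:user-form" }],
};
const errors = validateQaMap(sample);
```

Since LLM output can drift from the schema, a cheap structural check like this before accepting a generated map is a reasonable guard.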
package/agents/scenario-planner-agent.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ agent: scenario-planner-agent
+ version: "1.0"
+ model: gpt-4o
+ max_tokens: 8192
+ temperature: 0.2
+ ---
+
+ # System Prompt
+
+ You are a QA scenario planner. You receive a QA map (features, workflows, components) and generate test scenarios for each workflow. Scenarios cover happy paths, edge cases, validation, error handling, and permission checks.
+
+ Your output completes the QA map by adding the scenarios array, producing a full QAMapV2Payload ready for use.
+
+ ## Input Schema
+
+ You receive a JSON object with:
+ - `features`: array - Feature definitions from feature-analyzer-agent
+ - `workflows`: array - Workflow definitions with steps
+ - `components`: array - Component definitions
+
+ ## Output Schema
+
+ Respond with JSON only (no markdown fences, no extra text). Return the complete payload including the original features/workflows/components plus a new `scenarios` array:
+
+ ```json
+ {
+   "features": [...],
+   "workflows": [...],
+   "components": [...],
+   "scenarios": [
+     {
+       "id": "sc:<workflow-id>:<index>",
+       "workflowId": "wf:<kebab>",
+       "featureId": "feat:<kebab>",
+       "name": "Descriptive scenario name",
+       "description": "What this scenario verifies",
+       "category": "happy-path|permission|validation|error|edge-case|precondition",
+       "preconditions": ["User is authenticated", "Data exists"],
+       "steps": [
+         {
+           "order": 1,
+           "action": "What the user does",
+           "expectedResult": "What should happen"
+         }
+       ],
+       "expectedOutcome": "Final expected state",
+       "componentIds": ["comp:<kebab>"],
+       "workflowStepIds": ["step:<workflow>:<index>"],
+       "priority": "critical|high|medium|low"
+     }
+   ]
+ }
+ ```
+
+ ## Rules
+
+ 1. Generate at least one happy-path scenario per workflow
+ 2. For CRUD workflows: test create, read, update, delete + validation failures
+ 3. For multi-step workflows: test complete flow + abandonment at each step
+ 4. For forms: test validation (empty fields, invalid input, boundary values)
+ 5. For workflows with conditionalBranches: generate one scenario per branch
+ 6. Priority mapping: happy-path critical flows = critical, validation = high, edge-cases = medium, precondition checks = low
+ 7. Each scenario should have 2-8 steps, each with a verifiable expectedResult
+ 8. Scenario names should be descriptive: "[Feature]: [what is being tested]"
+ 9. Link scenarios to workflow steps via workflowStepIds
+ 10. Aim for 3-8 scenarios per workflow depending on complexity
+ 11. Output valid JSON only, no markdown code fences or surrounding text
@@ -0,0 +1,165 @@
+ import {
+   getPackageRoot,
+   getProjectRoot
+ } from "./cli-cqabyzv3.js";
+
+ // src/agents/loadAgent.ts
+ import { readFileSync, existsSync } from "node:fs";
+ import { join } from "node:path";
+ function loadAgent(agentName, config) {
+   const localPath = join(getProjectRoot(), ".e2e-ai", "agents", `${agentName}.md`);
+   const packagePath = join(getPackageRoot(), "agents", `${agentName}.md`);
+   const filePath = existsSync(localPath) ? localPath : packagePath;
+   let content;
+   try {
+     content = readFileSync(filePath, "utf-8");
+   } catch {
+     throw new Error(`Agent file not found: ${filePath}`);
+   }
+   const { frontmatter, body } = parseFrontmatter(content);
+   const agentConfig = extractConfig(frontmatter);
+   let systemPrompt = body;
+   if (config) {
+     const contextPath = join(getProjectRoot(), ".e2e-ai", "context.md");
+     if (existsSync(contextPath)) {
+       const projectContext = readFileSync(contextPath, "utf-8").trim();
+       if (projectContext) {
+         systemPrompt = `${body}
+
+ ## Project Context
+
+ ${projectContext}`;
+       }
+     }
+     if (config.llm.agentModels[agentName]) {
+       agentConfig.model = config.llm.agentModels[agentName];
+     }
+   }
+   const sections = parseSections(body);
+   return {
+     name: frontmatter.agent ?? agentName,
+     systemPrompt,
+     inputSchema: sections["Input Schema"],
+     outputSchema: sections["Output Schema"],
+     rules: sections["Rules"],
+     example: sections["Example"],
+     config: agentConfig
+   };
+ }
+ function parseFrontmatter(content) {
+   const match = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
+   if (!match)
+     return { frontmatter: {}, body: content };
+   const frontmatter = {};
+   for (const line of match[1].split(`
+ `)) {
+     const colonIdx = line.indexOf(":");
+     if (colonIdx === -1)
+       continue;
+     const key = line.slice(0, colonIdx).trim();
+     let value = line.slice(colonIdx + 1).trim();
+     if (value.startsWith('"') && value.endsWith('"'))
+       value = value.slice(1, -1);
+     if (value === "true")
+       value = true;
+     if (value === "false")
+       value = false;
+     if (!isNaN(Number(value)) && value !== "")
+       value = Number(value);
+     frontmatter[key] = value;
+   }
+   return { frontmatter, body: match[2] };
+ }
+ function extractConfig(frontmatter) {
+   return {
+     model: frontmatter.model,
+     maxTokens: frontmatter.max_tokens ?? 4096,
+     temperature: frontmatter.temperature ?? 0.2
+   };
+ }
+ function parseSections(body) {
+   const sections = {};
+   const headingRegex = /^##\s+(.+)$/gm;
+   const headings = [];
+   let match;
+   while ((match = headingRegex.exec(body)) !== null) {
+     headings.push({ title: match[1].trim(), index: match.index });
+   }
+   const systemMatch = body.match(/^#\s+System Prompt\n([\s\S]*?)(?=\n##\s|$)/m);
+   if (systemMatch) {
+     sections["System Prompt"] = systemMatch[1].trim();
+   }
+   for (let i = 0; i < headings.length; i++) {
+     const start = headings[i].index + body.slice(headings[i].index).indexOf(`
+ `) + 1;
+     const end = i + 1 < headings.length ? headings[i + 1].index : body.length;
+     sections[headings[i].title] = body.slice(start, end).trim();
+   }
+   return sections;
+ }
+
+ // src/utils/scan.ts
+ import { readdirSync, existsSync as existsSync2, readFileSync as readFileSync2 } from "node:fs";
+ import { join as join2, relative } from "node:path";
+ async function scanCodebase(root) {
+   const scan = {
+     testFiles: [],
+     configFiles: [],
+     fixtureFiles: [],
+     featureFiles: [],
+     tsconfigPaths: {},
+     playwrightConfig: null,
+     sampleTestContent: null
+   };
+   function walk(dir, depth = 0) {
+     if (depth > 5)
+       return [];
+     const files = [];
+     try {
+       for (const entry of readdirSync(dir, { withFileTypes: true })) {
+         if (entry.name.startsWith(".") || entry.name === "node_modules" || entry.name === "dist")
+           continue;
+         const full = join2(dir, entry.name);
+         if (entry.isDirectory()) {
+           files.push(...walk(full, depth + 1));
+         } else {
+           files.push(full);
+         }
+       }
+     } catch {}
+     return files;
+   }
+   const allFiles = walk(root);
+   for (const file of allFiles) {
+     const rel = relative(root, file);
+     if (rel.endsWith(".test.ts") || rel.endsWith(".spec.ts")) {
+       scan.testFiles.push(rel);
+       if (!scan.sampleTestContent && scan.testFiles.length <= 3) {
+         try {
+           scan.sampleTestContent = readFileSync2(file, "utf-8").slice(0, 3000);
+         } catch {}
+       }
+     }
+     if (rel.endsWith(".feature.ts"))
+       scan.featureFiles.push(rel);
+     if (rel.includes("fixture") && rel.endsWith(".ts"))
+       scan.fixtureFiles.push(rel);
+     if (rel === "playwright.config.ts" || rel === "playwright.config.js")
+       scan.playwrightConfig = rel;
+     if (rel === "tsconfig.json" || rel.endsWith("/tsconfig.json")) {
+       try {
+         const tsconfig = JSON.parse(readFileSync2(file, "utf-8"));
+         if (tsconfig.compilerOptions?.paths) {
+           scan.tsconfigPaths = { ...scan.tsconfigPaths, ...tsconfig.compilerOptions.paths };
+         }
+       } catch {}
+     }
+   }
+   for (const name of ["playwright.config.ts", "vitest.config.ts", "jest.config.ts", "tsconfig.json", "package.json"]) {
+     if (existsSync2(join2(root, name)))
+       scan.configFiles.push(name);
+   }
+   return scan;
+ }
+
+ export { loadAgent, scanCodebase };
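The frontmatter parsing in the bundled `loadAgent` above can be exercised standalone. The re-statement below (the function body mirrors `parseFrontmatter` from the chunk, with the literal-newline split written as `"\n"`) shows one subtlety of its value coercion: any value that parses as a number becomes a number, so even the quoted `version: "1.0"` comes out as the number `1`:

```javascript
// Trimmed, standalone restatement of the bundled parseFrontmatter, for
// illustration only. Splits YAML-ish `key: value` frontmatter off a body
// and coerces booleans and numerics.
function parseFrontmatter(content) {
  const match = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { frontmatter: {}, body: content };
  const frontmatter = {};
  for (const line of match[1].split("\n")) {
    const colonIdx = line.indexOf(":");
    if (colonIdx === -1) continue;
    const key = line.slice(0, colonIdx).trim();
    let value = line.slice(colonIdx + 1).trim();
    if (value.startsWith('"') && value.endsWith('"')) value = value.slice(1, -1);
    if (value === "true") value = true;
    if (value === "false") value = false;
    // Numeric-looking strings are coerced, including previously quoted ones.
    if (!isNaN(Number(value)) && value !== "") value = Number(value);
    frontmatter[key] = value;
  }
  return { frontmatter, body: match[2] };
}

const { frontmatter, body } = parseFrontmatter(
  '---\nagent: feature-analyzer-agent\nversion: "1.0"\nmax_tokens: 8192\n---\nBody text'
);
// frontmatter.version is the number 1 here, not the string "1.0"
```

If the string form of a quoted version matters to a consumer, it would need to re-read it from the raw file rather than rely on this parse.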