e2e-ai 1.2.0 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,46 +2,65 @@
 
  AI-powered E2E test automation pipeline. Takes you from a manual browser recording all the way to a stable, documented Playwright test — with optional Zephyr integration.
 
+ Includes a **codebase scanner** that builds a QA map of your application's features, workflows, and test scenarios using AI analysis.
+
  ## Quick Start
 
  ```bash
  # Install
  npm install e2e-ai
 
- # Initialize config + project context
+ # Initialize config + copy agents
  npx e2e-ai init
 
+ # Generate project context (use init-agent in your AI tool)
+ # → produces .e2e-ai/context.md
+
  # Show all commands
  npx e2e-ai --help
 
  # Full pipeline for an issue
  npx e2e-ai run --key PROJ-101
 
- # Individual commands
- npx e2e-ai scenario --key PROJ-101
- npx e2e-ai generate --key PROJ-101
- npx e2e-ai qa --key PROJ-101
+ # Scan your codebase and generate a QA map
+ npx e2e-ai scan
+ npx e2e-ai analyze
  ```
 
  ## Architecture
 
+ e2e-ai has two main workflows:
+
+ ### Test Pipeline
+
  ```
- AI Agents (LLM)
-        |
- record -> transcribe -> scenario -> generate -> refine -> test -> heal -> qa
-    |         |            |           |          |        |       |       |
- codegen   whisper    transcript   scenario  refactor playwright self-   QA doc
- + audio   + merge    + scenario    agent     agent    runner   healing + export
-  agent     agent       agent                                            agent
+ record -> transcribe -> scenario -> generate -> refine -> test -> heal -> qa
+    |         |            |           |          |        |       |       |
+ codegen   whisper    transcript   scenario  refactor playwright self-   QA doc
+ + audio   + merge    + scenario    agent     agent    runner   healing + export
+  agent     agent       agent                                            agent
  ```
 
  Each step produces an artifact that feeds the next. You can run the full pipeline or any step individually.
 
+ ### Scanner Pipeline
+
+ ```
+ scan -> analyze -> push
+   |       |         |
+  AST    feature   remote
+  scan   analyzer    API
+        + scenario upload
+          planner
+ ```
+
+ Scans your codebase structure (routes, components, hooks), uses AI to identify features and workflows, generates test scenarios, and optionally pushes the QA map to a remote API.
+
  ## Commands
 
  ### `init` - Project Setup
 
- Interactive wizard that scans your codebase and generates configuration + project context.
+ Interactive wizard that creates `.e2e-ai/config.ts` and copies agent prompts to `.e2e-ai/agents/`.
 
  ```bash
  npx e2e-ai init
@@ -50,6 +69,10 @@ npx e2e-ai init
  npx e2e-ai init --non-interactive
  ```
 
+ After init, use the **init-agent** in your AI tool (Claude Code, Cursor, etc.) to generate `.e2e-ai/context.md`. This context file teaches all downstream agents about your project's test conventions. If you have the MCP server configured, the AI tool can call `e2e_ai_scan_codebase` to analyze your project automatically.
+
+ ---
+
  ### `record` - Browser Recording
 
  Launches Playwright codegen with optional voice recording and trace capture.
@@ -198,6 +221,115 @@ npx e2e-ai run --key PROJ-101 --from scenario --skip test,heal
 
  Each step's output feeds the next via `PipelineContext`. If a step fails, the pipeline stops (unless the step is marked `nonBlocking`).
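The step-chaining contract described above can be sketched as follows. This is a minimal illustration under assumed names (`Step`, `runPipeline`, and the context shape are hypothetical), not e2e-ai's actual internals:

```typescript
// Sketch of the pipeline contract: each step reads and extends a shared
// context; a failing step aborts the run unless it is marked nonBlocking.
type PipelineContext = Record<string, unknown>;

interface Step {
  name: string;
  nonBlocking?: boolean;
  run(ctx: PipelineContext): PipelineContext;
}

function runPipeline(steps: Step[], ctx: PipelineContext = {}): PipelineContext {
  for (const step of steps) {
    try {
      ctx = step.run(ctx); // each step's output feeds the next
    } catch (err) {
      if (step.nonBlocking) continue; // non-blocking failure: keep going
      throw new Error(`pipeline stopped at '${step.name}': ${err}`);
    }
  }
  return ctx;
}

// Example: a blocking step followed by a non-blocking one that fails.
const result = runPipeline([
  { name: "scenario", run: (c) => ({ ...c, scenario: "steps.md" }) },
  { name: "qa", nonBlocking: true, run: () => { throw new Error("boom"); } },
]);
```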
 
+ ---
+
+ ### `scan` - Codebase AST Scanner
+
+ Scans your codebase and extracts structural information: routes, components, hooks, imports, and dependencies.
+
+ ```bash
+ # Scan using config defaults (scans src/)
+ npx e2e-ai scan
+
+ # Scan a specific directory
+ npx e2e-ai scan --scan-dir app
+
+ # Write output to a custom path
+ npx e2e-ai scan --output my-scan.json
+
+ # Disable file-level caching
+ npx e2e-ai scan --no-cache
+ ```
+
+ **What it does**:
+ 1. Collects all `.ts`, `.tsx`, `.js`, `.jsx` files matching configured include/exclude patterns
+ 2. Parses each file with a regex-based TypeScript parser to extract imports, exports, components, and hooks
+ 3. Extracts routes from Next.js App Router (`app/`) and Pages Router (`pages/`) conventions
+ 4. Caches results per-file (by content hash) for fast incremental re-scans
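The App Router convention in step 3 maps file paths to URL routes. A simplified version of that mapping (an illustration only; the function name is made up, and the real scanner also handles layouts, route groups, and HTTP methods) looks like:

```typescript
// Simplified sketch of App Router path → route extraction.
// Illustrative: only page/route files define routes; [id] becomes :id.
function routeFromAppFile(filePath: string): string | null {
  const m = filePath.match(/^app\/(.*?)(?:page|route)\.(?:t|j)sx?$/);
  if (!m) return null;
  const segments = m[1]
    .split("/")
    .filter(Boolean)
    .filter((s) => !(s.startsWith("(") && s.endsWith(")"))) // drop route groups
    .map((s) => s.replace(/^\[(\.{3})?(.+)\]$/, ":$2"));    // [id] → :id
  return "/" + segments.join("/");
}
```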
+
+ **Output**: `ASTScanResult` JSON containing:
+ - `files` — all scanned files with imports, exports, line counts
+ - `routes` — extracted routes with paths, methods, dynamic segments, layout files
+ - `components` — React components with props, hook calls, export status
+ - `hooks` — custom hook definitions with dependencies
+ - `dependencies` — import graph edges between files
+
+ Default output location: `.e2e-ai/ast-scan.json`
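The content-hash cache from step 4 can be pictured as follows. This is a sketch under assumed names (an in-memory map; the real cache persists under `.e2e-ai/scan-cache/`):

```typescript
import { createHash } from "node:crypto";

// Sketch of a per-file parse cache keyed by path + content hash:
// an unchanged file hits the cache and is not re-parsed.
const cache = new Map<string, unknown>();
let parseCount = 0;

function parseWithCache(path: string, source: string): unknown {
  const key = path + ":" + createHash("sha256").update(source).digest("hex");
  if (cache.has(key)) return cache.get(key); // cache hit: skip parsing
  parseCount++;
  const parsed = { path, lines: source.split("\n").length }; // stand-in for real parsing
  cache.set(key, parsed);
  return parsed;
}

parseWithCache("src/a.ts", "export const a = 1;");
parseWithCache("src/a.ts", "export const a = 1;"); // second call is a hit
```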
+
+ **Options**:
+
+ | Flag | Description | Default |
+ |------|-------------|---------|
+ | `--output <file>` | Write output to specific file | `.e2e-ai/ast-scan.json` |
+ | `--scan-dir <dir>` | Directory to scan | from config (`src`) |
+ | `--no-cache` | Disable file-level caching | cache enabled |
+
+ ### `analyze` - AI QA Map Generation
+
+ Analyzes an AST scan result using AI to identify features, workflows, components, and test scenarios.
+
+ ```bash
+ # Analyze the default scan output
+ npx e2e-ai analyze
+
+ # Analyze a specific scan file
+ npx e2e-ai analyze path/to/ast-scan.json
+
+ # Only extract features (skip scenario generation)
+ npx e2e-ai analyze --skip-scenarios
+
+ # Write output to a custom path
+ npx e2e-ai analyze --output my-qa-map.json
+ ```
+
+ **What it does** — runs two AI stages (the `scan` command is stage 1):
+
+ 1. **Stage 2 — Feature Analysis** (`feature-analyzer-agent`): Receives the AST scan and groups routes, components, and hooks into logical features and user-facing workflows. Identifies component roles (form, display, navigation, modal, layout, feedback) and maps API routes to workflow steps.
+
+ 2. **Stage 3 — Scenario Planning** (`scenario-planner-agent`): Receives the feature map and generates test scenarios for each workflow. Covers happy paths, validation, error handling, edge cases, and permission checks. Assigns priorities (critical/high/medium/low) and links scenarios to specific workflow steps.
+
+ **Output**: `QAMapV2Payload` JSON containing:
+ - `features` — logical application features with routes and source files
+ - `workflows` — user journeys with ordered steps, API calls, conditional branches
+ - `components` — UI components with types, props, workflow references
+ - `scenarios` — test scenarios with steps, preconditions, expected outcomes, priorities
+
+ Default output location: `.e2e-ai/qa-map.json`
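Because the QA map is plain JSON, it is easy to post-process. For example, pulling out only critical scenarios (field names follow the list above; the sample data is made up):

```typescript
// Post-processing sketch: extract critical scenarios from a QA map.
interface Scenario {
  id: string;
  priority: "critical" | "high" | "medium" | "low";
}
interface QAMap {
  scenarios: Scenario[];
}

// Hypothetical sample payload, shaped like .e2e-ai/qa-map.json.
const qaMap: QAMap = {
  scenarios: [
    { id: "sc:login-happy-path", priority: "critical" },
    { id: "sc:login-bad-password", priority: "high" },
  ],
};

const critical = qaMap.scenarios.filter((s) => s.priority === "critical");
```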
+
+ **Options**:
+
+ | Flag | Description | Default |
+ |------|-------------|---------|
+ | `--output <file>` | Write QA map to specific file | `.e2e-ai/qa-map.json` |
+ | `--skip-scenarios` | Only run feature analysis | both stages |
+
+ ### `push` - Push QA Map to API
+
+ Pushes a generated QA map to a remote API endpoint.
+
+ ```bash
+ # Push the default QA map
+ npx e2e-ai push
+
+ # Push a specific file
+ npx e2e-ai push path/to/qa-map.json
+
+ # Associate with a git commit
+ npx e2e-ai push --commit-sha abc123
+ ```
+
+ **What it does**: Sends the `QAMapV2Payload` JSON as a POST request to the configured API endpoint with Bearer token authentication. The API responds with version info and auto-linking statistics.
+
+ **Requires** `push.apiUrl` and `push.apiKey` in config (or `E2E_AI_API_URL` / `E2E_AI_API_KEY` env vars).
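The request described above amounts to a Bearer-authenticated POST. Roughly (a sketch, not the CLI's actual client; how the commit SHA is attached to the payload is an assumption):

```typescript
// Sketch of the push request, mirroring the description above.
function buildPushRequest(
  apiUrl: string,
  apiKey: string,
  payload: object,
  commitSha?: string,
) {
  return {
    url: apiUrl,
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // push.apiKey / E2E_AI_API_KEY
    },
    // Assumption: the commit SHA rides along inside the JSON body.
    body: JSON.stringify(commitSha ? { ...payload, commitSha } : payload),
  };
}

const req = buildPushRequest(
  "https://qa.example.com/api/maps", // hypothetical endpoint
  "key-123",
  { features: [] },
  "abc123",
);
// e.g. await fetch(req.url, req);
```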
+
+ **Options**:
+
+ | Flag | Description | Default |
+ |------|-------------|---------|
+ | `--commit-sha <sha>` | Git commit SHA to associate | - |
+
+ ---
+
  ## Typical Workflows
 
  ### Workflow 1: Full Recording Pipeline
@@ -254,23 +386,92 @@ npx e2e-ai scenario --key PROJ-101
  npx e2e-ai run --key PROJ-101 --from generate
  ```
 
+ ### Workflow 5: Codebase Scanner Pipeline
+
+ Generate a full QA map from your codebase structure:
+
+ ```bash
+ # Step 1: Scan the codebase AST
+ npx e2e-ai scan
+
+ # Step 2: AI analysis → features, workflows, scenarios
+ npx e2e-ai analyze
+
+ # Step 3: Push to remote API (optional)
+ npx e2e-ai push
+ ```
+
+ Or run stages selectively:
+
+ ```bash
+ # Just extract features (no scenarios)
+ npx e2e-ai analyze --skip-scenarios
+
+ # Re-scan after code changes (cached, fast)
+ npx e2e-ai scan
+ npx e2e-ai analyze
+ ```
+
  ## Configuration
 
- Run `npx e2e-ai init` to generate `e2e-ai.config.ts`:
+ Run `npx e2e-ai init` to generate `.e2e-ai/config.ts`:
 
  ```typescript
  import { defineConfig } from 'e2e-ai/config';
 
  export default defineConfig({
+   // --- General ---
    inputSource: 'none',        // 'jira' | 'linear' | 'none'
    outputTarget: 'markdown',   // 'zephyr' | 'markdown' | 'both'
+
+   // --- LLM ---
+   llm: {
+     provider: 'openai',       // 'openai' | 'anthropic'
+     model: null,              // null = use agent defaults (gpt-4o)
+     agentModels: {},          // per-agent overrides: { 'scenario-agent': 'gpt-4o-mini' }
+   },
+
+   // --- Paths ---
+   paths: {
+     tests: 'e2e/tests',
+     scenarios: 'e2e/scenarios',
+     recordings: 'e2e/recordings',
+     transcripts: 'e2e/transcripts',
+     traces: 'e2e/traces',
+     workingDir: '.e2e-ai',
+     qaOutput: 'qa',
+   },
+
+   // --- Scanner ---
+   scanner: {
+     scanDir: 'src',           // root directory to scan
+     include: ['**/*.ts', '**/*.tsx', '**/*.js', '**/*.jsx'],
+     exclude: [
+       '**/node_modules/**', '**/dist/**', '**/build/**',
+       '**/.next/**', '**/*.test.*', '**/*.spec.*',
+       '**/__tests__/**', '**/*.d.ts',
+     ],
+     cacheDir: '.e2e-ai/scan-cache', // file-level parse cache
+   },
+
+   // --- Push (remote QA map API) ---
+   push: {
+     apiUrl: null,             // or set E2E_AI_API_URL env var
+     apiKey: null,             // or set E2E_AI_API_KEY env var
+   },
+
+   // --- Recording ---
    voice: { enabled: true },
-   llm: { provider: 'openai' },
-   contextFile: 'e2e-ai.context.md',
+   playwright: {
+     browser: 'chromium',
+     timeout: 120_000,
+     retries: 0,
+     traceMode: 'on',
+   },
  });
  ```
 
- See `templates/e2e-ai.context.example.md` for the project context file format.
+ All configuration lives inside the `.e2e-ai/` directory; no files at the project root.
 
  ## Global Options
 
@@ -298,11 +499,13 @@ AI_PROVIDER=openai # openai | anthropic
  AI_MODEL=gpt-4o               # Model override
  ANTHROPIC_API_KEY=sk-ant-...  # Required if AI_PROVIDER=anthropic
  BASE_URL=https://...          # Your application URL
+ E2E_AI_API_URL=https://...    # Remote API for push command
+ E2E_AI_API_KEY=key-...        # API key for push command
  ```
 
  ## AI Agents
 
- Six specialized agents live in `agents/*.md`. Each has:
+ Eight specialized agents live in `agents/*.md`. Each has:
  - **YAML frontmatter**: model, max_tokens, temperature
  - **System prompt**: role + context
  - **Input/Output schemas**: what the agent receives and must produce
@@ -317,12 +520,14 @@ Six specialized agents live in `agents/*.md`. Each has:
  | `refactor-agent` | test + project context | Improved test file | `refine` |
  | `self-healing-agent` | failing test + error output | Diagnosis + patched test | `heal` |
  | `qa-testcase-agent` | test + scenario + issue data | QA markdown + test case JSON | `qa` |
+ | `feature-analyzer-agent` | AST scan result | Features, workflows, components JSON | `analyze` |
+ | `scenario-planner-agent` | Features + workflows | Complete QA map with scenarios JSON | `analyze` |
 
- You can customize agent behavior by editing the `.md` files directly. The frontmatter `model` field is the default model for that agent (overridable via `--model`).
+ You can customize agent behavior by editing the `.md` files directly. The frontmatter `model` field is the default model for that agent (overridable via `--model` or `config.llm.agentModels`).
 
  ## Output Directory Structure
 
- Default paths (configurable via `e2e-ai.config.ts`):
+ Default paths (configurable via `.e2e-ai/config.ts`):
 
  ```
  e2e/
@@ -331,7 +536,103 @@ e2e/
 
  qa/               # QA documentation .md files
 
- .e2e-ai/<KEY>/    # per-issue working dir: codegen, recordings/, intermediate files
+ .e2e-ai/
+   config.ts       # project configuration
+   context.md      # project context (generated by init-agent)
+   agents/         # agent prompt definitions (.md files)
+   <KEY>/          # per-issue working dir: codegen, recordings/, intermediate files
+   ast-scan.json   # scan command output
+   qa-map.json     # analyze command output
+   scan-cache/     # file-level parse cache (gitignored)
+ ```
+
+ ## MCP Server
+
+ e2e-ai ships an MCP (Model Context Protocol) server that lets AI assistants interact with your project's test infrastructure directly. The server binary is `e2e-ai-mcp`.
+
+ ### Setup
+
+ Add to your MCP client configuration:
+
+ **Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):
+
+ ```json
+ {
+   "mcpServers": {
+     "e2e-ai": {
+       "command": "npx",
+       "args": ["e2e-ai-mcp"],
+       "cwd": "/path/to/your/project"
+     }
+   }
+ }
+ ```
+
+ **Claude Code:**
+
+ ```bash
+ claude mcp add e2e-ai -- npx e2e-ai-mcp
+ ```
+
+ **Cursor** (`.cursor/mcp.json`):
+
+ ```json
+ {
+   "mcpServers": {
+     "e2e-ai": {
+       "command": "npx",
+       "args": ["e2e-ai-mcp"]
+     }
+   }
+ }
+ ```
+
+ If e2e-ai is installed globally or as a project dependency, you can use the binary path directly instead of `npx`:
+
+ ```json
+ {
+   "command": "node",
+   "args": ["node_modules/.bin/e2e-ai-mcp"]
+ }
+ ```
+
+ ### Available Tools
+
+ | Tool | Description | Input |
+ |------|-------------|-------|
+ | `e2e_ai_scan_codebase` | Scan project for test files, configs, fixtures, path aliases, and sample test content | `projectRoot?` (defaults to cwd) |
+ | `e2e_ai_validate_context` | Validate that a context markdown file has all required sections | `content` (markdown string) |
+ | `e2e_ai_read_agent` | Load an agent prompt by name — returns system prompt + config | `agentName` (e.g. `scenario-agent`) |
+ | `e2e_ai_get_example` | Get the example context markdown template | (none) |
+
+ ### Usage with AI Assistants
+
+ Once configured, an AI assistant can:
+
+ 1. **Scan your project** to understand its test structure, fixtures, and conventions
+ 2. **Read agent prompts** to understand how each pipeline step works
+ 3. **Validate context files** to ensure they have the right format before running commands
+ 4. **Get the example template** as a starting point for writing `.e2e-ai/context.md`
+
+ This enables AI assistants to help you set up e2e-ai, debug pipeline issues, and generate better project context files.
+
+ ## Library API
+
+ e2e-ai also exports types and config helpers for programmatic use:
+
+ ```typescript
+ import { defineConfig, loadConfig, getProjectRoot } from 'e2e-ai';
+ import type {
+   E2eAiConfig,
+   ResolvedConfig,
+   ASTScanResult,
+   QAMapV2Payload,
+   FeatureV2,
+   WorkflowV2,
+   ScenarioV2,
+   ComponentV2,
+   PushResult,
+ } from 'e2e-ai';
  ```
 
  ## License
@@ -0,0 +1,77 @@
+ ---
+ agent: feature-analyzer-agent
+ ---
+
+ # System Prompt
+
+ You are a QA feature analyst. You receive an AST scan result (routes, components, hooks, dependencies) of a web application and identify the logical features, user-facing workflows, and reusable components.
+
+ Your job is to transform raw structural data into a high-level QA map that captures what the application does from a user's perspective.
+
+ ## Input Schema
+
+ You receive a JSON object with:
+ - `ast`: object - The ASTScanResult from a codebase scan (files, routes, components, hooks, dependencies)
+ - `existingMap`: object (optional) - A previous QAMapV2 to update incrementally
+
+ ## Output Schema
+
+ Respond with JSON only (no markdown fences, no extra text):
+
+ ```json
+ {
+   "features": [
+     {
+       "id": "feat:<kebab-case>",
+       "name": "Human-readable feature name",
+       "description": "What this feature does from user perspective",
+       "routes": ["/path1", "/path2"],
+       "workflowIds": ["wf:<kebab>"],
+       "sourceFiles": ["src/path/file.ts"]
+     }
+   ],
+   "workflows": [
+     {
+       "id": "wf:<kebab-case>",
+       "name": "Human-readable workflow name",
+       "featureId": "feat:<parent>",
+       "type": "navigation|crud|multi-step|configuration|search-filter",
+       "preconditions": ["User is authenticated"],
+       "steps": [
+         {
+           "id": "step:<workflow>:<index>",
+           "order": 1,
+           "description": "What the user does",
+           "componentIds": ["comp:<kebab>"],
+           "apiCalls": ["POST /api/endpoint"],
+           "conditionalBranches": []
+         }
+       ],
+       "componentIds": ["comp:<kebab>"]
+     }
+   ],
+   "components": [
+     {
+       "id": "comp:<kebab-case>",
+       "name": "ComponentName",
+       "type": "form|display|navigation|modal|layout|feedback",
+       "sourceFiles": ["src/path/Component.tsx"],
+       "props": ["prop1", "prop2"],
+       "referencedByWorkflows": ["wf:<kebab>"]
+     }
+   ]
+ }
+ ```
+
+ ## Rules
+
+ 1. Group routes and components into logical features based on shared URL paths, layouts, and data dependencies
+ 2. A feature represents a user-facing capability (e.g., "User Management", "Dashboard", "Settings")
+ 3. A workflow represents a specific user journey within a feature (e.g., "Create new user", "Filter dashboard by date")
+ 4. Workflow type must be one of: navigation, crud, multi-step, configuration, search-filter
+ 5. Identify components by their role: form (inputs), display (data rendering), navigation, modal, layout, feedback (toasts/alerts)
+ 6. Link components to workflows based on which route/page uses them (infer from imports and hook usage)
+ 7. API routes should be mapped as apiCalls within workflow steps
+ 8. Dynamic routes (containing `[param]`) indicate CRUD or detail-view workflows
+ 9. Prefer fewer, well-defined features over many granular ones. Aim for 3-15 features per app.
+ 10. Output valid JSON only, no markdown code fences or surrounding text
@@ -1,20 +1,24 @@
  ---
  agent: init-agent
- version: "1.0"
- model: gpt-4o
- max_tokens: 8192
- temperature: 0.3
  ---
 
  # System Prompt
 
- You are a codebase analysis assistant for the e2e-ai test automation tool. Your job is to analyze a project's test infrastructure and produce a well-structured context document (`e2e-ai.context.md`) that will guide AI agents when generating, refining, and healing Playwright tests for this specific project.
+ You are a codebase analysis assistant for the e2e-ai test automation tool. Your job is to analyze a project's test infrastructure and produce a well-structured context document (`.e2e-ai/context.md`) that will guide AI agents when generating, refining, and healing Playwright tests for this specific project.
 
- You will receive scan results from the target codebase and engage in a conversation to clarify patterns you're uncertain about.
+ ## How to Use This Agent
+
+ This agent is designed to be used directly in your AI tool (Claude Code, Cursor, Gemini CLI, etc.). Start a conversation and ask it to generate your project context.
+
+ **If the e2e-ai MCP server is configured**, call `e2e_ai_scan_codebase` to get scan results, then follow this agent's instructions to produce the context file.
+
+ **If no MCP server**, manually explore the codebase: look at test files, fixtures, playwright config, tsconfig paths, and helper modules.
 
  ## Your Task
 
- Analyze the provided codebase scan and produce a context document covering:
+ Analyze the project codebase and produce a file at `.e2e-ai/context.md` that documents the project's test infrastructure, conventions, and patterns. This context file is consumed by downstream AI agents (scenario, generator, refiner, healer, QA) to produce Playwright tests that match the project's existing style.
+
+ Cover these areas:
 
  1. **Application Overview**: What the app does, tech stack, key pages/routes
  2. **Test Infrastructure**: Fixtures, custom test helpers, step counters, auth patterns
@@ -26,13 +30,13 @@ Analyze the provided codebase scan and produce a context document covering:
 
  ## Output Format
 
- When you have enough information, produce the final context as a markdown document with these sections:
+ Produce the context document with these sections and save it to `.e2e-ai/context.md`:
 
  ```markdown
  # Project Context for e2e-ai
 
  ## Application
- <name, description, tech stack>
+ <name, description, tech stack, base URL>
 
  ## Test Infrastructure
  <fixtures, helpers, auth pattern>
@@ -53,6 +57,22 @@ When you have enough information, produce the final context as a markdown docume
  <timeouts, waits, assertion patterns>
  ```
 
+ All sections are required. The file should be 100-300 lines, self-contained, and use actual code from the project (not generic Playwright examples).
+
+ ## How Context is Used
+
+ Each pipeline agent reads `.e2e-ai/context.md` to understand project conventions:
+
+ | Agent | Uses context for |
+ |-------|-----------------|
+ | **scenario-agent** | Structuring test steps to match project patterns |
+ | **playwright-generator-agent** | Generating code with correct imports, fixtures, selectors |
+ | **refactor-agent** | Applying project-specific refactoring patterns |
+ | **self-healing-agent** | Understanding expected test structure when fixing failures |
+ | **qa-testcase-agent** | Formatting QA documentation to match conventions |
+ | **feature-analyzer-agent** | Understanding app structure for QA map generation |
+ | **scenario-planner-agent** | Generating realistic test scenarios from codebase analysis |
+
  ## Rules
 
  1. Ask clarifying questions if the scan data is ambiguous — do NOT guess
@@ -61,15 +81,3 @@ When you have enough information, produce the complete context document
  4. The context file should be self-contained — an AI agent reading only this file should understand all project conventions
  5. Keep the document concise but complete — aim for 100-300 lines
  6. If you need to see specific files to complete the analysis, list them explicitly
-
- ## Conversation Flow
-
- 1. **First turn**: Receive scan results, analyze them, ask clarifying questions if needed
- 2. **Middle turns**: Receive answers, refine understanding
- 3. **Final turn**: When you have enough info, produce the complete context document wrapped in a `<context>` tag:
- ```
- <context>
- # Project Context for e2e-ai
- ...
- </context>
- ```
1
  ---
  agent: playwright-generator-agent
- version: "1.0"
- model: gpt-4o
- max_tokens: 8192
- temperature: 0.2
  ---
 
  # System Prompt
@@ -1,9 +1,5 @@
  ---
  agent: qa-testcase-agent
- version: "1.0"
- model: gpt-4o
- max_tokens: 8192
- temperature: 0.2
  ---
 
  # System Prompt
@@ -1,9 +1,5 @@
  ---
  agent: refactor-agent
- version: "1.0"
- model: gpt-4o
- max_tokens: 8192
- temperature: 0.2
  ---
 
  # System Prompt
@@ -1,9 +1,5 @@
  ---
  agent: scenario-agent
- version: "1.0"
- model: gpt-4o
- max_tokens: 4096
- temperature: 0.2
  ---
 
  # System Prompt