qaa-agent 1.4.0 → 1.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,19 +1,22 @@
  # Create Automated Tests

- Generate production-ready test files with POM pattern for a specific feature or module. Reads existing QA_ANALYSIS.md and TEST_INVENTORY.md if available.
+ Generate production-ready test files with the POM pattern for a specific feature or module. Reads the codebase map, existing analysis artifacts, and CLAUDE.md standards to produce tests that match the project's actual code patterns. If E2E tests are generated and an app URL is available, runs them against the live app and fixes locators until they pass.

  ## Usage

- /create-test <feature-name> [--dev-repo <path>]
+ /create-test <feature-name> [--dev-repo <path>] [--app-url <url>] [--skip-run]

  - feature-name: which feature to test (e.g., "login", "checkout", "user API")
  - --dev-repo: path to developer repository (default: current directory)
+ - --app-url: URL of the running application for E2E test execution (auto-detects if not provided)
+ - --skip-run: skip E2E execution; only generate and statically validate

  ## What It Produces

  - Test spec files (unit, API, E2E as appropriate for the feature)
  - Page Object Model files (for E2E tests)
  - Fixture files (test data)
+ - E2E_RUN_REPORT.md (if E2E tests were run against the live app)

  ## Instructions

@@ -21,20 +24,141 @@ Generate production-ready test files with POM pattern for a specific feature or
  2. Read existing analysis artifacts if available:
  - `.qa-output/QA_ANALYSIS.md` -- architecture context
  - `.qa-output/TEST_INVENTORY.md` -- pre-defined test cases for this feature
- 3. Invoke executor agent:
+ 3. Read the codebase map if available (check `.qa-output/codebase/`):
+ - `CODE_PATTERNS.md` -- naming conventions, import style, code patterns to match
+ - `API_CONTRACTS.md` -- request/response shapes for API tests
+ - `TEST_SURFACE.md` -- function signatures and testable entry points
+ - `TESTABILITY.md` -- pure functions vs stateful code, mock boundaries
+ If the codebase map does not exist, run `/qa-map` first for best results, or proceed without it.
+
+ 4. **Check the existing locator registry and extract new locators from the live app:**
+
+ a. **Check the locator registry first.** Read `.qa-output/locators/LOCATOR_REGISTRY.md` if it exists. This is the accumulated registry of all locators previously extracted from the live app. Check whether locators for this feature's pages already exist.
+
+ b. **If locators for this feature's pages are already in the registry AND no `--app-url` was provided:** Skip browser extraction -- reuse existing locators. Print: `"Reusing cached locators for {feature} from registry."`
+
+ c. **If locators are missing or `--app-url` was provided:** Use the Playwright MCP to navigate the app and extract real locators BEFORE generating tests.
+
+ **Browser extraction process:**
+
+ 1. Navigate to the feature's relevant pages:
+ ```
+ mcp__playwright__browser_navigate({ url: "{app_url}/{feature_path}" })
+ ```
+
+ 2. Take an accessibility snapshot of each page to discover all interactive elements:
+ ```
+ mcp__playwright__browser_snapshot()
+ ```
+
+ 3. Extract real locators from the snapshot -- collect:
+ - All `data-testid` attributes present on the page
+ - ARIA roles with accessible names (buttons, inputs, links, etc.)
+ - Form labels and placeholders
+ - Navigation structure and page layout
+
+ 4. If the feature has multiple pages/views (e.g., login -> dashboard), navigate through the flow:
+ ```
+ mcp__playwright__browser_fill_form({ ... })
+ mcp__playwright__browser_click({ element: "..." })
+ mcp__playwright__browser_snapshot() // capture next page
+ ```
+
+ 5. Write a per-feature locator file to `.qa-output/locators/{feature}.locators.md`:
+ ```markdown
+ # Locators -- {feature}
+
+ Extracted: {date}
+ App URL: {app_url}
+
+ ## Page: {page_name} ({url})
+
+ | Element | Locator Type | Locator Value | Tier |
+ |---------|-------------|---------------|------|
+ | Email input | data-testid | login-email-input | 1 |
+ | Password input | data-testid | login-password-input | 1 |
+ | Submit button | role + name | button "Log in" | 1 |
+ | Remember me | label | "Remember me" | 2 |
+
+ ## Page: {next_page} ({url})
+ ...
+ ```
+
+ 6. Update the registry `.qa-output/locators/LOCATOR_REGISTRY.md` -- merge the new locators into the central index:
+ ```markdown
+ # Locator Registry
+
+ Last updated: {date}
+ Total pages: {N}
+ Total locators: {N}
+
+ ## Index
+
+ | Feature | File | Pages | Locators | Extracted |
+ |---------|------|-------|----------|-----------|
+ | login | login.locators.md | 2 | 14 | 2026-03-25 |
+ | checkout | checkout.locators.md | 3 | 22 | 2026-03-25 |
+ | dashboard | dashboard.locators.md | 1 | 8 | 2026-03-25 |
+
+ ## All Locators by Page
+
+ ### /login
+ | Element | Locator Type | Locator Value | Tier | Source |
+ |---------|-------------|---------------|------|--------|
+ | Email input | data-testid | login-email-input | 1 | login.locators.md |
+ | ... | ... | ... | ... | ... |
+
+ ### /dashboard
+ ...
+ ```
+
+ If no app URL is available and no locators exist in the registry for this feature, skip this step -- the executor will propose locators based on source code analysis and CLAUDE.md conventions.
+
+ 5. Invoke the executor agent to generate test files:

  Task(
  prompt="
- <objective>Generate test files for the specified feature following CLAUDE.md standards</objective>
+ <objective>Generate test files for the specified feature following CLAUDE.md standards, using the codebase map for context</objective>
  <execution_context>@agents/qaa-executor.md</execution_context>
  <files_to_read>
  - CLAUDE.md
+ - .qa-output/locators/LOCATOR_REGISTRY.md (if exists -- accumulated real locators)
+ - .qa-output/locators/{feature}.locators.md (if exists -- feature-specific locators)
+ - .qa-output/codebase/CODE_PATTERNS.md (if exists)
+ - .qa-output/codebase/API_CONTRACTS.md (if exists)
+ - .qa-output/codebase/TEST_SURFACE.md (if exists)
+ - .qa-output/codebase/TESTABILITY.md (if exists)
  </files_to_read>
  <parameters>
  user_input: $ARGUMENTS
  mode: feature-test
+ codebase_map_dir: .qa-output/codebase
+ locator_registry: .qa-output/locators/LOCATOR_REGISTRY.md
  </parameters>
  "
  )

+ 6. If E2E test files were generated AND `--skip-run` was NOT passed, invoke the E2E runner to execute the tests against the live app:
+
+ Task(
+ prompt="
+ <objective>Run generated E2E tests against live application, capture real locators, fix mismatches, loop until pass</objective>
+ <execution_context>@agents/qaa-e2e-runner.md</execution_context>
+ <files_to_read>
+ - CLAUDE.md
+ - {generated E2E test files from executor return}
+ - {generated POM files from executor return}
+ </files_to_read>
+ <parameters>
+ app_url: {from --app-url flag or auto-detect}
+ output_dir: .qa-output
+ </parameters>
+ "
+ )
+
+ 7. Present results:
+ - List generated files with type counts (unit, API, E2E, POM, fixture)
+ - If the E2E runner executed: show pass/fail counts, locator fixes applied, and app bugs found
+ - Suggest `/qa-pr` to package the tests as a pull request
+
  $ARGUMENTS
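In a generated POM, the Tier column above would typically decide which locator wins for each element: lower tier, more stable. A minimal sketch of that selection logic (the `LocatorCandidate` shape, the `pickLocators` helper, and the extra tier-3 row are illustrative, not package code):

```typescript
// Illustrative sketch of tier-based locator selection: prefer the lowest
// tier (1 = data-testid) for each element, falling back to higher tiers.
interface LocatorCandidate {
  element: string;   // e.g. "Email input"
  type: string;      // "data-testid", "role + name", "label"
  value: string;     // e.g. "login-email-input"
  tier: number;      // lower = more stable
}

function pickLocators(candidates: LocatorCandidate[]): Map<string, LocatorCandidate> {
  const best = new Map<string, LocatorCandidate>();
  for (const c of candidates) {
    const current = best.get(c.element);
    if (!current || c.tier < current.tier) {
      best.set(c.element, c);
    }
  }
  return best;
}

// First and third rows taken from the example table; the tier-3 label row
// for "Email input" is hypothetical, added to show the fallback.
const registry: LocatorCandidate[] = [
  { element: "Email input", type: "data-testid", value: "login-email-input", tier: 1 },
  { element: "Email input", type: "label", value: "Email", tier: 3 },
  { element: "Remember me", type: "label", value: "Remember me", tier: 2 },
];

console.log(pickLocators(registry).get("Email input")?.value); // login-email-input
```

With this shape, regenerating a POM after a registry update only swaps locator values; the element names stay stable.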
@@ -1,16 +1,17 @@
  # Create Tests from Ticket

- Generate test cases and executable test files from a ticket (Jira, Linear, GitHub Issue, or plain text user story). Combines the ticket's acceptance criteria with actual source code analysis to produce targeted, concrete tests.
+ Generate test cases and executable test files from a ticket (Jira, Linear, GitHub Issue, or plain text user story). Combines the ticket's acceptance criteria with actual source code analysis to produce targeted, concrete tests. If an app URL is available, navigates the live app with the Playwright MCP to extract real locators before generating tests.

  ## Usage

- /qa-from-ticket <ticket-source> [--dev-repo <path>]
+ /qa-from-ticket <ticket-source> [--dev-repo <path>] [--app-url <url>]

  - ticket-source: one of:
  - URL to GitHub/Jira/Linear issue
  - Plain text user story or acceptance criteria
  - File path to a .md or .txt with ticket details
  - --dev-repo: path to developer repository (default: current directory)
+ - --app-url: URL of the running application for browser-based locator extraction (auto-detects if not provided)

  ## Instructions

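The three accepted ticket-source forms can be told apart mechanically; a minimal sketch (hypothetical helper, the real command's detection may differ):

```typescript
// Illustrative classification of the ticket-source argument into the three
// forms listed above: issue URL, .md/.txt file path, or plain text.
function classifyTicketSource(src: string): "url" | "file" | "text" {
  if (/^https?:\/\//.test(src)) return "url";  // GitHub/Jira/Linear issue URL
  if (/\.(md|txt)$/i.test(src)) return "file"; // path to a .md or .txt file
  return "text";                               // plain user story or criteria
}

console.log(classifyTicketSource("https://github.com/org/repo/issues/42")); // url
console.log(classifyTicketSource("ticket.md")); // file
```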
@@ -1,21 +1,30 @@
- # QA Codebase Map
+ # QA Codebase Map & Analysis

- Deep-scan a codebase for QA-relevant information. Spawns 4 parallel mapper agents that analyze testability, risk areas, code patterns, and existing tests. Produces structured documents consumed by the QA pipeline.
+ Deep-scan a codebase for QA-relevant information and produce a complete analysis. Runs codebase mapping (4 parallel agents) followed by repository analysis. One command to fully understand a codebase before writing tests.

  ## Usage

- /qa-map [--focus <area>]
+ /qa-map [--focus <area>] [--dev-repo <path>] [--qa-repo <path>]

- - No arguments: runs all 4 focus areas in parallel
- - --focus: run a single area (testability, risk, patterns, existing-tests)
+ - No arguments: runs the full map + analysis on the current directory
+ - --focus: run a single map area only and skip analysis (testability, risk, patterns, existing-tests)
+ - --dev-repo: explicit path to developer repository
+ - --qa-repo: path to existing QA repository (produces gap analysis instead of blueprint)

  ## What It Produces

+ ### Stage 1: Codebase Map (4 parallel agents)
  - TESTABILITY.md + TEST_SURFACE.md — what's testable, entry points, mocking needs
  - RISK_MAP.md + CRITICAL_PATHS.md — business-critical paths, error handling gaps
  - CODE_PATTERNS.md + API_CONTRACTS.md — naming conventions, API shapes, auth patterns
  - TEST_ASSESSMENT.md + COVERAGE_GAPS.md — existing test quality, what's missing

+ ### Stage 2: Repository Analysis
+ - SCAN_MANIFEST.md — file tree, framework detection, testable surfaces
+ - QA_ANALYSIS.md — architecture overview, risk assessment, top 10 unit targets, testing pyramid
+ - TEST_INVENTORY.md — every test case with ID, target, inputs, expected outcome, priority
+ - QA_REPO_BLUEPRINT.md (no QA repo) or GAP_ANALYSIS.md (QA repo provided)
+
  ## Instructions

  1. Read `CLAUDE.md` — QA standards.
@@ -30,7 +39,9 @@ Agent(
  execution_context="@agents/qaa-codebase-mapper.md"
  )

- 4. If --focus flag, spawn only that one area.
- 5. When all complete, print summary of documents produced.
+ 4. If the --focus flag is set, spawn only that one area and stop (skip the analysis stage).
+ 5. When all map agents complete, print a summary of the codebase documents produced.
+ 6. Run the analysis stage — execute the workflow defined in `@workflows/qa-analyze.md` end-to-end. Pass the --dev-repo and --qa-repo arguments if provided. Preserve all workflow gates (scan verification, artifact checks).
+ 7. Print a final summary of all documents produced across both stages.

  $ARGUMENTS
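The Stage 1 fan-out maps each `--focus` area to the two documents listed above; a small sketch of that mapping (the `documentsFor` helper is illustrative, the area and document names are from this file):

```typescript
// Each focus area produces two documents (names from "What It Produces").
const focusAreas: Record<string, string[]> = {
  testability: ["TESTABILITY.md", "TEST_SURFACE.md"],
  risk: ["RISK_MAP.md", "CRITICAL_PATHS.md"],
  patterns: ["CODE_PATTERNS.md", "API_CONTRACTS.md"],
  "existing-tests": ["TEST_ASSESSMENT.md", "COVERAGE_GAPS.md"],
};

// With --focus, only that area runs; otherwise all four run in parallel.
function documentsFor(focus?: string): string[] {
  const areas = focus ? [focus] : Object.keys(focusAreas);
  return areas.flatMap((a) => focusAreas[a] ?? []);
}

console.log(documentsFor("risk").join(", ")); // RISK_MAP.md, CRITICAL_PATHS.md
```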
@@ -0,0 +1,23 @@
+ # Create QA Pull Request
+
+ Create a draft PR from QA artifacts already on disk. Auto-detects the git platform (GitHub, Azure DevOps, GitLab), applies your team's branch naming convention, and opens the draft PR with a summary. Asks for your branch pattern the first time and remembers it.
+
+ ## Usage
+
+ /qa-pr [--ticket <id>] [--title <description>] [--scope <type>] [--files <glob>] [--base <branch>]
+
+ - --ticket: ticket/issue ID for the branch name (e.g., PROJ-123)
+ - --title: short description for the branch and PR title (e.g., "login tests")
+ - --scope: limit to a file type: unit, api, e2e, all (default: all)
+ - --files: explicit file glob to include (e.g., "tests/unit/auth*")
+ - --base: target branch for the PR (default: auto-detect)
+
+ ## Instructions
+
+ 1. Read `CLAUDE.md` -- git workflow, commit conventions.
+ 2. Execute the workflow:
+
+ Follow the workflow defined in `@workflows/qa-pr.md` end-to-end.
+ Preserve all workflow gates (platform detection, user confirmation before push, branch convention save).
+
+ $ARGUMENTS
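Applying a remembered pattern such as `feature/{ticket}-{description}` to the `--ticket` and `--title` values can be sketched as follows (hypothetical helper; the slug rules are an assumption, not the command's documented behavior):

```typescript
// Substitute {ticket} and {description} in a saved branch pattern.
function buildBranchName(pattern: string, ticket: string, title: string): string {
  // Slugify the title: lowercase, runs of non-alphanumerics become hyphens.
  const description = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
  return pattern.replace("{ticket}", ticket).replace("{description}", description);
}

console.log(buildBranchName("feature/{ticket}-{description}", "PROJ-123", "Login tests"));
// feature/PROJ-123-login-tests
```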
@@ -1,20 +1,42 @@
  # QA Test Validation

- Validate existing test files against CLAUDE.md standards. Runs 4-layer validation (syntax, structure, dependencies, logic) and classifies any failures found.
+ Validate existing test files against CLAUDE.md standards. Runs 4-layer static validation (syntax, structure, dependencies, logic) and optionally executes E2E tests against a live app to verify locators and assertions with real page data.

  ## Usage

- /qa-validate [<test-directory>] [--classify]
+ /qa-validate [<test-directory>] [--classify] [--run --app-url <url>]

  - test-directory: path to test files (auto-detects if omitted)
  - --classify: also run bug-detective to classify failures
+ - --run: execute E2E tests against a live application after static validation
+ - --app-url: URL of the running application (auto-detects if not provided)

  ## Instructions

  1. Read `CLAUDE.md` -- quality gates, locator tiers, assertion rules.
- 2. Execute the workflow:
+ 2. Execute the static validation workflow:

  Follow the workflow defined in `@workflows/qa-validate.md` end-to-end.
  Preserve all workflow gates (fix loops, classification).

+ 3. If the `--run` flag is present and E2E test files exist in the validated directory, invoke the E2E runner:
+
+ Task(
+ prompt="
+ <objective>Run E2E tests against live application, capture real locators, fix mismatches, loop until pass</objective>
+ <execution_context>@agents/qaa-e2e-runner.md</execution_context>
+ <files_to_read>
+ - CLAUDE.md
+ - {E2E test files from validated directory}
+ - {POM files from validated directory}
+ </files_to_read>
+ <parameters>
+ app_url: {from --app-url flag or auto-detect}
+ output_dir: .qa-output
+ </parameters>
+ "
+ )
+
+ 4. Present combined results: static validation + E2E execution (if run).
+
  $ARGUMENTS
@@ -10,7 +10,8 @@
  "Agent",
  "WebFetch",
  "WebSearch",
- "NotebookEdit"
+ "NotebookEdit",
+ "mcp__playwright__*"
  ]
  },
  "env": {
@@ -132,6 +132,14 @@ The preference file is the user's personal override layer on top of the project'
  **Extract:** Naming rule — test ID at start of name
  **Save:** `## Naming\n- Test ID must appear at the beginning of the test name (e.g., "UT-AUTH-001: should validate email format") — (added 2026-03-24, context: "user wants ID at start of test name")`

+ **User says:** "Las branches tienen que ser feature/{ticket}-{description}"
+ **Extract:** Workflow rule — branch naming convention
+ **Save:** `## Workflow\n- Branch naming convention: feature/{ticket}-{description} — (added 2026-03-24, context: "user configured branch pattern for /qa-pr")`
+
+ **User says:** "Nuestras ramas siempre empiezan con test/ y el ticket de Jira"
+ **Extract:** Workflow rule — branch naming convention
+ **Save:** `## Workflow\n- Branch naming convention: test/{ticket}-{description} — (added 2026-03-24, context: "user specified branch pattern with Jira ticket prefix")`
+
  ## Important Rules

  1. **NEVER push MY_PREFERENCES.md to any repo** — it's personal
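The Save lines above all share one shape; composing such an entry can be sketched like this (hypothetical helper; it emits a real newline where the examples display `\n`):

```typescript
// Compose a preference entry in the Save format shown above:
// a section header line followed by a dated, contextualized rule bullet.
function preferenceEntry(section: string, rule: string, date: string, context: string): string {
  return `## ${section}\n- ${rule} — (added ${date}, context: "${context}")`;
}

console.log(preferenceEntry(
  "Workflow",
  "Branch naming convention: feature/{ticket}-{description}",
  "2026-03-24",
  "user configured branch pattern for /qa-pr",
));
```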
package/.mcp.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "mcpServers": {
+ "playwright": {
+ "command": "npx",
+ "args": ["@playwright/mcp@latest"]
+ }
+ }
+ }
package/CHANGELOG.md ADDED
@@ -0,0 +1,71 @@
+
+ # Changelog
+
+ All notable changes to QAA (QA Automation Agent) are documented here.
+
+ ## [1.6.0] - 2026-03-25
+
+ ### Added
+ - Playwright MCP server bundled in agent package (`.mcp.json`) -- starts automatically when opening project in Claude Code
+ - Persistent locator registry at `.qa-output/locators/` -- accumulates real locators across features over time
+ - Per-feature files: `{feature}.locators.md` -- extracted locators for each feature tested
+ - Central index: `LOCATOR_REGISTRY.md` -- all locators by page, searchable by any command
+ - Browser-based locator extraction step in `/create-test` and `/qa-from-ticket` -- navigates live app with Playwright MCP and captures real data-testid, ARIA roles, and labels before generating tests
+ - Registry cache: if locators for a feature already exist in the registry, browser extraction is skipped (reuses cached locators)
+ - `--app-url` flag added to `/qa-from-ticket`
+ - CHANGELOG.md
+
+ ### Changed
+ - `qaa-executor` now reads locator registry (when available) to use real locators in POMs instead of proposing them
+ - `/create-test` flow: checks registry first, then extracts via browser if needed, BEFORE test generation
+ - `/qa-from-ticket` workflow: locator extraction step added after source scan, before test case generation
+
+ ### Removed
+ - `/qa-analyze` command (deprecated since v1.4.0, fully replaced by `/qa-map`)
+
+ ## [1.5.0] - 2026-03-24
+
+ ### Added
+ - Stable release
+
+ ## [1.4.0]
+
+ ### Changed
+ - Merged `/qa-analyze` into `/qa-map` -- single command for codebase scanning and analysis
+ - Consolidated pipeline flow
+
+ ### Deprecated
+ - `/qa-analyze` command (use `/qa-map` instead)
+
+ ## [1.3.0]
+
+ ### Added
+ - `qa-learner` skill -- persistent preferences from user corrections
+ - Preferences saved to `~/.claude/qaa/MY_PREFERENCES.md`
+ - Trigger detection for English and Spanish frustration signals
+
+ ## [1.2.0]
+
+ ### Added
+ - `qaa-codebase-mapper` agent -- 4 parallel focus areas (testability, risk, patterns, existing tests)
+ - `qaa-project-researcher` agent -- researches best testing stack and practices
+ - 8 codebase map documents produced by mapper
+
+ ## [1.1.0]
+
+ ### Added
+ - Workflow definitions for all pipeline stages
+ - Interactive installer (`npx qaa-agent`)
+ - `qaa init` command for per-project initialization
+ - npm package distribution
+
+ ## [1.0.0]
+
+ ### Added
+ - Full QA automation pipeline -- 11 agents, 17 commands, 10 templates, 7 workflows
+ - 3 workflow options (dev-only, immature QA, mature QA)
+ - 4-layer test validation (syntax, structure, dependencies, logic)
+ - Page Object Model generation with CLAUDE.md standards
+ - Test ID injection for frontend components
+ - Bug detective failure classification
+ - Draft PR delivery with branch naming convention
package/CLAUDE.md CHANGED
@@ -197,7 +197,7 @@ The QA automation system runs agents in a defined pipeline. Each stage produces
  ### Pipeline Stages

  ```
- scan -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> deliver
+ scan -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> [e2e-runner if E2E tests] -> [bug-detective if failures] -> deliver
  ```

  ### Workflow Options
@@ -205,9 +205,9 @@ scan -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -
  **Option 1: Dev-Only Repo (no existing QA repo)**
  Full pipeline from scratch:
  ```
- scan -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> deliver
+ scan -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> [e2e-runner if E2E tests] -> [bug-detective if failures] -> deliver
  ```
- Produces: SCAN_MANIFEST.md -> QA_ANALYSIS.md + TEST_INVENTORY.md + QA_REPO_BLUEPRINT.md -> [TESTID_AUDIT_REPORT.md] -> generation plan -> test files + POMs + fixtures + configs -> VALIDATION_REPORT.md -> branch + PR
+ Produces: SCAN_MANIFEST.md -> 8 codebase map documents -> QA_ANALYSIS.md + TEST_INVENTORY.md + QA_REPO_BLUEPRINT.md -> [TESTID_AUDIT_REPORT.md] -> generation plan -> test files + POMs + fixtures + configs -> VALIDATION_REPORT.md -> [E2E_RUN_REPORT.md] -> branch + PR

  **Option 2: Dev + Immature QA Repo (existing QA repo with low coverage or quality)**
  Gap-fill and standardize:
@@ -227,13 +227,18 @@ Produces: SCAN_MANIFEST.md (both repos) -> GAP_ANALYSIS.md (thin areas only) ->

  | From | To | Condition |
  |------|----|-----------|
- | scan | analyze | SCAN_MANIFEST.md exists with > 0 testable surfaces |
+ | scan | codebase-map | SCAN_MANIFEST.md exists with > 0 testable surfaces |
+ | codebase-map | analyze | At least 4 of 8 codebase map documents exist |
  | analyze | testid-inject | QA_ANALYSIS.md exists AND frontend components detected |
  | analyze | plan | QA_ANALYSIS.md + TEST_INVENTORY.md exist (skip testid-inject if no frontend) |
  | testid-inject | plan | TESTID_AUDIT_REPORT.md exists with coverage score calculated |
  | plan | generate | Generation plan approved (or auto-approved in auto-advance mode) |
  | generate | validate | All planned test files exist on disk |
- | validate | deliver | VALIDATION_REPORT.md shows PASS or max fix loops (3) exhausted |
+ | validate | e2e-runner | VALIDATION_REPORT.md shows PASS AND E2E test files were generated AND live app available |
+ | validate | deliver | VALIDATION_REPORT.md shows PASS AND (no E2E test files OR --skip-run) |
+ | e2e-runner | bug-detective | E2E_RUN_REPORT.md exists AND failures need classification |
+ | e2e-runner | deliver | E2E_RUN_REPORT.md exists AND (all tests pass OR failures classified) |
+ | bug-detective | deliver | FAILURE_CLASSIFICATION_REPORT.md exists |

  ---
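The validate-stage rows of the transition table can be read as a small decision function; an illustrative encoding (the `ValidateState` shape and `nextAfterValidate` helper are not package code):

```typescript
// Inputs mirror the conditions in the validate rows of the table above.
interface ValidateState {
  validationPass: boolean;     // VALIDATION_REPORT.md shows PASS
  e2eFilesGenerated: boolean;  // E2E test files were generated
  liveAppAvailable: boolean;   // a live app URL is reachable
  skipRun: boolean;            // --skip-run was passed
}

function nextAfterValidate(s: ValidateState): "e2e-runner" | "deliver" | "fix-loop" {
  if (!s.validationPass) return "fix-loop"; // handled by the workflow's fix loops
  if (s.e2eFilesGenerated && s.liveAppAvailable && !s.skipRun) return "e2e-runner";
  return "deliver";
}

console.log(nextAfterValidate({
  validationPass: true,
  e2eFilesGenerated: true,
  liveAppAvailable: true,
  skipRun: false,
})); // e2e-runner
```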
@@ -244,12 +249,15 @@ Each agent owns specific artifacts. No agent may produce artifacts assigned to a
  | Agent | Reads | Produces | Template |
  |-------|-------|----------|----------|
  | qa-scanner | repo source files, package.json, file tree | SCAN_MANIFEST.md | templates/scan-manifest.md |
- | qa-analyzer | SCAN_MANIFEST.md, CLAUDE.md | QA_ANALYSIS.md, TEST_INVENTORY.md, QA_REPO_BLUEPRINT.md (Option 1) or GAP_ANALYSIS.md (Option 2/3) | templates/qa-analysis.md, templates/test-inventory.md, templates/qa-repo-blueprint.md, templates/gap-analysis.md |
- | qa-planner | TEST_INVENTORY.md, QA_ANALYSIS.md | Generation plan (internal) | -- |
- | qa-executor | TEST_INVENTORY.md, CLAUDE.md | test files, POMs, fixtures, configs | qa-template-engine patterns |
+ | qa-codebase-mapper | SCAN_MANIFEST.md, repo source files, CLAUDE.md | TESTABILITY.md, TEST_SURFACE.md, RISK_MAP.md, CRITICAL_PATHS.md, CODE_PATTERNS.md, API_CONTRACTS.md, TEST_ASSESSMENT.md, COVERAGE_GAPS.md | -- |
+ | qa-analyzer | SCAN_MANIFEST.md, codebase map documents, CLAUDE.md | QA_ANALYSIS.md, TEST_INVENTORY.md, QA_REPO_BLUEPRINT.md (Option 1) or GAP_ANALYSIS.md (Option 2/3) | templates/qa-analysis.md, templates/test-inventory.md, templates/qa-repo-blueprint.md, templates/gap-analysis.md |
+ | qa-planner | TEST_INVENTORY.md, QA_ANALYSIS.md, codebase map documents | Generation plan (internal) | -- |
+ | qa-executor | TEST_INVENTORY.md, codebase map documents, CLAUDE.md | test files, POMs, fixtures, configs | qa-template-engine patterns |
  | qa-validator | generated test files, CLAUDE.md | VALIDATION_REPORT.md (validation mode) or QA_AUDIT_REPORT.md (audit mode) | templates/validation-report.md, templates/qa-audit-report.md |
+ | qa-e2e-runner | generated E2E test files, POM files, CLAUDE.md, live application | E2E_RUN_REPORT.md, fixed test/POM files with real locators | -- |
  | qa-testid-injector | repo source files, SCAN_MANIFEST.md, CLAUDE.md | TESTID_AUDIT_REPORT.md, modified source files with data-testid attributes | templates/scan-manifest.md, templates/testid-audit-report.md |
  | qa-bug-detective | test execution results, test source files, CLAUDE.md | FAILURE_CLASSIFICATION_REPORT.md | templates/failure-classification.md |
+ | qa-project-researcher | SCAN_MANIFEST.md, repo source files | TESTING_STACK.md, FRAMEWORK_CAPABILITIES.md, API_TESTING_STRATEGY.md, E2E_STRATEGY.md | -- |

  **Rule:** An agent MUST NOT produce artifacts assigned to another agent.
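The ownership rule above can be checked mechanically; a sketch using a few entries from the table (the `mayProduce` helper is illustrative, not package code):

```typescript
// Artifact ownership entries drawn from the agent table above (abbreviated).
const owns: Record<string, string[]> = {
  "qa-scanner": ["SCAN_MANIFEST.md"],
  "qa-e2e-runner": ["E2E_RUN_REPORT.md"],
  "qa-bug-detective": ["FAILURE_CLASSIFICATION_REPORT.md"],
};

// An agent may produce an artifact only if it owns it, or nobody owns it.
function mayProduce(agent: string, artifact: string): boolean {
  const owner = Object.keys(owns).find((a) => owns[a].includes(artifact));
  return owner === undefined || owner === agent;
}

console.log(mayProduce("qa-scanner", "E2E_RUN_REPORT.md")); // false
```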
@@ -390,11 +398,13 @@ Each agent operates on the same branch. No worktree splits are needed for this s

  Respect stage transitions from the Agent Pipeline section:
  1. qa-scanner runs first (no dependencies)
- 2. qa-analyzer and qa-testid-injector run after scanner (both depend on SCAN_MANIFEST.md)
- 3. qa-planner runs after analyzer (depends on QA_ANALYSIS.md + TEST_INVENTORY.md)
- 4. qa-executor runs after planner (depends on generation plan)
- 5. qa-validator runs after executor (depends on generated test files)
- 6. qa-bug-detective runs after test execution (depends on test results)
+ 2. qa-codebase-mapper runs after scanner (depends on SCAN_MANIFEST.md, spawns 4 parallel focus agents)
+ 3. qa-analyzer and qa-testid-injector run after codebase-map (both depend on SCAN_MANIFEST.md + codebase map documents)
+ 4. qa-planner runs after analyzer (depends on QA_ANALYSIS.md + TEST_INVENTORY.md + codebase map documents)
+ 5. qa-executor runs after planner (depends on generation plan + codebase map documents)
+ 6. qa-validator runs after executor (depends on generated test files)
+ 7. qa-e2e-runner runs after validator (depends on E2E test files + live application)
+ 8. qa-bug-detective runs after e2e-runner or validator when failures occur (depends on test results)

  ### Auto-Advance Mode