qaa-agent 1.6.3 → 1.7.0

Files changed (42)
  1. package/CHANGELOG.md +22 -0
  2. package/agents/qaa-analyzer.md +16 -1
  3. package/agents/qaa-bug-detective.md +33 -0
  4. package/agents/qaa-discovery.md +384 -0
  5. package/agents/qaa-e2e-runner.md +7 -6
  6. package/agents/qaa-planner.md +16 -1
  7. package/agents/qaa-testid-injector.md +60 -2
  8. package/agents/qaa-validator.md +38 -0
  9. package/bin/install.cjs +11 -9
  10. package/commands/qa-audit.md +119 -0
  11. package/commands/qa-create-test.md +288 -0
  12. package/commands/qa-fix.md +147 -0
  13. package/commands/qa-map.md +137 -0
  14. package/package.json +40 -41
  15. package/{.claude/settings.json → settings.json} +19 -20
  16. package/{.claude/skills → skills}/qa-bug-detective/SKILL.md +122 -122
  17. package/{.claude/skills → skills}/qa-repo-analyzer/SKILL.md +88 -88
  18. package/{.claude/skills → skills}/qa-self-validator/SKILL.md +109 -109
  19. package/{.claude/skills → skills}/qa-template-engine/SKILL.md +113 -113
  20. package/{.claude/skills → skills}/qa-testid-injector/SKILL.md +93 -93
  21. package/{.claude/skills → skills}/qa-workflow-documenter/SKILL.md +87 -87
  22. package/workflows/qa-gap.md +7 -1
  23. package/workflows/qa-start.md +25 -1
  24. package/workflows/qa-testid.md +29 -1
  25. package/workflows/qa-validate.md +5 -1
  26. package/.claude/commands/create-test.md +0 -164
  27. package/.claude/commands/qa-audit.md +0 -37
  28. package/.claude/commands/qa-blueprint.md +0 -54
  29. package/.claude/commands/qa-fix.md +0 -36
  30. package/.claude/commands/qa-from-ticket.md +0 -24
  31. package/.claude/commands/qa-gap.md +0 -20
  32. package/.claude/commands/qa-map.md +0 -47
  33. package/.claude/commands/qa-pom.md +0 -36
  34. package/.claude/commands/qa-pyramid.md +0 -37
  35. package/.claude/commands/qa-report.md +0 -38
  36. package/.claude/commands/qa-research.md +0 -33
  37. package/.claude/commands/qa-validate.md +0 -42
  38. package/.claude/commands/update-test.md +0 -58
  39. package/.claude/skills/qa-learner/SKILL.md +0 -150
  40. package/{.claude/commands → commands}/qa-pr.md +0 -0
  41. package/{.claude/commands → commands}/qa-start.md +0 -0
  42. package/{.claude/commands → commands}/qa-testid.md +0 -0
@@ -1,93 +1,93 @@
- ---
- name: qa-testid-injector
- description: QA Test ID Injector. Scans source code to find interactive UI elements missing data-testid attributes and injects them following naming convention. Use when user wants to add test IDs, improve testability, audit missing test hooks, prepare for E2E automation, or add data-testid attributes before writing Playwright/Cypress tests. Triggers on "add test IDs", "add data-testid", "test hooks", "missing testid", "testability audit", "prepare for automation", "inject test attributes", "make components testable".
- ---
-
- # QA Test ID Injector
-
- ## Purpose
-
- Scan application source code, identify interactive UI elements lacking stable test selectors, and inject `data-testid` attributes following a consistent naming convention. Runs as **Step 0** — before any test generation.
-
- ## Core Rule
-
- **Every interactive element MUST have a stable, unique `data-testid` before E2E tests are generated against it.**
-
- ## Naming Convention
-
- Pattern: `{context}-{description}-{element-type}` in kebab-case.
-
- ### Element Type Suffixes
-
- | Element | Suffix | Example |
- |---------|--------|---------|
- | button | -btn | login-submit-btn |
- | input | -input | login-email-input |
- | select | -select | settings-language-select |
- | textarea | -textarea | feedback-comment-textarea |
- | link | -link | navbar-profile-link |
- | form | -form | checkout-payment-form |
- | img | -img | product-hero-img |
- | table | -table | users-list-table |
- | row | -row | users-item-row |
- | modal | -modal | confirm-delete-modal |
- | container | -container | dashboard-stats-container |
- | list | -list | notifications-list |
- | item | -item | notifications-item |
- | dropdown | -dropdown | navbar-user-dropdown |
- | tab | -tab | settings-security-tab |
- | checkbox | -checkbox | terms-accept-checkbox |
- | radio | -radio | shipping-express-radio |
- | toggle | -toggle | notifications-enabled-toggle |
- | badge | -badge | cart-count-badge |
- | alert | -alert | error-validation-alert |
-
- ### Context Derivation
-
- 1. **Page-level**: From component filename or route (LoginPage.tsx -> login)
- 2. **Component-level**: From component name (<NavBar> -> navbar)
- 3. **Nested**: Parent -> child hierarchy, max 3 levels deep
- 4. **Dynamic lists**: Use template literals with unique keys
-
- ## Execution Phases
-
- ### Phase 1: SCAN
- - Detect framework (React/Vue/Angular/HTML) from package.json and file extensions
- - List all component files (exclude test/spec/stories)
- - Prioritize by interaction density (forms > pages > layouts)
- - Output: SCAN_MANIFEST.md
-
- ### Phase 2: AUDIT
- - For each file, identify interactive elements
- - Classify: P0 (must have), P1 (should have), P2 (nice to have)
- - Record existing data-testid as EXISTING (don't modify)
- - Record missing as MISSING with proposed value
- - Output: TESTID_AUDIT_REPORT.md
-
- ### Phase 3: INJECT
- - Add data-testid as LAST attribute before closing >
- - Preserve all existing formatting
- - Only add the attribute — change nothing else
- - Framework-specific handling (JSX, Vue, Angular, HTML)
- - Output: INJECTION_CHANGELOG.md + modified source files
-
- ### Phase 4: VALIDATE
- - Syntax check all modified files
- - Uniqueness check (no duplicate testids per page)
- - Convention compliance check
- - Output: INJECTION_VALIDATION.md
-
- ## Third-Party Components
-
- 1. Props passthrough (if library supports it) — direct data-testid
- 2. Wrapper div (if no passthrough) — wrap with data-testid div
- 3. inputProps/slotProps (MUI-specific) — use component-specific prop APIs
-
- ## Quality Gate
-
- - [ ] Every interactive element has a data-testid
- - [ ] All values follow {context}-{description}-{element-type} convention
- - [ ] No duplicate data-testid values in same page/route scope
- - [ ] No existing code modified beyond adding the attribute
- - [ ] Syntax validation passes on all modified files
- - [ ] Dynamic list items use template literals with unique keys
+ ---
+ name: qa-testid-injector
+ description: QA Test ID Injector. Scans source code to find interactive UI elements missing data-testid attributes and injects them following naming convention. Use when user wants to add test IDs, improve testability, audit missing test hooks, prepare for E2E automation, or add data-testid attributes before writing Playwright/Cypress tests. Triggers on "add test IDs", "add data-testid", "test hooks", "missing testid", "testability audit", "prepare for automation", "inject test attributes", "make components testable".
+ ---
+
+ # QA Test ID Injector
+
+ ## Purpose
+
+ Scan application source code, identify interactive UI elements lacking stable test selectors, and inject `data-testid` attributes following a consistent naming convention. Runs as **Step 0** — before any test generation.
+
+ ## Core Rule
+
+ **Every interactive element MUST have a stable, unique `data-testid` before E2E tests are generated against it.**
+
+ ## Naming Convention
+
+ Pattern: `{context}-{description}-{element-type}` in kebab-case.
+
+ ### Element Type Suffixes
+
+ | Element | Suffix | Example |
+ |---------|--------|---------|
+ | button | -btn | login-submit-btn |
+ | input | -input | login-email-input |
+ | select | -select | settings-language-select |
+ | textarea | -textarea | feedback-comment-textarea |
+ | link | -link | navbar-profile-link |
+ | form | -form | checkout-payment-form |
+ | img | -img | product-hero-img |
+ | table | -table | users-list-table |
+ | row | -row | users-item-row |
+ | modal | -modal | confirm-delete-modal |
+ | container | -container | dashboard-stats-container |
+ | list | -list | notifications-list |
+ | item | -item | notifications-item |
+ | dropdown | -dropdown | navbar-user-dropdown |
+ | tab | -tab | settings-security-tab |
+ | checkbox | -checkbox | terms-accept-checkbox |
+ | radio | -radio | shipping-express-radio |
+ | toggle | -toggle | notifications-enabled-toggle |
+ | badge | -badge | cart-count-badge |
+ | alert | -alert | error-validation-alert |
+
+ ### Context Derivation
+
+ 1. **Page-level**: From component filename or route (LoginPage.tsx -> login)
+ 2. **Component-level**: From component name (<NavBar> -> navbar)
+ 3. **Nested**: Parent -> child hierarchy, max 3 levels deep
+ 4. **Dynamic lists**: Use template literals with unique keys
+
+ ## Execution Phases
+
+ ### Phase 1: SCAN
+ - Detect framework (React/Vue/Angular/HTML) from package.json and file extensions
+ - List all component files (exclude test/spec/stories)
+ - Prioritize by interaction density (forms > pages > layouts)
+ - Output: SCAN_MANIFEST.md
+
+ ### Phase 2: AUDIT
+ - For each file, identify interactive elements
+ - Classify: P0 (must have), P1 (should have), P2 (nice to have)
+ - Record existing data-testid as EXISTING (don't modify)
+ - Record missing as MISSING with proposed value
+ - Output: TESTID_AUDIT_REPORT.md
+
+ ### Phase 3: INJECT
+ - Add data-testid as LAST attribute before closing >
+ - Preserve all existing formatting
+ - Only add the attribute — change nothing else
+ - Framework-specific handling (JSX, Vue, Angular, HTML)
+ - Output: INJECTION_CHANGELOG.md + modified source files
+
+ ### Phase 4: VALIDATE
+ - Syntax check all modified files
+ - Uniqueness check (no duplicate testids per page)
+ - Convention compliance check
+ - Output: INJECTION_VALIDATION.md
+
+ ## Third-Party Components
+
+ 1. Props passthrough (if library supports it) — direct data-testid
+ 2. Wrapper div (if no passthrough) — wrap with data-testid div
+ 3. inputProps/slotProps (MUI-specific) — use component-specific prop APIs
+
+ ## Quality Gate
+
+ - [ ] Every interactive element has a data-testid
+ - [ ] All values follow {context}-{description}-{element-type} convention
+ - [ ] No duplicate data-testid values in same page/route scope
+ - [ ] No existing code modified beyond adding the attribute
+ - [ ] Syntax validation passes on all modified files
+ - [ ] Dynamic list items use template literals with unique keys
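
The Phase 4 (VALIDATE) convention and uniqueness checks in the SKILL above reduce to simple string rules. A minimal sketch of those two checks, assuming the suffix set from the Element Type Suffixes table (this is an illustrative helper, not code shipped in the package):

```python
import re
from collections import Counter

# Element-type suffixes from the Element Type Suffixes table.
KNOWN_SUFFIXES = {
    "btn", "input", "select", "textarea", "link", "form", "img", "table",
    "row", "modal", "container", "list", "item", "dropdown", "tab",
    "checkbox", "radio", "toggle", "badge", "alert",
}

# kebab-case with at least two segments, per {context}-{description}-{element-type}
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$")

def check_testid(value: str) -> list[str]:
    """Return convention violations for one data-testid value."""
    issues = []
    if not KEBAB.match(value):
        issues.append(f"{value}: not kebab-case")
    elif value.rsplit("-", 1)[-1] not in KNOWN_SUFFIXES:
        issues.append(f"{value}: unknown element-type suffix")
    return issues

def check_page(testids: list[str]) -> list[str]:
    """Convention plus uniqueness check for all testids in one page/route scope."""
    issues = []
    for tid in testids:
        issues += check_testid(tid)
    for tid, n in Counter(testids).items():
        if n > 1:
            issues.append(f"{tid}: duplicate ({n} occurrences)")
    return issues
```

A real implementation would also need the per-route scoping described in the Quality Gate, so that the same testid on two different pages is not flagged.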
@@ -1,87 +1,87 @@
- ---
- name: qa-workflow-documenter
- description: QA Workflow Documenter. Generates structured QA workflow documentation with decision trees, playbooks, and AI interaction protocols. Use when user wants to document QA processes, create testing playbooks, define workflow steps, document QA procedures, create decision trees for testing, or standardize QA processes. Triggers on "document workflow", "QA process", "testing playbook", "workflow documentation", "QA procedures", "decision tree", "standardize process", "document QA steps".
- ---
-
- # QA Workflow Documenter
-
- ## Purpose
-
- Generate structured, AI-specific QA workflow documentation with decision trees, playbooks, and AI interaction protocols. Every step answers: WHO does it, WHAT they do, WHAT input they need, WHAT output they produce.
-
- ## Core Principle
-
- **AI-first language**: Use precise verbs — scan, extract, classify, generate, validate. Never use vague terms like "review", "check", "handle".
-
- ## Output Artifacts
-
- 1. **WORKFLOW_[NAME].md** — Step-by-step workflow with decision gates
- 2. **DECISION_TREE.md** — Visual decision trees for key branch points
- 3. **AI_PROMPTS_CATALOG.md** — Reusable prompt patterns for each workflow step
- 4. **CHECKLIST.md** — Pre/post verification checklists
-
- ## Workflow Template Structure
-
- ### Header Block
- ```markdown
- # Workflow: [Name]
- **Version**: [semver]
- **Applies to**: [project types / tech stacks]
- **Prerequisites**: [what must exist before starting]
- **Estimated duration**: [time range]
- **Actors**: [AI Agent, QA Engineer, Team Lead]
- ```
-
- ### Step Format
- ```markdown
- ## Step N: [Name]
- **Actor**: [who executes this step]
- **Input**: [what they receive]
- **Action**: [precise description using action verbs]
- **Output**: [what they produce]
- **Decision Gate**: [condition to proceed vs branch]
-
- ### AI Prompt Pattern
- [If this step involves an AI agent, include the prompt template]
- ```
-
- ## Workflow Types
-
- ### 1. Repository Intake Workflow
- New repo arrives -> scan -> classify -> assess risk -> recommend strategy
-
- ### 2. Test Case Generation Workflow
- Analysis done -> select targets -> generate cases -> validate -> deliver
-
- ### 3. QA Repo Bootstrap Workflow
- Blueprint ready -> create structure -> generate configs -> seed initial tests
-
- ### 4. Validation & Bug Triage Workflow
- Tests generated -> run -> classify failures -> fix loop -> report
-
- ### 5. Test Maintenance Workflow
- Existing tests -> audit -> prioritize fixes -> apply -> verify
-
- ## Decision Tree Format
-
- ```markdown
- ## Decision: [What decision]
-
- ```
- [Question]?
- ├── YES → [Action A]
- │ └── [Sub-question]?
- │ ├── YES → [Action A1]
- │ └── NO → [Action A2]
- └── NO → [Action B]
- ```
- ```
-
- ## Quality Gate
-
- - [ ] Every step has Actor, Input, Action, Output
- - [ ] No vague verbs (review → scan + classify, check → validate against criteria)
- - [ ] Decision gates have clear YES/NO branches
- - [ ] AI prompt patterns included for AI-executed steps
- - [ ] Prerequisites listed for every workflow
- - [ ] Output artifacts named and described
+ ---
+ name: qa-workflow-documenter
+ description: QA Workflow Documenter. Generates structured QA workflow documentation with decision trees, playbooks, and AI interaction protocols. Use when user wants to document QA processes, create testing playbooks, define workflow steps, document QA procedures, create decision trees for testing, or standardize QA processes. Triggers on "document workflow", "QA process", "testing playbook", "workflow documentation", "QA procedures", "decision tree", "standardize process", "document QA steps".
+ ---
+
+ # QA Workflow Documenter
+
+ ## Purpose
+
+ Generate structured, AI-specific QA workflow documentation with decision trees, playbooks, and AI interaction protocols. Every step answers: WHO does it, WHAT they do, WHAT input they need, WHAT output they produce.
+
+ ## Core Principle
+
+ **AI-first language**: Use precise verbs — scan, extract, classify, generate, validate. Never use vague terms like "review", "check", "handle".
+
+ ## Output Artifacts
+
+ 1. **WORKFLOW_[NAME].md** — Step-by-step workflow with decision gates
+ 2. **DECISION_TREE.md** — Visual decision trees for key branch points
+ 3. **AI_PROMPTS_CATALOG.md** — Reusable prompt patterns for each workflow step
+ 4. **CHECKLIST.md** — Pre/post verification checklists
+
+ ## Workflow Template Structure
+
+ ### Header Block
+ ```markdown
+ # Workflow: [Name]
+ **Version**: [semver]
+ **Applies to**: [project types / tech stacks]
+ **Prerequisites**: [what must exist before starting]
+ **Estimated duration**: [time range]
+ **Actors**: [AI Agent, QA Engineer, Team Lead]
+ ```
+
+ ### Step Format
+ ```markdown
+ ## Step N: [Name]
+ **Actor**: [who executes this step]
+ **Input**: [what they receive]
+ **Action**: [precise description using action verbs]
+ **Output**: [what they produce]
+ **Decision Gate**: [condition to proceed vs branch]
+
+ ### AI Prompt Pattern
+ [If this step involves an AI agent, include the prompt template]
+ ```
+
+ ## Workflow Types
+
+ ### 1. Repository Intake Workflow
+ New repo arrives -> scan -> classify -> assess risk -> recommend strategy
+
+ ### 2. Test Case Generation Workflow
+ Analysis done -> select targets -> generate cases -> validate -> deliver
+
+ ### 3. QA Repo Bootstrap Workflow
+ Blueprint ready -> create structure -> generate configs -> seed initial tests
+
+ ### 4. Validation & Bug Triage Workflow
+ Tests generated -> run -> classify failures -> fix loop -> report
+
+ ### 5. Test Maintenance Workflow
+ Existing tests -> audit -> prioritize fixes -> apply -> verify
+
+ ## Decision Tree Format
+
+ ```markdown
+ ## Decision: [What decision]
+
+ ```
+ [Question]?
+ ├── YES → [Action A]
+ │ └── [Sub-question]?
+ │ ├── YES → [Action A1]
+ │ └── NO → [Action A2]
+ └── NO → [Action B]
+ ```
+ ```
+
+ ## Quality Gate
+
+ - [ ] Every step has Actor, Input, Action, Output
+ - [ ] No vague verbs (review → scan + classify, check → validate against criteria)
+ - [ ] Decision gates have clear YES/NO branches
+ - [ ] AI prompt patterns included for AI-executed steps
+ - [ ] Prerequisites listed for every workflow
+ - [ ] Output artifacts named and described
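
The first Quality Gate item in the SKILL above (every step has Actor, Input, Action, Output) is mechanically checkable against the Step Format template. A minimal sketch of such a check, assuming steps are written as `**Field**:` lines (hypothetical helper, not part of the package):

```python
import re

# Required fields per the Step Format template.
REQUIRED_FIELDS = ("Actor", "Input", "Action", "Output")

def missing_step_fields(step_md: str) -> list[str]:
    """Return required fields absent from one '## Step N' markdown block."""
    present = set(re.findall(r"\*\*(\w+)\*\*:", step_md))
    return [f for f in REQUIRED_FIELDS if f not in present]

step = """## Step 1: Scan repository
**Actor**: AI Agent
**Input**: repository root path
**Action**: scan file tree, classify frameworks
**Output**: SCAN_MANIFEST.md
"""
```

Splitting a full WORKFLOW_[NAME].md on `## Step` headings and running each block through this check would automate most of that gate.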
@@ -129,11 +129,16 @@ Spawn the analyzer agent in gap analysis mode to compare dev repo testable surfa
  ```
  Task(
  prompt="
- <objective>Perform gap analysis comparing dev repo testable surfaces against QA repo test coverage. Produce QA_ANALYSIS.md, TEST_INVENTORY.md, and GAP_ANALYSIS.md.</objective>
+ <objective>Perform gap analysis comparing dev repo testable surfaces against QA repo test coverage. Use codebase map for deep context and locator registry for frontend coverage assessment. Produce QA_ANALYSIS.md, TEST_INVENTORY.md, and GAP_ANALYSIS.md.</objective>
  <execution_context>@agents/qaa-analyzer.md</execution_context>
  <files_to_read>
  - {OUTPUT_DIR}/SCAN_MANIFEST.md
  - CLAUDE.md
+ - .qa-output/locators/LOCATOR_REGISTRY.md (if exists -- locator coverage for risk assessment)
+ - .qa-output/codebase/RISK_MAP.md (if exists -- business-critical paths for risk assessment)
+ - .qa-output/codebase/CRITICAL_PATHS.md (if exists -- user flows for E2E scope)
+ - .qa-output/codebase/TEST_ASSESSMENT.md (if exists -- existing test quality)
+ - .qa-output/codebase/COVERAGE_GAPS.md (if exists -- uncovered modules)
  </files_to_read>
  <parameters>
  workflow_option: 2
@@ -142,6 +147,7 @@ Task(
  gap_analysis_path: {OUTPUT_DIR}/GAP_ANALYSIS.md
  dev_repo_path: {DEV_REPO}
  qa_repo_path: {QA_REPO}
+ codebase_map_dir: .qa-output/codebase
  </parameters>
  "
  )
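
Every new file in this hunk is marked `(if exists ...)`, i.e. the analyzer should degrade gracefully when an artifact is missing rather than fail. A sketch of that resolution step (hypothetical helper; the function name is not part of the package):

```python
from pathlib import Path

# Optional context files mirrored from the hunk above.
OPTIONAL_CONTEXT = [
    ".qa-output/locators/LOCATOR_REGISTRY.md",
    ".qa-output/codebase/RISK_MAP.md",
    ".qa-output/codebase/CRITICAL_PATHS.md",
    ".qa-output/codebase/TEST_ASSESSMENT.md",
    ".qa-output/codebase/COVERAGE_GAPS.md",
]

def resolve_context_files(repo_root: str) -> list[str]:
    """Keep only the optional context files that exist on disk, so a
    missing artifact narrows the analysis instead of aborting it."""
    root = Path(repo_root)
    return [p for p in OPTIONAL_CONTEXT if (root / p).is_file()]
```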
@@ -645,6 +645,29 @@ Agent(subagent_type="general-purpose",
  3. **Dependencies** -- Imports resolve, fixtures exist, configs present
  4. **Logic** -- Assertions are concrete, test IDs are unique, no assertions in page objects
 
+ **5. Browser verification (if app URL available and Playwright MCP connected):**
+
+ After the 4-layer static validation, use Playwright MCP to verify E2E tests against the live app:
+
+ 1. Navigate to each page referenced in the E2E tests:
+ ```
+ mcp__playwright__browser_navigate({ url: "{app_url}/{route}" })
+ ```
+
+ 2. Take an accessibility snapshot to verify locators used in tests actually exist:
+ ```
+ mcp__playwright__browser_snapshot()
+ ```
+
+ 3. Cross-reference locators in generated tests against the real DOM:
+ - Verify `data-testid` values exist on the page
+ - Verify ARIA roles and names match test expectations
+ - Flag any test locator that does not match a real DOM element
+
+ 4. If mismatches are found, fix the test locators to match the real DOM and count as a fix loop iteration.
+
+ This browser verification step prevents delivering tests with locators that will immediately fail at runtime.
+
  **Fix loop:** The validator automatically attempts to fix issues it finds. Maximum 3 fix loop iterations. After each fix attempt, re-validate.
 
  **Parse validator return:**
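
Step 3 of the browser verification added above is, at its core, a set comparison between locators referenced in tests and locators present in the snapshot. A minimal sketch, assuming Playwright-style `getByTestId(...)` calls in the generated tests and a plain-text snapshot (function names are illustrative, not part of the package):

```python
import re

def extract_testids_from_tests(test_source: str) -> set[str]:
    """Collect data-testid values referenced via getByTestId(...) calls."""
    return set(re.findall(r"getByTestId\(['\"]([^'\"]+)['\"]\)", test_source))

def classify_locators(test_ids: set[str], snapshot: str) -> dict[str, list[str]]:
    """Split test locators into those present in the page snapshot
    and those that would fail at runtime (stale)."""
    found = sorted(t for t in test_ids if t in snapshot)
    stale = sorted(t for t in test_ids if t not in snapshot)
    return {"found": found, "stale": stale}
```

Every entry in `stale` corresponds to step 4's fix-loop trigger: a locator with no matching DOM element.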
@@ -716,7 +739,7 @@ node bin/qaa-tools.cjs state patch --"Status" "Classifying test failures" 2>/dev
  ```
  Agent(subagent_type="general-purpose",
  prompt="
- <objective>Classify test failures and attempt auto-fixes for test errors</objective>
+ <objective>Classify test failures and attempt auto-fixes for test errors. Use Playwright MCP to reproduce E2E failures in the browser when available.</objective>
  <execution_context>@agents/qaa-bug-detective.md</execution_context>
  <files_to_read>
  - {test execution results -- from validator or direct test run}
@@ -725,6 +748,7 @@ Agent(subagent_type="general-purpose",
  </files_to_read>
  <parameters>
  output_path: {output_dir}/FAILURE_CLASSIFICATION_REPORT.md
+ app_url: {app_url if available}
  </parameters>
  "
  )
@@ -180,17 +180,22 @@ Spawn the testid-injector agent which operates in 4 phases: SCAN, AUDIT, INJECT,
  ```
  Task(
  prompt="
- <objective>Audit frontend components for data-testid coverage and inject missing attributes</objective>
+ <objective>Audit frontend components for data-testid coverage and inject missing attributes. Use Playwright MCP to verify against live DOM. Use codebase map for component structure context. Update locator registry with discovered and injected locators.</objective>
  <execution_context>@agents/qaa-testid-injector.md</execution_context>
  <files_to_read>
  - {OUTPUT_DIR}/SCAN_MANIFEST.md
  - CLAUDE.md
  - data-testid-SKILL.md
+ - .qa-output/locators/LOCATOR_REGISTRY.md (if exists -- existing locators to cross-reference)
+ - .qa-output/codebase/CODE_PATTERNS.md (if exists -- component naming conventions for context derivation)
+ - .qa-output/codebase/TEST_SURFACE.md (if exists -- UI component list with props/events)
+ - .qa-output/codebase/TESTABILITY.md (if exists -- component testability for injection priority)
  </files_to_read>
  <parameters>
  source_dir: {SOURCE_DIR}
  frontend_framework: {FRONTEND_FRAMEWORK}
  output_path: {OUTPUT_DIR}/TESTID_AUDIT_REPORT.md
+ app_url: {auto-detect from dev server or ask user}
  </parameters>
  "
  )
@@ -213,6 +218,29 @@ Task(
 
  5. **VALIDATE phase:** Runs syntax checks on all modified files, verifies uniqueness of data-testid values per page/route scope, checks naming convention compliance, and verifies no unintended source code changes (non-interference check). If syntax fails, reverts the specific file.
 
+ 6. **BROWSER VERIFY phase (if app URL available and Playwright MCP connected):**
+
+ After injection and static validation, use Playwright MCP to verify the injected `data-testid` attributes are present in the rendered DOM:
+
+ 1. Navigate to each page/route that has injected components:
+ ```
+ mcp__playwright__browser_navigate({ url: "{app_url}/{route}" })
+ ```
+
+ 2. Take an accessibility snapshot to inspect the rendered DOM:
+ ```
+ mcp__playwright__browser_snapshot()
+ ```
+
+ 3. Cross-reference: for each injected `data-testid`, verify it appears in the snapshot. Report:
+ - **CONFIRMED**: `data-testid` found in rendered DOM
+ - **NOT RENDERED**: Component exists in source but not rendered on this route (may appear conditionally)
+ - **MISSING**: `data-testid` expected but not found in DOM (possible injection error)
+
+ 4. Add browser verification results to the TESTID_AUDIT_REPORT.md as a "Browser Verification" section.
+
+ This step is **optional** -- if no app URL is available or the app is not running, skip and rely on static validation only.
+
  **Handle injector return:**
 
  Extract:
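
The three-way CONFIRMED / NOT RENDERED / MISSING report in the BROWSER VERIFY phase above can be sketched as follows. This is an illustrative model only: the function and its inputs are hypothetical, and in practice the agent would infer conditional rendering from the component source rather than receive it as a precomputed set.

```python
def classify_injection(injected: dict[str, str],
                       snapshots: dict[str, str],
                       conditional: set[str]) -> dict[str, str]:
    """Classify each injected data-testid against per-route DOM snapshots.

    injected:    testid -> route where its component should render
    snapshots:   route  -> accessibility-snapshot text captured for that route
    conditional: testids whose components are known to render conditionally
    """
    report = {}
    for testid, route in injected.items():
        snap = snapshots.get(route, "")
        if testid in snap:
            report[testid] = "CONFIRMED"
        elif testid in conditional:
            report[testid] = "NOT RENDERED"  # conditional component, not an error
        else:
            report[testid] = "MISSING"       # possible injection error
    return report
```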
@@ -131,11 +131,15 @@ Spawn the validator agent to perform 4-layer validation with fix loop.
  ```
  Task(
  prompt="
- <objective>Validate test files against CLAUDE.md standards across 4 layers (Syntax, Structure, Dependencies, Logic)</objective>
+ <objective>Validate test files against CLAUDE.md standards across 4 layers (Syntax, Structure, Dependencies, Logic). Use codebase map for context and locator registry to verify locator accuracy.</objective>
  <execution_context>@agents/qaa-validator.md</execution_context>
  <files_to_read>
  - CLAUDE.md
  - {synthetic file list or generation plan path}
+ - .qa-output/locators/LOCATOR_REGISTRY.md (if exists -- real locators for cross-checking POM accuracy)
+ - .qa-output/codebase/CODE_PATTERNS.md (if exists -- naming conventions for structure validation)
+ - .qa-output/codebase/TEST_SURFACE.md (if exists -- function signatures for target verification)
+ - .qa-output/codebase/API_CONTRACTS.md (if exists -- API shapes for assertion verification)
  </files_to_read>
  <parameters>
  test_dir: {TEST_DIR}