qaa-agent 1.0.0
- package/.claude/commands/create-test.md +40 -0
- package/.claude/commands/qa-analyze.md +60 -0
- package/.claude/commands/qa-audit.md +37 -0
- package/.claude/commands/qa-blueprint.md +54 -0
- package/.claude/commands/qa-fix.md +36 -0
- package/.claude/commands/qa-from-ticket.md +88 -0
- package/.claude/commands/qa-gap.md +54 -0
- package/.claude/commands/qa-pom.md +36 -0
- package/.claude/commands/qa-pyramid.md +37 -0
- package/.claude/commands/qa-report.md +38 -0
- package/.claude/commands/qa-start.md +33 -0
- package/.claude/commands/qa-testid.md +54 -0
- package/.claude/commands/qa-validate.md +54 -0
- package/.claude/commands/update-test.md +58 -0
- package/.claude/settings.json +19 -0
- package/.claude/skills/qa-bug-detective/SKILL.md +122 -0
- package/.claude/skills/qa-repo-analyzer/SKILL.md +88 -0
- package/.claude/skills/qa-self-validator/SKILL.md +109 -0
- package/.claude/skills/qa-template-engine/SKILL.md +113 -0
- package/.claude/skills/qa-testid-injector/SKILL.md +93 -0
- package/.claude/skills/qa-workflow-documenter/SKILL.md +87 -0
- package/CLAUDE.md +543 -0
- package/README.md +418 -0
- package/agents/qa-pipeline-orchestrator.md +1217 -0
- package/agents/qaa-analyzer.md +508 -0
- package/agents/qaa-bug-detective.md +444 -0
- package/agents/qaa-executor.md +618 -0
- package/agents/qaa-planner.md +374 -0
- package/agents/qaa-scanner.md +422 -0
- package/agents/qaa-testid-injector.md +583 -0
- package/agents/qaa-validator.md +450 -0
- package/bin/install.cjs +176 -0
- package/bin/lib/commands.cjs +709 -0
- package/bin/lib/config.cjs +307 -0
- package/bin/lib/core.cjs +497 -0
- package/bin/lib/frontmatter.cjs +299 -0
- package/bin/lib/init.cjs +989 -0
- package/bin/lib/milestone.cjs +241 -0
- package/bin/lib/model-profiles.cjs +60 -0
- package/bin/lib/phase.cjs +911 -0
- package/bin/lib/roadmap.cjs +306 -0
- package/bin/lib/state.cjs +748 -0
- package/bin/lib/template.cjs +222 -0
- package/bin/lib/verify.cjs +842 -0
- package/bin/qaa-tools.cjs +607 -0
- package/package.json +34 -0
- package/templates/failure-classification.md +391 -0
- package/templates/gap-analysis.md +409 -0
- package/templates/pr-template.md +48 -0
- package/templates/qa-analysis.md +381 -0
- package/templates/qa-audit-report.md +465 -0
- package/templates/qa-repo-blueprint.md +636 -0
- package/templates/scan-manifest.md +312 -0
- package/templates/test-inventory.md +582 -0
- package/templates/testid-audit-report.md +354 -0
- package/templates/validation-report.md +243 -0
package/CLAUDE.md
ADDED
# QA Automation Standards

This project follows strict QA automation standards. Every test, page object, and analysis produced MUST follow these rules.

## Framework Detection

Before generating any code, **detect what the project already uses**:

1. Check for existing test config files: `cypress.config.ts`, `playwright.config.ts`, `jest.config.ts`, `vitest.config.ts`, `pytest.ini`, etc.
2. Check `package.json` or `requirements.txt` for test dependencies
3. Check existing test files for patterns and conventions
4. **Always match the project's existing framework, language, and conventions**

If no framework exists yet, ask the user which one to use. Never assume.
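The detection order above can be sketched as a small lookup: known config filenames map to a framework, and a `null` result means "ask the user". The config-to-framework map below is an illustrative assumption, not an exhaustive list.

```typescript
// Illustrative map of config files to frameworks (not exhaustive).
const CONFIG_MAP: Record<string, string> = {
  'cypress.config.ts': 'cypress',
  'cypress.config.js': 'cypress',
  'playwright.config.ts': 'playwright',
  'jest.config.ts': 'jest',
  'vitest.config.ts': 'vitest',
  'pytest.ini': 'pytest',
};

// Returns the detected framework, or null -- meaning: ask the user, never assume.
function detectFramework(repoFiles: string[]): string | null {
  for (const file of repoFiles) {
    const base = file.split('/').pop() ?? file;
    if (base in CONFIG_MAP) return CONFIG_MAP[base];
  }
  return null;
}
```

In practice this first check would be followed by the `package.json` dependency scan and the existing-test-file scan from steps 2-3.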
## Testing Pyramid

Target distribution for every project:

```
        /  E2E  \       3-5%   (critical path smoke only)
       /   API   \      20-25% (endpoints + contracts)
      /Integration\     10-15% (component interactions)
     /    Unit     \    60-70% (business logic, pure functions)
```

Adjust percentages based on the actual app architecture.
## Locator Strategy

All UI test locators MUST follow this priority order. Never skip to a lower tier without written justification.

**Tier 1 -- BEST (always try these first):**
- Test IDs: `data-testid`, `data-cy`, `data-test` (adapt to framework)
- Semantic roles: ARIA roles + accessible name

**Tier 2 -- GOOD (when Tier 1 not available):**
- Form labels, placeholders, visible text content

**Tier 3 -- ACCEPTABLE (when Tier 1-2 not available):**
- Alt text, title attributes

**Tier 4 -- LAST RESORT (always add a TODO comment):**
- CSS selectors, XPath -- mark with `// TODO: Request test ID for this element`

### Framework-Specific Examples

**Playwright:**
```typescript
page.getByTestId('submit')                    // Tier 1
page.getByRole('button', { name: 'Log in' })  // Tier 1
page.getByLabel('Email')                      // Tier 2
page.locator('.btn')                          // Tier 4 -- add TODO
```

**Cypress:**
```typescript
cy.get('[data-cy="submit"]')                 // Tier 1
cy.findByRole('button', { name: 'Log in' })  // Tier 1 (with testing-library)
cy.get('[data-testid="submit"]')             // Tier 1
cy.contains('Submit')                        // Tier 2
cy.get('.btn')                               // Tier 4 -- add TODO
```

**Selenium / other:**
```
driver.findElement(By.cssSelector('[data-testid="submit"]'))  // Tier 1
driver.findElement(By.className('btn'))                       // Tier 4 -- add TODO
```
## Page Object Model Rules

These rules apply regardless of framework:

1. **One class/object per page or view** -- no god objects
2. **No assertions in page objects** -- assertions belong ONLY in test specs
3. **Locators are properties** -- defined in constructor or as class fields
4. **Actions return void or the next page** -- for fluent chaining
5. **State queries return data** -- let the test file decide what to assert
6. **Every POM extends a shared base** -- shared navigation, screenshots, waits

### POM File Structure
```
[pages or page-objects or support/page-objects]/
  base/
    BasePage.[ext]        -- shared methods
  [feature]/
    [Feature]Page.[ext]   -- one file per page
  components/
    [Component].[ext]     -- reusable UI components
```

Adapt folder location to match the project's existing conventions.
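A minimal, framework-agnostic sketch of rules 2-5 above. The `Driver` interface is a stand-in for `page.locator` / `cy.get` style APIs -- an assumption for illustration, not a real library type.

```typescript
// Stand-in for the test framework's page-interaction API (assumption).
interface Driver {
  fill(selector: string, value: string): void;
  click(selector: string): void;
  text(selector: string): string;
}

// Rule 6: every POM extends a shared base.
class BasePage {
  constructor(protected readonly driver: Driver) {}
}

class DashboardPage extends BasePage {
  // Rule 5: state query returns data; the spec decides what to assert.
  welcomeText(): string {
    return this.driver.text('[data-testid="dashboard-welcome-container"]');
  }
}

class LoginPage extends BasePage {
  // Rule 3: locators are properties. Rule 2: no assertions anywhere in here.
  private readonly email = '[data-testid="login-email-input"]';
  private readonly password = '[data-testid="login-password-input"]';
  private readonly submit = '[data-testid="login-submit-btn"]';

  // Rule 4: action returns the next page for fluent chaining.
  login(user: string, pass: string): DashboardPage {
    this.driver.fill(this.email, user);
    this.driver.fill(this.password, pass);
    this.driver.click(this.submit);
    return new DashboardPage(this.driver);
  }
}
```

A spec would then chain `new LoginPage(driver).login(...).welcomeText()` and make the assertion itself.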
## Test Spec Rules

### Every test case MUST have:
- **Unique ID**: `UT-MODULE-001`, `API-AUTH-001`, `E2E-FLOW-001`
- **Exact target**: file path + function name, or HTTP method + endpoint
- **Concrete inputs**: actual values, not "valid data"
- **Explicit expected outcome**: exact assertion, not "works correctly"
- **Priority**: P0 (blocks release), P1 (should fix), P2 (nice to have)

### BAD assertions (never do this):
```typescript
expect(response.status).toBeTruthy()
expect(data).toBeDefined()
cy.get('.result').should('exist')  // what should it contain?
```

### GOOD assertions (always do this):
```typescript
expect(response.status).toBe(200)
expect(data.name).toBe('Test User')
cy.get('[data-cy="result"]').should('have.text', 'Todo created successfully')
```
## Naming Conventions

Adapt file extensions to match the project's language:

| Type | Pattern | Example (Playwright/Jest) | Example (Cypress) |
|------|---------|---------------------------|-------------------|
| Page Object | `[PageName]Page.[ext]` | `LoginPage.ts` | `LoginPage.ts` |
| Component POM | `[ComponentName].[ext]` | `NavigationBar.ts` | `NavigationBar.ts` |
| E2E test | `[feature].e2e.[ext]` | `login.e2e.spec.ts` | `login.e2e.cy.ts` |
| API test | `[resource].api.[ext]` | `users.api.spec.ts` | `users.api.cy.ts` |
| Unit test | `[module].unit.[ext]` | `validate.unit.spec.ts` | `validate.test.ts` |
| Fixture | `[domain]-data.[ext]` | `auth-data.ts` | `auth-data.json` |

If the project already has naming conventions, **follow those instead**.
## Repo Structure

Recommended structure -- adapt to match what the project already has:

```
tests/ or cypress/ or __tests__/
  e2e/
    smoke/        # P0 critical path (every PR)
    regression/   # Full suite (nightly)
  api/            # API-level tests
  unit/           # Unit tests

pages/ or page-objects/ or support/page-objects/
  base/
  [feature]/
  components/

fixtures/         # Test data & factories
config/           # Test configs (if separate from root)
reports/          # Generated reports (gitignored)
```
## Test Data Rules

- **NEVER** hardcode real credentials
- Use environment variables with test fallbacks
- Fixtures go in a dedicated folder
- Each domain gets its own fixture file
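The env-var-with-test-fallback rule can look like this; the variable names and fallback values are illustrative assumptions for the sketch. The point is that the fallback is an obviously fake local value, never a real credential.

```typescript
// Build credentials from an env-like object so the fallback rule is explicit.
// QA_TEST_EMAIL / QA_TEST_PASSWORD are hypothetical variable names.
function testCredentials(env: Record<string, string | undefined>) {
  return {
    email: env.QA_TEST_EMAIL ?? 'qa-user@example.test',
    // Fallback is a clearly fake local-only value, never a real secret.
    password: env.QA_TEST_PASSWORD ?? 'local-only-placeholder',
  };
}
```

In a real suite you would call it as `testCredentials(process.env)`.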
## Analysis Documents

When analyzing a repository, produce these documents:

### QA_ANALYSIS.md must include:
- Architecture overview (system type, language, runtime, entry points, dependencies)
- Risk assessment (HIGH / MEDIUM / LOW with justification)
- Top 10 unit test targets with rationale
- Recommended testing pyramid with percentages adjusted to this app
- External dependencies with risk levels

### TEST_INVENTORY.md must include:
- Every test case with ID, target, inputs, expected outcome, priority
- Organized by pyramid level (unit -> integration -> API -> E2E)
- No test case without an explicit expected outcome
## Quality Gates

Before delivering ANY QA artifact, verify:
- [ ] Framework matches what the project already uses
- [ ] Every test case has an explicit expected outcome with a concrete value
- [ ] No outcome says "correct", "proper", "appropriate", or "works" without defining what that means
- [ ] All locators follow the tier hierarchy
- [ ] No assertions inside page objects
- [ ] No hardcoded credentials
- [ ] File naming follows the project's existing conventions (or the standards above if none exist)
- [ ] Test IDs are unique and follow the naming convention
- [ ] Priority assigned to every test case
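The "no vague outcome" gate above is mechanical enough to lint. This sketch flags expected-outcome strings that use a banned word with no concrete value nearby; the heuristic (banned word present, no digit or quoted literal) is an assumption for illustration, not the pipeline's actual check.

```typescript
// Words that are banned in expected outcomes unless a concrete value is given.
const VAGUE_WORDS = ['correct', 'proper', 'appropriate', 'works'];

// True when the outcome uses a vague word and contains no digit or quoted literal.
function isVagueOutcome(outcome: string): boolean {
  const lower = outcome.toLowerCase();
  const hasVagueWord = VAGUE_WORDS.some((w) => lower.includes(w));
  const hasConcreteValue = /\d|'[^']+'|"[^"]+"/.test(outcome);
  return hasVagueWord && !hasConcreteValue;
}
```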
---

## Agent Pipeline

The QA automation system runs agents in a defined pipeline. Each stage produces artifacts consumed by the next stage.

### Pipeline Stages

```
scan -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> deliver
```

### Workflow Options

**Option 1: Dev-Only Repo (no existing QA repo)**
Full pipeline from scratch:
```
scan -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> deliver
```
Produces: SCAN_MANIFEST.md -> QA_ANALYSIS.md + TEST_INVENTORY.md + QA_REPO_BLUEPRINT.md -> [TESTID_AUDIT_REPORT.md] -> generation plan -> test files + POMs + fixtures + configs -> VALIDATION_REPORT.md -> branch + PR

**Option 2: Dev + Immature QA Repo (existing QA repo with low coverage or quality)**
Gap-fill and standardize:
```
scan both repos -> gap analysis -> fix broken tests -> add missing coverage -> standardize existing -> validate -> deliver
```
Produces: SCAN_MANIFEST.md (both repos) -> GAP_ANALYSIS.md -> fixed test files -> new test files -> standardized files -> VALIDATION_REPORT.md -> branch + PR

**Option 3: Dev + Mature QA Repo (existing QA repo with solid coverage)**
Surgical additions only:
```
scan both repos -> identify thin coverage -> add only missing tests -> validate -> deliver
```
Produces: SCAN_MANIFEST.md (both repos) -> GAP_ANALYSIS.md (thin areas only) -> new test files (targeted) -> VALIDATION_REPORT.md -> branch + PR

### Stage Transitions

| From | To | Condition |
|------|----|-----------|
| scan | analyze | SCAN_MANIFEST.md exists with > 0 testable surfaces |
| analyze | testid-inject | QA_ANALYSIS.md exists AND frontend components detected |
| analyze | plan | QA_ANALYSIS.md + TEST_INVENTORY.md exist (skip testid-inject if no frontend) |
| testid-inject | plan | TESTID_AUDIT_REPORT.md exists with coverage score calculated |
| plan | generate | Generation plan approved (or auto-approved in auto-advance mode) |
| generate | validate | All planned test files exist on disk |
| validate | deliver | VALIDATION_REPORT.md shows PASS or max fix loops (3) exhausted |
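The artifact-existence half of the transition table can be sketched as a gate function: a stage may start only when every artifact it depends on exists. Artifact presence is passed in as a set so the sketch stays filesystem-free; the stage-to-input map below covers only the artifact-based conditions, not approval or frontend-detection checks.

```typescript
// Artifact inputs each stage needs before it may start (abridged assumption).
const REQUIRED_INPUTS: Record<string, string[]> = {
  analyze: ['SCAN_MANIFEST.md'],
  'testid-inject': ['QA_ANALYSIS.md'],
  plan: ['QA_ANALYSIS.md', 'TEST_INVENTORY.md'],
  deliver: ['VALIDATION_REPORT.md'],
};

// A stage with no listed inputs (e.g. scan) may always start.
function canStart(stage: string, artifactsOnDisk: Set<string>): boolean {
  return (REQUIRED_INPUTS[stage] ?? []).every((a) => artifactsOnDisk.has(a));
}
```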
---

## Module Boundaries

Each agent owns specific artifacts. No agent may produce artifacts assigned to another agent.

| Agent | Reads | Produces | Template |
|-------|-------|----------|----------|
| qa-scanner | repo source files, package.json, file tree | SCAN_MANIFEST.md | templates/scan-manifest.md |
| qa-analyzer | SCAN_MANIFEST.md, CLAUDE.md | QA_ANALYSIS.md, TEST_INVENTORY.md, QA_REPO_BLUEPRINT.md (Option 1) or GAP_ANALYSIS.md (Option 2/3) | templates/qa-analysis.md, templates/test-inventory.md, templates/qa-repo-blueprint.md, templates/gap-analysis.md |
| qa-planner | TEST_INVENTORY.md, QA_ANALYSIS.md | Generation plan (internal) | -- |
| qa-executor | TEST_INVENTORY.md, CLAUDE.md | test files, POMs, fixtures, configs | qa-template-engine patterns |
| qa-validator | generated test files, CLAUDE.md | VALIDATION_REPORT.md (validation mode) or QA_AUDIT_REPORT.md (audit mode) | templates/validation-report.md, templates/qa-audit-report.md |
| qa-testid-injector | repo source files, SCAN_MANIFEST.md, CLAUDE.md | TESTID_AUDIT_REPORT.md, modified source files with data-testid attributes | templates/scan-manifest.md, templates/testid-audit-report.md |
| qa-bug-detective | test execution results, test source files, CLAUDE.md | FAILURE_CLASSIFICATION_REPORT.md | templates/failure-classification.md |

**Rule:** An agent MUST NOT produce artifacts assigned to another agent.

**Rule:** An agent MUST read all artifacts listed in its "Reads" column before producing output.

---
## Verification Commands

Every artifact must pass verification before the pipeline advances. Below are the validation rules per artifact type.

### SCAN_MANIFEST.md
- Has > 0 files in the File List table
- Project Detection section is populated (framework, language, component patterns)
- Testable Surfaces has at least 1 category with entries
- File priority ordering is present (HIGH/MEDIUM/LOW)

### QA_ANALYSIS.md
- All 6 sections present: Architecture Overview, External Dependencies, Risk Assessment, Top 10 Unit Test Targets, API/Contract Test Targets, Recommended Testing Pyramid
- Top 10 has exactly 10 entries with module, rationale, and complexity
- Testing pyramid percentages sum to 100%
- Risk assessment uses only HIGH/MEDIUM/LOW with justification per item

### TEST_INVENTORY.md
- Every test case has all mandatory fields: ID, target, inputs, expected outcome, priority
- IDs are unique across the entire document (no duplicates)
- IDs follow the naming convention: UT-MODULE-NNN, INT-MODULE-NNN, API-RESOURCE-NNN, E2E-FLOW-NNN
- Pyramid tier counts match the summary table
- No expected outcome says "correct", "proper", "appropriate", or "works" without a concrete value
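The uniqueness and naming-convention checks above can be sketched together; the regex mirrors the `UT/INT/API/E2E-NAME-NNN` pattern, and the returned message strings are illustrative.

```typescript
// Mirrors the documented ID convention: prefix, uppercase name, three digits.
const ID_PATTERN = /^(UT|INT|API|E2E)-[A-Z0-9]+-\d{3}$/;

// Returns one problem string per bad-format or duplicate ID; empty = pass.
function findIdProblems(ids: string[]): string[] {
  const problems: string[] = [];
  const seen = new Set<string>();
  for (const id of ids) {
    if (!ID_PATTERN.test(id)) problems.push(`bad format: ${id}`);
    if (seen.has(id)) problems.push(`duplicate: ${id}`);
    seen.add(id);
  }
  return problems;
}
```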
### QA_REPO_BLUEPRINT.md
- Folder structure tree is present with explanations per directory
- Config files section has actual content (not placeholders)
- npm scripts defined for smoke, regression, and API test runs
- CI/CD strategy section includes PR-gate and nightly run configurations
- Definition of Done checklist is present

### VALIDATION_REPORT.md
- All 4 layers reported per file: Syntax, Structure, Dependencies, Logic
- Each layer shows PASS or FAIL with details
- Confidence level assigned: HIGH (all layers pass), MEDIUM (1-2 minor issues), LOW (structural problems)
- Fix loop log shows iteration count and what was found/fixed per loop
- Unresolved issues section documents anything not auto-fixed

### FAILURE_CLASSIFICATION_REPORT.md
- Every failure has a classification: APPLICATION BUG, TEST CODE ERROR, ENVIRONMENT ISSUE, or INCONCLUSIVE
- Every failure has a confidence level: HIGH, MEDIUM-HIGH, MEDIUM, or LOW
- Every failure has evidence: code snippet + reasoning explaining the classification
- No APPLICATION BUG is marked as auto-fixed (application bugs require developer action)
- Auto-fix log documents what was fixed and at what confidence level

### TESTID_AUDIT_REPORT.md
- Coverage score calculated: existing data-testid count / total interactive elements
- All proposed data-testid values follow the `{context}-{description}-{element-type}` naming convention
- No duplicate data-testid values within the same page/route scope
- Elements classified by priority: P0 (form inputs, buttons), P1 (links, images), P2 (containers, decorative)
- Decision gate threshold applied: >90% SELECTIVE, 50-90% TARGETED, <50% FULL PASS, 0% P0 FIRST
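The decision gate above is a straight threshold lookup on the coverage score. The function name and the `NONE` result for a page with no interactive elements are assumptions for this sketch; the threshold values come from the bullet.

```typescript
// Maps a coverage score to the documented injection strategy thresholds.
function injectionStrategy(testidCount: number, interactiveCount: number): string {
  if (interactiveCount === 0) return 'NONE'; // no interactive elements (assumed edge case)
  const coverage = (testidCount / interactiveCount) * 100;
  if (coverage === 0) return 'P0 FIRST';   // 0% -- inject P0 elements first
  if (coverage > 90) return 'SELECTIVE';   // >90% -- fill only the gaps
  if (coverage >= 50) return 'TARGETED';   // 50-90% -- target weak areas
  return 'FULL PASS';                      // <50% -- full injection pass
}
```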
### GAP_ANALYSIS.md
- Coverage map shows all modules from SCAN_MANIFEST.md
- Missing tests have IDs following the naming convention and priorities assigned
- Broken tests have failure reasons documented with file path and error
- Quality assessment includes locator tier distribution and assertion quality rating
- Recommendations are prioritized: fix broken first, then add P0, then P1

### QA_AUDIT_REPORT.md
- All 6 dimensions scored: Locator Quality, Assertion Specificity, POM Compliance, Test Coverage, Naming Convention, Test Data Management
- Weights sum to 100%: Locator 20%, Assertion 20%, POM 15%, Coverage 20%, Naming 15%, Test Data 10%
- Overall score matches the weighted calculation of dimension scores
- Critical issues listed with file path, line number, and description
- Each recommendation has an effort estimate: S (small), M (medium), L (large)
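The weighted overall score can be computed directly from the bullet's weights. Dimension scores are assumed to be 0-100, and the lowercase keys are illustrative.

```typescript
// Weights straight from the audit rule: they sum to 1.0 (100%).
const WEIGHTS: Record<string, number> = {
  locator: 0.20,
  assertion: 0.20,
  pom: 0.15,
  coverage: 0.20,
  naming: 0.15,
  testData: 0.10,
};

// Weighted sum of 0-100 dimension scores, rounded to a whole number.
function overallScore(scores: Record<string, number>): number {
  let total = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    total += (scores[dim] ?? 0) * weight;
  }
  return Math.round(total);
}
```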
---

## Git Workflow

All QA automation output follows these git conventions.

### Branch Naming

```
qa/auto-{project}-{date}
```

Examples:
- `qa/auto-shopflow-2026-03-18`
- `qa/auto-acme-api-2026-04-01`
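The pattern above can be sketched as a one-line formatter; the UTC date formatting follows the `YYYY-MM-DD` form in the examples, and the function name is illustrative.

```typescript
// Builds qa/auto-{project}-{date} with the date rendered as YYYY-MM-DD (UTC).
function qaBranchName(project: string, date: Date): string {
  const iso = date.toISOString().slice(0, 10); // e.g. '2026-03-18'
  return `qa/auto-${project}-${iso}`;
}
```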
### Commit Message Format

```
qa({agent}): {description}
```

Examples:
- `qa(scanner): produce SCAN_MANIFEST.md for shopflow`
- `qa(analyzer): produce QA_ANALYSIS.md and TEST_INVENTORY.md`
- `qa(executor): generate 24 test files with POMs and fixtures`
- `qa(validator): validate generated tests - PASS with HIGH confidence`
- `qa(testid-injector): inject 47 data-testid attributes across 12 components`
- `qa(bug-detective): classify 5 failures - 2 APP BUG, 2 TEST ERROR, 1 ENV ISSUE`

### Commit Conventions

- One commit per agent stage (scanner produces one commit, analyzer produces one commit, etc.)
- Descriptive messages that include artifact names and counts
- Never commit .env files, credentials, or secrets
- Include the modified file count in the commit body when relevant

### PR Template

PR description must include:
- Analysis summary (architecture type, framework, risk areas)
- Test counts by pyramid level (unit: N, integration: N, API: N, E2E: N)
- Coverage metrics (modules covered, estimated line coverage)
- Validation pass/fail status with confidence level
- Link to VALIDATION_REPORT.md in the PR files

---
## Team Settings

Configuration for multi-agent pipeline execution.

### Concurrent Execution

Agents in the same pipeline stage can run in parallel when their inputs are independent. Examples:
- qa-testid-injector and qa-analyzer can run simultaneously after scan completes (both read SCAN_MANIFEST.md)
- Multiple qa-executor instances can generate tests for different modules in parallel

Agents in different pipeline stages MUST respect stage ordering. A downstream agent cannot start until all its required inputs exist on disk.

### Worktree Isolation

Each agent operates on the same branch. No worktree splits are needed for this system. Agents coordinate through file-based artifacts -- each agent writes its own files and reads other agents' files.

### Dependency Ordering

Respect the stage transitions from the Agent Pipeline section:
1. qa-scanner runs first (no dependencies)
2. qa-analyzer and qa-testid-injector run after the scanner (both depend on SCAN_MANIFEST.md)
3. qa-planner runs after the analyzer (depends on QA_ANALYSIS.md + TEST_INVENTORY.md)
4. qa-executor runs after the planner (depends on the generation plan)
5. qa-validator runs after the executor (depends on generated test files)
6. qa-bug-detective runs after test execution (depends on test results)

### Auto-Advance Mode

When auto-advance is enabled, pipeline stages advance automatically when:
1. The previous stage completes
2. All output artifacts from the previous stage exist on disk
3. All output artifacts pass their verification commands (from the Verification Commands section)

No human confirmation is needed between stages in auto-advance mode. The pipeline pauses only at explicit checkpoint tasks or when verification fails.

---
## Agent Coordination

Rules governing how agents communicate and hand off work through artifacts.

### Read-Before-Write Rules

Every agent MUST read its required inputs before producing any output. Failure to read inputs produces low-quality, inconsistent artifacts.

| Agent | MUST Read Before Producing Output |
|-------|-----------------------------------|
| qa-scanner | package.json (or equivalent), folder tree structure, all source file extensions to detect framework and language |
| qa-analyzer | SCAN_MANIFEST.md (complete, verified), CLAUDE.md (all QA standards sections) |
| qa-planner | TEST_INVENTORY.md (all test cases), QA_ANALYSIS.md (architecture and risk context) |
| qa-executor | TEST_INVENTORY.md (test cases to implement), CLAUDE.md (POM rules, locator hierarchy, assertion rules, naming conventions, quality gates) |
| qa-validator | CLAUDE.md (quality gates, locator tiers, assertion rules), all generated test files to validate |
| qa-testid-injector | SCAN_MANIFEST.md (component file list), CLAUDE.md (data-testid Convention section for naming rules) |
| qa-bug-detective | Test execution output (stdout/stderr, exit codes), test source files (to read the failing code), CLAUDE.md (classification rules) |

### Handoff Patterns

Agents communicate exclusively through file-based artifacts:

1. **Producer writes** -- Agent completes its task and writes output artifact(s) to disk
2. **Pipeline verifies** -- Output artifacts pass verification commands before advancing
3. **Consumer reads** -- Next agent reads the artifact(s) as its first action
4. **No direct communication** -- Agents never pass data in memory or through environment variables between stages

### Quality Gates Per Artifact

Before the pipeline advances past any stage, the produced artifact(s) must pass verification:

| Stage Complete | Artifact | Gate |
|----------------|----------|------|
| scan | SCAN_MANIFEST.md | > 0 files listed, project detection populated |
| analyze | QA_ANALYSIS.md + TEST_INVENTORY.md | All sections present, IDs unique, pyramid sums to 100% |
| testid-inject | TESTID_AUDIT_REPORT.md | Coverage score calculated, naming convention compliant |
| plan | Generation plan | Test cases mapped to output files, no unassigned cases |
| generate | Test files + POMs | All planned files exist, imports resolve, syntax valid |
| validate | VALIDATION_REPORT.md | All 4 layers checked per file, confidence level assigned |
| deliver | Branch + PR | Branch pushed, PR created with required description sections |

### Error Recovery

If an agent fails or produces an artifact that does not pass verification:
1. Log the failure with the specific verification check that failed
2. Retry the agent (max 3 attempts per stage)
3. If still failing after 3 attempts, pause the pipeline and report the blocked stage
4. Do not advance to the next stage with a failed artifact

---
## data-testid Convention

All `data-testid` attributes injected by qa-testid-injector and referenced by generated tests MUST follow this naming convention.

### Naming Pattern

```
{context}-{description}-{element-type}
```

All values are **kebab-case**. No camelCase, no underscores, no periods.

### Context Derivation

1. **Page-level context**: Derived from the component filename or route
   - `LoginPage.tsx` -> context is `login`
   - `ProductDetailPage.tsx` -> context is `product-detail`
   - Route `/settings/profile` -> context is `settings-profile`

2. **Component-level context**: Derived from the component name
   - `<NavBar>` -> context is `navbar`
   - `<ShoppingCart>` -> context is `shopping-cart`
   - `<UserAvatar>` -> context is `user-avatar`

3. **Nested context**: Parent-child hierarchy, max 3 levels deep
   - `checkout-shipping-address-input` (page -> section -> field)
   - `dashboard-sidebar-nav-link` (page -> component -> element)
   - Never exceed 3 levels: `a-b-c-element` is the maximum depth

4. **Dynamic list items**: Use template literals with unique keys
   ```tsx
   data-testid={`product-${product.id}-card`}
   data-testid={`order-${order.id}-status-badge`}
   ```
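Page-level context derivation can be sketched as: strip the extension and the `Page` suffix, then convert PascalCase to kebab-case. The helper name is illustrative; note that names like `NavBar -> navbar` (no hyphen, per the examples above) would need a manual override that a naive splitter does not produce.

```typescript
// Derives a page-level context: 'LoginPage.tsx' -> 'login'.
function deriveContext(componentFile: string): string {
  const base = (componentFile.split('/').pop() ?? componentFile)
    .replace(/\.[jt]sx?$/, '')   // drop .ts/.tsx/.js/.jsx extension
    .replace(/Page$/, '');       // drop the 'Page' suffix
  return base
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // PascalCase -> Pascal-Case
    .toLowerCase();
}
```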
### Element Type Suffix Table

Every `data-testid` value ends with a suffix indicating the element type:

| Element | Suffix | Example |
|---------|--------|---------|
| `<button>` | `-btn` | `login-submit-btn` |
| `<input>` | `-input` | `login-email-input` |
| `<select>` | `-select` | `settings-language-select` |
| `<textarea>` | `-textarea` | `feedback-comment-textarea` |
| `<a>` (link) | `-link` | `navbar-profile-link` |
| `<form>` | `-form` | `checkout-payment-form` |
| `<img>` | `-img` | `product-hero-img` |
| `<table>` | `-table` | `users-list-table` |
| `<tr>` (row) | `-row` | `users-item-row` |
| `<dialog>/<modal>` | `-modal` | `confirm-delete-modal` |
| `<div>` container | `-container` | `dashboard-stats-container` |
| `<ul>/<ol>` list | `-list` | `notifications-list` |
| `<li>` item | `-item` | `notifications-item` |
| dropdown menu | `-dropdown` | `navbar-user-dropdown` |
| tab | `-tab` | `settings-security-tab` |
| checkbox | `-checkbox` | `terms-accept-checkbox` |
| radio | `-radio` | `shipping-express-radio` |
| toggle/switch | `-toggle` | `notifications-enabled-toggle` |
| badge/chip | `-badge` | `cart-count-badge` |
| alert/toast | `-alert` | `error-validation-alert` |
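A validator for static `data-testid` values follows directly from the pattern and the table: kebab-case with at least two segments, ending in a known suffix. The suffix list is copied from the table; dynamic template-literal IDs would be checked after interpolation, which this sketch does not cover.

```typescript
// Suffixes from the element-type table above.
const SUFFIXES = [
  'btn', 'input', 'select', 'textarea', 'link', 'form', 'img', 'table',
  'row', 'modal', 'container', 'list', 'item', 'dropdown', 'tab',
  'checkbox', 'radio', 'toggle', 'badge', 'alert',
];

// Valid = kebab-case with 2+ segments and a recognized element-type suffix.
function isValidTestId(id: string): boolean {
  if (!/^[a-z0-9]+(-[a-z0-9]+)+$/.test(id)) return false;
  const suffix = id.split('-').pop() as string;
  return SUFFIXES.includes(suffix);
}
```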
### Third-Party Component Handling

When adding `data-testid` to third-party UI library components, use this priority order:

1. **Props passthrough** (preferred): If the library supports passing `data-testid` directly as a prop
   ```tsx
   <MuiButton data-testid="checkout-pay-btn">Pay</MuiButton>
   ```

2. **Wrapper div**: If the library does not support prop passthrough, wrap with a `<div>` that has the `data-testid`
   ```tsx
   <div data-testid="checkout-pay-container">
     <ThirdPartyButton>Pay</ThirdPartyButton>
   </div>
   ```

3. **inputProps / slotProps** (MUI-specific): Use component-specific prop APIs
   ```tsx
   <TextField inputProps={{ 'data-testid': 'login-email-input' }} />
   <Autocomplete slotProps={{ input: { 'data-testid': 'search-query-input' } }} />
   ```