qaa-agent 1.0.0

Files changed (56)
  1. package/.claude/commands/create-test.md +40 -0
  2. package/.claude/commands/qa-analyze.md +60 -0
  3. package/.claude/commands/qa-audit.md +37 -0
  4. package/.claude/commands/qa-blueprint.md +54 -0
  5. package/.claude/commands/qa-fix.md +36 -0
  6. package/.claude/commands/qa-from-ticket.md +88 -0
  7. package/.claude/commands/qa-gap.md +54 -0
  8. package/.claude/commands/qa-pom.md +36 -0
  9. package/.claude/commands/qa-pyramid.md +37 -0
  10. package/.claude/commands/qa-report.md +38 -0
  11. package/.claude/commands/qa-start.md +33 -0
  12. package/.claude/commands/qa-testid.md +54 -0
  13. package/.claude/commands/qa-validate.md +54 -0
  14. package/.claude/commands/update-test.md +58 -0
  15. package/.claude/settings.json +19 -0
  16. package/.claude/skills/qa-bug-detective/SKILL.md +122 -0
  17. package/.claude/skills/qa-repo-analyzer/SKILL.md +88 -0
  18. package/.claude/skills/qa-self-validator/SKILL.md +109 -0
  19. package/.claude/skills/qa-template-engine/SKILL.md +113 -0
  20. package/.claude/skills/qa-testid-injector/SKILL.md +93 -0
  21. package/.claude/skills/qa-workflow-documenter/SKILL.md +87 -0
  22. package/CLAUDE.md +543 -0
  23. package/README.md +418 -0
  24. package/agents/qa-pipeline-orchestrator.md +1217 -0
  25. package/agents/qaa-analyzer.md +508 -0
  26. package/agents/qaa-bug-detective.md +444 -0
  27. package/agents/qaa-executor.md +618 -0
  28. package/agents/qaa-planner.md +374 -0
  29. package/agents/qaa-scanner.md +422 -0
  30. package/agents/qaa-testid-injector.md +583 -0
  31. package/agents/qaa-validator.md +450 -0
  32. package/bin/install.cjs +176 -0
  33. package/bin/lib/commands.cjs +709 -0
  34. package/bin/lib/config.cjs +307 -0
  35. package/bin/lib/core.cjs +497 -0
  36. package/bin/lib/frontmatter.cjs +299 -0
  37. package/bin/lib/init.cjs +989 -0
  38. package/bin/lib/milestone.cjs +241 -0
  39. package/bin/lib/model-profiles.cjs +60 -0
  40. package/bin/lib/phase.cjs +911 -0
  41. package/bin/lib/roadmap.cjs +306 -0
  42. package/bin/lib/state.cjs +748 -0
  43. package/bin/lib/template.cjs +222 -0
  44. package/bin/lib/verify.cjs +842 -0
  45. package/bin/qaa-tools.cjs +607 -0
  46. package/package.json +34 -0
  47. package/templates/failure-classification.md +391 -0
  48. package/templates/gap-analysis.md +409 -0
  49. package/templates/pr-template.md +48 -0
  50. package/templates/qa-analysis.md +381 -0
  51. package/templates/qa-audit-report.md +465 -0
  52. package/templates/qa-repo-blueprint.md +636 -0
  53. package/templates/scan-manifest.md +312 -0
  54. package/templates/test-inventory.md +582 -0
  55. package/templates/testid-audit-report.md +354 -0
  56. package/templates/validation-report.md +243 -0
@@ -0,0 +1,19 @@
+ {
+   "permissions": {
+     "allow": [
+       "Bash(*)",
+       "Read",
+       "Write",
+       "Edit",
+       "Glob",
+       "Grep",
+       "Agent",
+       "WebFetch",
+       "WebSearch",
+       "NotebookEdit"
+     ]
+   },
+   "env": {
+     "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
+   }
+ }
@@ -0,0 +1,122 @@
+ ---
+ name: qa-bug-detective
+ description: QA Bug Detective. Runs generated tests and classifies failures as APPLICATION BUG, TEST CODE ERROR, ENVIRONMENT ISSUE, or INCONCLUSIVE with evidence and confidence levels. Use when user wants to run tests and classify results, investigate test failures, determine if failures are bugs or test issues, debug failing tests, triage test results, or understand why tests are failing. Triggers on "run tests", "classify failures", "why is this failing", "test failures", "debug tests", "triage results", "is this a bug or test error", "investigate failures".
+ ---
+ 
+ # QA Bug Detective
+ 
+ ## Purpose
+ 
+ Run generated tests and classify every failure into one of four categories with evidence and confidence levels. Auto-fix TEST CODE ERRORS when confidence is HIGH.
+ 
+ ## Classification Decision Tree
+ 
+ ```
+ Test fails
+ ├── Syntax/import error in TEST file?
+ │   └── YES → TEST CODE ERROR
+ ├── Error occurs in PRODUCTION code path?
+ │   ├── Known bug / unexpected behavior? → APPLICATION BUG
+ │   └── Code works as designed but test expectation wrong? → TEST CODE ERROR
+ ├── Connection refused / timeout / missing env var?
+ │   └── YES → ENVIRONMENT ISSUE
+ └── Can't determine?
+     └── INCONCLUSIVE
+ ```
+ 
+ ## Classification Categories
+ 
+ ### APPLICATION BUG
+ - Error manifests in production code (not test code)
+ - Stack trace points to src/ or app/ code
+ - Behavior contradicts documented requirements or API contracts
+ - **Action**: Report only. NEVER auto-fix application code.
+ 
+ ### TEST CODE ERROR
+ - Import/require fails (wrong path, missing module)
+ - Selector doesn't match current DOM
+ - Assertion expects wrong value (test written incorrectly)
+ - Missing await, wrong API usage, stale fixture reference
+ - **Action**: Auto-fix if HIGH confidence. Report if MEDIUM or lower.
+ 
+ ### ENVIRONMENT ISSUE
+ - Connection refused (database, API, external service)
+ - Timeout waiting for resource
+ - Missing environment variable
+ - File/directory not found (test infrastructure)
+ - **Action**: Report with suggested resolution steps.
+ 
+ ### INCONCLUSIVE
+ - Error is ambiguous
+ - Could be multiple root causes
+ - Insufficient data to classify
+ - **Action**: Report with what's known, request more info.
+ 
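The decision tree above is mechanical enough to sketch as code. A minimal, illustrative TypeScript version — the type, function names, and regexes are assumptions for demonstration, not part of this package's API:

```typescript
// Illustrative sketch of the classification decision tree above.
// Names and regexes are assumptions, not part of the package.
type Classification =
  | "TEST CODE ERROR"
  | "APPLICATION BUG"
  | "ENVIRONMENT ISSUE"
  | "INCONCLUSIVE";

interface Failure {
  errorMessage: string;
  stackTopFrame: string; // file path at the top of the stack trace
}

function classify(f: Failure): Classification {
  const msg = f.errorMessage.toLowerCase();
  // Branch 1: syntax/import error raised inside the test file itself.
  if (/syntaxerror|cannot find module/.test(msg) && /\.(test|spec)\./.test(f.stackTopFrame)) {
    return "TEST CODE ERROR";
  }
  // Branch 2: error raised from a production code path.
  if (/(^|\/)(src|app)\//.test(f.stackTopFrame)) {
    return "APPLICATION BUG"; // or TEST CODE ERROR if the expectation itself is wrong
  }
  // Branch 3: infrastructure signals.
  if (/econnrefused|timed out|timeout|environment variable/.test(msg)) {
    return "ENVIRONMENT ISSUE";
  }
  return "INCONCLUSIVE";
}
```

A real implementation would also attach the evidence fields (file, line, snippet, confidence) that every classification requires.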
+ ## Evidence Requirements
+ 
+ Every classification MUST include:
+ 1. **File path**: Exact file where error occurs
+ 2. **Line number**: Specific line of failure
+ 3. **Error message**: Complete error text
+ 4. **Code snippet**: The specific code proving the classification
+ 5. **Confidence level**: HIGH / MEDIUM-HIGH / MEDIUM / LOW
+ 6. **Reasoning**: Why this classification, not another
+ 
+ ## Confidence Levels
+ 
+ | Level | Definition |
+ |-------|------------|
+ | HIGH | Clear evidence in one direction, no ambiguity |
+ | MEDIUM-HIGH | Strong evidence but minor ambiguity |
+ | MEDIUM | Evidence points one way but alternatives exist |
+ | LOW | Insufficient data, multiple possible causes |
+ 
+ ## Auto-Fix Rules
+ 
+ Only auto-fix when:
+ - Classification = TEST CODE ERROR
+ - Confidence = HIGH
+ - Fix is mechanical (import path, selector, assertion value, config)
+ 
+ Fix types:
+ - Import path corrections
+ - Selector updates (match current DOM/data-testid)
+ - Assertion value updates (match current actual behavior)
+ - Config fixes (baseURL, timeout values)
+ - Missing await keywords
+ - Fixture path corrections
+ 
+ **NEVER auto-fix**: Application bugs, environment issues, anything with confidence < HIGH.
+ 
+ ## Output: FAILURE_CLASSIFICATION_REPORT.md
+ 
+ ```markdown
+ # Failure Classification Report
+ 
+ ## Summary
+ | Classification | Count | Auto-Fixed | Needs Attention |
+ |---------------|-------|-----------|----------------|
+ | APPLICATION BUG | N | 0 | N |
+ | TEST CODE ERROR | N | N | N |
+ | ENVIRONMENT ISSUE | N | 0 | N |
+ | INCONCLUSIVE | N | 0 | N |
+ 
+ ## Detailed Analysis
+ 
+ ### Failure 1: [test name]
+ - **Classification**: [category]
+ - **Confidence**: [level]
+ - **File**: [path]:[line]
+ - **Error**: [message]
+ - **Evidence**: [code snippet + reasoning]
+ - **Action Taken**: [auto-fixed / reported]
+ - **Resolution**: [what was fixed / what needs human attention]
+ ```
+ 
+ ## Quality Gate
+ 
+ - [ ] Every failure classified with evidence
+ - [ ] Confidence level assigned to each
+ - [ ] No application bugs auto-fixed
+ - [ ] Auto-fixes only applied at HIGH confidence
+ - [ ] FAILURE_CLASSIFICATION_REPORT.md produced
@@ -0,0 +1,88 @@
+ ---
+ name: qa-repo-analyzer
+ description: QA Repository Analyzer. Analyzes a dev repository and produces a complete QA baseline package including testability report, test inventory, and repo blueprint. Use when user wants to analyze a repo for testing, assess testability, generate test inventory, create QA baseline, understand test coverage needs, evaluate a codebase for QA, or produce a testing strategy. Triggers on "analyze repo", "testability report", "test inventory", "QA analysis", "QA baseline", "coverage assessment", "what should we test", "testing strategy".
+ ---
+ 
+ # QA Repository Analyzer
+ 
+ ## Purpose
+ 
+ Analyze a developer repository and produce a complete QA baseline package: Testability Report, Test Inventory (pyramid-based), and QA Repo Blueprint.
+ 
+ ## Core Rule
+ 
+ **Every analysis must be specific to the actual codebase — never generic advice. Every test case must have an explicit expected outcome.**
+ 
+ ## Execution Steps
+ 
+ ### Step 0: Collect Repo Context
+ 
+ Scan the repository systematically:
+ - Folder tree (entry points, structure)
+ - Package files (dependencies, scripts, framework detection)
+ - Service/controller files (API surface area)
+ - Model files (data structures, validation)
+ - Database layer (ORM, migrations, schemas)
+ - External integrations (payment, email, storage, queues)
+ - Existing test coverage (test files, config, CI)
+ - Configuration (env vars, feature flags)
+ 
+ ### Step 1: Pre-Analysis — Assumptions & Questions
+ 
+ Before generating deliverables, list:
+ - **Assumptions**: What you're inferring from the code (e.g., "Auth uses JWT based on middleware")
+ - **Questions**: What's ambiguous (e.g., "Is the Stripe integration in production or test mode?")
+ 
+ Present to user for confirmation before proceeding.
+ 
+ ### Step 2: Deliverable A — QA_ANALYSIS.md (Testability Report)
+ 
+ Produce with ALL these sections:
+ - **Architecture Overview**: System type, language, runtime, entry points table, internal layers
+ - **External Dependencies**: Table with purpose and risk level (HIGH/MEDIUM/LOW)
+ - **Risk Assessment**: Prioritized risks with justification
+ - **Top 10 Unit Test Targets**: Table with module/function, why it's high-priority, complexity assessment
+ - **API/Contract Test Targets**: Endpoints that need contract testing
+ - **Recommended Testing Pyramid**: Percentages adjusted to this specific app's architecture
+ 
+ ### Step 3: Deliverable B — TEST_INVENTORY.md (Test Cases)
+ 
+ Generate pyramid-based test inventory:
+ 
+ **Unit Tests** (60-70%): For each target:
+ - Test ID (UT-MODULE-NNN)
+ - Target (file path + function)
+ - What to validate
+ - Concrete inputs
+ - Mocks needed
+ - Explicit expected outcome
+ 
+ **Integration/Contract Tests** (10-15%): Component interactions, API contracts
+ 
+ **API Tests** (20-25%): For each endpoint:
+ - Test ID (API-RESOURCE-NNN)
+ - Method + endpoint
+ - Request body/params
+ - Expected status + response shape
+ 
+ **E2E Smoke Tests** (3-5%): Max 3-8 critical user paths
+ 
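For concreteness, one inventory entry can be modeled as data. A hedged sketch — the field names mirror the bullets above and are illustrative, not a schema this package defines:

```typescript
// Illustrative shape of a single TEST_INVENTORY.md entry.
// Field names mirror the bullet lists above; this is not a real schema.
interface InventoryEntry {
  id: string;                      // e.g. UT-AUTH-001 or API-USERS-002
  target: string;                  // file path + function, or METHOD + endpoint
  validates: string;               // what the test checks
  inputs: Record<string, unknown>; // concrete input values
  mocks: string[];                 // collaborators to stub
  expected: string;                // explicit, concrete expected outcome
  priority: "P0" | "P1" | "P2";
}

const example: InventoryEntry = {
  id: "UT-AUTH-001",
  target: "src/auth/token.ts:verifyToken", // hypothetical module
  validates: "expired JWT is rejected",
  inputs: { token: "<JWT expired two hours ago>" },
  mocks: ["clock fixed at 2024-01-01T00:00:00Z"],
  expected: "throws TokenExpiredError; no user object is returned",
  priority: "P0",
};
```

Note the `expected` field holds a concrete, verifiable outcome, never "works correctly".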
+ ### Step 4: QA_REPO_BLUEPRINT.md
+ 
+ If no QA repo exists, generate:
+ - Suggested repo name and folder structure
+ - Recommended stack (framework, runner, reporter)
+ - Config files needed
+ - Execution scripts (npm scripts, CI commands)
+ - CI/CD strategy (smoke on PR, regression nightly)
+ - Definition of Done checklist
+ 
+ ## Quality Gate
+ 
+ - [ ] Architecture overview matches actual codebase (not generic)
+ - [ ] Every test case has explicit expected outcome with concrete values
+ - [ ] No vague assertions ("works correctly", "returns proper data")
+ - [ ] Test IDs follow naming convention
+ - [ ] Priority (P0/P1/P2) assigned to every test case
+ - [ ] Risks are specific with evidence from the code
+ - [ ] Testing pyramid percentages are justified for this architecture
@@ -0,0 +1,109 @@
+ ---
+ name: qa-self-validator
+ description: QA Self Validator. Closed-loop agent that validates generated test code across 4 layers (syntax, structure, dependencies, logic) and auto-fixes issues. Use when user wants to validate tests, check test quality, verify test code compiles, ensure tests follow standards, run quality checks on test suite, or verify generated tests before delivery. Triggers on "validate tests", "check test quality", "verify tests", "test validation", "quality check", "does it compile", "are tests valid", "check my tests".
+ ---
+ 
+ # QA Self Validator
+ 
+ ## Purpose
+ 
+ Closed-loop validation agent: Generate -> Validate -> Fix -> Deliver. Never deliver test code without at least one validation pass.
+ 
+ ## Core Rule
+ 
+ **NEVER deliver generated QA code without running at least one validation pass. Max 3 fix loops before escalating.**
+ 
+ ## Validation Layers
+ 
+ ### Layer 1: Syntax
+ Run the appropriate checker based on language:
+ - TypeScript: `tsc --noEmit`
+ - JavaScript: `node --check [file]`
+ - Python: `python -m py_compile [file]`
+ - C#: `dotnet build --no-restore`
+ - Also run project linter if configured (eslint, flake8, etc.)
+ 
+ **Pass criteria**: Zero syntax errors.
+ 
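The Layer 1 commands can be dispatched by file extension. A minimal sketch — the function name is illustrative; the commands are the ones listed above:

```typescript
// Map a test file to the Layer 1 syntax-check command listed above.
// The function name is illustrative, not part of this package.
function syntaxCheckCommand(file: string): string | null {
  if (/\.tsx?$/.test(file)) return "tsc --noEmit";            // whole-project check
  if (/\.m?js$/.test(file)) return `node --check ${file}`;
  if (/\.py$/.test(file)) return `python -m py_compile ${file}`;
  if (/\.cs$/.test(file)) return "dotnet build --no-restore"; // per-project, not per-file
  return null; // unknown language: report instead of guessing
}
```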
+ ### Layer 2: Structure
+ Check each test file for:
+ - Correct directory placement (e2e in e2e/, unit in unit/, etc.)
+ - Naming convention compliance (CLAUDE.md patterns)
+ - Has actual test functions (not empty describe blocks)
+ - Imports reference real modules in the codebase
+ - No hardcoded secrets/credentials/tokens
+ - Page objects in pages/ directory, tests in tests/
+ 
+ **Pass criteria**: All structural checks pass.
+ 
+ ### Layer 3: Dependencies
+ Verify:
+ - All imports resolvable (modules exist at the referenced paths)
+ - Packages listed in package.json/requirements.txt
+ - No missing dependencies
+ - No circular dependencies in test helpers
+ - Test fixtures reference existing fixture files
+ 
+ **Pass criteria**: All imports resolve, all packages available.
+ 
+ ### Layer 4: Logic Quality
+ Check test logic:
+ - Happy path tests have positive assertions (toBe, toEqual, toHaveText)
+ - Error/negative tests have negative assertions (not.toBe, toThrow, status >= 400)
+ - Setup and teardown are symmetric (what's created is cleaned up)
+ - No duplicate test IDs across the suite
+ - Assertions are concrete — reject: toBeTruthy(), toBeDefined(), .should('exist')
+ - Each test has at least one assertion
+ 
+ **Pass criteria**: All logic checks pass.
+ 
+ ## Fix Loop Protocol
+ 
+ ```
+ Loop 1: Generate tests
+   -> Run all 4 validation layers
+   -> If PASS: Deliver
+   -> If FAIL: Identify issues, fix, continue
+ 
+ Loop 2: Re-validate after fixes
+   -> If PASS: Deliver
+   -> If FAIL: Identify remaining issues, fix
+ 
+ Loop 3: Final validation
+   -> If PASS: Deliver
+   -> If FAIL: Deliver with VALIDATION_REPORT noting unresolved issues
+ ```
+ 
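The protocol above is a bounded loop: at most three validation passes, and delivery happens either way. A sketch with hypothetical stand-ins for the four layers:

```typescript
// Bounded fix loop: validate, fix, re-validate, at most 3 passes.
// runLayers and applyFixes are hypothetical stand-ins for the 4 layers.
type LayerResult = { pass: boolean; issues: string[] };

function fixLoop(
  runLayers: () => LayerResult,
  applyFixes: (issues: string[]) => void,
  maxPasses = 3
): { unresolved: string[] } {
  let result = runLayers(); // pass 1
  for (let pass = 2; pass <= maxPasses && !result.pass; pass++) {
    applyFixes(result.issues);
    result = runLayers();
  }
  // Deliver either way; leftovers go into VALIDATION_REPORT.md.
  return { unresolved: result.pass ? [] : result.issues };
}
```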
+ ## Output: VALIDATION_REPORT.md
+ 
+ ```markdown
+ # Validation Report
+ 
+ ## Summary
+ | Layer | Status | Issues Found | Issues Fixed |
+ |-------|--------|-------------|-------------|
+ | Syntax | PASS/FAIL | N | N |
+ | Structure | PASS/FAIL | N | N |
+ | Dependencies | PASS/FAIL | N | N |
+ | Logic | PASS/FAIL | N | N |
+ 
+ ## File Details
+ ### [filename]
+ | Layer | Status | Details |
+ |-------|--------|---------|
+ | ... | ... | ... |
+ 
+ ## Unresolved Issues
+ [Any issues that couldn't be auto-fixed after 3 loops]
+ 
+ ## Confidence Level
+ [HIGH/MEDIUM/LOW with reasoning]
+ ```
+ 
+ ## Quality Gate
+ 
+ - [ ] All 4 layers checked for every file
+ - [ ] Fix loop executed (max 3 iterations)
+ - [ ] VALIDATION_REPORT.md produced
+ - [ ] No test delivered with syntax errors
+ - [ ] Unresolved issues clearly documented
@@ -0,0 +1,113 @@
+ ---
+ name: qa-template-engine
+ description: QA Template Engine. Creates production-ready test files with POM pattern, explicit assertions, and proper structure. Use when user wants to generate test files, create test templates, write test code, scaffold test suites, produce executable tests, or create test specs from an inventory. Triggers on "generate tests", "create test files", "write tests", "scaffold tests", "test templates", "produce test code", "executable tests", "create test spec".
+ ---
+ 
+ # QA Template Engine
+ 
+ ## Purpose
+ 
+ Create definitive, production-ready test files with explicit expected outcomes, proper POM architecture, and framework-specific best practices.
+ 
+ ## Core Rule
+ 
+ **NO test case is complete without an expected outcome that a junior QA engineer could verify without asking questions.**
+ 
+ ## Framework Detection
+ 
+ Before generating ANY code:
+ 1. Check for existing config: playwright.config.ts, cypress.config.ts, jest.config.ts, vitest.config.ts, pytest.ini
+ 2. Check package.json/requirements.txt for test dependencies
+ 3. Check existing test files for patterns and conventions
+ 4. **Always match the project's existing framework**
+ 
+ If no framework exists, ask the user.
+ 
+ ## Test Template Categories
+ 
+ ### Unit Test Template
+ ```
+ Test ID: UT-[MODULE]-[NNN]
+ Target: [file]:[function]
+ Priority: P[0-2]
+ 
+ // Arrange
+ const input = [concrete value];
+ const expected = [concrete value];
+ 
+ // Act
+ const result = functionUnderTest(input);
+ 
+ // Assert
+ expect(result).toBe(expected); // NEVER toBeTruthy/toBeDefined
+ ```
+ 
+ ### API Test Template
+ ```
+ Test ID: API-[RESOURCE]-[NNN]
+ Target: [METHOD] [endpoint]
+ Priority: P[0-2]
+ 
+ // Arrange
+ const payload = { [concrete data] };
+ 
+ // Act
+ const response = await api.[method]('[endpoint]', payload);
+ 
+ // Assert
+ expect(response.status).toBe([exact code]);
+ expect(response.body.[field]).toBe('[exact value]');
+ ```
+ 
+ ### E2E Test Template (Playwright)
+ ```
+ Test ID: E2E-[FLOW]-[NNN]
+ Target: [user flow description]
+ Priority: P[0-2]
+ 
+ // Arrange
+ await loginPage.navigate();
+ 
+ // Act
+ await loginPage.login('[email]', '[password]');
+ 
+ // Assert
+ await expect(dashboardPage.welcomeMessage).toHaveText('Welcome, Test User');
+ ```
+ 
+ ## POM Generation Rules
+ 
+ Following CLAUDE.md strictly:
+ 1. One class per page — no god objects
+ 2. No assertions in page objects — assertions in test specs ONLY
+ 3. Locators as readonly properties — Tier 1 preferred (data-testid, ARIA roles)
+ 4. Actions return void or next page
+ 5. State queries return data
+ 6. Every POM extends BasePage
+ 
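A minimal sketch of rules 1-6 in TypeScript. `Page` here is a stub standing in for the real Playwright `Page` so the example is self-contained; class and locator names are illustrative:

```typescript
// Minimal POM sketch following rules 1-6 above. `Page` is a stub for
// the real Playwright Page; locators use tier-1 data-testid selectors.
interface Page {
  goto(url: string): void;
  fill(selector: string, value: string): void;
  click(selector: string): void;
}

abstract class BasePage {
  constructor(protected readonly page: Page) {}
  abstract readonly path: string;
  navigate(): void {
    this.page.goto(this.path);
  }
}

class LoginPage extends BasePage {
  readonly path = "/login";
  // Rule 3: locators as readonly properties, tier-1 data-testid.
  readonly emailInput = "[data-testid=login-email-input]";
  readonly passwordInput = "[data-testid=login-password-input]";
  readonly submitBtn = "[data-testid=login-submit-btn]";

  // Rule 4: actions return void; rule 2: no assertions in page objects.
  login(email: string, password: string): void {
    this.page.fill(this.emailInput, email);
    this.page.fill(this.passwordInput, password);
    this.page.click(this.submitBtn);
  }
}
```

Assertions against the resulting state belong in the test spec that drives `LoginPage`, never in the class itself.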
+ ## Locator Priority
+ 
+ Always use this order:
+ 1. data-testid: `page.getByTestId('login-submit-btn')`
+ 2. ARIA role: `page.getByRole('button', { name: 'Log in' })`
+ 3. Label/placeholder: `page.getByLabel('Email')`
+ 4. CSS selector: `page.locator('.btn')` + `// TODO: Request test ID`
+ 
+ ## Expected Outcome Rules
+ 
+ - **Be specific**: Exact values, status codes, text content
+ - **Be measurable**: Timing thresholds, counts, lengths
+ - **Be negative too**: What should NOT happen
+ - **Include state transitions**: Before/after states
+ - **Reference test data**: Use fixture values, not magic strings
+ 
+ ## Quality Gate
+ 
+ - [ ] Every test has explicit expected outcome with concrete value
+ - [ ] No vague words: "correct", "proper", "appropriate", "works"
+ - [ ] All locators follow tier hierarchy
+ - [ ] No assertions inside page objects
+ - [ ] No hardcoded credentials
+ - [ ] File naming follows project conventions
+ - [ ] Test IDs are unique and follow convention
+ - [ ] Priority (P0/P1/P2) assigned to every test
@@ -0,0 +1,93 @@
+ ---
+ name: qa-testid-injector
+ description: QA Test ID Injector. Scans source code to find interactive UI elements missing data-testid attributes and injects them following naming convention. Use when user wants to add test IDs, improve testability, audit missing test hooks, prepare for E2E automation, or add data-testid attributes before writing Playwright/Cypress tests. Triggers on "add test IDs", "add data-testid", "test hooks", "missing testid", "testability audit", "prepare for automation", "inject test attributes", "make components testable".
+ ---
+ 
+ # QA Test ID Injector
+ 
+ ## Purpose
+ 
+ Scan application source code, identify interactive UI elements lacking stable test selectors, and inject `data-testid` attributes following a consistent naming convention. Runs as **Step 0** — before any test generation.
+ 
+ ## Core Rule
+ 
+ **Every interactive element MUST have a stable, unique `data-testid` before E2E tests are generated against it.**
+ 
+ ## Naming Convention
+ 
+ Pattern: `{context}-{description}-{element-type}` in kebab-case.
+ 
+ ### Element Type Suffixes
+ 
+ | Element | Suffix | Example |
+ |---------|--------|---------|
+ | button | -btn | login-submit-btn |
+ | input | -input | login-email-input |
+ | select | -select | settings-language-select |
+ | textarea | -textarea | feedback-comment-textarea |
+ | link | -link | navbar-profile-link |
+ | form | -form | checkout-payment-form |
+ | img | -img | product-hero-img |
+ | table | -table | users-list-table |
+ | row | -row | users-item-row |
+ | modal | -modal | confirm-delete-modal |
+ | container | -container | dashboard-stats-container |
+ | list | -list | notifications-list |
+ | item | -item | notifications-item |
+ | dropdown | -dropdown | navbar-user-dropdown |
+ | tab | -tab | settings-security-tab |
+ | checkbox | -checkbox | terms-accept-checkbox |
+ | radio | -radio | shipping-express-radio |
+ | toggle | -toggle | notifications-enabled-toggle |
+ | badge | -badge | cart-count-badge |
+ | alert | -alert | error-validation-alert |
+ 
+ ### Context Derivation
+ 
+ 1. **Page-level**: From component filename or route (LoginPage.tsx -> login)
+ 2. **Component-level**: From component name (<NavBar> -> navbar)
+ 3. **Nested**: Parent -> child hierarchy, max 3 levels deep
+ 4. **Dynamic lists**: Use template literals with unique keys
+ 
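The convention is mechanical, so it can be expressed as a tiny helper. A sketch — `toKebab` and `buildTestId` are illustrative names, not part of this package:

```typescript
// Build a {context}-{description}-{element-type} test ID in kebab-case.
// Helper names are illustrative, not part of this package.
function toKebab(s: string): string {
  return s.trim().replace(/[\s_]+/g, "-").toLowerCase();
}

function buildTestId(context: string, description: string, elementType: string): string {
  return [toKebab(context), toKebab(description), elementType].join("-");
}
```

For example, `buildTestId("NavBar", "profile", "link")` yields `navbar-profile-link`, matching the suffix table above.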
+ ## Execution Phases
+ 
+ ### Phase 1: SCAN
+ - Detect framework (React/Vue/Angular/HTML) from package.json and file extensions
+ - List all component files (exclude test/spec/stories)
+ - Prioritize by interaction density (forms > pages > layouts)
+ - Output: SCAN_MANIFEST.md
+ 
+ ### Phase 2: AUDIT
+ - For each file, identify interactive elements
+ - Classify: P0 (must have), P1 (should have), P2 (nice to have)
+ - Record existing data-testid as EXISTING (don't modify)
+ - Record missing as MISSING with proposed value
+ - Output: TESTID_AUDIT_REPORT.md
+ 
+ ### Phase 3: INJECT
+ - Add data-testid as LAST attribute before closing >
+ - Preserve all existing formatting
+ - Only add the attribute — change nothing else
+ - Framework-specific handling (JSX, Vue, Angular, HTML)
+ - Output: INJECTION_CHANGELOG.md + modified source files
+ 
+ ### Phase 4: VALIDATE
+ - Syntax check all modified files
+ - Uniqueness check (no duplicate testids per page)
+ - Convention compliance check
+ - Output: INJECTION_VALIDATION.md
+ 
+ ## Third-Party Components
+ 
+ 1. Props passthrough (if library supports it) — direct data-testid
+ 2. Wrapper div (if no passthrough) — wrap with data-testid div
+ 3. inputProps/slotProps (MUI-specific) — use component-specific prop APIs
+ 
+ ## Quality Gate
+ 
+ - [ ] Every interactive element has a data-testid
+ - [ ] All values follow {context}-{description}-{element-type} convention
+ - [ ] No duplicate data-testid values in same page/route scope
+ - [ ] No existing code modified beyond adding the attribute
+ - [ ] Syntax validation passes on all modified files
+ - [ ] Dynamic list items use template literals with unique keys
@@ -0,0 +1,87 @@
+ ---
+ name: qa-workflow-documenter
+ description: QA Workflow Documenter. Generates structured QA workflow documentation with decision trees, playbooks, and AI interaction protocols. Use when user wants to document QA processes, create testing playbooks, define workflow steps, document QA procedures, create decision trees for testing, or standardize QA processes. Triggers on "document workflow", "QA process", "testing playbook", "workflow documentation", "QA procedures", "decision tree", "standardize process", "document QA steps".
+ ---
+ 
+ # QA Workflow Documenter
+ 
+ ## Purpose
+ 
+ Generate structured, AI-specific QA workflow documentation with decision trees, playbooks, and AI interaction protocols. Every step answers: WHO does it, WHAT they do, WHAT input they need, WHAT output they produce.
+ 
+ ## Core Principle
+ 
+ **AI-first language**: Use precise verbs — scan, extract, classify, generate, validate. Never use vague terms like "review", "check", "handle".
+ 
+ ## Output Artifacts
+ 
+ 1. **WORKFLOW_[NAME].md** — Step-by-step workflow with decision gates
+ 2. **DECISION_TREE.md** — Visual decision trees for key branch points
+ 3. **AI_PROMPTS_CATALOG.md** — Reusable prompt patterns for each workflow step
+ 4. **CHECKLIST.md** — Pre/post verification checklists
+ 
+ ## Workflow Template Structure
+ 
+ ### Header Block
+ ```markdown
+ # Workflow: [Name]
+ **Version**: [semver]
+ **Applies to**: [project types / tech stacks]
+ **Prerequisites**: [what must exist before starting]
+ **Estimated duration**: [time range]
+ **Actors**: [AI Agent, QA Engineer, Team Lead]
+ ```
+ 
+ ### Step Format
+ ```markdown
+ ## Step N: [Name]
+ **Actor**: [who executes this step]
+ **Input**: [what they receive]
+ **Action**: [precise description using action verbs]
+ **Output**: [what they produce]
+ **Decision Gate**: [condition to proceed vs branch]
+ 
+ ### AI Prompt Pattern
+ [If this step involves an AI agent, include the prompt template]
+ ```
+ 
+ ## Workflow Types
+ 
+ ### 1. Repository Intake Workflow
+ New repo arrives -> scan -> classify -> assess risk -> recommend strategy
+ 
+ ### 2. Test Case Generation Workflow
+ Analysis done -> select targets -> generate cases -> validate -> deliver
+ 
+ ### 3. QA Repo Bootstrap Workflow
+ Blueprint ready -> create structure -> generate configs -> seed initial tests
+ 
+ ### 4. Validation & Bug Triage Workflow
+ Tests generated -> run -> classify failures -> fix loop -> report
+ 
+ ### 5. Test Maintenance Workflow
+ Existing tests -> audit -> prioritize fixes -> apply -> verify
+ 
+ ## Decision Tree Format
+ 
+ ```markdown
+ ## Decision: [What decision]
+ 
+ ```
+ [Question]?
+ ├── YES → [Action A]
+ │   └── [Sub-question]?
+ │       ├── YES → [Action A1]
+ │       └── NO → [Action A2]
+ └── NO → [Action B]
+ ```
+ ```
+ 
+ ## Quality Gate
+ 
+ - [ ] Every step has Actor, Input, Action, Output
+ - [ ] No vague verbs (review → scan + classify, check → validate against criteria)
+ - [ ] Decision gates have clear YES/NO branches
+ - [ ] AI prompt patterns included for AI-executed steps
+ - [ ] Prerequisites listed for every workflow
+ - [ ] Output artifacts named and described