qaa-agent 1.0.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,51 +10,12 @@ Analysis-only mode. Scans a repository, detects framework and stack, and produce
  - --dev-repo: explicit path to developer repository
  - --qa-repo: path to existing QA repository (produces gap analysis instead of blueprint)

- ## What It Produces
-
- - SCAN_MANIFEST.md -- file tree, framework detection, testable surfaces
- - QA_ANALYSIS.md -- architecture overview, risk assessment, testing pyramid
- - TEST_INVENTORY.md -- prioritized test cases with IDs and explicit outcomes
- - QA_REPO_BLUEPRINT.md (if no QA repo) or GAP_ANALYSIS.md (if QA repo provided)
-
  ## Instructions

  1. Read `CLAUDE.md` -- all QA standards.
- 2. Initialize pipeline context:
- ```bash
- node bin/qaa-tools.cjs init qa-start [user arguments]
- ```
- 3. Invoke scanner agent:
-
- Task(
- prompt="
- <objective>Scan repository and produce SCAN_MANIFEST.md</objective>
- <execution_context>@agents/qaa-scanner.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
-
- 4. Invoke analyzer agent:
-
- Task(
- prompt="
- <objective>Analyze repository and produce QA_ANALYSIS.md, TEST_INVENTORY.md, and blueprint or gap analysis</objective>
- <execution_context>@agents/qaa-analyzer.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- - .qa-output/SCAN_MANIFEST.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
-
- 5. Present results to user. No git operations. No test generation.
+ 2. Execute the workflow:
+
+ Follow the workflow defined in `@workflows/qa-analyze.md` end-to-end.
+ Preserve all workflow gates (scan verification, artifact checks).

  $ARGUMENTS
@@ -7,82 +7,17 @@ Generate test cases and executable test files from a ticket (Jira, Linear, GitHu
  /qa-from-ticket <ticket-source> [--dev-repo <path>]

  - ticket-source: one of:
- - URL to Jira/Linear/GitHub issue (e.g., https://linear.app/team/PROJ-123)
+ - URL to GitHub/Jira/Linear issue
  - Plain text user story or acceptance criteria
  - File path to a .md or .txt with ticket details
  - --dev-repo: path to developer repository (default: current directory)

- ## What It Produces
-
- - TEST_CASES_FROM_TICKET.md -- test cases derived from acceptance criteria
- - Test spec files (unit, API, E2E as appropriate)
- - Page Object Model files (for E2E tests if needed)
- - Fixture files (test data)
- - Traceability matrix: which test covers which acceptance criterion
-
  ## Instructions

  1. Read `CLAUDE.md` -- all QA standards.
+ 2. Execute the workflow:

- 2. Parse the ticket source:
-
- **If URL:** Fetch the ticket content using WebFetch or gh CLI (for GitHub issues).
- ```bash
- # GitHub Issues
- gh issue view <number> --json title,body,labels,assignees
-
- # For other URLs, use WebFetch to read the page
- ```
-
- **If plain text:** Use the text directly as the ticket content.
-
- **If file path:** Read the file content.
-
- 3. Extract from the ticket:
- - **Title/Summary**: what the feature or bug is about
- - **Acceptance Criteria**: the specific conditions that must be met (look for "AC:", "Given/When/Then", checkboxes, numbered criteria)
- - **User Story**: "As a [role] I want [action] so that [benefit]"
- - **Edge Cases**: any mentioned edge cases, error states, or constraints
- - **Priority**: ticket priority if available (maps to test priority P0/P1/P2)
-
- 4. Read the dev repo source code related to the ticket:
- - Search for files matching keywords from the ticket title
- - Read routes, controllers, services, models related to the feature
- - Identify the actual implementation (or lack of it if the feature is not built yet)
-
- 5. Generate test cases that map 1:1 to acceptance criteria:
- - Each acceptance criterion becomes at least one test case
- - Add negative tests for each criterion (what should NOT happen)
- - Add edge case tests if mentioned in the ticket
- - Every test case follows CLAUDE.md rules: unique ID, concrete inputs, explicit expected outcome
-
- 6. Write TEST_CASES_FROM_TICKET.md with:
- ```markdown
- # Test Cases from Ticket: [TICKET-ID] [Title]
-
- ## Source
- - Ticket: [URL or "manual input"]
- - Date: [YYYY-MM-DD]
-
- ## Acceptance Criteria Mapping
-
- | AC # | Criterion | Test ID(s) | Status |
- |------|-----------|------------|--------|
- | AC-1 | [criterion text] | UT-XXX-001, E2E-XXX-001 | Covered |
- | AC-2 | [criterion text] | API-XXX-001 | Covered |
-
- ## Test Cases
- [test cases following CLAUDE.md format]
- ```
-
- 7. Generate executable test files using the qa-template-engine skill.
-
- 8. Validate generated tests using the qa-self-validator skill.
-
- 9. Present summary:
- - How many acceptance criteria found
- - How many test cases generated (by pyramid level)
- - Traceability: every AC is covered
- - Any ACs that could not be tested (and why)
+ Follow the workflow defined in `@workflows/qa-from-ticket.md` end-to-end.
+ Preserve all workflow gates (ticket parsing, traceability, validation).

  $ARGUMENTS
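The removed step 3 above lists the markers that identify acceptance criteria ("AC:", Given/When/Then, checkboxes, numbered criteria). As a rough illustration of that extraction step, here is a minimal shell sketch against an invented sample ticket; the file and its contents are assumptions, not part of the package:

```bash
# Hypothetical sample ticket; a real run takes a URL, plain text, or file path.
TICKET=$(mktemp)
cat > "$TICKET" <<'EOF'
As a user I want to reset my password so that I can regain access.
AC: reset link expires after 24 hours
- [ ] Given an expired link, When the user opens it, Then an error is shown
AC: a confirmation email is sent within 60 seconds
EOF

# Count explicit "AC:" criteria and Gherkin-style lines, per the removed step 3.
AC_COUNT=$(grep -c '^AC:' "$TICKET")
GHERKIN_COUNT=$(grep -cE 'Given|When|Then' "$TICKET")
echo "explicit ACs: $AC_COUNT, gherkin lines: $GHERKIN_COUNT"
rm -f "$TICKET"
```

Each extracted criterion would then map to at least one test case ID in the traceability matrix.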
@@ -9,46 +9,12 @@ Compare a developer repository against its QA repository to identify coverage ga
  - --dev-repo: path to the developer repository (required)
  - --qa-repo: path to the existing QA repository (required)

- ## What It Produces
-
- - SCAN_MANIFEST.md -- scan of both repositories
- - GAP_ANALYSIS.md -- coverage map, missing tests with IDs, broken tests, quality assessment
-
  ## Instructions

  1. Read `CLAUDE.md` -- testing pyramid, test spec rules, quality gates.
- 2. Invoke scanner agent to scan both repositories:
-
- Task(
- prompt="
- <objective>Scan both developer and QA repositories and produce SCAN_MANIFEST.md</objective>
- <execution_context>@agents/qaa-scanner.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
-
- 3. Invoke analyzer agent in gap mode:
-
- Task(
- prompt="
- <objective>Produce GAP_ANALYSIS.md comparing dev repo against QA repo</objective>
- <execution_context>@agents/qaa-analyzer.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- - .qa-output/SCAN_MANIFEST.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- mode: gap
- </parameters>
- "
- )
-
- 4. Present results. No test generation. No git operations.
+ 2. Execute the workflow:
+
+ Follow the workflow defined in `@workflows/qa-gap.md` end-to-end.
+ Preserve all workflow gates (both-repo scanning, gap metrics).

  $ARGUMENTS
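At its core, the gap analysis this command delegates to `@workflows/qa-gap.md` compares testable surfaces in the dev repo against specs in the QA repo. A toy sketch of that comparison, with invented file names and an assumed `<module>.spec.js` naming convention (the real matching rules live in the workflow and CLAUDE.md):

```bash
# Invented repo layouts; a real run scans the actual --dev-repo and --qa-repo.
DEV=$(mktemp -d)
QA=$(mktemp -d)
touch "$DEV/auth.js" "$DEV/cart.js" "$DEV/search.js"
touch "$QA/auth.spec.js"

# Flag every dev module with no matching spec in the QA repo.
MISSING=""
for f in "$DEV"/*.js; do
  base=$(basename "$f" .js)
  [ -f "$QA/$base.spec.js" ] || MISSING="$MISSING $base"
done
echo "untested modules:$MISSING"
```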
@@ -0,0 +1,36 @@
+ # QA Codebase Map
+
+ Deep-scan a codebase for QA-relevant information. Spawns 4 parallel mapper agents that analyze testability, risk areas, code patterns, and existing tests. Produces structured documents consumed by the QA pipeline.
+
+ ## Usage
+
+ /qa-map [--focus <area>]
+
+ - No arguments: runs all 4 focus areas in parallel
+ - --focus: run a single area (testability, risk, patterns, existing-tests)
+
+ ## What It Produces
+
+ - TESTABILITY.md + TEST_SURFACE.md — what's testable, entry points, mocking needs
+ - RISK_MAP.md + CRITICAL_PATHS.md — business-critical paths, error handling gaps
+ - CODE_PATTERNS.md + API_CONTRACTS.md — naming conventions, API shapes, auth patterns
+ - TEST_ASSESSMENT.md + COVERAGE_GAPS.md — existing test quality, what's missing
+
+ ## Instructions
+
+ 1. Read `CLAUDE.md` — QA standards.
+ 2. Create output directory: `.qa-output/codebase/`
+ 3. If no --focus flag, spawn 4 agents in parallel (one per focus area):
+
+ For each focus area, spawn:
+
+ Agent(
+ prompt="Analyze this codebase for QA purposes. Focus area: {focus}. Write documents to .qa-output/codebase/. Follow the process in your agent definition.",
+ subagent_type="general-purpose",
+ execution_context="@agents/qaa-codebase-mapper.md"
+ )
+
+ 4. If --focus flag, spawn only that one area.
+ 5. When all complete, print summary of documents produced.
+
+ $ARGUMENTS
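The fan-out described in step 3 (four focus areas run in parallel, each writing into `.qa-output/codebase/`) can be sketched with shell background jobs standing in for the mapper agents; the per-area work below is a placeholder, not the agents' actual analysis:

```bash
# Temp directory stands in for .qa-output/codebase/ in this sketch.
OUT=$(mktemp -d)/codebase
mkdir -p "$OUT"

# One background job per focus area, mirroring the 4-agent fan-out.
for focus in testability risk patterns existing-tests; do
  (
    # A real run would invoke the qaa-codebase-mapper agent here.
    echo "# ${focus} findings (placeholder)" > "$OUT/${focus}.md"
  ) &
done
wait  # gate: all four areas must finish before the summary step

PRODUCED=$(ls "$OUT" | wc -l | tr -d ' ')
echo "documents produced: $PRODUCED"
```

The `wait` is the important part: step 5's summary only runs once every area has completed.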
@@ -0,0 +1,33 @@
+ # QA Research
+
+ Research the best testing approach for a project's stack. Investigates framework capabilities, best practices, and testing patterns using official docs and community sources.
+
+ ## Usage
+
+ /qa-research [--mode <mode>]
+
+ - No arguments: auto-detects stack and researches testing approach
+ - --mode: specific research mode (stack-testing, framework-deep-dive, api-testing, e2e-strategy)
+
+ ## What It Produces
+
+ - TESTING_STACK.md — recommended test framework, assertion libraries, mock strategies
+ - FRAMEWORK_CAPABILITIES.md — deep dive into detected test framework
+ - API_TESTING_STRATEGY.md — endpoint testing patterns for this stack
+ - E2E_STRATEGY.md — E2E approach for the frontend (if detected)
+
+ ## Instructions
+
+ 1. Read `CLAUDE.md` — QA standards.
+ 2. Detect project stack from package.json, requirements.txt, or similar.
+ 3. Spawn researcher agent:
+
+ Agent(
+ prompt="Research the testing ecosystem for this project. Mode: {mode}. Write findings to .qa-output/research/. Follow the process in your agent definition.",
+ subagent_type="general-purpose",
+ execution_context="@agents/qaa-project-researcher.md"
+ )
+
+ 4. Present findings with confidence levels.
+
+ $ARGUMENTS
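Step 2's stack detection reduces to a few manifest probes. A minimal sketch, assuming a Node project; the files and detection keys are illustrative, not the package's actual detection logic:

```bash
# Invented project directory with a manifest naming jest and playwright.
DIR=$(mktemp -d)
cat > "$DIR/package.json" <<'EOF'
{ "devDependencies": { "jest": "^29.0.0", "playwright": "^1.40.0" } }
EOF

# Probe manifests in priority order, as step 2 describes.
STACK=unknown
TEST_FW=none
if [ -f "$DIR/package.json" ]; then
  STACK=node
  grep -q '"jest"' "$DIR/package.json" && TEST_FW=jest
elif [ -f "$DIR/requirements.txt" ]; then
  STACK=python
fi
echo "stack=$STACK test_framework=$TEST_FW"
```

The detected stack then picks the research mode (e.g. a detected frontend triggers the E2E_STRATEGY.md work).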
@@ -14,20 +14,9 @@ Run the complete QA automation pipeline. Analyzes a repository, generates a stan
  ## Instructions

  1. Read `CLAUDE.md` -- all QA standards that govern the pipeline.
- 2. Read `agents/qa-pipeline-orchestrator.md` -- the pipeline controller.
- 3. Invoke the orchestrator:
-
- Task(
- prompt="
- <objective>Run complete QA automation pipeline</objective>
- <execution_context>@agents/qa-pipeline-orchestrator.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
+ 2. Execute the workflow:
+
+ Follow the workflow defined in `@workflows/qa-start.md` end-to-end.
+ Preserve all workflow gates (stage transitions, checkpoints, error handling, state updates, commits).

  $ARGUMENTS
@@ -4,51 +4,16 @@ Scan frontend source code, audit missing data-testid attributes, and inject them

  ## Usage

- /qa-testid <path-to-frontend-source>
+ /qa-testid [<source-directory>]

- - path-to-frontend-source: directory containing React/Vue/Angular/HTML components
-
- ## What It Produces
-
- - TESTID_AUDIT_REPORT.md -- coverage score, missing elements, proposed values by priority
- - Modified source files with data-testid attributes injected
+ - source-directory: path to frontend source (auto-detects if omitted)

  ## Instructions

- 1. Read `CLAUDE.md` -- data-testid Convention section for naming rules.
- 2. Initialize context:
- ```bash
- node bin/qaa-tools.cjs init qa-start --dev-repo [user path]
- ```
- 3. Invoke scanner to identify component files:
-
- Task(
- prompt="
- <objective>Scan repository to identify frontend component files</objective>
- <execution_context>@agents/qaa-scanner.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
-
- 4. Invoke testid-injector agent:
-
- Task(
- prompt="
- <objective>Audit missing data-testid attributes and inject following naming convention</objective>
- <execution_context>@agents/qaa-testid-injector.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- - .qa-output/SCAN_MANIFEST.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
+ 1. Read `CLAUDE.md` -- data-testid Convention section.
+ 2. Execute the workflow:
+
+ Follow the workflow defined in `@workflows/qa-testid.md` end-to-end.
+ Preserve all workflow gates (framework detection, user approval, branch creation).

  $ARGUMENTS
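The audit half of this command, counting interactive elements that lack `data-testid` before anything is injected, can be sketched as follows. The sample component and the element list are invented for illustration; the real naming rules live in CLAUDE.md's data-testid Convention section:

```bash
# Hypothetical component fragment with one tagged and two untagged elements.
SRC=$(mktemp)
cat > "$SRC" <<'EOF'
<button data-testid="checkout-submit">Pay</button>
<button>Cancel</button>
<input type="email" />
EOF

# Coverage = tagged interactive elements / total interactive elements.
TOTAL=$(grep -cE '<(button|input)' "$SRC")
TAGGED=$(grep -c 'data-testid=' "$SRC")
MISSING=$((TOTAL - TAGGED))
echo "coverage: $TAGGED/$TOTAL tagged, $MISSING missing"
rm -f "$SRC"
```

A report along these lines is what feeds the user-approval gate before any source file is modified.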
@@ -4,51 +4,17 @@ Validate existing test files against CLAUDE.md standards. Runs 4-layer validatio

  ## Usage

- /qa-validate <path-to-tests> [--framework <name>]
+ /qa-validate [<test-directory>] [--classify]

- - path-to-tests: directory or specific test files to validate
- - --framework: override framework auto-detection (playwright, cypress, jest, etc.)
-
- ## What It Produces
-
- - VALIDATION_REPORT.md -- pass/fail per file per validation layer, confidence level
- - FAILURE_CLASSIFICATION_REPORT.md -- if failures found, classifies as APP BUG / TEST ERROR / ENV ISSUE / INCONCLUSIVE
+ - test-directory: path to test files (auto-detects if omitted)
+ - --classify: also run bug-detective to classify failures

  ## Instructions

  1. Read `CLAUDE.md` -- quality gates, locator tiers, assertion rules.
- 2. Invoke validator agent:
-
- Task(
- prompt="
- <objective>Validate test files with 4-layer validation and produce VALIDATION_REPORT.md</objective>
- <execution_context>@agents/qaa-validator.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- mode: validation
- </parameters>
- "
- )
-
- 3. If failures detected, invoke bug-detective agent:
-
- Task(
- prompt="
- <objective>Classify test failures and auto-fix TEST CODE ERRORS</objective>
- <execution_context>@agents/qaa-bug-detective.md</execution_context>
- <files_to_read>
- - CLAUDE.md
- - .qa-output/VALIDATION_REPORT.md
- </files_to_read>
- <parameters>
- user_input: $ARGUMENTS
- </parameters>
- "
- )
-
- 4. Present results. No git operations.
+ 2. Execute the workflow:
+
+ Follow the workflow defined in `@workflows/qa-validate.md` end-to-end.
+ Preserve all workflow gates (fix loops, classification).

  $ARGUMENTS
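One layer of validation against the locator tiers step 1 mentions might look like the sketch below: count locator calls that do not use a `data-testid` selector. The brittle-locator rule and the sample spec are assumptions for illustration; the actual tiers are defined in CLAUDE.md:

```bash
# Hypothetical spec with one tier-1 locator and one brittle CSS locator.
SPEC=$(mktemp)
cat > "$SPEC" <<'EOF'
page.locator('[data-testid="login-submit"]').click();
page.locator('div.btn > span:nth-child(2)').click();
EOF

TOTAL=$(grep -c 'locator(' "$SPEC")      # every locator call
TIER1=$(grep -c 'data-testid' "$SPEC")   # top-tier locators
FLAGGED=$((TOTAL - TIER1))               # brittle locators to report
echo "flagged $FLAGGED of $TOTAL locators"
rm -f "$SPEC"
```

Flagged locators of this kind are what the fix loop rewrites before failures are handed to the bug-detective for classification.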