bmad-method-test-architecture-enterprise 1.2.2 → 1.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (55)
  1. package/README.md +14 -12
  2. package/docs/how-to/workflows/setup-ci.md +3 -1
  3. package/docs/how-to/workflows/setup-test-framework.md +29 -6
  4. package/docs/reference/configuration.md +97 -0
  5. package/docs/reference/knowledge-base.md +15 -6
  6. package/package.json +1 -1
  7. package/release_notes.md +6 -4
  8. package/src/agents/tea.agent.yaml +2 -2
  9. package/src/module.yaml +78 -5
  10. package/src/testarch/knowledge/adr-quality-readiness-checklist.md +9 -9
  11. package/src/testarch/tea-index.csv +36 -36
  12. package/src/workflows/testarch/atdd/atdd-checklist-template.md +2 -0
  13. package/src/workflows/testarch/atdd/steps-c/step-01-preflight-and-context.md +65 -12
  14. package/src/workflows/testarch/atdd/steps-c/step-02-generation-mode.md +5 -0
  15. package/src/workflows/testarch/atdd/steps-c/step-03-test-strategy.md +10 -1
  16. package/src/workflows/testarch/atdd/steps-c/step-05-validate-and-complete.md +13 -2
  17. package/src/workflows/testarch/automate/steps-c/step-01-preflight-and-context.md +46 -2
  18. package/src/workflows/testarch/automate/steps-c/step-02-identify-targets.md +12 -0
  19. package/src/workflows/testarch/automate/steps-c/step-03-generate-tests.md +110 -31
  20. package/src/workflows/testarch/automate/steps-c/step-03b-subprocess-backend.md +246 -0
  21. package/src/workflows/testarch/automate/steps-c/step-03c-aggregate.md +90 -38
  22. package/src/workflows/testarch/automate/steps-c/step-04-validate-and-summarize.md +13 -2
  23. package/src/workflows/testarch/ci/azure-pipelines-template.yaml +155 -0
  24. package/src/workflows/testarch/ci/checklist.md +48 -7
  25. package/src/workflows/testarch/ci/github-actions-template.yaml +22 -10
  26. package/src/workflows/testarch/ci/gitlab-ci-template.yaml +21 -12
  27. package/src/workflows/testarch/ci/harness-pipeline-template.yaml +159 -0
  28. package/src/workflows/testarch/ci/jenkins-pipeline-template.groovy +129 -0
  29. package/src/workflows/testarch/ci/steps-c/step-01-preflight.md +58 -17
  30. package/src/workflows/testarch/ci/steps-c/step-02-generate-pipeline.md +21 -10
  31. package/src/workflows/testarch/ci/steps-c/step-03-configure-quality-gates.md +5 -0
  32. package/src/workflows/testarch/ci/workflow.yaml +5 -3
  33. package/src/workflows/testarch/framework/checklist.md +11 -10
  34. package/src/workflows/testarch/framework/steps-c/step-01-preflight.md +34 -2
  35. package/src/workflows/testarch/framework/steps-c/step-02-select-framework.md +20 -1
  36. package/src/workflows/testarch/framework/steps-c/step-03-scaffold-framework.md +56 -5
  37. package/src/workflows/testarch/framework/steps-c/step-04-docs-and-scripts.md +16 -4
  38. package/src/workflows/testarch/nfr-assess/nfr-report-template.md +3 -1
  39. package/src/workflows/testarch/nfr-assess/steps-c/step-01-load-context.md +12 -0
  40. package/src/workflows/testarch/nfr-assess/steps-c/step-05-generate-report.md +14 -3
  41. package/src/workflows/testarch/test-design/checklist.md +20 -9
  42. package/src/workflows/testarch/test-design/instructions.md +3 -3
  43. package/src/workflows/testarch/test-design/steps-c/step-02-load-context.md +34 -0
  44. package/src/workflows/testarch/test-design/steps-c/step-05-generate-output.md +29 -2
  45. package/src/workflows/testarch/test-design/test-design-architecture-template.md +16 -14
  46. package/src/workflows/testarch/test-design/test-design-handoff-template.md +70 -0
  47. package/src/workflows/testarch/test-design/test-design-qa-template.md +11 -9
  48. package/src/workflows/testarch/test-design/workflow.yaml +8 -1
  49. package/src/workflows/testarch/test-review/steps-c/step-01-load-context.md +34 -1
  50. package/src/workflows/testarch/test-review/steps-c/step-04-generate-report.md +14 -3
  51. package/src/workflows/testarch/test-review/test-review-template.md +4 -2
  52. package/src/workflows/testarch/test-review/workflow.yaml +1 -0
  53. package/src/workflows/testarch/trace/trace-template.md +7 -5
  54. package/test/test-installation-components.js +1 -1
  55. package/test/test-knowledge-base.js +10 -1
@@ -37,17 +37,36 @@ Verify prerequisites and load all required inputs before generating failing test
 
 **CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
 
-## 1. Prerequisites (Hard Requirements)
+## 1. Stack Detection
+
+**Read `config.test_stack_type`** from `{config_source}`.
+
+**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):
+
+- Scan `{project-root}` for project manifests:
+  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
+  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
+  - **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
+- Explicit `test_stack_type` config value overrides auto-detection
+- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)
+
+Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
+
+---
+
+## 2. Prerequisites (Hard Requirements)
 
 - Story approved with **clear acceptance criteria**
-- Test framework configured (`playwright.config.ts` or `cypress.config.ts`)
+- Test framework configured:
+  - **If {detected_stack} is `frontend` or `fullstack`:** `playwright.config.ts` or `cypress.config.ts`
+  - **If {detected_stack} is `backend`:** relevant test config exists (e.g., `conftest.py`, `src/test/`, `*_test.go`, `.rspec`)
 - Development environment available
 
 If any are missing: **HALT** and notify the user.
 
 ---
 
-## 2. Load Story Context
+## 3. Load Story Context
 
 - Read story markdown from `{story_file}` (or ask user if not provided)
 - Extract acceptance criteria and constraints
@@ -55,21 +74,44 @@ If any are missing: **HALT** and notify the user.
 
 ---
 
-## 3. Load Framework & Existing Patterns
+## 4. Load Framework & Existing Patterns
 
 - Read framework config
 - Inspect `{test_dir}` for existing test patterns, fixtures, helpers
 
-## 3.5 Read TEA Config Flags
+## 4.5 Read TEA Config Flags
 
 From `{config_source}`:
 
 - `tea_use_playwright_utils`
 - `tea_browser_automation`
+- `test_stack_type`
 
 ---
 
-## 4. Load Knowledge Base Fragments
+### Tiered Knowledge Loading
+
+Load fragments based on their `tier` classification in `tea-index.csv`:
+
+1. **Core tier** (always load): Foundational fragments required for this workflow
+2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
+3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)
+
+> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.
+
+### Playwright Utils Loading Profiles
+
+**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:
+
+- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
+  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)
+
+- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
+  Load: all Playwright Utils core fragments (~4,500 lines)
+
+**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none found, use API-only profile.
+
+## 5. Load Knowledge Base Fragments
 
 Use `{knowledgeIndex}` to load:
 
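The profile-selection rule added in this hunk amounts to a two-way branch. A minimal sketch, assuming the function name and return labels (neither is in the package), where `testFileContents` holds the source text of files found under `{test_dir}`:

```javascript
// Hypothetical sketch of the Playwright Utils loading-profile rule (illustrative only).
function selectUtilsProfile(detectedStack, testFileContents) {
  const hasBrowserTests = testFileContents.some(
    (src) => src.includes('page.goto') || src.includes('page.locator'),
  );
  // API-only when the stack is backend, or when no browser-style calls are found.
  if (detectedStack === 'backend' || !hasBrowserTests) return 'api-only';
  return 'full-ui-api'; // frontend/fullstack with browser tests detected
}
```

Note the asymmetry: a `frontend` project whose tests never touch `page.goto`/`page.locator` still gets the cheaper API-only profile.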
@@ -79,35 +121,44 @@ Use `{knowledgeIndex}` to load:
 - `component-tdd.md`
 - `test-quality.md`
 - `test-healing-patterns.md`
+
+**If {detected_stack} is `frontend` or `fullstack`:**
+
 - `selector-resilience.md`
 - `timing-debugging.md`
 
-**Playwright Utils (if enabled):**
+**Playwright Utils (if enabled and {detected_stack} is `frontend` or `fullstack`):**
 
 - `overview.md`, `api-request.md`, `network-recorder.md`, `auth-session.md`, `intercept-network-call.md`, `recurse.md`, `log.md`, `file-utils.md`, `network-error-monitor.md`, `fixtures-composition.md`
 
-**Playwright CLI (if tea_browser_automation is "cli" or "auto"):**
+**Playwright CLI (if tea_browser_automation is "cli" or "auto" and {detected_stack} is `frontend` or `fullstack`):**
 
 - `playwright-cli.md`
 
-**MCP Patterns (if tea_browser_automation is "mcp" or "auto"):**
+**MCP Patterns (if tea_browser_automation is "mcp" or "auto" and {detected_stack} is `frontend` or `fullstack`):**
 
 - (existing MCP-related fragments, if any are added in future)
 
-**Traditional Patterns (if utils disabled):**
+**Traditional Patterns (if utils disabled and {detected_stack} is `frontend` or `fullstack`):**
 
 - `fixture-architecture.md`
 - `network-first.md`
 
+**Backend Patterns (if {detected_stack} is `backend` or `fullstack`):**
+
+- `test-levels-framework.md`
+- `test-priorities-matrix.md`
+- `ci-burn-in.md`
+
 ---
 
-## 5. Confirm Inputs
+## 6. Confirm Inputs
 
 Summarize loaded inputs and confirm with the user. Then proceed.
 
 ---
 
-## 6. Save Progress
+## 7. Save Progress
 
 **Save this step's accumulated work to `{outputFile}`.**
 
@@ -129,6 +180,8 @@ Summarize loaded inputs and confirm with the user. Then proceed.
 - Set `lastSaved: '{date}'`
 - Append this step's output to the appropriate section.
 
+**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
+
 Load next step: `{nextStepFile}`
 
 ## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
@@ -41,6 +41,7 @@ Use AI generation when:
 
 - Acceptance criteria are clear
 - Scenarios are standard (CRUD, auth, API, navigation)
+- **If {detected_stack} is `backend`:** Always use AI generation (no browser recording needed)
 
 Proceed directly to test strategy if this applies.
 
@@ -48,6 +49,10 @@ Proceed directly to test strategy if this applies.
 
 ## 2. Optional Mode: Recording (Complex UI)
 
+**Skip this section entirely if {detected_stack} is `backend`.** For backend projects, use AI generation from API documentation, OpenAPI/Swagger specs, or source code analysis instead.
+
+**If {detected_stack} is `frontend` or `fullstack`:**
+
 Use recording when UI interactions need live browser verification.
 
 **Tool selection based on `config.tea_browser_automation`:**
@@ -45,12 +45,21 @@ Translate acceptance criteria into a prioritized, level-appropriate test plan.
 
 ## 2. Select Test Levels
 
-Choose the best level per scenario:
+Choose the best level per scenario based on `{detected_stack}`:
+
+**If {detected_stack} is `frontend` or `fullstack`:**
 
 - **E2E** for critical user journeys
 - **API** for business logic and service contracts
 - **Component** for UI behavior
 
+**If {detected_stack} is `backend` or `fullstack`:**
+
+- **Unit** for pure functions, business logic, and edge cases
+- **Integration** for service interactions, database queries, and middleware
+- **API/Contract** for endpoint validation, request/response schemas, and Pact contracts
+- **No E2E** for pure backend projects (no browser-based testing needed)
+
 ---
 
 ## 3. Prioritize Tests
@@ -50,7 +50,18 @@ Fix any gaps before completion.
 
 ---
 
-## 2. Completion Summary
+## 2. Polish Output
+
+Before finalizing, review the complete output document for quality:
+
+1. **Remove duplication**: Progressive-append workflow may have created repeated sections — consolidate
+2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
+3. **Check completeness**: All template sections should be populated or explicitly marked N/A
+4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)
+
+---
+
+## 3. Completion Summary
 
 Report:
 
@@ -61,7 +72,7 @@ Report:
 
 ---
 
-## 3. Save Progress
+## 4. Save Progress
 
 **Save this step's accumulated work to `{outputFile}`.**
 
@@ -37,13 +37,32 @@ Determine execution mode, verify framework readiness, and load the necessary art
 
 **CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
 
-## 1. Verify Framework
+## 1. Stack Detection & Verify Framework
 
-Ensure a framework exists:
+**Read `config.test_stack_type`** from `{config_source}`.
+
+**Auto-Detection Algorithm** (when `test_stack_type` is `"auto"` or not configured):
+
+- Scan `{project-root}` for project manifests:
+  - **Frontend indicators**: `package.json` with react/vue/angular/next dependencies, `playwright.config.*`, `vite.config.*`, `webpack.config.*`
+  - **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`/`*.sln`, `Gemfile`, `Cargo.toml`
+  - **Both present** = `fullstack`; only frontend = `frontend`; only backend = `backend`
+- Explicit `test_stack_type` config value overrides auto-detection
+- **Backward compatibility**: if `test_stack_type` is not in config, treat as `"auto"` (preserves current frontend behavior for existing installs)
+
+Store result as `{detected_stack}` = `frontend` | `backend` | `fullstack`
+
+**Verify framework exists:**
+
+**If {detected_stack} is `frontend` or `fullstack`:**
 
 - `playwright.config.ts` or `cypress.config.ts`
 - `package.json` includes test dependencies
 
+**If {detected_stack} is `backend` or `fullstack`:**
+
+- Relevant test config exists (e.g., `conftest.py`, `src/test/`, `*_test.go`, `.rspec`, test project `*.csproj`)
+
 If missing: **HALT** with message "Run `framework` workflow first."
 
 ---
@@ -78,9 +97,32 @@ If missing: **HALT** with message "Run `framework` workflow first."
 
 - From `{config_source}` read `tea_use_playwright_utils`
 - From `{config_source}` read `tea_browser_automation`
+- From `{config_source}` read `test_stack_type`
 
 ---
 
+### Tiered Knowledge Loading
+
+Load fragments based on their `tier` classification in `tea-index.csv`:
+
+1. **Core tier** (always load): Foundational fragments required for this workflow
+2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
+3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)
+
+> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.
+
+### Playwright Utils Loading Profiles
+
+**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:
+
+- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
+  Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)
+
+- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
+  Load: all Playwright Utils core fragments (~4,500 lines)
+
+**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none found, use API-only profile.
+
 ## 4. Load Knowledge Base Fragments
 
 Use `{knowledgeIndex}` and load only what is required.
@@ -147,6 +189,8 @@ Summarize loaded artifacts, framework, and knowledge fragments, then proceed.
 - Set `lastSaved: '{date}'`
 - Append this step's output to the appropriate section.
 
+**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
+
 Load next step: `{nextStepFile}`
 
 ## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
@@ -50,6 +50,8 @@ Determine what needs to be tested and select appropriate test levels and priorit
 - Otherwise auto-discover features in `{source_dir}`
 - Prioritize critical paths, integrations, and untested logic
 
+**If {detected_stack} is `frontend` or `fullstack`:**
+
 **Browser Exploration (if `tea_browser_automation` is `cli` or `auto`):**
 
 > **Fallback:** If CLI is not installed, fall back to MCP (if available) or skip browser exploration and rely on code/doc analysis.
@@ -63,6 +65,16 @@ Use CLI to explore the application and identify testable pages/flows:
 
 > **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-automate close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.
 
+**If {detected_stack} is `backend` or `fullstack`:**
+
+**Source & API Analysis (no browser exploration):**
+
+- Scan source code for route handlers, controllers, service classes, and public APIs
+- Read OpenAPI/Swagger specs (`openapi.yaml`, `swagger.json`) if available
+- Identify database models, migrations, and data access patterns
+- Map service-to-service integrations and message queue consumers/producers
+- Check for existing contract tests (Pact, etc.)
+
 ---
 
 ## 2. Choose Test Levels
@@ -8,16 +8,16 @@ nextStepFile: './step-03c-aggregate.md'
 
 ## STEP GOAL
 
-Launch parallel subprocesses to generate API and E2E tests simultaneously for maximum performance.
+Launch parallel subprocesses to generate tests simultaneously for maximum performance. Subprocess selection depends on `{detected_stack}`.
 
 ## MANDATORY EXECUTION RULES
 
 - 📖 Read the entire step file before acting
 - ✅ Speak in `{communication_language}`
-- ✅ Launch TWO subprocesses in PARALLEL
-- ✅ Wait for BOTH subprocesses to complete
+- ✅ Launch subprocesses in PARALLEL based on `{detected_stack}`
+- ✅ Wait for ALL launched subprocesses to complete
 - ❌ Do NOT generate tests sequentially (use subprocesses)
-- ❌ Do NOT proceed until both subprocesses finish
+- ❌ Do NOT proceed until all subprocesses finish
 
 ---
 
@@ -48,7 +48,7 @@ Launch parallel subprocesses to generate API and E2E tests simultaneously for ma
 const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
 ```
 
-**Prepare input context for both subprocesses:**
+**Prepare input context for subprocesses:**
 
 ```javascript
 const subprocessContext = {
@@ -57,7 +57,8 @@ const subprocessContext = {
   config: {
     test_framework: config.test_framework,
     use_playwright_utils: config.tea_use_playwright_utils,
-    browser_automation: config.tea_browser_automation // "auto" | "cli" | "mcp" | "none"
+    browser_automation: config.tea_browser_automation, // "auto" | "cli" | "mcp" | "none"
+    detected_stack: '{detected_stack}' // "frontend" | "backend" | "fullstack"
   },
   timestamp: timestamp
 };
@@ -65,7 +66,19 @@ const subprocessContext = {
 
 ---
 
-### 2. Launch Subprocess A: API Test Generation
+### 2. Subprocess Dispatch Matrix
+
+**Select subprocesses based on `{detected_stack}`:**
+
+| `{detected_stack}` | Subprocess A (API) | Subprocess B (E2E) | Subprocess B-backend |
+| ------------------ | ------------------ | ------------------ | -------------------- |
+| `frontend`         | Launch             | Launch             | Skip                 |
+| `backend`          | Launch             | Skip               | Launch               |
+| `fullstack`        | Launch             | Launch             | Launch               |
+
+---
+
+### 3. Launch Subprocess A: API Test Generation (always)
 
 **Launch subprocess in parallel:**
 
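The dispatch matrix is a straight lookup, which could be sketched as follows. The function name and the fallback behavior for unknown values are illustrative assumptions, not part of the package:

```javascript
// Hypothetical encoding of the subprocess dispatch matrix (illustrative only).
function subprocessesFor(detectedStack) {
  const matrix = {
    frontend: ['A (API)', 'B (E2E)'],
    backend: ['A (API)', 'B-backend'],
    fullstack: ['A (API)', 'B (E2E)', 'B-backend'],
  };
  // Unrecognized values fall back to the legacy frontend behavior.
  return matrix[detectedStack] ?? matrix.frontend;
}
```

Subprocess A runs in every configuration; only B and B-backend are conditional.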
@@ -84,7 +97,9 @@ const subprocessContext = {
 
 ---
 
-### 3. Launch Subprocess B: E2E Test Generation
+### 4. Launch Subprocess B: E2E Test Generation (frontend/fullstack only)
+
+**If {detected_stack} is `frontend` or `fullstack`:**
 
 **Launch subprocess in parallel:**
 
@@ -101,62 +116,125 @@ const subprocessContext = {
 ⏳ Status: Running in parallel...
 ```
 
+**If {detected_stack} is `backend`:** Skip this subprocess.
+
 ---
 
-### 4. Wait for Both Subprocesses to Complete
+### 5. Launch Subprocess B-backend: Backend Test Generation (backend/fullstack only)
+
+**If {detected_stack} is `backend` or `fullstack`:**
 
-**Monitor subprocess execution:**
+**Launch subprocess in parallel:**
+
+- **Subprocess File:** `./step-03b-subprocess-backend.md`
+- **Output File:** `/tmp/tea-automate-backend-tests-${timestamp}.json`
+- **Context:** Pass `subprocessContext`
+- **Execution:** PARALLEL (non-blocking)
+
+**System Action:**
+
+```
+🚀 Launching Subprocess B-backend: Backend Test Generation
+📝 Output: /tmp/tea-automate-backend-tests-${timestamp}.json
+⏳ Status: Running in parallel...
+```
+
+**If {detected_stack} is `frontend`:** Skip this subprocess.
+
+---
+
+### 6. Wait for All Subprocesses to Complete
+
+**Monitor subprocess execution based on `{detected_stack}`:**
 
 ```
 ⏳ Waiting for subprocesses to complete...
 ├── Subprocess A (API): Running... ⟳
-└── Subprocess B (E2E): Running... ⟳
+├── Subprocess B (E2E): Running... ⟳ [if frontend/fullstack]
+└── Subprocess B-backend: Running... ⟳ [if backend/fullstack]
 
 [... time passes ...]
 
 ├── Subprocess A (API): Complete ✅
-└── Subprocess B (E2E): Complete ✅
+├── Subprocess B (E2E): Complete ✅ [if frontend/fullstack]
+└── Subprocess B-backend: Complete ✅ [if backend/fullstack]
 
 ✅ All subprocesses completed successfully!
 ```
 
-**Verify both outputs exist:**
+**Verify outputs exist (based on `{detected_stack}`):**
 
 ```javascript
 const apiOutputExists = fs.existsSync(`/tmp/tea-automate-api-tests-${timestamp}.json`);
-const e2eOutputExists = fs.existsSync(`/tmp/tea-automate-e2e-tests-${timestamp}.json`);
 
-if (!apiOutputExists || !e2eOutputExists) {
-  throw new Error('One or both subprocess outputs missing!');
+// Check based on detected_stack
+if (detected_stack === 'frontend' || detected_stack === 'fullstack') {
+  const e2eOutputExists = fs.existsSync(`/tmp/tea-automate-e2e-tests-${timestamp}.json`);
+  if (!e2eOutputExists) throw new Error('E2E subprocess output missing!');
+}
+if (detected_stack === 'backend' || detected_stack === 'fullstack') {
+  const backendOutputExists = fs.existsSync(`/tmp/tea-automate-backend-tests-${timestamp}.json`);
+  if (!backendOutputExists) throw new Error('Backend subprocess output missing!');
 }
+if (!apiOutputExists) throw new Error('API subprocess output missing!');
 ```
 
 ---
 
-### 5. Performance Report
+### Subprocess Output Schema Contract
+
+Both `step-03b-subprocess-e2e.md` and `step-03b-subprocess-backend.md` MUST write JSON to their output file with identical schema:
+
+```json
+{
+  "subprocessType": "e2e | backend",
+  "testsGenerated": [
+    {
+      "file": "path/to/test-file",
+      "content": "[full test file content]",
+      "description": "Test description",
+      "priority_coverage": { "P0": 0, "P1": 0, "P2": 0, "P3": 0 }
+    }
+  ],
+  "coverageSummary": {
+    "totalTests": 0,
+    "testLevels": ["unit", "integration", "api", "e2e"],
+    "fixtureNeeds": []
+  },
+  "status": "complete | partial"
+}
+```
+
+The aggregate step reads whichever output file(s) exist based on `{detected_stack}`.
+
+---
+
+### 7. Performance Report
 
 **Display performance metrics:**
 
 ```
 🚀 Performance Report:
-- Execution Mode: PARALLEL (2 subprocesses)
+- Execution Mode: PARALLEL (subprocesses based on {detected_stack})
+- Stack Type: {detected_stack}
 - API Test Generation: ~X minutes
-- E2E Test Generation: ~Y minutes
-- Total Elapsed: ~max(X, Y) minutes
-- Sequential Would Take: ~(X + Y) minutes
-- Performance Gain: ~50% faster!
+- E2E Test Generation: ~Y minutes [if frontend/fullstack]
+- Backend Test Generation: ~Z minutes [if backend/fullstack]
+- Total Elapsed: ~max(X, Y, Z) minutes
+- Sequential Would Take: ~(X + Y + Z) minutes
+- Performance Gain: ~40-70% faster!
 ```
 
 ---
 
-### 6. Proceed to Aggregation
+### 8. Proceed to Aggregation
 
 **Load aggregation step:**
 Load next step: `{nextStepFile}`
 
 The aggregation step (3C) will:
 
-- Read both subprocess outputs
+- Read all subprocess outputs (based on `{detected_stack}`)
 - Write all test files to disk
 - Generate shared fixtures and helpers
 - Calculate summary statistics
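Since the aggregate step trusts these files, a few shape checks against the schema contract above would catch a partial or malformed write early. A minimal sketch; the function name and error messages are assumptions, not package code:

```javascript
// Hypothetical validator for the subprocess output schema contract (illustrative only).
function validateSubprocessOutput(output) {
  const errors = [];
  if (!['e2e', 'backend'].includes(output.subprocessType)) errors.push('invalid subprocessType');
  if (!Array.isArray(output.testsGenerated)) {
    errors.push('testsGenerated must be an array');
  } else {
    for (const t of output.testsGenerated) {
      // Each entry must carry a target path and the full file content.
      if (typeof t.file !== 'string' || typeof t.content !== 'string') {
        errors.push(`malformed testsGenerated entry: ${JSON.stringify(t.file)}`);
      }
    }
  }
  if (!['complete', 'partial'].includes(output.status)) errors.push('invalid status');
  return errors; // empty array means the output is safe to aggregate
}
```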
@@ -168,13 +246,14 @@ The aggregation step (3C) will:
 
 Proceed to Step 3C (Aggregation) when:
 
 - ✅ Subprocess A (API tests) completed successfully
-- ✅ Subprocess B (E2E tests) completed successfully
-- ✅ Both output files exist and are valid JSON
+- ✅ Subprocess B (E2E tests) completed successfully [if frontend/fullstack]
+- ✅ Subprocess B-backend (Backend tests) completed successfully [if backend/fullstack]
+- ✅ All expected output files exist and are valid JSON
 - ✅ Performance metrics displayed
 
 **Do NOT proceed if:**
 
-- ❌ One or both subprocesses failed
+- ❌ Any launched subprocess failed
 - ❌ Output files missing or corrupted
 - ❌ Timeout occurred (subprocesses took too long)
@@ -184,15 +263,15 @@ Proceed to Step 3C (Aggregation) when:
 
 ### ✅ SUCCESS:
 
-- Both subprocesses launched successfully
-- Both subprocesses completed without errors
+- All required subprocesses launched successfully (based on `{detected_stack}`)
+- All subprocesses completed without errors
 - Output files generated and valid
-- Parallel execution achieved ~50% performance gain
+- Parallel execution achieved ~40-70% performance gain
 
 ### ❌ SYSTEM FAILURE:
 
 - Failed to launch subprocesses
-- One or both subprocesses failed
+- One or more subprocesses failed
 - Output files missing or invalid
 - Attempted sequential generation instead of parallel