bmad-method 4.37.0 → 5.0.0-beta.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/workflows/promote-to-stable.yml +144 -0
- package/CHANGELOG.md +16 -9
- package/bmad-core/agents/qa.md +37 -18
- package/bmad-core/data/test-levels-framework.md +146 -0
- package/bmad-core/data/test-priorities-matrix.md +172 -0
- package/bmad-core/tasks/nfr-assess.md +343 -0
- package/bmad-core/tasks/qa-gate.md +159 -0
- package/bmad-core/tasks/review-story.md +234 -74
- package/bmad-core/tasks/risk-profile.md +353 -0
- package/bmad-core/tasks/test-design.md +174 -0
- package/bmad-core/tasks/trace-requirements.md +264 -0
- package/bmad-core/templates/qa-gate-tmpl.yaml +102 -0
- package/dist/agents/analyst.txt +20 -26
- package/dist/agents/architect.txt +14 -35
- package/dist/agents/bmad-master.txt +40 -70
- package/dist/agents/bmad-orchestrator.txt +28 -5
- package/dist/agents/dev.txt +0 -14
- package/dist/agents/pm.txt +0 -25
- package/dist/agents/po.txt +0 -18
- package/dist/agents/qa.txt +2079 -135
- package/dist/agents/sm.txt +0 -10
- package/dist/agents/ux-expert.txt +0 -7
- package/dist/expansion-packs/bmad-2d-phaser-game-dev/agents/game-designer.txt +0 -37
- package/dist/expansion-packs/bmad-2d-phaser-game-dev/agents/game-developer.txt +3 -12
- package/dist/expansion-packs/bmad-2d-phaser-game-dev/agents/game-sm.txt +0 -7
- package/dist/expansion-packs/bmad-2d-phaser-game-dev/teams/phaser-2d-nodejs-game-team.txt +44 -90
- package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-architect.txt +14 -49
- package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-designer.txt +0 -46
- package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-developer.txt +0 -15
- package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-sm.txt +0 -17
- package/dist/expansion-packs/bmad-2d-unity-game-dev/teams/unity-2d-game-team.txt +38 -142
- package/dist/expansion-packs/bmad-infrastructure-devops/agents/infra-devops-platform.txt +0 -2
- package/dist/teams/team-all.txt +2181 -261
- package/dist/teams/team-fullstack.txt +43 -57
- package/dist/teams/team-ide-minimal.txt +2064 -125
- package/dist/teams/team-no-ui.txt +43 -57
- package/docs/enhanced-ide-development-workflow.md +220 -15
- package/docs/user-guide.md +271 -18
- package/docs/working-in-the-brownfield.md +264 -31
- package/package.json +1 -1
- package/tools/installer/bin/bmad.js +33 -32
- package/tools/installer/config/install.config.yaml +11 -1
- package/tools/installer/lib/file-manager.js +1 -1
- package/tools/installer/lib/ide-base-setup.js +1 -1
- package/tools/installer/lib/ide-setup.js +197 -83
- package/tools/installer/lib/installer.js +3 -3
- package/tools/installer/package.json +1 -1
package/bmad-core/tasks/test-design.md
ADDED
@@ -0,0 +1,174 @@
+# test-design
+
+Create comprehensive test scenarios with appropriate test level recommendations for story implementation.
+
+## Inputs
+
+```yaml
+required:
+  - story_id: "{epic}.{story}" # e.g., "1.3"
+  - story_path: "{devStoryLocation}/{epic}.{story}.*.md" # Path from core-config.yaml
+  - story_title: "{title}" # If missing, derive from story file H1
+  - story_slug: "{slug}" # If missing, derive from title (lowercase, hyphenated)
+```
+
+## Purpose
+
+Design a complete test strategy that identifies what to test, at which level (unit/integration/e2e), and why. This ensures efficient test coverage without redundancy while maintaining appropriate test boundaries.
+
+## Dependencies
+
+```yaml
+data:
+  - test-levels-framework.md # Unit/Integration/E2E decision criteria
+  - test-priorities-matrix.md # P0/P1/P2/P3 classification system
+```
+
+## Process
+
+### 1. Analyze Story Requirements
+
+Break down each acceptance criterion into testable scenarios. For each AC:
+
+- Identify the core functionality to test
+- Determine data variations needed
+- Consider error conditions
+- Note edge cases
+
+### 2. Apply Test Level Framework
+
+**Reference:** Load `test-levels-framework.md` for detailed criteria
+
+Quick rules:
+
+- **Unit**: Pure logic, algorithms, calculations
+- **Integration**: Component interactions, DB operations
+- **E2E**: Critical user journeys, compliance
+
+### 3. Assign Priorities
+
+**Reference:** Load `test-priorities-matrix.md` for classification
+
+Quick priority assignment:
+
+- **P0**: Revenue-critical, security, compliance
+- **P1**: Core user journeys, frequently used
+- **P2**: Secondary features, admin functions
+- **P3**: Nice-to-have, rarely used
+
+### 4. Design Test Scenarios
+
+For each identified test need, create:
+
+```yaml
+test_scenario:
+  id: "{epic}.{story}-{LEVEL}-{SEQ}"
+  requirement: "AC reference"
+  priority: P0|P1|P2|P3
+  level: unit|integration|e2e
+  description: "What is being tested"
+  justification: "Why this level was chosen"
+  mitigates_risks: ["RISK-001"] # If risk profile exists
+```
+
+### 5. Validate Coverage
+
+Ensure:
+
+- Every AC has at least one test
+- No duplicate coverage across levels
+- Critical paths have multiple levels
+- Risk mitigations are addressed
+
+## Outputs
+
+### Output 1: Test Design Document
+
+**Save to:** `docs/qa/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md`
+
+```markdown
+# Test Design: Story {epic}.{story}
+
+Date: {date}
+Designer: Quinn (Test Architect)
+
+## Test Strategy Overview
+
+- Total test scenarios: X
+- Unit tests: Y (A%)
+- Integration tests: Z (B%)
+- E2E tests: W (C%)
+- Priority distribution: P0: X, P1: Y, P2: Z
+
+## Test Scenarios by Acceptance Criteria
+
+### AC1: {description}
+
+#### Scenarios
+
+| ID           | Level       | Priority | Test                      | Justification            |
+| ------------ | ----------- | -------- | ------------------------- | ------------------------ |
+| 1.3-UNIT-001 | Unit        | P0       | Validate input format     | Pure validation logic    |
+| 1.3-INT-001  | Integration | P0       | Service processes request | Multi-component flow     |
+| 1.3-E2E-001  | E2E         | P1       | User completes journey    | Critical path validation |
+
+[Continue for all ACs...]
+
+## Risk Coverage
+
+[Map test scenarios to identified risks if risk profile exists]
+
+## Recommended Execution Order
+
+1. P0 Unit tests (fail fast)
+2. P0 Integration tests
+3. P0 E2E tests
+4. P1 tests in order
+5. P2+ as time permits
+```
+
+### Output 2: Gate YAML Block
+
+Generate for inclusion in quality gate:
+
+```yaml
+test_design:
+  scenarios_total: X
+  by_level:
+    unit: Y
+    integration: Z
+    e2e: W
+  by_priority:
+    p0: A
+    p1: B
+    p2: C
+  coverage_gaps: [] # List any ACs without tests
+```
+
+### Output 3: Trace References
+
+Print for use by trace-requirements task:
+
+```text
+Test design matrix: docs/qa/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md
+P0 tests identified: {count}
+```
+
+## Quality Checklist
+
+Before finalizing, verify:
+
+- [ ] Every AC has test coverage
+- [ ] Test levels are appropriate (not over-testing)
+- [ ] No duplicate coverage across levels
+- [ ] Priorities align with business risk
+- [ ] Test IDs follow naming convention
+- [ ] Scenarios are atomic and independent
+
+## Key Principles
+
+- **Shift left**: Prefer unit over integration, integration over E2E
+- **Risk-based**: Focus on what could go wrong
+- **Efficient coverage**: Test once at the right level
+- **Maintainability**: Consider long-term test maintenance
+- **Fast feedback**: Quick tests run first
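The `{epic}.{story}-{LEVEL}-{SEQ}` ID convention defined by the new test-design task is easy to lint mechanically. A minimal sketch in Python (hypothetical tooling — the package itself ships no such checker; the `UNIT`/`INT`/`E2E` tokens and three-digit sequence are taken from the example table above):

```python
import re

# Matches the {epic}.{story}-{LEVEL}-{SEQ} convention from the task,
# e.g. "1.3-UNIT-001", "1.3-INT-001", "1.3-E2E-001".
ID_PATTERN = re.compile(r"^\d+\.\d+-(UNIT|INT|E2E)-\d{3}$")

def invalid_ids(scenario_ids):
    """Return the scenario IDs that do not follow the naming convention."""
    return [sid for sid in scenario_ids if not ID_PATTERN.match(sid)]

# "1.3-e2e-1" fails: lowercase level token and unpadded sequence number
print(invalid_ids(["1.3-UNIT-001", "1.3-INT-001", "1.3-e2e-1"]))
```

A check like this could run over generated test-design documents to back the "Test IDs follow naming convention" item in the quality checklist.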
package/bmad-core/tasks/trace-requirements.md
ADDED
@@ -0,0 +1,264 @@
+# trace-requirements
+
+Map story requirements to test cases using Given-When-Then patterns for comprehensive traceability.
+
+## Purpose
+
+Create a requirements traceability matrix that ensures every acceptance criterion has corresponding test coverage. This task helps identify gaps in testing and ensures all requirements are validated.
+
+**IMPORTANT**: Given-When-Then is used here for documenting the mapping between requirements and tests, NOT for writing the actual test code. Tests should follow your project's testing standards (no BDD syntax in test code).
+
+## Prerequisites
+
+- Story file with clear acceptance criteria
+- Access to test files or test specifications
+- Understanding of the implementation
+
+## Traceability Process
+
+### 1. Extract Requirements
+
+Identify all testable requirements from:
+
+- Acceptance Criteria (primary source)
+- User story statement
+- Tasks/subtasks with specific behaviors
+- Non-functional requirements mentioned
+- Edge cases documented
+
+### 2. Map to Test Cases
+
+For each requirement, document which tests validate it. Use Given-When-Then to describe what the test validates (not how it's written):
+
+```yaml
+requirement: "AC1: User can login with valid credentials"
+test_mappings:
+  - test_file: "auth/login.test.ts"
+    test_case: "should successfully login with valid email and password"
+    # Given-When-Then describes WHAT the test validates, not HOW it's coded
+    given: "A registered user with valid credentials"
+    when: "They submit the login form"
+    then: "They are redirected to dashboard and session is created"
+    coverage: full
+
+  - test_file: "e2e/auth-flow.test.ts"
+    test_case: "complete login flow"
+    given: "User on login page"
+    when: "Entering valid credentials and submitting"
+    then: "Dashboard loads with user data"
+    coverage: integration
+```
+
+### 3. Coverage Analysis
+
+Evaluate coverage for each requirement:
+
+**Coverage Levels:**
+
+- `full`: Requirement completely tested
+- `partial`: Some aspects tested, gaps exist
+- `none`: No test coverage found
+- `integration`: Covered in integration/e2e tests only
+- `unit`: Covered in unit tests only
+
+### 4. Gap Identification
+
+Document any gaps found:
+
+```yaml
+coverage_gaps:
+  - requirement: "AC3: Password reset email sent within 60 seconds"
+    gap: "No test for email delivery timing"
+    severity: medium
+    suggested_test:
+      type: integration
+      description: "Test email service SLA compliance"
+
+  - requirement: "AC5: Support 1000 concurrent users"
+    gap: "No load testing implemented"
+    severity: high
+    suggested_test:
+      type: performance
+      description: "Load test with 1000 concurrent connections"
+```
+
+## Outputs
+
+### Output 1: Gate YAML Block
+
+**Generate for pasting into gate file under `trace`:**
+
+```yaml
+trace:
+  totals:
+    requirements: X
+    full: Y
+    partial: Z
+    none: W
+  planning_ref: "docs/qa/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md"
+  uncovered:
+    - ac: "AC3"
+      reason: "No test found for password reset timing"
+  notes: "See docs/qa/assessments/{epic}.{story}-trace-{YYYYMMDD}.md"
+```
+
+### Output 2: Traceability Report
+
+**Save to:** `docs/qa/assessments/{epic}.{story}-trace-{YYYYMMDD}.md`
+
+Create a traceability report with:
+
+```markdown
+# Requirements Traceability Matrix
+
+## Story: {epic}.{story} - {title}
+
+### Coverage Summary
+
+- Total Requirements: X
+- Fully Covered: Y (Z%)
+- Partially Covered: A (B%)
+- Not Covered: C (D%)
+
+### Requirement Mappings
+
+#### AC1: {Acceptance Criterion 1}
+
+**Coverage: FULL**
+
+Given-When-Then Mappings:
+
+- **Unit Test**: `auth.service.test.ts::validateCredentials`
+  - Given: Valid user credentials
+  - When: Validation method called
+  - Then: Returns true with user object
+
+- **Integration Test**: `auth.integration.test.ts::loginFlow`
+  - Given: User with valid account
+  - When: Login API called
+  - Then: JWT token returned and session created
+
+#### AC2: {Acceptance Criterion 2}
+
+**Coverage: PARTIAL**
+
+[Continue for all ACs...]
+
+### Critical Gaps
+
+1. **Performance Requirements**
+   - Gap: No load testing for concurrent users
+   - Risk: High - Could fail under production load
+   - Action: Implement load tests using k6 or similar
+
+2. **Security Requirements**
+   - Gap: Rate limiting not tested
+   - Risk: Medium - Potential DoS vulnerability
+   - Action: Add rate limit tests to integration suite
+
+### Test Design Recommendations
+
+Based on gaps identified, recommend:
+
+1. Additional test scenarios needed
+2. Test types to implement (unit/integration/e2e/performance)
+3. Test data requirements
+4. Mock/stub strategies
+
+### Risk Assessment
+
+- **High Risk**: Requirements with no coverage
+- **Medium Risk**: Requirements with only partial coverage
+- **Low Risk**: Requirements with full unit + integration coverage
+```
+
+## Traceability Best Practices
+
+### Given-When-Then for Mapping (Not Test Code)
+
+Use Given-When-Then to document what each test validates:
+
+**Given**: The initial context the test sets up
+
+- What state/data the test prepares
+- User context being simulated
+- System preconditions
+
+**When**: The action the test performs
+
+- What the test executes
+- API calls or user actions tested
+- Events triggered
+
+**Then**: What the test asserts
+
+- Expected outcomes verified
+- State changes checked
+- Values validated
+
+**Note**: This is for documentation only. Actual test code follows your project's standards (e.g., describe/it blocks, no BDD syntax).
+
+### Coverage Priority
+
+Prioritize coverage based on:
+
+1. Critical business flows
+2. Security-related requirements
+3. Data integrity requirements
+4. User-facing features
+5. Performance SLAs
+
+### Test Granularity
+
+Map at appropriate levels:
+
+- Unit tests for business logic
+- Integration tests for component interaction
+- E2E tests for user journeys
+- Performance tests for NFRs
+
+## Quality Indicators
+
+Good traceability shows:
+
+- Every AC has at least one test
+- Critical paths have multiple test levels
+- Edge cases are explicitly covered
+- NFRs have appropriate test types
+- Clear Given-When-Then for each test
+
+## Red Flags
+
+Watch for:
+
+- ACs with no test coverage
+- Tests that don't map to requirements
+- Vague test descriptions
+- Missing edge case coverage
+- NFRs without specific tests
+
+## Integration with Gates
+
+This traceability feeds into quality gates:
+
+- Critical gaps → FAIL
+- Minor gaps → CONCERNS
+- Missing P0 tests from test-design → CONCERNS
+
+### Output 3: Story Hook Line
+
+**Print this line for review task to quote:**
+
+```text
+Trace matrix: docs/qa/assessments/{epic}.{story}-trace-{YYYYMMDD}.md
+```
+
+- Full coverage → PASS contribution
+
+## Key Principles
+
+- Every requirement must be testable
+- Use Given-When-Then for clarity
+- Identify both presence and absence
+- Prioritize based on risk
+- Make recommendations actionable
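The `trace.totals` block that trace-requirements emits can be derived mechanically from the per-requirement coverage levels. A small illustrative Python helper (not part of the package; one assumption is labeled: `integration`- and `unit`-only coverage are rolled up as partial, since the gate block only has `full`/`partial`/`none` slots and the task treats single-level coverage as incomplete):

```python
def trace_totals(requirements):
    """Summarize coverage levels into the gate 'trace.totals' shape.

    `requirements` maps an AC id to its coverage level
    (full | partial | none | integration | unit).
    Assumption: anything other than 'full' or 'none' counts as partial.
    """
    totals = {"requirements": len(requirements), "full": 0, "partial": 0, "none": 0}
    for coverage in requirements.values():
        if coverage in ("full", "none"):
            totals[coverage] += 1
        else:
            totals["partial"] += 1
    return totals

print(trace_totals({"AC1": "full", "AC2": "partial", "AC3": "none", "AC4": "unit"}))
```

The exact rollup is ultimately the reviewer's call; this only shows that the summary numbers follow directly from the mapping table.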
package/bmad-core/templates/qa-gate-tmpl.yaml
ADDED
@@ -0,0 +1,102 @@
+template:
+  id: qa-gate-template-v1
+  name: Quality Gate Decision
+  version: 1.0
+  output:
+    format: yaml
+    filename: docs/qa/gates/{{epic_num}}.{{story_num}}-{{story_slug}}.yml
+    title: "Quality Gate: {{epic_num}}.{{story_num}}"
+
+# Required fields (keep these first)
+schema: 1
+story: "{{epic_num}}.{{story_num}}"
+story_title: "{{story_title}}"
+gate: "{{gate_status}}" # PASS|CONCERNS|FAIL|WAIVED
+status_reason: "{{status_reason}}" # 1-2 sentence summary of why this gate decision
+reviewer: "Quinn (Test Architect)"
+updated: "{{iso_timestamp}}"
+
+# Always present but only active when WAIVED
+waiver: { active: false }
+
+# Issues (if any) - Use fixed severity: low | medium | high
+top_issues: []
+
+# Risk summary (from risk-profile task if run)
+risk_summary:
+  totals: { critical: 0, high: 0, medium: 0, low: 0 }
+  recommendations:
+    must_fix: []
+    monitor: []
+
+# Examples section using block scalars for clarity
+examples:
+  with_issues: |
+    top_issues:
+      - id: "SEC-001"
+        severity: high # ONLY: low|medium|high
+        finding: "No rate limiting on login endpoint"
+        suggested_action: "Add rate limiting middleware before production"
+      - id: "TEST-001"
+        severity: medium
+        finding: "Missing integration tests for auth flow"
+        suggested_action: "Add test coverage for critical paths"
+
+  when_waived: |
+    waiver:
+      active: true
+      reason: "Accepted for MVP release - will address in next sprint"
+      approved_by: "Product Owner"
+
+# ============ Optional Extended Fields ============
+# Uncomment and use if your team wants more detail
+
+optional_fields_examples:
+  quality_and_expiry: |
+    quality_score: 75 # 0-100 (optional scoring)
+    expires: "2025-01-26T00:00:00Z" # Optional gate freshness window
+
+  evidence: |
+    evidence:
+      tests_reviewed: 15
+      risks_identified: 3
+      trace:
+        ac_covered: [1, 2, 3] # AC numbers with test coverage
+        ac_gaps: [4] # AC numbers lacking coverage
+
+  nfr_validation: |
+    nfr_validation:
+      security: { status: CONCERNS, notes: "Rate limiting missing" }
+      performance: { status: PASS, notes: "" }
+      reliability: { status: PASS, notes: "" }
+      maintainability: { status: PASS, notes: "" }
+
+  history: |
+    history: # Append-only audit trail
+      - at: "2025-01-12T10:00:00Z"
+        gate: FAIL
+        note: "Initial review - missing tests"
+      - at: "2025-01-12T15:00:00Z"
+        gate: CONCERNS
+        note: "Tests added but rate limiting still missing"
+
+  risk_summary: |
+    risk_summary: # From risk-profile task
+      totals:
+        critical: 0
+        high: 0
+        medium: 0
+        low: 0
+      # 'highest' is emitted only when risks exist
+      recommendations:
+        must_fix: []
+        monitor: []
+
+  recommendations: |
+    recommendations:
+      immediate: # Must fix before production
+        - action: "Add rate limiting to auth endpoints"
+          refs: ["api/auth/login.ts:42-68"]
+      future: # Can be addressed later
+        - action: "Consider caching for better performance"
+          refs: ["services/data.service.ts"]
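The gate template's statuses (`PASS|CONCERNS|FAIL|WAIVED`) line up with the fixed `low|medium|high` severity scale on `top_issues`. A hedged sketch of one way a reviewer's tooling might suggest a default status (illustrative only — in the package the gate decision is made by the reviewing agent, not computed):

```python
def gate_status(top_issues, waiver_active=False):
    """Suggest a gate value from top_issues severities.

    Assumed mapping, mirroring the template's examples: a waiver wins,
    any high-severity issue blocks, remaining issues raise concerns.
    """
    if waiver_active:
        return "WAIVED"
    severities = {issue["severity"] for issue in top_issues}
    if "high" in severities:
        return "FAIL"
    if severities:  # only low/medium issues remain
        return "CONCERNS"
    return "PASS"

print(gate_status([{"severity": "high"}, {"severity": "medium"}]))  # FAIL
```

Any such computed value would at most pre-fill `gate:`; `status_reason` still needs a human-written summary.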
package/dist/agents/analyst.txt
CHANGED
@@ -149,7 +149,7 @@ If user selects Option 1, present numbered list of techniques from the brainstor
 1. Apply selected technique according to data file description
 2. Keep engaging with technique until user indicates they want to:
    - Choose a different technique
-   - Apply current ideas to a new technique
+   - Apply current ideas to a new technique
    - Move to convergent phase
    - End session
 
@@ -266,63 +266,54 @@ CRITICAL: First, help the user select the most appropriate research focus based
 Present these numbered options to the user:
 
 1. **Product Validation Research**
-
    - Validate product hypotheses and market fit
    - Test assumptions about user needs and solutions
    - Assess technical and business feasibility
    - Identify risks and mitigation strategies
 
 2. **Market Opportunity Research**
-
    - Analyze market size and growth potential
    - Identify market segments and dynamics
    - Assess market entry strategies
    - Evaluate timing and market readiness
 
 3. **User & Customer Research**
-
    - Deep dive into user personas and behaviors
    - Understand jobs-to-be-done and pain points
    - Map customer journeys and touchpoints
    - Analyze willingness to pay and value perception
 
 4. **Competitive Intelligence Research**
-
   - Detailed competitor analysis and positioning
    - Feature and capability comparisons
    - Business model and strategy analysis
    - Identify competitive advantages and gaps
 
 5. **Technology & Innovation Research**
-
    - Assess technology trends and possibilities
    - Evaluate technical approaches and architectures
    - Identify emerging technologies and disruptions
    - Analyze build vs. buy vs. partner options
 
 6. **Industry & Ecosystem Research**
-
    - Map industry value chains and dynamics
    - Identify key players and relationships
    - Analyze regulatory and compliance factors
    - Understand partnership opportunities
 
 7. **Strategic Options Research**
-
    - Evaluate different strategic directions
    - Assess business model alternatives
    - Analyze go-to-market strategies
    - Consider expansion and scaling paths
 
 8. **Risk & Feasibility Research**
-
    - Identify and assess various risk factors
    - Evaluate implementation challenges
    - Analyze resource requirements
    - Consider regulatory and legal implications
 
 9. **Custom Research Focus**
-
    - User-defined research objectives
    - Specialized domain investigation
    - Cross-functional research needs
@@ -491,13 +482,11 @@ CRITICAL: collaborate with the user to develop specific, actionable research que
 ### 5. Review and Refinement
 
 1. **Present Complete Prompt**
-
    - Show the full research prompt
    - Explain key elements and rationale
    - Highlight any assumptions made
 
 2. **Gather Feedback**
-
    - Are the objectives clear and correct?
    - Do the questions address all concerns?
    - Is the scope appropriate?
@@ -872,9 +861,9 @@ This document captures the CURRENT STATE of the [Project Name] codebase, includi
 
 ### Change Log
 
-| Date
-
-| [Date] | 1.0
+| Date   | Version | Description                 | Author    |
+| ------ | ------- | --------------------------- | --------- |
+| [Date] | 1.0     | Initial brownfield analysis | [Analyst] |
 
 ## Quick Reference - Key Files and Entry Points
 
@@ -897,11 +886,11 @@ This document captures the CURRENT STATE of the [Project Name] codebase, includi
 
 ### Actual Tech Stack (from package.json/requirements.txt)
 
-| Category
-
-| Runtime
-| Framework | Express
-| Database
+| Category  | Technology | Version | Notes                      |
+| --------- | ---------- | ------- | -------------------------- |
+| Runtime   | Node.js    | 16.x    | [Any constraints]          |
+| Framework | Express    | 4.18.2  | [Custom middleware?]       |
+| Database  | PostgreSQL | 13      | [Connection pooling setup] |
 
 etc...
 
@@ -940,6 +929,7 @@ project-root/
 ### Data Models
 
 Instead of duplicating, reference actual model files:
+
 - **User Model**: See `src/models/User.js`
 - **Order Model**: See `src/models/Order.js`
 - **Related Types**: TypeScript definitions in `src/types/`
@@ -969,10 +959,10 @@ Instead of duplicating, reference actual model files:
 
 ### External Services
 
-| Service
-
-| Stripe
-| SendGrid | Emails
+| Service  | Purpose  | Integration Type | Key Files                      |
+| -------- | -------- | ---------------- | ------------------------------ |
+| Stripe   | Payments | REST API         | `src/integrations/stripe/`     |
+| SendGrid | Emails   | SDK              | `src/services/emailService.js` |
 
 etc...
 
@@ -1017,6 +1007,7 @@ npm run test:integration # Runs integration tests (requires local DB)
 ### Files That Will Need Modification
 
 Based on the enhancement requirements, these files will be affected:
+
 - `src/services/userService.js` - Add new user fields
 - `src/models/User.js` - Update schema
 - `src/routes/userRoutes.js` - New endpoints
@@ -2581,7 +2572,7 @@ Each status change requires user verification and approval before proceeding.
 #### Greenfield Development
 
 - Business analysis and market research
-- Product requirements and feature definition
+- Product requirements and feature definition
 - System architecture and design
 - Development execution
 - Testing and deployment
@@ -2690,8 +2681,11 @@ Templates with Level 2 headings (`##`) can be automatically sharded:
 
 ```markdown
 ## Goals and Background Context
-
+
+## Requirements
+
 ## User Interface Design Goals
+
 ## Success Metrics
 ```
 