bmad-method 4.37.0 → 5.0.0-beta.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. package/.github/workflows/promote-to-stable.yml +144 -0
  2. package/CHANGELOG.md +16 -9
  3. package/bmad-core/agents/qa.md +37 -18
  4. package/bmad-core/data/test-levels-framework.md +146 -0
  5. package/bmad-core/data/test-priorities-matrix.md +172 -0
  6. package/bmad-core/tasks/nfr-assess.md +343 -0
  7. package/bmad-core/tasks/qa-gate.md +159 -0
  8. package/bmad-core/tasks/review-story.md +234 -74
  9. package/bmad-core/tasks/risk-profile.md +353 -0
  10. package/bmad-core/tasks/test-design.md +174 -0
  11. package/bmad-core/tasks/trace-requirements.md +264 -0
  12. package/bmad-core/templates/qa-gate-tmpl.yaml +102 -0
  13. package/dist/agents/analyst.txt +20 -26
  14. package/dist/agents/architect.txt +14 -35
  15. package/dist/agents/bmad-master.txt +40 -70
  16. package/dist/agents/bmad-orchestrator.txt +28 -5
  17. package/dist/agents/dev.txt +0 -14
  18. package/dist/agents/pm.txt +0 -25
  19. package/dist/agents/po.txt +0 -18
  20. package/dist/agents/qa.txt +2079 -135
  21. package/dist/agents/sm.txt +0 -10
  22. package/dist/agents/ux-expert.txt +0 -7
  23. package/dist/expansion-packs/bmad-2d-phaser-game-dev/agents/game-designer.txt +0 -37
  24. package/dist/expansion-packs/bmad-2d-phaser-game-dev/agents/game-developer.txt +3 -12
  25. package/dist/expansion-packs/bmad-2d-phaser-game-dev/agents/game-sm.txt +0 -7
  26. package/dist/expansion-packs/bmad-2d-phaser-game-dev/teams/phaser-2d-nodejs-game-team.txt +44 -90
  27. package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-architect.txt +14 -49
  28. package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-designer.txt +0 -46
  29. package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-developer.txt +0 -15
  30. package/dist/expansion-packs/bmad-2d-unity-game-dev/agents/game-sm.txt +0 -17
  31. package/dist/expansion-packs/bmad-2d-unity-game-dev/teams/unity-2d-game-team.txt +38 -142
  32. package/dist/expansion-packs/bmad-infrastructure-devops/agents/infra-devops-platform.txt +0 -2
  33. package/dist/teams/team-all.txt +2181 -261
  34. package/dist/teams/team-fullstack.txt +43 -57
  35. package/dist/teams/team-ide-minimal.txt +2064 -125
  36. package/dist/teams/team-no-ui.txt +43 -57
  37. package/docs/enhanced-ide-development-workflow.md +220 -15
  38. package/docs/user-guide.md +271 -18
  39. package/docs/working-in-the-brownfield.md +264 -31
  40. package/package.json +1 -1
  41. package/tools/installer/bin/bmad.js +33 -32
  42. package/tools/installer/config/install.config.yaml +11 -1
  43. package/tools/installer/lib/file-manager.js +1 -1
  44. package/tools/installer/lib/ide-base-setup.js +1 -1
  45. package/tools/installer/lib/ide-setup.js +197 -83
  46. package/tools/installer/lib/installer.js +3 -3
  47. package/tools/installer/package.json +1 -1
@@ -1,6 +1,16 @@
  # review-story
 
- When a developer agent marks a story as "Ready for Review", perform a comprehensive senior developer code review with the ability to refactor and improve code directly.
+ Perform a comprehensive test architecture review with quality gate decision. This adaptive, risk-aware review creates both a story update and a detailed gate file.
+
+ ## Inputs
+
+ ```yaml
+ required:
+   - story_id: "{epic}.{story}" # e.g., "1.3"
+   - story_path: "{devStoryLocation}/{epic}.{story}.*.md" # Path from core-config.yaml
+   - story_title: "{title}" # If missing, derive from story file H1
+   - story_slug: "{slug}" # If missing, derive from title (lowercase, hyphenated)
+ ```
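Editorial note: the slug rule in the inputs above ("lowercase, hyphenated") can be sketched as follows. This is an illustration only, assuming plain ASCII titles; the helper name is ours, not part of the task spec.

```python
import re

def derive_slug(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and trim leading/trailing hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(derive_slug("User Auth - Token Refresh"))  # user-auth-token-refresh
```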
 
  ## Prerequisites
 
@@ -8,73 +18,102 @@ When a developer agent marks a story as "Ready for Review", perform a comprehens
  - Developer has completed all tasks and updated the File List
  - All automated tests are passing
 
- ## Review Process
-
- 1. **Read the Complete Story**
-    - Review all acceptance criteria
-    - Understand the dev notes and requirements
-    - Note any completion notes from the developer
-
- 2. **Verify Implementation Against Dev Notes Guidance**
-    - Review the "Dev Notes" section for specific technical guidance provided to the developer
-    - Verify the developer's implementation follows the architectural patterns specified in Dev Notes
-    - Check that file locations match the project structure guidance in Dev Notes
-    - Confirm any specified libraries, frameworks, or technical approaches were used correctly
-    - Validate that security considerations mentioned in Dev Notes were implemented
-
- 3. **Focus on the File List**
-    - Verify all files listed were actually created/modified
-    - Check for any missing files that should have been updated
-    - Ensure file locations align with the project structure guidance from Dev Notes
-
- 4. **Senior Developer Code Review**
-    - Review code with the eye of a senior developer
-    - If changes form a cohesive whole, review them together
-    - If changes are independent, review incrementally file by file
-    - Focus on:
-      - Code architecture and design patterns
-      - Refactoring opportunities
-      - Code duplication or inefficiencies
-      - Performance optimizations
-      - Security concerns
-      - Best practices and patterns
-
- 5. **Active Refactoring**
-    - As a senior developer, you CAN and SHOULD refactor code where improvements are needed
-    - When refactoring:
-      - Make the changes directly in the files
-      - Explain WHY you're making the change
-      - Describe HOW the change improves the code
-      - Ensure all tests still pass after refactoring
-      - Update the File List if you modify additional files
-
- 6. **Standards Compliance Check**
-    - Verify adherence to `docs/coding-standards.md`
-    - Check compliance with `docs/unified-project-structure.md`
-    - Validate testing approach against `docs/testing-strategy.md`
-    - Ensure all guidelines mentioned in the story are followed
-
- 7. **Acceptance Criteria Validation**
-    - Verify each AC is fully implemented
-    - Check for any missing functionality
-    - Validate edge cases are handled
-
- 8. **Test Coverage Review**
-    - Ensure unit tests cover edge cases
-    - Add missing tests if critical coverage is lacking
-    - Verify integration tests (if required) are comprehensive
-    - Check that test assertions are meaningful
-    - Look for missing test scenarios
-
- 9. **Documentation and Comments**
-    - Verify code is self-documenting where possible
-    - Add comments for complex logic if missing
-    - Ensure any API changes are documented
-
- ## Update Story File - QA Results Section ONLY
+ ## Review Process - Adaptive Test Architecture
+
+ ### 1. Risk Assessment (Determines Review Depth)
+
+ **Auto-escalate to deep review when:**
+
+ - Auth/payment/security files touched
+ - No tests added to story
+ - Diff > 500 lines
+ - Previous gate was FAIL/CONCERNS
+ - Story has > 5 acceptance criteria
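Editorial note: the auto-escalation triggers above reduce to a simple any-of predicate. A minimal sketch; the dictionary keys are hypothetical names we chose for illustration, not fields defined by the task.

```python
def needs_deep_review(story: dict) -> bool:
    # Any single trigger forces escalation to a deep review.
    return (
        story.get("touches_sensitive_files", False)       # auth/payment/security files
        or not story.get("tests_added", True)             # no tests added to the story
        or story.get("diff_lines", 0) > 500               # large diff
        or story.get("previous_gate") in ("FAIL", "CONCERNS")
        or len(story.get("acceptance_criteria", [])) > 5
    )

print(needs_deep_review({"diff_lines": 620, "tests_added": True}))  # True
```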
+
+ ### 2. Comprehensive Analysis
+
+ **A. Requirements Traceability**
+
+ - Map each acceptance criterion to its validating tests (document the mapping with Given-When-Then, not test code)
+ - Identify coverage gaps
+ - Verify all requirements have corresponding test cases
+
+ **B. Code Quality Review**
+
+ - Architecture and design patterns
+ - Refactoring opportunities (and perform them)
+ - Code duplication or inefficiencies
+ - Performance optimizations
+ - Security vulnerabilities
+ - Best practices adherence
+
+ **C. Test Architecture Assessment**
+
+ - Test coverage adequacy at appropriate levels
+ - Test level appropriateness (what should be unit vs integration vs e2e)
+ - Test design quality and maintainability
+ - Test data management strategy
+ - Mock/stub usage appropriateness
+ - Edge case and error scenario coverage
+ - Test execution time and reliability
+
+ **D. Non-Functional Requirements (NFRs)**
+
+ - Security: Authentication, authorization, data protection
+ - Performance: Response times, resource usage
+ - Reliability: Error handling, recovery mechanisms
+ - Maintainability: Code clarity, documentation
+
+ **E. Testability Evaluation**
+
+ - Controllability: Can we control the inputs?
+ - Observability: Can we observe the outputs?
+ - Debuggability: Can we debug failures easily?
+
+ **F. Technical Debt Identification**
+
+ - Accumulated shortcuts
+ - Missing tests
+ - Outdated dependencies
+ - Architecture violations
+
+ ### 3. Active Refactoring
+
+ - Refactor code where safe and appropriate
+ - Run tests to ensure changes don't break functionality
+ - Document all changes in the QA Results section with a clear WHY and HOW
+ - Do NOT alter story content beyond the QA Results section
+ - Do NOT change story Status or File List; recommend the next status only
+
+ ### 4. Standards Compliance Check
+
+ - Verify adherence to `docs/coding-standards.md`
+ - Check compliance with `docs/unified-project-structure.md`
+ - Validate testing approach against `docs/testing-strategy.md`
+ - Ensure all guidelines mentioned in the story are followed
+
+ ### 5. Acceptance Criteria Validation
+
+ - Verify each AC is fully implemented
+ - Check for any missing functionality
+ - Validate edge cases are handled
+
+ ### 6. Documentation and Comments
+
+ - Verify code is self-documenting where possible
+ - Add comments for complex logic if missing
+ - Ensure any API changes are documented
+
+ ## Output 1: Update Story File - QA Results Section ONLY
 
  **CRITICAL**: You are ONLY authorized to update the "QA Results" section of the story file. DO NOT modify any other sections.
 
+ **QA Results Anchor Rule:**
+
+ - If `## QA Results` doesn't exist, append it at the end of the file
+ - If it exists, append a new dated entry below the existing entries
+ - Never edit other sections
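Editorial note: the anchor rule above can be sketched as a small append-only helper. This is an illustration with invented names, and it assumes `## QA Results` is the last section of the story file (which holds whenever the section was first created by this same rule).

```python
def append_qa_results(story_text: str, entry: str) -> str:
    # Create the section at end-of-file if absent; otherwise append the new
    # dated entry below the existing ones. Never touches other sections.
    anchor = "## QA Results"
    if anchor not in story_text:
        return story_text.rstrip("\n") + f"\n\n{anchor}\n\n{entry}\n"
    return story_text.rstrip("\n") + f"\n\n{entry}\n"
```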
+
  After review and any refactoring, append your results to the story file in the QA Results section:
 
  ```markdown
@@ -82,7 +121,7 @@ After review and any refactoring, append your results to the story file in the Q
 
  ### Review Date: [Date]
 
- ### Reviewed By: Quinn (Senior Developer QA)
+ ### Reviewed By: Quinn (Test Architect)
 
  ### Code Quality Assessment
 
@@ -122,18 +161,137 @@ After review and any refactoring, append your results to the story file in the Q
 
  [Any performance issues found and whether addressed]
 
- ### Final Status
+ ### Files Modified During Review
+
+ [If you modified files, list them here - ask Dev to update File List]
+
+ ### Gate Status
+
+ Gate: {STATUS} → docs/qa/gates/{epic}.{story}-{slug}.yml
+ Risk profile: docs/qa/assessments/{epic}.{story}-risk-{YYYYMMDD}.md
+ NFR assessment: docs/qa/assessments/{epic}.{story}-nfr-{YYYYMMDD}.md
+
+ # Note: Paths should reference core-config.yaml for custom configurations
 
- [✓ Approved - Ready for Done] / [✗ Changes Required - See unchecked items above]
+ ### Recommended Status
+
+ [✓ Ready for Done] / [✗ Changes Required - See unchecked items above]
+ (Story owner decides final status)
+ ```
+
+ ## Output 2: Create Quality Gate File
+
+ **Template and Directory:**
+
+ - Render from `templates/qa-gate-tmpl.yaml`
+ - Create the `docs/qa/gates/` directory if missing (or configure it in core-config.yaml)
+ - Save to: `docs/qa/gates/{epic}.{story}-{slug}.yml`
+
+ Gate file structure:
+
+ ```yaml
+ schema: 1
+ story: "{epic}.{story}"
+ story_title: "{story title}"
+ gate: PASS|CONCERNS|FAIL|WAIVED
+ status_reason: "1-2 sentence explanation of gate decision"
+ reviewer: "Quinn (Test Architect)"
+ updated: "{ISO-8601 timestamp}"
+
+ top_issues: [] # Empty if no issues
+ waiver: { active: false } # Set active: true only if WAIVED
+
+ # Extended fields (optional but recommended):
+ quality_score: 0-100 # 100 - (20*FAILs) - (10*CONCERNS) or use technical-preferences.md weights
+ expires: "{ISO-8601 timestamp}" # Typically 2 weeks from review
+
+ evidence:
+   tests_reviewed: { count }
+   risks_identified: { count }
+   trace:
+     ac_covered: [1, 2, 3] # AC numbers with test coverage
+     ac_gaps: [4] # AC numbers lacking coverage
+
+ nfr_validation:
+   security:
+     status: PASS|CONCERNS|FAIL
+     notes: "Specific findings"
+   performance:
+     status: PASS|CONCERNS|FAIL
+     notes: "Specific findings"
+   reliability:
+     status: PASS|CONCERNS|FAIL
+     notes: "Specific findings"
+   maintainability:
+     status: PASS|CONCERNS|FAIL
+     notes: "Specific findings"
+
+ recommendations:
+   immediate: # Must fix before production
+     - action: "Add rate limiting"
+       refs: ["api/auth/login.ts"]
+   future: # Can be addressed later
+     - action: "Consider caching"
+       refs: ["services/data.ts"]
  ```
 
+ ### Gate Decision Criteria
+
+ **Deterministic rule (apply in order):**
+
+ If risk_summary exists, apply its thresholds first (≥9 → FAIL, ≥6 → CONCERNS), then NFR statuses, then top_issues severity.
+
+ 1. **Risk thresholds (if risk_summary present):**
+    - If any risk score ≥ 9 → Gate = FAIL (unless waived)
+    - Else if any score ≥ 6 → Gate = CONCERNS
+
+ 2. **Test coverage gaps (if trace available):**
+    - If any P0 test from test-design is missing → Gate = CONCERNS
+    - If a security or data-loss P0 test is missing → Gate = FAIL
+
+ 3. **Issue severity:**
+    - If any `top_issues.severity == high` → Gate = FAIL (unless waived)
+    - Else if any `severity == medium` → Gate = CONCERNS
+
+ 4. **NFR statuses:**
+    - If any NFR status is FAIL → Gate = FAIL
+    - Else if any NFR status is CONCERNS → Gate = CONCERNS
+    - Else → Gate = PASS
+
+ WAIVED applies only when `waiver.active: true` with a reason and approver.
+
+ Detailed criteria:
+
+ - **PASS**: All critical requirements met, no blocking issues
+ - **CONCERNS**: Non-critical issues found, team should review
+ - **FAIL**: Critical issues that should be addressed
+ - **WAIVED**: Issues acknowledged but explicitly waived by the team
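Editorial note: the ordered rule above can be sketched as a single decision function. This follows the listed order literally (short-circuiting at the first matching rule); the parameter shapes loosely mirror the gate file fields but are assumptions of this sketch, not a defined API.

```python
def decide_gate(risk_scores=None, missing_p0=None, top_issues=None,
                nfr_statuses=None, waived=False):
    # Apply the deterministic rules in the documented order.
    if waived:  # waiver.active: true with reason/approver
        return "WAIVED"
    risk_scores = risk_scores or []
    missing_p0 = missing_p0 or []      # e.g., [{"area": "security"}]
    top_issues = top_issues or []
    nfr_statuses = nfr_statuses or []
    # 1. Risk thresholds (if risk_summary present)
    if any(s >= 9 for s in risk_scores):
        return "FAIL"
    if any(s >= 6 for s in risk_scores):
        return "CONCERNS"
    # 2. Test coverage gaps (if trace available)
    if any(p0.get("area") in ("security", "data-loss") for p0 in missing_p0):
        return "FAIL"
    if missing_p0:
        return "CONCERNS"
    # 3. Issue severity
    if any(i.get("severity") == "high" for i in top_issues):
        return "FAIL"
    if any(i.get("severity") == "medium" for i in top_issues):
        return "CONCERNS"
    # 4. NFR statuses
    if "FAIL" in nfr_statuses:
        return "FAIL"
    if "CONCERNS" in nfr_statuses:
        return "CONCERNS"
    return "PASS"

print(decide_gate(risk_scores=[9, 2]))  # FAIL
```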
+
+ ### Quality Score Calculation
+
+ ```text
+ quality_score = 100 - (20 × number of FAILs) - (10 × number of CONCERNS)
+ Bounded between 0 and 100
+ ```
+
+ If `technical-preferences.md` defines custom weights, use those instead.
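Editorial note: with the default weights above, the calculation is one bounded expression (a sketch; swap in custom weights from technical-preferences.md where defined):

```python
def quality_score(fails: int, concerns: int) -> int:
    # Default weighting: -20 per FAIL, -10 per CONCERNS, clamped to [0, 100].
    return max(0, min(100, 100 - 20 * fails - 10 * concerns))

print(quality_score(1, 2))  # 60
```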
+
+ ### Suggested Owner Convention
+
+ For each issue in `top_issues`, include a `suggested_owner`:
+
+ - `dev`: Code changes needed
+ - `sm`: Requirements clarification needed
+ - `po`: Business decision needed
+
  ## Key Principles
 
- - You are a SENIOR developer reviewing junior/mid-level work
- - You have the authority and responsibility to improve code directly
+ - You are a Test Architect providing comprehensive quality assessment
+ - You have the authority to improve code directly when appropriate
  - Always explain your changes for learning purposes
  - Balance between perfection and pragmatism
- - Focus on significant improvements, not nitpicks
+ - Focus on risk-based prioritization
+ - Provide actionable recommendations with clear ownership
 
  ## Blocking Conditions
 
@@ -149,6 +307,8 @@ Stop the review and request clarification if:
 
  After review:
 
- 1. If all items are checked and approved: Update story status to "Done"
- 2. If unchecked items remain: Keep status as "Review" for dev to address
- 3. Always provide constructive feedback and explanations for learning
+ 1. Update the QA Results section in the story file
+ 2. Create the gate file in `docs/qa/gates/`
+ 3. Recommend status: "Ready for Done" or "Changes Required" (owner decides)
+ 4. If files were modified, list them in QA Results and ask Dev to update File List
+ 5. Always provide constructive feedback and actionable recommendations
@@ -0,0 +1,353 @@
+ # risk-profile
+
+ Generate a comprehensive risk assessment matrix for a story implementation using probability × impact analysis.
+
+ ## Inputs
+
+ ```yaml
+ required:
+   - story_id: "{epic}.{story}" # e.g., "1.3"
+   - story_path: "docs/stories/{epic}.{story}.*.md"
+   - story_title: "{title}" # If missing, derive from story file H1
+   - story_slug: "{slug}" # If missing, derive from title (lowercase, hyphenated)
+ ```
+
+ ## Purpose
+
+ Identify, assess, and prioritize risks in the story implementation. Provide risk mitigation strategies and testing focus areas based on risk levels.
+
+ ## Risk Assessment Framework
+
+ ### Risk Categories
+
+ **Category Prefixes:**
+
+ - `TECH`: Technical Risks
+ - `SEC`: Security Risks
+ - `PERF`: Performance Risks
+ - `DATA`: Data Risks
+ - `BUS`: Business Risks
+ - `OPS`: Operational Risks
+
+ 1. **Technical Risks (TECH)**
+    - Architecture complexity
+    - Integration challenges
+    - Technical debt
+    - Scalability concerns
+    - System dependencies
+
+ 2. **Security Risks (SEC)**
+    - Authentication/authorization flaws
+    - Data exposure vulnerabilities
+    - Injection attacks
+    - Session management issues
+    - Cryptographic weaknesses
+
+ 3. **Performance Risks (PERF)**
+    - Response time degradation
+    - Throughput bottlenecks
+    - Resource exhaustion
+    - Database query optimization
+    - Caching failures
+
+ 4. **Data Risks (DATA)**
+    - Data loss potential
+    - Data corruption
+    - Privacy violations
+    - Compliance issues
+    - Backup/recovery gaps
+
+ 5. **Business Risks (BUS)**
+    - Feature doesn't meet user needs
+    - Revenue impact
+    - Reputation damage
+    - Regulatory non-compliance
+    - Market timing
+
+ 6. **Operational Risks (OPS)**
+    - Deployment failures
+    - Monitoring gaps
+    - Incident response readiness
+    - Documentation inadequacy
+    - Knowledge transfer issues
+
+ ## Risk Analysis Process
+
+ ### 1. Risk Identification
+
+ For each category, identify specific risks:
+
+ ```yaml
+ risk:
+   id: "SEC-001" # Use prefixes: SEC, PERF, DATA, BUS, OPS, TECH
+   category: security
+   title: "Insufficient input validation on user forms"
+   description: "Form inputs not properly sanitized could lead to XSS attacks"
+   affected_components:
+     - "UserRegistrationForm"
+     - "ProfileUpdateForm"
+   detection_method: "Code review revealed missing validation"
+ ```
+
+ ### 2. Risk Assessment
+
+ Evaluate each risk using probability × impact:
+
+ **Probability Levels:**
+
+ - `High (3)`: Likely to occur (>70% chance)
+ - `Medium (2)`: Possible occurrence (30-70% chance)
+ - `Low (1)`: Unlikely to occur (<30% chance)
+
+ **Impact Levels:**
+
+ - `High (3)`: Severe consequences (data breach, system down, major financial loss)
+ - `Medium (2)`: Moderate consequences (degraded performance, minor data issues)
+ - `Low (1)`: Minor consequences (cosmetic issues, slight inconvenience)
+
+ **Risk Score = Probability × Impact**
+
+ - 9: Critical Risk (Red)
+ - 6: High Risk (Orange)
+ - 4: Medium Risk (Yellow)
+ - 2-3: Low Risk (Green)
+ - 1: Minimal Risk (Blue)
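Editorial note: the scoring bands above cover every product of a 1-3 probability and a 1-3 impact, so classification is a straightforward lookup (a sketch; the function name is ours):

```python
def risk_level(probability, impact):
    # probability and impact are each 1 (Low), 2 (Medium), or 3 (High).
    score = probability * impact
    if score >= 9:
        label = "Critical"
    elif score >= 6:
        label = "High"
    elif score >= 4:
        label = "Medium"
    elif score >= 2:
        label = "Low"
    else:
        label = "Minimal"
    return score, label

print(risk_level(3, 3))  # (9, 'Critical')
```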
+
+ ### 3. Risk Prioritization
+
+ Create risk matrix:
+
+ ```markdown
+ ## Risk Matrix
+
+ | Risk ID  | Description             | Probability | Impact     | Score | Priority |
+ | -------- | ----------------------- | ----------- | ---------- | ----- | -------- |
+ | SEC-001  | XSS vulnerability       | High (3)    | High (3)   | 9     | Critical |
+ | PERF-001 | Slow query on dashboard | Medium (2)  | Medium (2) | 4     | Medium   |
+ | DATA-001 | Backup failure          | Low (1)     | High (3)   | 3     | Low      |
+ ```
+
+ ### 4. Risk Mitigation Strategies
+
+ For each identified risk, provide mitigation:
+
+ ```yaml
+ mitigation:
+   risk_id: "SEC-001"
+   strategy: "preventive" # preventive|detective|corrective
+   actions:
+     - "Implement input validation library (e.g., validator.js)"
+     - "Add CSP headers to prevent XSS execution"
+     - "Sanitize all user inputs before storage"
+     - "Escape all outputs in templates"
+   testing_requirements:
+     - "Security testing with OWASP ZAP"
+     - "Manual penetration testing of forms"
+     - "Unit tests for validation functions"
+   residual_risk: "Low - Some zero-day vulnerabilities may remain"
+   owner: "dev"
+   timeline: "Before deployment"
+ ```
+
+ ## Outputs
+
+ ### Output 1: Gate YAML Block
+
+ Generate for pasting into the gate file under `risk_summary`:
+
+ **Output rules:**
+
+ - Only include assessed risks; do not emit placeholders
+ - Sort risks by score (descending) when emitting `highest` and any tabular lists
+ - If there are no risks: totals are all zeros, omit `highest`, and keep the recommendations arrays empty
+
+ ```yaml
+ # risk_summary (paste into gate file):
+ risk_summary:
+   totals:
+     critical: X # score 9
+     high: Y # score 6
+     medium: Z # score 4
+     low: W # score 2-3
+   highest:
+     id: SEC-001
+     score: 9
+     title: "XSS on profile form"
+   recommendations:
+     must_fix:
+       - "Add input sanitization & CSP"
+     monitor:
+       - "Add security alerts for auth endpoints"
+ ```
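Editorial note: the `totals` block above can be derived mechanically from the per-risk scores. A sketch under the band definitions given earlier; note that score 1 ("Minimal") has no bucket in the gate block, so this sketch leaves it uncounted.

```python
def risk_totals(scores):
    # Bucket risk scores into the risk_summary totals:
    # 9 -> critical, 6 -> high, 4 -> medium, 2-3 -> low.
    totals = {"critical": 0, "high": 0, "medium": 0, "low": 0}
    for s in scores:
        if s == 9:
            totals["critical"] += 1
        elif s == 6:
            totals["high"] += 1
        elif s == 4:
            totals["medium"] += 1
        elif s in (2, 3):
            totals["low"] += 1
        # score 1 (Minimal) has no totals bucket in the gate block
    return totals

print(risk_totals([9, 6, 6, 3, 1]))  # {'critical': 1, 'high': 2, 'medium': 0, 'low': 1}
```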
+
+ ### Output 2: Markdown Report
+
+ **Save to:** `docs/qa/assessments/{epic}.{story}-risk-{YYYYMMDD}.md`
+
+ ```markdown
+ # Risk Profile: Story {epic}.{story}
+
+ Date: {date}
+ Reviewer: Quinn (Test Architect)
+
+ ## Executive Summary
+
+ - Total Risks Identified: X
+ - Critical Risks: Y
+ - High Risks: Z
+ - Risk Score: XX/100 (calculated)
+
+ ## Critical Risks Requiring Immediate Attention
+
+ ### 1. [ID]: Risk Title
+
+ **Score: 9 (Critical)**
+ **Probability**: High - Detailed reasoning
+ **Impact**: High - Potential consequences
+ **Mitigation**:
+
+ - Immediate action required
+ - Specific steps to take
+
+ **Testing Focus**: Specific test scenarios needed
+
+ ## Risk Distribution
+
+ ### By Category
+
+ - Security: X risks (Y critical)
+ - Performance: X risks (Y critical)
+ - Data: X risks (Y critical)
+ - Business: X risks (Y critical)
+ - Operational: X risks (Y critical)
+
+ ### By Component
+
+ - Frontend: X risks
+ - Backend: X risks
+ - Database: X risks
+ - Infrastructure: X risks
+
+ ## Detailed Risk Register
+
+ [Full table of all risks with scores and mitigations]
+
+ ## Risk-Based Testing Strategy
+
+ ### Priority 1: Critical Risk Tests
+
+ - Test scenarios for critical risks
+ - Required test types (security, load, chaos)
+ - Test data requirements
+
+ ### Priority 2: High Risk Tests
+
+ - Integration test scenarios
+ - Edge case coverage
+
+ ### Priority 3: Medium/Low Risk Tests
+
+ - Standard functional tests
+ - Regression test suite
+
+ ## Risk Acceptance Criteria
+
+ ### Must Fix Before Production
+
+ - All critical risks (score 9)
+ - High risks affecting security/data
+
+ ### Can Deploy with Mitigation
+
+ - Medium risks with compensating controls
+ - Low risks with monitoring in place
+
+ ### Accepted Risks
+
+ - Document any risks team accepts
+ - Include sign-off from appropriate authority
+
+ ## Monitoring Requirements
+
+ Post-deployment monitoring for:
+
+ - Performance metrics for PERF risks
+ - Security alerts for SEC risks
+ - Error rates for operational risks
+ - Business KPIs for business risks
+
+ ## Risk Review Triggers
+
+ Review and update risk profile when:
+
+ - Architecture changes significantly
+ - New integrations added
+ - Security vulnerabilities discovered
+ - Performance issues reported
+ - Regulatory requirements change
+ ```
+
+ ## Risk Scoring Algorithm
+
+ Calculate overall story risk score:
+
+ ```text
+ Base Score = 100
+ For each risk:
+   - Critical (9): Deduct 20 points
+   - High (6): Deduct 10 points
+   - Medium (4): Deduct 5 points
+   - Low (2-3): Deduct 2 points
+
+ Minimum score = 0 (extremely risky)
+ Maximum score = 100 (minimal risk)
+ ```
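Editorial note: the deduction scheme above is a direct translation into code (a sketch; the function name is ours):

```python
def story_risk_score(risk_scores):
    # Start at 100 and deduct per risk: 9 -> 20, 6 -> 10, 4 -> 5, 2-3 -> 2.
    deductions = 0
    for s in risk_scores:
        if s == 9:
            deductions += 20
        elif s == 6:
            deductions += 10
        elif s == 4:
            deductions += 5
        elif s in (2, 3):
            deductions += 2
    return max(0, 100 - deductions)  # clamp at 0 for extremely risky stories

print(story_risk_score([9, 6, 4, 2]))  # 63
```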
+
+ ## Risk-Based Recommendations
+
+ Based on the risk profile, recommend:
+
+ 1. **Testing Priority**
+    - Which tests to run first
+    - Additional test types needed
+    - Test environment requirements
+
+ 2. **Development Focus**
+    - Code review emphasis areas
+    - Additional validation needed
+    - Security controls to implement
+
+ 3. **Deployment Strategy**
+    - Phased rollout for high-risk changes
+    - Feature flags for risky features
+    - Rollback procedures
+
+ 4. **Monitoring Setup**
+    - Metrics to track
+    - Alerts to configure
+    - Dashboard requirements
+
+ ## Integration with Quality Gates
+
+ **Deterministic gate mapping:**
+
+ - Any risk with score ≥ 9 → Gate = FAIL (unless waived)
+ - Else if any score ≥ 6 → Gate = CONCERNS
+ - Else → Gate = PASS
+ - Unmitigated risks → Document in gate
+
+ ### Output 3: Story Hook Line
+
+ **Print this line for the review task to quote:**
+
+ ```text
+ Risk profile: docs/qa/assessments/{epic}.{story}-risk-{YYYYMMDD}.md
+ ```
+
+ ## Key Principles
+
+ - Identify risks early and systematically
+ - Use consistent probability × impact scoring
+ - Provide actionable mitigation strategies
+ - Link risks to specific test requirements
+ - Track residual risk after mitigation
+ - Update the risk profile as the story evolves