@kudusov.takhir/ba-toolkit 3.6.0 → 3.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,7 @@
  # Project Principles: [PROJECT_NAME]
 
  **Version:** 1.0
+ **Status:** Draft | In Review | Approved
  **Date:** [DATE]
  **Domain:** [DOMAIN]
  **Slug:** [SLUG]
@@ -23,6 +24,11 @@ All artifacts are generated in: [LANGUAGE]
  | API Endpoints | REST path | POST /users |
  | Wireframes | WF-NNN | WF-001 |
  | Validation Scenarios | SC-NNN | SC-001 |
+ | Risks | RISK-NN | RISK-01 |
+ | Sprints | SP-NN | SP-01 |
+ | Implementation Tasks | T-NN-NNN | T-04-007 |
+ | Analysis Findings | A-NN | A-01 |
+ | Brief Goals | G-N | G-1 |
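The ID formats in the table above lend themselves to mechanical validation. A minimal sketch, with the patterns inferred from the table's example column (this helper is illustrative, not part of ba-toolkit):

```python
import re

# Patterns inferred from the ID-convention table above (hypothetical helper,
# not shipped with ba-toolkit).
ID_PATTERNS = {
    "risk": re.compile(r"RISK-\d{2}"),     # RISK-NN  -> RISK-01
    "sprint": re.compile(r"SP-\d{2}"),     # SP-NN    -> SP-01
    "task": re.compile(r"T-\d{2}-\d{3}"),  # T-NN-NNN -> T-04-007
    "finding": re.compile(r"A-\d{2}"),     # A-NN     -> A-01
    "goal": re.compile(r"G-\d+"),          # G-N      -> G-1
}

def valid_id(kind: str, value: str) -> bool:
    """Return True if `value` matches the convention for `kind` exactly."""
    return bool(ID_PATTERNS[kind].fullmatch(value))
```

Such a check can run in `/trace` style gates to catch malformed cross-references early.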
 
  ## 3. Traceability Requirements
 
@@ -46,56 +52,109 @@ Optional links — violations flagged as **MEDIUM**:
 
  ## 4. Definition of Ready
 
- An artifact is ready to `/done` when all of the following are true:
+ An artifact is ready to `/done` when all of the following are true. The baseline below mirrors the v3.7.0+ artifact-template field set; project-specific additions go in §11.
 
  ### Functional Requirement (FR)
  - [ ] Description present and unambiguous.
  - [ ] Actor identified (not "the system" or "the user" without role).
  - [ ] Priority assigned (MoSCoW).
  - [ ] Input/Output specified.
+ - [ ] **Source** field present (which stakeholder, brief goal G-N, regulatory requirement, or parent FR drove this).
+ - [ ] **Verification method** specified (Test / Demo / Inspection / Analysis per IEEE 830 §7).
+ - [ ] **Rationale** documented (why this requirement exists, not just what).
+ - [ ] FR is grouped under a feature area (`### 3.N` in `02_srs_*.md`).
 
  ### User Story (US)
- - [ ] Role, Action, and Value filled.
- - [ ] Priority assigned.
+ - [ ] Persona named (a named persona with role and one-line context, not a bare job title).
+ - [ ] Action and Value filled.
+ - [ ] Priority assigned (MoSCoW).
+ - [ ] **Business Value Score** assigned (1–5 or H/M/L).
+ - [ ] **Size** estimate present (XS / S / M / L / XL or Story Points).
  - [ ] Linked FR reference present.
+ - [ ] **Depends on** field set (other story IDs or `—`).
+ - [ ] **Definition of Ready** checklist present, or a reference to this principles section.
+ - [ ] **INVEST self-check** confirms Independent · Negotiable · Valuable · Estimable · Small · Testable.
 
  ### Use Case (UC)
- - [ ] Actor, Preconditions, Main Flow, and at least one Exceptional Flow present.
- - [ ] Linked US reference present.
+ - [ ] **Goal in Context** present (which Brief goal G-N this UC serves).
+ - [ ] **Scope** specified (System / Subsystem / Component) and **Level** specified (User-goal / Summary / Subfunction).
+ - [ ] Primary Actor and Supporting Actors listed.
+ - [ ] **Stakeholders and Interests** table present (Cockburn discipline — at least 2 stakeholders).
+ - [ ] Pre-conditions and Trigger present.
+ - [ ] Main Success Scenario present as a numbered table.
+ - [ ] At least one Exception Flow present.
+ - [ ] **Success Guarantees** and **Minimal Guarantees** distinguished in post-conditions.
+ - [ ] **Source** field present (linked US/FR).
 
  ### Acceptance Criterion (AC)
- - [ ] Given / When / Then all present and specific.
- - [ ] Type specified (positive / negative / boundary).
+ - [ ] Given / When / Then all present, specific, and verifiable (no "the system handles correctly").
+ - [ ] **Type** specified (Positive / Negative / Boundary / Performance / Security).
+ - [ ] **Source** present (which business rule from `02_srs_*.md` drove this AC).
+ - [ ] **Verification** method specified (Automated test / Manual test / Observed in production).
  - [ ] Linked US reference present.
+ - [ ] Linked NFR present for performance and security ACs.
 
  ### NFR
- - [ ] Category specified.
+ - [ ] **ISO/IEC 25010 characteristic** specified (one of the 8: Functional Suitability / Performance Efficiency / Compatibility / Usability / Reliability / Security / Maintainability / Portability).
  - [ ] Measurable metric present (numeric target, not adjective).
+ - [ ] **Acceptance threshold** present separately from the metric.
  - [ ] Verification method specified.
+ - [ ] **Source** present.
+ - [ ] **Rationale** present.
+ - [ ] Linked FR or US present.
 
  ### Data Entity
- - [ ] All attributes have types and constraints.
- - [ ] FK references point to existing entities.
+ - [ ] **Source** field present (which FR/US introduced this entity).
+ - [ ] **Owner** field present (which team curates this data).
+ - [ ] **Sensitivity classification** present (Public / Internal / Confidential / PII / PCI / PHI / Financial).
+ - [ ] All attributes have **logical types** (not DBMS-specific) and constraints.
+ - [ ] FK references point to existing entities, with cascade rule specified.
+ - [ ] **State machine** documented for entities with more than two distinct lifecycle states.
 
  ### API Endpoint
+ - [ ] **Source** present (FR-NNN that drove this endpoint).
  - [ ] Request and Response schemas present.
  - [ ] At least one error code documented.
  - [ ] Linked FR/US present.
+ - [ ] **Idempotency** marker present (Idempotent / Not idempotent / Idempotent via `Idempotency-Key` header).
+ - [ ] **Required scope** specified (or "public" for unauthenticated paths).
+ - [ ] **SLO** linked to an NFR.
+ - [ ] **Verification** method specified (contract test / consumer-driven contract test / integration test).
 
  ### Wireframe (WF)
- - [ ] All four states present: default, loading, empty, error.
+ - [ ] **Source** present (US-NNN this screen serves).
+ - [ ] All applicable **canonical states** (of the 8) described: Default / Loading / Empty / Loaded / Partial / Success / Error / Disabled.
  - [ ] Navigation links (from / to) specified.
  - [ ] Linked US present.
+ - [ ] **Linked AC** present (scenarios this screen verifies).
+ - [ ] **Linked NFR** present for performance- and accessibility-sensitive screens.
+
+ ### Risk
+ - [ ] **Probability**, **Impact**, and **Velocity** scored (per ISO 31000 + PMBOK 7).
+ - [ ] **Treatment strategy** classified (Avoid / Reduce / Transfer / Accept).
+ - [ ] **Owner** assigned.
+ - [ ] **Review cadence** set.
+
+ ### Implementation Task (T-NN-NNN, from `/implement-plan`)
+ - [ ] At least one `references` id present (FR / US / AC / Entity / Endpoint / WF / SC).
+ - [ ] `dependsOn` list points only at task ids that exist in the same plan.
+ - [ ] `definitionOfDone` checklist present, with at least one hook tied to a linked AC.
+ - [ ] Phase assignment matches the canonical 9-phase ladder.
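The two structural checks in the Implementation Task list (a non-empty `references` list, and `dependsOn` closed under the plan's own task ids) can be sketched as a linter. The task shape used here is an assumption; the real `/implement-plan` schema is not shown in this diff:

```python
def lint_tasks(tasks: list[dict]) -> list[str]:
    """Flag tasks with no `references` id or with `dependsOn` entries
    pointing outside the plan. Task shape assumed: {"id", "references",
    "dependsOn"}; the actual /implement-plan schema may differ."""
    known = {t["id"] for t in tasks}  # ids that exist in this plan
    problems = []
    for t in tasks:
        if not t.get("references"):
            problems.append(f"{t['id']}: no references id")
        for dep in t.get("dependsOn", []):
            if dep not in known:
                problems.append(f"{t['id']}: unknown dependency {dep}")
    return problems
```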
 
- ## 5. NFR Baseline
+ ## 5. NFR Baseline (ISO/IEC 25010)
 
- The following NFR categories are required regardless of domain:
+ NFR categories follow the **ISO/IEC 25010:2011** software quality model. The following ISO 25010 characteristics are required for this project regardless of domain — `/nfr` reads this list verbatim and treats it as a mandatory checklist:
 
- - **Security:** authentication method, data encryption at rest and in transit.
- - **Availability:** uptime SLA with a numeric target.
- - **Compliance:** applicable laws and data retention policy.
+ - **Security:** confidentiality (encryption at rest and in transit), authentication strength, audit trail.
+ - **Reliability:** availability SLA with a numeric target, RTO / RPO for disaster recovery.
+ - **Compatibility:** applicable laws and data retention policy *(historically labelled "Compliance" but maps to ISO 25010 Compatibility + Functional Suitability sub-characteristics)*.
 
- [ADDITIONAL_NFR_CATEGORIES]
+ [ADDITIONAL_NFR_CATEGORIES — list other ISO 25010 characteristics that are mandatory for this project, e.g.:
+ - **Performance Efficiency** — required if the project has user-facing latency or throughput targets.
+ - **Usability** — required if WCAG 2.1 AA accessibility is mandated.
+ - **Maintainability** — required if the project must hand off to a different team post-launch.
+ - **Portability** — required if multi-cloud or vendor-neutral hosting is a constraint.
+ ]
 
  ## 6. Quality Gates
 
@@ -113,6 +172,50 @@ For `/analyze` findings:
  - `flat` (default) — all artifacts saved directly in the output directory.
  - `subfolder` — all artifacts saved under `{output_dir}/[SLUG]/`.
 
- ## 8. Project-Specific Notes
+ ## 8. Testing Strategy
+
+ **Strategy:** TDD | Tests-after | Integration-only | Manual-only | None
+
+ | Strategy | Means | Does `/implement-plan` embed "Tests to write first"? |
+ |----------|-------|------------------------------------------------------|
+ | TDD | Tests written before implementation; red → green → refactor | Yes — every task with a linked AC gets a "Tests to write first" sub-block |
+ | Tests-after | Implementation first, tests immediately after | No — task DoD just lists the AC scenarios that must pass |
+ | Integration-only | No unit tests; integration tests at the API or UI layer | No — integration test harness is set up in Phase 1 |
+ | Manual-only | Tests are manual QA scripts run before release | No — task DoD references manual scenario IDs from `/scenarios` |
+ | None | Prototype / spike — no automated tests at all | No — explicit `// no test` marker on every task |
+
+ `/implement-plan` reads this section to decide whether to embed test specifications in each task. The default is **TDD** for production-grade systems.
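The decision rule in the table reduces to a single lookup: only TDD embeds a tests-first block, and TDD is the default. A sketch of how a consumer might encode it (the mapping below is inferred from the table; `/implement-plan`'s internal representation is not documented here):

```python
# Whether "Tests to write first" sub-blocks are embedded, keyed by the
# Strategy value from the table above. Illustrative only; this is an
# assumption about how /implement-plan might model the section.
EMBEDS_TESTS_FIRST = {
    "TDD": True,
    "Tests-after": False,
    "Integration-only": False,
    "Manual-only": False,
    "None": False,
}

def embed_tests_first(strategy: str = "TDD") -> bool:
    """Default to TDD, the documented default for production-grade systems."""
    return EMBEDS_TESTS_FIRST[strategy]
```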
+
+ ## 9. Code Review and Branching
+
+ **Branching model:** trunk-based | GitHub flow | GitFlow | other
+ **Required reviewers per PR:** [N]
+ **Merge gate:** [CI green + N reviews / CODEOWNERS approval / specific reviewer]
+
+ ## 10. Stakeholder Decision Authority
+
+ This table records who can approve a change to these principles, section by section.
+
+ | Section | Decision authority | Notes |
+ |---------|--------------------|-------|
+ | §1 Language | [Role] | |
+ | §2 ID Conventions | [Role] | |
+ | §3 Traceability | [Role] | |
+ | §4 Definition of Ready | [Role] | |
+ | §5 NFR Baseline | [Role] | |
+ | §6 Quality Gates | [Role] | |
+ | §7 Output Structure | [Role] | |
+ | §8 Testing Strategy | [Role] | |
+ | §9 Code Review and Branching | [Role] | |
+
+ ## 11. Project-Specific Notes
 
  [ADDITIONAL_CONVENTIONS]
+
+ ---
+
+ ## Approvals
+
+ | Name | Role | Approval Date | Notes |
+ |------|------|---------------|-------|
+ | [name] | [role] | [YYYY-MM-DD] | [optional notes] |
@@ -1,57 +1,81 @@
  # Technology Research & Architecture Decisions: [PROJECT_NAME]
 
+ **Version:** 0.1
+ **Status:** Draft
  **Domain:** [DOMAIN]
  **Date:** [DATE]
  **Slug:** [SLUG]
+ **ADR format:** Michael Nygard format extended with Drivers and Alternatives Considered
  **References:** `02_srs_[SLUG].md`, `06_nfr_[SLUG].md`, `07_datadict_[SLUG].md`
 
+ > The output of this research is the **primary tech-stack source** for `/implement-plan` (added in v3.4.0). The Tech Stack table at the bottom is read by `/implement-plan` to populate its header without re-asking the calibration interview.
+
  ---
 
  ## Architecture Decision Records (ADRs)
 
  ### ADR-001: [Decision Title]
 
- **Status:** Proposed | Accepted | Deprecated | Superseded by ADR-[NNN]
- **Date:** [DATE]
+ | Field | Value |
+ |-------|-------|
+ | **Status** | Proposed / Accepted / Deprecated / Superseded by ADR-[NNN] |
+ | **Proposal date** | [YYYY-MM-DD] |
+ | **Decision date** | [YYYY-MM-DD when the decision was locked in] |
+ | **Decision owner** | [Name + role] |
+
+ **Drivers:** *(what forced this decision — reference specific FRs, NFRs, regulatory constraints, cost or time pressure)*
+ - NFR-[NNN] — [why this NFR forces the decision]
+ - FR-[NNN] — [why this FR forces the decision]
+ - [Regulatory / cost / time-to-market constraint]
 
  **Context:**
- [What situation or requirement forces us to make this decision? Reference specific NFRs or FRs.]
+ [What situation requires us to make this decision now. Background a future maintainer would need to understand the decision.]
 
- **Options Considered:**
+ **Alternatives Considered:**
 
- | Option | Pros | Cons |
- |--------|------|------|
- | [Option A] | [pros] | [cons] |
- | [Option B] | [pros] | [cons] |
+ | Option | Pros | Cons | Disqualifying factor |
+ |--------|------|------|----------------------|
+ | [Option A] | [pros] | [cons] | — |
+ | [Option B] | [pros] | [cons] | [why ruled out] |
+ | [Option C] | [pros] | [cons] | [why ruled out] |
 
- **Decision:** [Option A / B / other], because [rationale].
+ **Decision:** [Option A / B / other], because [rationale anchored in the Drivers above].
 
  **Consequences:**
- - [Positive consequence]
- - [Trade-off or risk]
 
- **References:** NFR-[NNN], FR-[NNN]
+ - **Positive:** [what becomes easier]
+ - **Negative:** [trade-off or risk]
+ - **Neutral:** [side effect that is neither good nor bad but worth noting]
 
  ---
 
  ### ADR-002: [Decision Title]
 
- **Status:** Proposed | Accepted
- **Date:** [DATE]
+ | Field | Value |
+ |-------|-------|
+ | **Status** | [status] |
+ | **Proposal date** | [date] |
+ | **Decision date** | [date] |
+ | **Decision owner** | [Name + role] |
 
- **Context:** [Context for this decision.]
+ **Drivers:**
+ - [driver]
 
- **Options Considered:**
+ **Context:** [Context.]
 
- | Option | Pros | Cons |
- |--------|------|------|
- | [Option A] | [pros] | [cons] |
- | [Option B] | [pros] | [cons] |
+ **Alternatives Considered:**
+
+ | Option | Pros | Cons | Disqualifying factor |
+ |--------|------|------|----------------------|
+ | [Option A] | [pros] | [cons] | — |
+ | [Option B] | [pros] | [cons] | [why ruled out] |
 
  **Decision:** [Chosen option and rationale.]
 
  **Consequences:**
- - [Consequence]
+
+ - **Positive:** [consequence]
+ - **Negative:** [consequence]
 
  <!-- Repeat ADR block for each major architectural decision. -->
 
@@ -95,5 +119,33 @@
  ## Open Questions
 
  | # | Question | Owner | Target Date |
- |---|---------|-------|-------------|
+ |---|----------|-------|-------------|
  | 1 | [Unresolved technical question] | [Role] | [Date] |
+
+ ---
+
+ ## Tech Stack Summary *(consumed by `/implement-plan`)*
+
+ This table is the primary tech-stack source for `/implement-plan`. Every row should have a concrete value (or `[TBD: <slot>]` if the decision is genuinely deferred). `/implement-plan` skips its own calibration interview when this table is complete.
+
+ | Layer | Choice | Source ADR |
+ |-------|--------|------------|
+ | Frontend | [framework + language + build tool] | ADR-[NNN] |
+ | Backend | [framework + language + runtime] | ADR-[NNN] |
+ | Database | [engine + version + hosting] | ADR-[NNN] |
+ | Hosting / deployment | [cloud + region + container model] | ADR-[NNN] |
+ | Auth / identity | [in-house / SSO / managed service] | ADR-[NNN] |
+ | Observability | [logs + metrics + traces platform] | ADR-[NNN] |
+ | Mandatory integrations | [list from §Integration Map] | — |
+
+ ---
+
+ ## NFR → ADR Traceability
+
+ Forward traceability from each Non-functional Requirement in `06_nfr_[SLUG].md` to the ADR(s) that satisfy it. NFRs without a linked ADR are flagged — every Must-priority NFR should drive at least one architectural decision.
+
+ | NFR ID | ISO 25010 Characteristic | Linked ADRs | Coverage Status |
+ |--------|--------------------------|-------------|-----------------|
+ | NFR-001 | Performance Efficiency | ADR-002, ADR-005 | ✓ |
+ | NFR-003 | Reliability | ADR-004 | ✓ |
+ | NFR-005 | Security | (uncovered) | ✗ |
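The coverage rule in the traceability section (every Must-priority NFR drives at least one ADR) is easy to check mechanically. A sketch, with the row shape assumed from the table above rather than taken from any published ba-toolkit schema:

```python
def uncovered_nfrs(rows: list[dict]) -> list[str]:
    """Return IDs of Must-priority NFRs with no linked ADR.
    Row shape assumed: {"id", "priority", "adrs"} — illustrative only."""
    return [
        r["id"]
        for r in rows
        if r.get("priority") == "Must" and not r.get("adrs")
    ]
```

Running this over the table's rows would flag NFR-005 above as the gap needing a new ADR.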
@@ -1,188 +1,89 @@
- # Risk Register: {PROJECT_NAME}
+ # Risk Register: [PROJECT_NAME]
 
- **Domain:** {DOMAIN}
- **Date:** {DATE}
- **Slug:** {SLUG}
- **Sources:** {list of artifacts scanned}
+ **Version:** 0.1
+ **Status:** Draft
+ **Domain:** [DOMAIN]
+ **Date:** [DATE]
+ **Slug:** [SLUG]
+ **Risk tolerance:** Low / Medium / High
+ **Standard:** ISO 31000 + PMI PMBOK 7
+ **Sources:** [list of artifact files scanned]
 
  ---
 
  ## Summary
 
  | Priority | Count |
- |---------|-------|
- | 🔴 Critical | 1 |
- | 🟡 High | 2 |
- | 🟢 Medium | 3 |
- | ⚪ Low | 1 |
- | **Total** | **7** |
+ |----------|-------|
+ | 🔴 Critical | [N] |
+ | 🟡 High | [N] |
+ | 🟢 Medium | [N] |
+ | ⚪ Low | [N] |
+ | **Total** | **[N]** |
 
  ---
 
  ## Risk Register
 
- | ID | Title | Category | Probability | Impact | Score | Priority | Status |
- |----|-------|----------|:-----------:|:------:|:-----:|---------|--------|
- | RISK-01 | Third-party data source rate limits unclear | External | 4 | 5 | 20 | 🔴 Critical | Open |
- | RISK-02 | GDPR data-processing agreement unsigned | Compliance | 3 | 4 | 12 | 🟡 High | Open |
- | RISK-03 | Columnar query performance under concurrent load unproven | Technical | 3 | 3 | 9 | 🟡 High | Open |
- | RISK-04 | Scope creep from stakeholder wish list | Business | 3 | 2 | 6 | 🟢 Medium | Open |
- | RISK-05 | OIDC / SSO library breaking changes | External | 2 | 3 | 6 | 🟢 Medium | Open |
- | RISK-06 | Data model changes after /datadict | Technical | 2 | 3 | 6 | 🟢 Medium | Open |
- | RISK-07 | Development team unfamiliar with analytics domain | Business | 2 | 1 | 2 | ⚪ Low | Open |
+ | ID | Title | Category | P | I | Score | V | Priority | Treatment | Owner | Review | Status |
+ |----|-------|----------|:-:|:-:|:-----:|:-:|----------|-----------|-------|--------|--------|
+ | RISK-01 | [Short title] | Technical / Business / Compliance / External | 4 | 5 | 20 | Days | 🔴 Critical | Reduce | [Owner] | Monthly | Open |
+ | RISK-02 | [Short title] | [Category] | [1–5] | [1–5] | [P×I] | [Velocity] | [Priority] | [Avoid / Reduce / Transfer / Accept] | [Owner] | [Cadence] | Open / In Progress / Closed |
 
- ---
-
- ## Risk Details
-
- ### RISK-01 — Third-party data source rate limits unclear
-
- **Category:** External
- **Probability:** 4 / 5 — Likely
- **Impact:** 5 / 5 — Critical
- **Score:** 20 🔴 Critical
- **Status:** Open
- **Source:** `07a_research_{slug}.md`
-
- **Description:**
- The product depends on timely event delivery from third-party integrations (Segment and warehouse connectors). The published rate limits do not guarantee sustained throughput at the projected MVP event volume. If a critical integration is rate-limited or breaks its contract, dashboards will show stale or incomplete data and user trust erodes quickly.
-
- **Mitigation:**
- Run a sustained-throughput test against each integration in sprint 0. Negotiate higher quotas with the providers before launch. Add per-source ingestion lag as a monitored NFR metric with an alert threshold.
-
- **Contingency:**
- Enable an ingestion backpressure queue and surface a workspace-level banner when a source is lagging more than 5 minutes behind real-time. Prioritise critical event streams over low-value ones until the lag recovers.
-
- **Owner:** Tech Lead
+ > Columns: P = Probability (1–5), I = Impact (1–5), Score = P × I, V = Velocity (Years / Months / Weeks / Days / Immediate). Treatment: Avoid / Reduce / Transfer / Accept. Review: Monthly / Quarterly / Ad hoc.
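The Score and Priority columns follow directly from P and I. A sketch of the arithmetic; note that the priority cut-offs are inferred from this template's example values (20 is Critical, 12 and 9 are High, 6 is Medium, 2 is Low), so treat the thresholds as an assumption rather than a published ba-toolkit rule:

```python
def risk_score(probability: int, impact: int) -> int:
    """Score = P x I, each on a 1-5 scale (ISO 31000-style matrix)."""
    assert 1 <= probability <= 5 and 1 <= impact <= 5
    return probability * impact

def priority(score: int) -> str:
    """Bucket a score into the register's priority bands. Cut-offs are
    inferred from the example rows in this template, not from a spec."""
    if score >= 15:
        return "🔴 Critical"
    if score >= 8:
        return "🟡 High"
    if score >= 3:
        return "🟢 Medium"
    return "⚪ Low"
```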
 
  ---
 
- ### RISK-02 — GDPR data-processing agreement unsigned
-
- **Category:** Compliance
- **Probability:** 3 / 5 — Possible
- **Impact:** 4 / 5 — Major
- **Score:** 12 🟡 High
- **Status:** Open
- **Source:** `01_brief_{slug}.md`
-
- **Description:**
- The product collects first-party user-behavioural events from EU workspaces. The Brief listed the GDPR data-processing agreement (DPA) with the selected cloud provider as an assumption. If the DPA is delayed or blocked by legal review, the EU launch date will slip regardless of development readiness.
-
- **Mitigation:**
- Engage legal counsel early to track DPA status. Decouple development milestones from the legal timeline so that technical readiness does not block on paperwork. Draft the workspace-level privacy controls (PII redaction, data residency) independently of the final DPA text.
-
- **Contingency:**
- Launch in non-EU regions first (US, CA, APAC) while the EU DPA is pending. Gate EU workspace signups behind a feature flag tied to DPA status.
-
- **Owner:** Product Manager
-
- ---
-
- ### RISK-03 — Columnar query performance under concurrent load unproven
-
- **Category:** Technical
- **Probability:** 3 / 5 — Possible
- **Impact:** 3 / 5 — Moderate
- **Score:** 9 🟡 High
- **Status:** Open
- **Source:** `07a_research_{slug}.md`
-
- **Description:**
- The ADR for the analytics query layer chose ClickHouse as the primary store. This setup has not been load-tested for the projected 200 concurrent dashboard viewers reading against a 10 M-event dataset. If query throughput assumptions are wrong, dashboards will exceed the 500 ms p95 NFR target and degrade UX.
-
- **Mitigation:**
- Run a load-test spike in the first development sprint against the reference dataset. Define a caching fallback (5-minute materialised query cache) if raw query throughput does not meet targets.
-
- **Contingency:**
- Enable the query cache globally and mark cached dashboards with a "last refreshed" timestamp. Communicate the change as a phased rollout feature.
-
- **Owner:** Tech Lead
-
- ---
-
- ### RISK-04 — Scope creep from stakeholder wish list
-
- **Category:** Business
- **Probability:** 3 / 5 — Possible
- **Impact:** 2 / 5 — Minor
- **Score:** 6 🟢 Medium
- **Status:** Open
- **Source:** `01_brief_{slug}.md`
-
- **Description:**
- Several features were marked out of scope during the Brief interview but stakeholders indicated they "might be needed later." Without a change control process, these may be re-introduced mid-development, inflating the MVP scope.
-
- **Mitigation:**
- Establish a formal change request process referenced in the Handoff document. All scope additions must go through `/stories` and be re-estimated.
-
- **Contingency:**
- Freeze scope at the start of each sprint. Defer any mid-sprint scope additions to the next sprint backlog.
-
- **Owner:** Product Manager
-
- ---
-
- ### RISK-05 — OIDC / SSO library breaking changes
-
- **Category:** External
- **Probability:** 2 / 5 — Unlikely
- **Impact:** 3 / 5 — Moderate
- **Score:** 6 🟢 Medium
- **Status:** Open
- **Source:** `07a_research_{slug}.md`
-
- **Description:**
- The product uses a third-party OIDC / SAML library for workspace SSO. Similar libraries have released breaking changes in previous majors without long deprecation windows. If the library releases a breaking change after development, sign-up and SSO flows may require rework.
-
- **Mitigation:**
- Pin the library version used in development. Monitor the library changelog and security advisories. Abstract SSO calls behind a thin adapter layer so a future swap to a different provider is isolated to one module.
-
- **Contingency:**
- Allocate a 3-day buffer in the release plan for library compatibility fixes.
-
- **Owner:** Frontend Lead
-
- ---
-
- ### RISK-06 — Data model changes after /datadict
+ ## Risk Details
 
- **Category:** Technical
- **Probability:** 2 / 5 — Unlikely
- **Impact:** 3 / 5 — Moderate
- **Score:** 6 🟢 Medium
- **Status:** Open
- **Source:** `07_datadict_{slug}.md`
+ ### RISK-01 — [Short title]
+
+ | Field | Value |
+ |-------|-------|
+ | **Category** | Technical / Business / Compliance / External |
+ | **Probability** | [1–5] — [Very unlikely / Unlikely / Possible / Likely / Very likely] |
+ | **Impact** | [1–5] — [Negligible / Minor / Moderate / Major / Critical] |
+ | **Score** | [P × I] [🔴/🟡/🟢/⚪ priority] |
+ | **Velocity** | [Years / Months / Weeks / Days / Immediate] — [reaction-time implication] |
+ | **Treatment strategy** | Avoid / Reduce / Transfer / Accept |
+ | **Status** | Open / In Progress / Closed |
+ | **Source** | [artifact file + section] |
+ | **Owner** | [Single accountable role — e.g. Tech Lead, Product Manager, Compliance Officer] |
+ | **Review cadence** | Monthly / Quarterly / Ad hoc |
 
  **Description:**
- The Data Dictionary was finalised before the API Contract was fully detailed. Late-stage entity or field changes discovered during API design may require cascading updates across the SRS, Stories, and AC artifacts.
+ [Full description of the risk and the conditions under which it materialises. State the condition and the consequence: "If X happens, then Y will occur."]
 
- **Mitigation:**
- Run `/trace` after `/apicontract` to detect any new cross-artifact inconsistencies. Address CRITICAL gaps before handoff.
+ **Mitigation:** *(actions taken before the risk materialises — Reduce strategy)*
+ [Steps to lower the probability or impact. Required for all Reduce-strategy risks. Optional for Avoid (mitigation = the avoidance action itself), Transfer (mitigation = the transfer mechanism), and Accept (mitigation = N/A by definition).]
 
- **Contingency:**
- Use `/revise` on affected artifacts to propagate changes. Document the delta in the Handoff open items list.
-
- **Owner:** Business Analyst
+ **Contingency:** *(actions taken after the risk materialises despite mitigation)*
+ [Steps to take if the risk fires anyway. Required for High and Critical risks regardless of treatment strategy.]
 
  ---
 
- ### RISK-07 — Development team unfamiliar with analytics domain
-
- **Category:** Business
- **Probability:** 2 / 5 — Unlikely
- **Impact:** 1 / 5 — Negligible
- **Score:** 2 ⚪ Low
- **Status:** Open
- **Source:** `01_brief_{slug}.md`
+ ### RISK-02 — [Short title]
+
+ | Field | Value |
+ |-------|-------|
+ | **Category** | [Category] |
+ | **Probability** | [1–5] |
+ | **Impact** | [1–5] |
+ | **Score** | [P × I] |
+ | **Velocity** | [Velocity] |
+ | **Treatment strategy** | [Strategy] |
+ | **Status** | Open |
+ | **Source** | [artifact + section] |
+ | **Owner** | [Owner] |
+ | **Review cadence** | [Cadence] |
 
  **Description:**
- The engineering team has limited prior experience with columnar analytics storage. Domain-specific concepts (event schemas, funnel aggregation, cohort joins) may be misunderstood during implementation, leading to query correctness bugs on edge cases.
+ [Description.]
 
  **Mitigation:**
- Include a domain onboarding session as part of sprint 0. Reference the SaaS domain glossary in the Handoff document.
+ [Mitigation steps.]
 
  **Contingency:**
- Schedule a BA review checkpoint after the first feature is implemented end-to-end.
+ [Contingency steps.]
 
- **Owner:** Product Manager
+ <!-- Repeat RISK block for each risk. Numbering: RISK-01, RISK-02, ... -->