@kudusov.takhir/ba-toolkit 3.7.0 → 3.8.1
- package/CHANGELOG.md +59 -1
- package/package.json +1 -1
- package/skills/discovery/SKILL.md +3 -1
- package/skills/export/SKILL.md +17 -11
- package/skills/handoff/SKILL.md +64 -20
- package/skills/implement-plan/SKILL.md +8 -2
- package/skills/principles/SKILL.md +5 -2
- package/skills/references/templates/discovery-template.md +24 -1
- package/skills/references/templates/handoff-template.md +93 -33
- package/skills/references/templates/principles-template.md +121 -18
- package/skills/references/templates/research-template.md +74 -22
- package/skills/research/SKILL.md +20 -7
package/CHANGELOG.md
CHANGED
@@ -11,6 +11,62 @@ Versions follow [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 ---
 
+## [3.8.1] — 2026-04-10
+
+### Fixed
+
+- **`/discovery` now updates the `**Domain:**` field inside the `AGENTS.md` managed block.** Reported as "if you change the domain in `/discovery`, the AGENTS.md keeps the original init domain". Root cause: the v3.4.0+ instructions in `skills/discovery/SKILL.md` §6 said *"do not touch the managed block"* with no exception, so the AI dutifully left the domain stale even though the discovery artifact §8 recommended a different one. The fix is a targeted exception in the SKILL.md text — `/discovery` is the canonical source of truth for the project domain after `init`, and the `**Domain:**` line is the only managed-block field it may surgically update (every other managed-block field — Project, Slug, Language, Output folder, the auto-generated date comment — stays owned by `ba-toolkit init`). The reply now also mentions the domain change explicitly so the user sees that AGENTS.md was synced. Content fix only — no changes to `bin/ba-toolkit.js`, no changes to tests, no changes to skill behaviour beyond the targeted exception.
+
+---
+
+## [3.8.0] — 2026-04-10
+
+### Highlights
+
+- **Group C skill audit pass on `/discovery`, `/principles`, `/research`, `/handoff`, `/implement-plan`, `/export`** — the bookend skills (entry, conventions, technology research, exit) brought to senior-BA rigour. ~26 Critical + High findings applied. Highlights: full Michael Nygard ADR format with Drivers / Alternatives / Decision Date for `/research`, expanded `/handoff` artifact inventory covering all 15 pipeline stages plus cross-cutting tools, formal Sign-off section for `/handoff`, principles Definition of Ready section synchronised with the v3.5.0+ template fields across every artifact type, new ISO/IEC 25010 alignment in the principles NFR baseline, new Testing Strategy section in `/principles` (the principled approach to TDD that resolves the batch 6 item 2 question), new Hypotheses → Brief Goals retrospective traceability in `/discovery`, observability + CI/CD + secret management slots in `/implement-plan`, and full v3.5.0+ stories template field support in `/export` (Persona / Business Value Score / Depends on / INVEST embedded in exported issues with full FR/UC/AC/NFR/WF traceability columns). After this pass, **22 of 24 shipped skills** carry the senior-BA improvements.
+
+### Changed
+
+- **`skills/principles/SKILL.md` and `skills/references/templates/principles-template.md`** — biggest debt of the audit because `/principles` is the source of truth that downstream skills read for Definition of Ready, ID conventions, and quality gates. The DoR section had drifted out of sync with the v3.5.0+ template fields across every artifact type:
+  - **Definition of Ready (§4) rewritten** to mirror the v3.7.0+ baseline per artifact type. Every DoR checklist now requires the same fields the artifact templates carry: FR adds Source / Verification / Rationale / area-grouping; US adds Persona / Business Value Score / Size / Depends on / INVEST self-check; UC adds Goal in Context / Scope / Stakeholders & Interests / Source / Success vs Minimal Guarantees split; AC adds Source / Verification / Linked NFR; NFR adds ISO 25010 characteristic / Acceptance threshold / Source / Rationale; Entity adds Source / Owner / Sensitivity / state machine for stateful entities; Endpoint adds Source / Idempotency / Required scope / SLO / Verification; Wireframe adds Source / Linked AC / Linked NFR / canonical 8-state list (was 4 states). New artifact types added: Risk DoR (Probability / Impact / Velocity / Treatment Strategy / Owner / Review cadence per ISO 31000) and Implementation Task DoR (references / dependsOn validity / definitionOfDone with linked AC).
+  - **ID Conventions table (§2) extended** with the post-v3.5.0 entity types that were previously missing: Risks (`RISK-NN`), Sprints (`SP-NN`), Implementation Tasks (`T-NN-NNN`), Analyse findings (`A-NN`), and Brief Goals (`G-N`).
+  - **NFR Baseline (§5) aligned with ISO/IEC 25010**. The three previously-listed categories (Security, Availability, Compliance) were ad-hoc; the new section explicitly maps every category to the parent ISO 25010 characteristic and lists the other 5 characteristics as candidate additions with one-line guidance on when each becomes mandatory.
+  - **New §8 Testing Strategy section** with five canonical strategies (TDD / Tests-after / Integration-only / Manual-only / None) and explicit guidance on which one drives `/implement-plan` to embed "Tests to write first" blocks per task. **This is the principled resolution of the batch 6 item 2 question** — TDD support lives in `/principles`, not as a separate `/tdd-tests` skill, exactly as the rationale recorded in `todo.md` "Removed from the backlog" predicted.
+  - **New §9 Code Review and Branching** section (trunk-based / GitHub flow / GitFlow / required reviewers / merge gate) and **new §10 Stakeholder Decision Authority** table (per-section decision authority by name and role).
+  - **3 new required topics** in the SKILL.md interview: testing strategy, stakeholder decision authority, code review and branching policy.
+  - **Document-control metadata added**: `Status` field plus `Approvals` table at the bottom.
+- **`skills/discovery/SKILL.md` and `skills/references/templates/discovery-template.md`** — retrospective traceability and decision provenance:
+  - **New §9 Hypotheses → Brief Goals mapping table** filled in by `/brief` when it consumes the discovery artifact. Forward traceability from each discovery hypothesis to the Brief goal it became, with a Status column (Validated / Refined / Disproved / Pending) so a 3-month-post-launch retrospective can answer "did the chosen audience hypothesis hold?" and "did the predicted MVP features actually drive adoption?".
+  - **`Decision date`** and **`Decision owner`** fields in the header so the recommendation moment is timestamped and attributable.
+  - **`Status` field** (Concept (pre-brief) / In Review / Locked) and **`Approvals` table** for the cases where the discovery artifact is signed off as a decision document.
+- **`skills/research/SKILL.md` and `skills/references/templates/research-template.md`** — full Michael Nygard ADR format + downstream-consumer awareness:
+  - **Standard alignment with the Michael Nygard ADR format** (the de facto industry standard since 2011), extended with explicit **Drivers** field (what forced the decision — FRs, NFRs, regulatory, cost, time-to-market) and **Alternatives Considered** table with a `Disqualifying factor` column. Every ADR carries Status, Proposal date, Decision date, Decision owner, Drivers, Context, Alternatives Considered, Decision, Consequences (positive / negative / neutral) — the field set a senior architect would expect on a serious project.
+  - **`/implement-plan` integration** documented explicitly. The output of `/research` is the primary tech-stack source for `/implement-plan` (added in v3.4.0); the new Tech Stack Summary table at the bottom of the research artifact is read directly by `/implement-plan` to populate its header without re-asking the calibration interview.
+  - **Required-topics list extended from 6 to 14**. Added the explicit tech-stack slots `/implement-plan` consumes: Frontend stack, Backend stack, Database, Hosting / deployment target, Auth / identity, Observability platform. Added Build vs Buy and Open-source vs Proprietary tolerance as common BA inquiries.
+  - **New NFR → ADR traceability matrix** at the bottom flags Must NFRs without an architectural decision.
+  - **Document-control metadata added** (Version / Status).
+- **`skills/handoff/SKILL.md` and `skills/references/templates/handoff-template.md`** — full pipeline coverage and formal sign-off:
+  - **Artifact Inventory expanded from 11 rows to 21 rows** (15 pipeline-stage rows + 6 cross-cutting tool rows). Was missing /discovery (stage 0), /principles (stage 0a), /implement-plan (stage 12), and every cross-cutting artifact (sprint, risk, glossary, trace, analyze, estimate). The inventory is now the canonical "what's in the package" reference for the dev team.
+  - **Traceability Coverage expanded from 7 chains to 11 chains** to reflect the new traceability matrices added in pilot / Group A / Group B: Brief Goal → FR (added in v3.5.0 SRS template), US → AC broken down by Positive / Negative / Boundary type, FR → NFR, FR → Entity, FR → Endpoint, US → WF, US → Scenario, FR → Implementation Task, NFR → ADR.
+  - **New §7 Architecture Decision Summary** lists the top ADRs from `/research` with their drivers and which `/implement-plan` phase they affect — so the dev team learns the architectural decisions without having to read `/research` separately.
+  - **New §9 Sign-off section** with a formal acceptance table (Business Analyst / Product Manager / Tech Lead / QA Lead / Stakeholder). Senior BA expectation: handoff is the formal acceptance step and needs an explicit sign-off flow with named approvers.
+  - **Document-control metadata added** (Version / Status). Same P1 pattern as the rest of the audit.
+- **`skills/implement-plan/SKILL.md`** — extended Tech Stack with operational slots, risk-aware sequencing within phases:
+  - **Tech Stack table extended from 6 rows to 9 rows.** Added: **Observability** (logging / metrics / tracing platform — Datadog / New Relic / Grafana / OTel), **CI / CD** (GitHub Actions / GitLab CI / CircleCI / Jenkins), **Secret management** (env vars / Vault / Secrets Manager / Doppler / 1Password CLI). Without these slots, the AI coding agent has to invent operational choices on the fly.
+  - **Calibration interview extended from 6 to 9 questions** to cover the same three new slots when `/research` is missing.
+  - **Risk-aware sequencing within phases** is now explicit. Tasks whose `references` link to FRs / US / NFRs tied to a 🔴 Critical or 🟡 High risk in `00_risks_*.md` are pulled to the front of their phase, ahead of equally-prioritised tasks. Tagged with `**Risk:** RISK-NN ↑` next to the task title. Rationale: validate risky bets early when there's still time to pivot. Was generic "ordered by dependencies, then priority"; now also "then by risk elevation".
+- **`skills/export/SKILL.md`** — interview-protocol compliance, v3.5.0+ stories template field support, full traceability in exports:
+  - **Format interview now follows the standard interview protocol** — table-based options, Recommended marker, inline-context support per protocol rule 9. Was a flat numbered question list that bypassed the protocol.
+  - **Exported issues now carry every v3.5.0+ stories template field**: Persona (named persona with role + context, not bare job title), Business Value Score, Size, Depends on (rendered as "Blocked by" link in trackers that support it), INVEST self-check. Trackers with custom field support (Jira, Linear) get them as separate fields; CSV gets extra columns; GitHub Issues embeds them in the issue body since GitHub has no custom-field surface.
+  - **Full cross-artifact traceability in exported issues**: Linked FR, Linked UC, Linked AC (per scenario, with their `AC-NNN-NN` IDs), Linked NFR (for performance- and security-relevant stories), Linked Wireframe. Was previously only `FR Reference`. Modern issue trackers can re-establish the traceability graph without re-reading the source artifacts.
+  - **CSV format expanded from 10 columns to 17** to carry the new fields (Persona, Value, Size, FR, UC, AC list, NFR, WF, Depends on, AC Summary).
+
+### Cross-pattern impact
+
+After the pilot, Group A, Group B, and Group C audits, **22 of 24 shipped skills** carry the senior-BA improvements: standards conformance to canonical frameworks (BABOK v3, IEEE 830, ISO 25010, ISO 31000, ISO 1087-1, OpenAPI 3.x, Cockburn use cases, INVEST, MoSCoW, PMBOK 7, Cone of Uncertainty, Michael Nygard ADRs), explicit "why" / provenance / ownership fields on every artifact element, cross-artifact bidirectional traceability with severity-aware coverage gaps, document control with versions and approvers, single-source-of-truth templates with no inline drift, and BA-grade required-topics coverage. The two skills not yet audited are `/publish` (a thin CLI wrapper with nothing structural to audit) and `/clarify` (already audited in Group B). The skill audit rollout is functionally complete.
+
+---
+
 ## [3.7.0] — 2026-04-10
 
 ### Highlights
 
@@ -650,7 +706,9 @@ CI scripts that relied on the old behaviour (`init` creates files only, `install
 
 ---
 
-[Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.
+[Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.8.1...HEAD
+[3.8.1]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.8.0...v3.8.1
+[3.8.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.7.0...v3.8.0
 [3.7.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.6.0...v3.7.0
 [3.6.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.5.0...v3.6.0
 [3.5.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.1...v3.5.0
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@kudusov.takhir/ba-toolkit",
-  "version": "3.
+  "version": "3.8.1",
   "description": "AI-powered Business Analyst pipeline — 24 skills from concept discovery to a sequenced implementation plan an AI coding agent can execute, with one-command Notion + Confluence publish. Works with Claude Code, Codex CLI, Gemini CLI, Cursor, and Windsurf.",
   "keywords": [
     "business-analyst",

package/skills/discovery/SKILL.md
CHANGED

@@ -127,7 +127,9 @@ Things we don't yet know and need to learn before committing real resources.
 
 `ba-toolkit init` already created `AGENTS.md` next to where the artifact lives — typically in the current working directory (the user is expected to `cd output/<slug>` after running init). After saving `00_discovery_{slug}.md`, find the project's `AGENTS.md` (look in cwd first; fall back to walking up the directory tree if cwd has none, for legacy v3.0 single-project layouts).
 
-**Update only the `## Pipeline Status` row for `/discovery`** — toggle its status from `⬜ Not started` to `✅ Done` and fill in the artifact filename in the `File` column. **Do not
+**Update only the `## Pipeline Status` row for `/discovery`** — toggle its status from `⬜ Not started` to `✅ Done` and fill in the artifact filename in the `File` column. **Do not recreate the file at the repo root.** **Do not add `## Artifacts` / `## Key context` sections** — those are not part of the v3.1+ template and would be ignored by future runs.
+
+**Domain field exception (managed block).** `/discovery` is the canonical source of truth for the project domain after `ba-toolkit init`. After saving `00_discovery_{slug}.md`, compare the recommended domain in §8 of the discovery artifact against the `**Domain:**` line inside the managed block of `AGENTS.md`. **If they differ, surgically update only that single line** to the new value — do not modify any other managed-block field (`**Project:**`, `**Slug:**`, `**Language:**`, `**Output folder:**`, the auto-generated date comment). Mention the change in the user-facing reply: "Updated the project domain in AGENTS.md from `<old>` to `<new>` based on the discovery recommendation." This is the only managed-block field `/discovery` may touch; everything else inside `<!-- ba-toolkit:begin managed -->` … `<!-- ba-toolkit:end managed -->` remains owned by `ba-toolkit init`.
 
 If you find no `AGENTS.md` at all (neither in cwd nor up the tree), warn the user that the project was likely set up before v3.2 and tell them to run `ba-toolkit init --name "..." --slug {slug}` to scaffold the per-project `AGENTS.md`. Do not create one yourself with arbitrary structure.
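The surgical `**Domain:**` update described in the hunk above is narrow enough to sketch. This is a minimal illustration, not part of the package: the function name `sync_domain` and the plain-string input are assumptions; only the `<!-- ba-toolkit:begin/end managed -->` markers and the `**Domain:**` line come from the skill text.

```python
import re

# Marker comments quoted in the skill text; everything between them
# is the init-owned managed block.
BEGIN = "<!-- ba-toolkit:begin managed -->"
END = "<!-- ba-toolkit:end managed -->"

def sync_domain(agents_md: str, new_domain: str) -> str:
    """Replace only the `**Domain:**` line inside the managed block.

    Every other managed-block field stays byte-for-byte identical,
    per the "surgically update only that single line" rule.
    """
    start = agents_md.index(BEGIN)          # raises ValueError if no managed block
    end = agents_md.index(END, start)
    block = agents_md[start:end]
    new_block = re.sub(
        r"(?m)^\*\*Domain:\*\* .*$",        # the single line /discovery may touch
        f"**Domain:** {new_domain}",
        block,
        count=1,
    )
    return agents_md[:start] + new_block + agents_md[end:]
```

A `**Domain:**` line outside the managed block is untouched, since the substitution runs only on the slice between the markers.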
package/skills/export/SKILL.md
CHANGED
@@ -36,14 +36,20 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
 
 ## Format interview
 
-
+> **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended**, render variants in the user's language (rule 11), and wait for an answer.
+>
+> **Inline context (protocol rule 9):** if the user wrote text after `/export` (e.g., `/export jira PROJ`, `/export github owner/repo`), parse it as a format + target hint and skip the matching questions.
 
-
-
-
-
+If the format is not specified, ask the following (skip any question already answered by inline context):
+
+1. **Target tool:** Jira / GitHub Issues / Linear / CSV / Other? **Recommended:** Jira (most enterprise teams).
+2. **Scope:** All stories / a specific epic / specific story IDs?
+3. **Include AC?** Embed acceptance criteria in the issue body? **Recommended:** yes — without AC, the imported issue is just a story title.
+4. **Jira-specific:** Project key (e.g., `PROJ`)? Epic link field name (default: `customfield_10014`)? Story Points field name (default: `customfield_10016`)?
 5. **GitHub-specific:** Repository (e.g., `owner/repo`)? Label prefix for epics (e.g., `epic:`)? Milestone name (optional)?
 
+The exported issue body now includes **all v3.5.0+ stories template fields**: Persona (named, with context), Business Value Score, Size, Linked FR, Depends on (rendered as a "Blocked by" link in trackers that support it), Definition of Ready, INVEST self-check. Trackers that support custom fields (Jira, Linear) get them as separate fields; CSV gets extra columns; GitHub Issues embeds them in the issue body.
+
 ## Export formats
 
 ---
 
@@ -111,14 +117,14 @@ Or via CLI: jira import --file export_{slug}_jira.json
 
 Output file: `export_{slug}_github.json`
 
-Array of issue objects, one per story. Compatible with `gh` CLI batch import.
+Array of issue objects, one per story. Compatible with `gh` CLI batch import. The body embeds all v3.5.0+ stories template fields since GitHub Issues has no custom field support.
 
 ```json
 [
   {
     "title": "US-001: {Story title}",
-    "body": "## User Story\n\nAs
-    "labels": ["{epic-label}", "user-story", "{priority-label}"],
+    "body": "## User Story\n\nAs **{persona — named persona with role and context}**, I want to **{action}**, so that **{benefit}**.\n\n---\n\n## Acceptance Criteria\n\n**Scenario 1 — {name}** *(AC-001-01)*\n- Given {precondition}\n- When {action}\n- Then {result}\n\n---\n\n## Traceability\n\n- **Linked FR:** FR-{NNN}\n- **Linked Use Case:** UC-{NNN}\n- **Linked Acceptance Criteria:** AC-001-01, AC-001-02, AC-001-03\n- **Linked NFR:** NFR-{NNN} (if applicable)\n- **Linked Wireframe:** WF-{NNN} (if applicable)\n\n---\n\n## Metadata\n\n- **Priority:** {Must | Should | Could | Won't}\n- **Business Value Score:** {1–5}\n- **Size:** {XS | S | M | L | XL}\n- **Estimate:** {N SP | —}\n- **Depends on:** US-{NNN}, US-{NNN}\n- **INVEST self-check:** Independent ✓ · Negotiable ✓ · Valuable ✓ · Estimable ✓ · Small ✓ · Testable ✓",
+    "labels": ["{epic-label}", "user-story", "{priority-label}", "value:{1-5}", "size:{xs-xl}"],
     "milestone": "{milestone-name-or-null}"
   }
 ]
 
@@ -181,11 +187,11 @@ Linear does not have a native bulk JSON import — use the Linear SDK or Zapier.
 
 Output file: `export_{slug}_stories.csv`
 
 ```
-ID,Title,Epic,
-US-001,"{Story title}",E-01,"{role}","{action}","{benefit}",Must,3 SP,FR-001,"{first AC scenario summary}"
+ID,Title,Epic,Persona,Action,Benefit,Priority,Value,Size,Estimate,FR,UC,AC,NFR,WF,Depends on,AC Summary
+US-001,"{Story title}",E-01,"{persona name + role + context}","{action}","{benefit}",Must,5,M,3 SP,FR-001,UC-001,"AC-001-01;AC-001-02;AC-001-03",NFR-002,WF-005,—,"{first AC scenario summary}"
 ```
 
-Compatible with Jira CSV import, Trello, Asana, Monday.com, and Google Sheets.
+Compatible with Jira CSV import, Trello, Asana, Monday.com, and Google Sheets. Includes all v3.5.0+ stories template fields plus full cross-artifact traceability columns (FR / UC / AC / NFR / WF / Depends on) so a downstream tool can re-establish links without re-reading the source artifacts.
 
 ---
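The 17-column CSV layout in the hunk above is mechanical enough to sketch. A minimal illustration, assuming a per-story dict whose keys (`fr`, `ac`, `depends_on`, …) are hypothetical; only the column order and the semicolon-joined AC list come from the documented format.

```python
import csv
import io

# Column order from the skill's 17-column CSV format.
COLUMNS = ["ID", "Title", "Epic", "Persona", "Action", "Benefit", "Priority",
           "Value", "Size", "Estimate", "FR", "UC", "AC", "NFR", "WF",
           "Depends on", "AC Summary"]

def stories_to_csv(stories: list[dict]) -> str:
    """Render stories as the 17-column CSV; the csv module quotes any
    field containing commas, matching the sample row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(COLUMNS)
    for s in stories:
        writer.writerow([
            s["id"], s["title"], s["epic"], s["persona"], s["action"],
            s["benefit"], s["priority"], s["value"], s["size"], s["estimate"],
            s["fr"], s["uc"],
            ";".join(s["ac"]),               # "AC-001-01;AC-001-02;..." as in the sample
            s.get("nfr", "—"),               # "—" placeholder as in the sample row
            s.get("wf", "—"),
            s.get("depends_on", "—"),
            s["ac_summary"],
        ])
    return buf.getvalue()
```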
package/skills/handoff/SKILL.md
CHANGED
@@ -22,13 +22,15 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
 
 ## Generation
 
-No interview. All content is derived from the existing artifacts.
+No interview. All content is derived from the existing artifacts. The full template lives at `references/templates/handoff-template.md` and is the single source of truth — including the full inventory of pipeline-stage and cross-cutting artifacts (`/discovery`, `/principles`, `/implement-plan`, `/sprint`, `/risk`, `/glossary`, `/trace`, `/analyze`, `/estimate`), the Brief Goal → FR / FR → NFR / FR → API forward-traceability tables, the ADR summary, and the formal Sign-off section.
 
 **File:** `11_handoff_{slug}.md`
 
 ```markdown
 # Development Handoff: {Project Name}
 
+**Version:** 1.0
+**Status:** Draft | In Review | Approved
 **Date:** {date}
 **Domain:** {domain}
 **Pipeline completion:** {n}/{total} steps completed
 
@@ -37,19 +39,36 @@ No interview. All content is derived from the existing artifacts.
 
 ## 1. Artifact Inventory
 
-
-
-
-
-
-
-
-
-
-
-
-
-
+### Pipeline-stage artifacts
+
+| Stage | Artifact | File | Status | Key numbers |
+|-------|----------|------|--------|-------------|
+| 0 | Discovery | `00_discovery_{slug}.md` | ✓ / ✗ Missing / — Not run | {recommended domain, MVP feature count} |
+| 0a | Principles | `00_principles_{slug}.md` | ✓ / ✗ Missing / — Not run | {testing strategy, ID conventions, NFR baseline characteristics} |
+| 1 | Project Brief | `01_brief_{slug}.md` | ✓ Complete | {n} goals, {n} stakeholders, {n} risks, {n} assumptions |
+| 2 | SRS | `02_srs_{slug}.md` | ✓ Complete | {n} FR ({must}/{should}/{could}/{wont}) |
+| 3 | User Stories | `03_stories_{slug}.md` | ✓ Complete | {n} stories across {n} epics |
+| 4 | Use Cases | `04_usecases_{slug}.md` | ✓ / ✗ Missing | {n} UC |
+| 5 | Acceptance Criteria | `05_ac_{slug}.md` | ✓ / ✗ Missing | {n} AC ({pos}/{neg}/{boundary}/{perf}) |
+| 6 | NFR | `06_nfr_{slug}.md` | ✓ / ✗ Missing | {n} NFR across {n} ISO 25010 characteristics |
+| 7 | Data Dictionary | `07_datadict_{slug}.md` | ✓ / ✗ Missing | {n} entities, {n} attributes |
+| 7a | Research | `07a_research_{slug}.md` | ✓ / ✗ Missing / — Not run | {n} ADRs, {n} integrations |
+| 8 | API Contract | `08_apicontract_{slug}.md` | ✓ / ✗ Missing | {n} endpoints |
+| 9 | Wireframes | `09_wireframes_{slug}.md` | ✓ / ✗ Missing | {n} screens |
+| 10 | Scenarios | `10_scenarios_{slug}.md` | ✓ / ✗ Missing / — Not run | {n} scenarios |
+| 11 | Handoff | `11_handoff_{slug}.md` | This document | — |
+| 12 | Implementation Plan | `12_implplan_{slug}.md` | ✓ / ✗ Missing / — Not run | {n} phases, {n} tasks |
+
+### Cross-cutting artifacts
+
+| Tool | File | Status | Key numbers |
+|------|------|--------|-------------|
+| Trace | `00_trace_{slug}.md` | ✓ / — Not run | Overall coverage {n}% |
+| Analyze | `00_analyze_{slug}.md` | ✓ / — Not run | {n} CRITICAL, {n} HIGH findings |
+| Risk | `00_risks_{slug}.md` | ✓ / — Not run | {n} risks ({n} Critical / {n} High) |
+| Sprint | `00_sprint_{slug}.md` | ✓ / — Not run | {n} sprints, {n} weeks |
+| Glossary | `00_glossary_{slug}.md` | ✓ / — Not run | {n} terms, {n} drift findings |
+| Estimate | inline in stories or `00_estimate_{slug}.md` | ✓ / — Not run | {n} SP total ± {confidence band} |
 
 ---
 
@@ -68,14 +87,18 @@ Must-priority items confirmed for the first release:
 ## 3. Traceability Coverage
 
 | Chain | Coverage |
-
+|-------|----------|
+| Brief Goal → FR | {n}% ({uncovered} goals uncovered) |
 | FR → US | {n}% ({uncovered} uncovered) |
 | US → UC | {n}% |
-| US → AC | {n}% |
+| US → AC (Positive / Negative / Boundary) | {n}% / {n}% / {n}% |
 | FR → NFR | {n}% |
-
-
-
+| FR → Entity | {n}% |
+| FR → API Endpoint | {n}% |
+| US → WF | {n}% |
+| US → Scenario | {n}% |
+| FR → Implementation Task (if `12_implplan` exists) | {n}% |
+| NFR → ADR | {n}% |
 
 {If coverage is below 100% for any CRITICAL chain, list uncovered items explicitly.}
 
@@ -114,13 +137,34 @@ Must-priority items confirmed for the first release:
 
 ---
 
-## 7.
+## 7. Architecture Decision Summary
+
+Top architectural decisions from `07a_research_{slug}.md`. Dev team should read each linked ADR before starting the corresponding phase.
+
+| ADR | Decision | Drivers | Phase impact |
+|-----|----------|---------|--------------|
+| ADR-001 | {chosen tech for layer X} | NFR-{NNN}, FR-{NNN} | Phase {N} of `/implement-plan` |
+| ADR-002 | {decision} | {drivers} | {phase impact} |
+
+## 8. Artifact Files Reference
 
 All files are located in: `{output_directory}`
 
 ```
 {file tree of all generated artifacts}
 ```
+
+## 9. Sign-off
+
+Formal acceptance of the handoff package. Signing this section means the development team agrees the BA package is sufficient to begin implementation.
+
+| Role | Name | Sign-off Date | Notes |
+|------|------|---------------|-------|
+| Business Analyst | {name} | {date} | {notes} |
+| Product Manager | {name} | {date} | {notes} |
+| Tech Lead | {name} | {date} | {notes} |
+| QA Lead | {name} | {date} | {notes} |
+| Stakeholder | {name} | {date} | {notes} |
 ```
 
 ## Iterative refinement
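A coverage row such as `FR → US {n}%` in the table above reduces to set arithmetic over artifact IDs. A minimal sketch, not part of the toolkit: the function name and data shapes are assumptions; only the percentage-plus-uncovered-list output mirrors the template.

```python
def chain_coverage(source_ids: set[str],
                   links: dict[str, set[str]]) -> tuple[float, set[str]]:
    """Coverage of one traceability chain, e.g. FR -> US.

    source_ids: all IDs on the left of the chain (every FR).
    links: target ID -> set of source IDs it references (US -> {FR, ...}).
    Returns the percentage plus the uncovered source IDs, which the
    handoff lists explicitly when a CRITICAL chain is below 100%.
    """
    covered = {sid for refs in links.values() for sid in refs} & source_ids
    uncovered = source_ids - covered
    pct = 100.0 * len(covered) / len(source_ids) if source_ids else 100.0
    return pct, uncovered
```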
@@ -69,7 +69,10 @@ Required topics for the calibration interview (skip any topic already answered b
|
|
|
69
69
|
3. Database (engine, version, hosting model).
|
|
70
70
|
4. Hosting / deployment target.
|
|
71
71
|
5. Auth / identity approach (in-house vs. SSO vs. managed service).
|
|
72
|
-
6.
|
|
72
|
+
6. **Observability platform** — logging, metrics, traces stack (Datadog / New Relic / Grafana / OpenTelemetry self-hosted). Drives Phase 8 (Quality & NFRs) tasks.
|
|
73
|
+
7. **CI/CD platform** — GitHub Actions / GitLab CI / CircleCI / Jenkins / other. Drives Phase 1 (Foundation) tasks.
|
|
74
|
+
8. **Secret management** — environment variables / Vault / AWS Secrets Manager / Doppler / 1Password CLI. Drives Phase 1 (Foundation) tasks.
|
|
75
|
+
9. Mandatory integrations from `02_srs` (carry over verbatim — do not re-decide them).
|
|
73
76
|
|
|
74
77
|
**Step 3 — TBD slots.** If neither `/research` nor the interview yields a value for a slot (e.g. user picked "Other" without a concrete answer), record `[TBD: <slot>]` in the output's "Tech stack" header AND add a row to the "Open Assumptions" section so the AI coding agent knows it must ask before starting any task that touches that slot.

@@ -107,7 +110,7 @@ Each task is one atomic, AI-actionable unit of work. Rules:
 - **files:** list of file paths the AI agent should create or modify (best-effort; framework-dependent). Optional. Examples: `src/db/schema.sql`, `apps/api/src/auth/login.controller.ts`. **If unknown, omit rather than guess.**
 - **definitionOfDone:** bullet list of acceptance hooks. Pull from the linked AC where possible ("AC-001-03 passes", "endpoint returns 401 on invalid credentials"). Always include a type-check / lint hook on backend tasks and a render-state hook on UI tasks.

-Within a phase, order tasks so each task's `dependsOn` list points only at tasks already listed. Risk-elevated tasks
+Within a phase, order tasks so each task's `dependsOn` list points only at tasks already listed. **Risk-elevated tasks come earliest within their phase**: any task whose `references` include an FR / US / NFR linked to a 🔴 Critical or 🟡 High risk in `00_risks_*.md` is pulled to the front of its phase, ahead of equally-prioritised tasks. Rationale: validate risky bets early, when there's still time to pivot. Tag risk-elevated tasks with `**Risk:** RISK-NN ↑` next to their title so the AI agent and the human reviewer both see the elevation reason.

 ### 7. Generation

@@ -132,6 +135,9 @@ Within a phase, order tasks so each task's `dependsOn` list points only at tasks
 | Database | {…} | … |
 | Hosting | {…} | … |
 | Auth | {…} | … |
+| Observability (logs / metrics / traces) | {…} | … |
+| CI / CD | {…} | … |
+| Secret management | {…} | … |
 | Mandatory integrations | {…} | … |

 ---
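The ordering rule above — `dependsOn` must point only at tasks already listed, with risk-elevated tasks pulled to the front of their phase — can be sketched as a small Node-style helper. This is an illustrative sketch, not the package's actual implementation; the task shape (`id`, `dependsOn`, `riskElevated`) is an assumption for the example:

```javascript
// Order tasks inside one phase so that:
//  1. every task appears after all tasks in its `dependsOn` list, and
//  2. among tasks whose dependencies are satisfied, risk-elevated ones go first.
// Hypothetical task shape: { id, dependsOn: [ids], riskElevated: boolean }
function orderPhaseTasks(tasks) {
  const pending = new Map(tasks.map(t => [t.id, t]));
  const done = new Set();
  const ordered = [];
  while (pending.size > 0) {
    // Tasks whose dependencies are all emitted already (or live outside this phase).
    const ready = [...pending.values()].filter(t =>
      t.dependsOn.every(d => done.has(d) || !pending.has(d))
    );
    if (ready.length === 0) throw new Error('dependency cycle in phase');
    // Risk-elevated tasks come earliest among equally-ready tasks.
    ready.sort((a, b) => Number(b.riskElevated) - Number(a.riskElevated));
    const next = ready[0];
    pending.delete(next.id);
    done.add(next.id);
    ordered.push(next.id);
  }
  return ordered;
}
```

Emitting one ready task per iteration keeps the sort local, so a risk-elevated task unblocked mid-phase still jumps ahead of lower-risk peers.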
package/skills/principles/SKILL.md
CHANGED

@@ -38,10 +38,13 @@ If `01_brief_*.md` already exists, extract the slug and domain from it. Otherwis
 **Required topics:**
 1. Artifact language — which language should all artifacts be generated in? (default: the language of the user's first message)
 2. Traceability strictness — should every Must-priority US require a Use Case, or only US with complex flows? (default: only complex flows)
-3. NFR baseline — which
-4. Definition of Ready — any project-specific acceptance criteria for finalizing an artifact? (e.g., stakeholder sign-off, specific review steps)
+3. NFR baseline — which **ISO/IEC 25010** quality characteristics are mandatory beyond the domain defaults? (Performance Efficiency / Reliability / Security / Compatibility / Usability / Maintainability / Portability / Functional Suitability — `/nfr` reads this list verbatim and treats it as a checklist).
+4. Definition of Ready — any project-specific acceptance criteria for finalizing an artifact beyond the `v3.7.0+` baseline DoR per artifact type listed in §4 below? (e.g., stakeholder sign-off, specific review steps).
 5. Quality gate — at what severity level should `/analyze` findings block `/done`? (default: CRITICAL only)
 6. Output folder structure — save all artifacts flat in the output directory (default), or inside a `{slug}/` subfolder? (useful when managing multiple projects side by side)
+7. **Testing strategy** — TDD (tests before implementation), tests-after, integration-only, manual-only, none? Drives whether `/implement-plan` task templates embed "Tests to write first" blocks. **Recommended:** TDD for production-grade systems; tests-after for prototypes; manual-only or none for spike work. *(This is the principles-based approach to the testing-discipline question that batch 6 item 2 surfaced — the right place to declare a testing strategy is here, not as a separate skill.)*
+8. **Stakeholder decision authority** — who has sign-off authority on changes to these principles? Captured by name and role. Without this, principle changes become contentious mid-project.
+9. **Code review and branching policy** — trunk-based / GitFlow / GitHub flow? Required reviewers per PR? `/implement-plan` and downstream skills assume one of these defaults but the principles file is the source of truth.

 ### 4. Generation

package/skills/references/templates/discovery-template.md
CHANGED

@@ -1,8 +1,11 @@
 # Discovery: [PROJECT_NAME]

+**Version:** 0.1
+**Status:** Concept (pre-brief) | In Review | Locked
 **Slug:** [SLUG]
 **Date:** [DATE]
-**
+**Decision date:** [DATE — when the recommended domain was locked in]
+**Decision owner:** [Name + role]

 ## 1. Problem Space

@@ -61,3 +64,23 @@ Things we do not yet know and need to learn before committing real resources.
 - **Domain:** [chosen domain]
 - **Scope hint for `/brief`:** [one-sentence summary the user can paste as inline context: `/brief [scope hint]`]
 - **Suggested first interview focus in `/brief`:** [1–2 topics from this discovery doc that are now firm enough to anchor the brief]
+
+---
+
+## 9. Hypotheses → Brief Goals (filled in after `/brief`)
+
+Forward traceability from the discovery hypotheses to the brief business goals they became. Filled in by `/brief` when it consumes this discovery artifact, then preserved here for retrospective validation: 3 months after launch, did the chosen audience hypothesis hold? Did the MVP features that the differentiation angle predicted would matter actually drive adoption?
+
+| Discovery section | Hypothesis | Brief Goal G-N | Status |
+|-------------------|------------|----------------|--------|
+| §1 Problem space | [problem statement] | G-1 | Validated / Refined / Disproved / Pending |
+| §2 Audience (Primary) | [primary segment] | G-2 | Validated / Refined / Disproved / Pending |
+| §5 MVP feature 1 | [feature] | G-3 | Validated / Refined / Disproved / Pending |
+
+---
+
+## Approvals
+
+| Name | Role | Approval Date | Notes |
+|------|------|---------------|-------|
+| [name] | [role] | [YYYY-MM-DD] | [optional notes] |
package/skills/references/templates/handoff-template.md
CHANGED

@@ -1,5 +1,7 @@
 # Development Handoff Package: [PROJECT_NAME]

+**Version:** 1.0
+**Status:** Draft
 **Domain:** [DOMAIN]
 **Date:** [DATE]
 **Slug:** [SLUG]

@@ -8,31 +10,47 @@

 ---

-## Artifact Inventory
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+## 1. Artifact Inventory
+
+### Pipeline-stage artifacts
+
+| Stage | Artifact | File | Status | Last Updated |
+|-------|----------|------|--------|--------------|
+| 0 | Discovery | `00_discovery_[SLUG].md` | ✅ Complete / — Not run | [DATE] |
+| 0a | Principles | `00_principles_[SLUG].md` | ✅ Complete / — Not run | [DATE] |
+| 1 | Project Brief | `01_brief_[SLUG].md` | ✅ Complete | [DATE] |
+| 2 | SRS | `02_srs_[SLUG].md` | ✅ Complete | [DATE] |
+| 3 | User Stories | `03_stories_[SLUG].md` | ✅ Complete | [DATE] |
+| 4 | Use Cases | `04_usecases_[SLUG].md` | ✅ Complete | [DATE] |
+| 5 | Acceptance Criteria | `05_ac_[SLUG].md` | ✅ Complete | [DATE] |
+| 6 | NFR | `06_nfr_[SLUG].md` | ✅ Complete | [DATE] |
+| 7 | Data Dictionary | `07_datadict_[SLUG].md` | ✅ Complete | [DATE] |
+| 7a | Research / ADRs | `07a_research_[SLUG].md` | ✅ Complete | [DATE] |
+| 8 | API Contract | `08_apicontract_[SLUG].md` | ✅ Complete | [DATE] |
+| 9 | Wireframes | `09_wireframes_[SLUG].md` | ✅ Complete | [DATE] |
+| 10 | Validation Scenarios | `10_scenarios_[SLUG].md` | ✅ Complete | [DATE] |
+| 11 | Handoff | `11_handoff_[SLUG].md` | This document | [DATE] |
+| 12 | Implementation Plan | `12_implplan_[SLUG].md` | ✅ Complete / — Not run | [DATE] |
+
+### Cross-cutting artifacts
+
+| Tool | File | Status | Last Updated |
+|------|------|--------|--------------|
+| Trace | `00_trace_[SLUG].md` | ✅ / — Not run | [DATE] |
+| Analyze | `00_analyze_[SLUG].md` | ✅ / — Not run | [DATE] |
+| Risk Register | `00_risks_[SLUG].md` | ✅ / — Not run | [DATE] |
+| Sprint Plan | `00_sprint_[SLUG].md` | ✅ / — Not run | [DATE] |
+| Glossary | `00_glossary_[SLUG].md` | ✅ / — Not run | [DATE] |
+| Estimate | inline in `03_stories_[SLUG].md` or `00_estimate_[SLUG].md` | ✅ / — Not run | [DATE] |

 ---

-## MVP Scope
+## 2. MVP Scope

 **In scope for MVP:**

 | ID | Title | Type | Priority |
-
+|----|-------|------|----------|
 | FR-[NNN] | [Title] | Functional | Must |
 | US-[NNN] | [Story title] | Story | Must |

@@ -44,34 +62,43 @@

 ---

-## Traceability Coverage
+## 3. Traceability Coverage

-
-
+| Chain | Coverage | Gap |
+|-------|----------|-----|
+| Brief Goal → FR | [N]% | [N goals uncovered] |
 | FR → US | [N]% | [N orphaned FRs] |
-| US →
-
+| US → UC | [N]% | [N stories without UC] |
+| US → AC (Positive / Negative / Boundary) | [N]% / [N]% / [N]% | [N stories without negative AC] |
+| FR → NFR | [N]% | [N FRs without an NFR] |
+| FR → Entity | [N]% | [N FRs without entity] |
+| FR → API Endpoint | [N]% | [N FRs without endpoint] |
 | US → WF | [N]% | [N stories without wireframe] |
+| US → Scenario | [N]% | [N stories without scenario] |
+| FR → Implementation Task | [N]% | [N FRs without task] |
+| NFR → ADR | [N]% | [N NFRs without architectural decision] |

 ---

-## Open Items
+## 4. Open Items

 | # | Type | Description | Owner | Priority |
-
-| 1 |
+|---|------|-------------|-------|----------|
+| 1 | Decision / Clarification / Dependency / Risk | [Description] | [Role] | P1 / P2 / P3 |

 ---

-## Top Risks for Development
+## 5. Top Risks for Development

-| # | Risk | Probability | Impact | Mitigation |
-
-| 1 | [Risk] | High / Med / Low | High / Med / Low | [Mitigation] |
+| # | Risk | Probability | Impact | Velocity | Treatment | Mitigation |
+|---|------|-------------|--------|----------|-----------|------------|
+| 1 | [Risk] | High / Med / Low | High / Med / Low | Days / Weeks / Months | Avoid / Reduce / Transfer / Accept | [Mitigation] |
+
+*(Pulled from `00_risks_[SLUG].md` — top 5 by score.)*

 ---

-## Recommended Development Sequence
+## 6. Recommended Development Sequence

 1. **[Phase / Sprint 1]** — [What to build first and why]
 2. **[Phase / Sprint 2]** — [Next priority]

@@ -79,14 +106,32 @@

 _Rationale: [Why this sequencing — dependencies, risk, user value.]_

+If `12_implplan_[SLUG].md` has been generated, use the phase ladder + Task DAG appendix from there as the canonical sequence; the recommendation above is the human-friendly summary.
+
+---
+
+## 7. Architecture Decision Summary
+
+Top architectural decisions from `07a_research_[SLUG].md`. Dev team should read each linked ADR before starting the corresponding phase.
+
+| ADR | Decision | Drivers | Phase impact |
+|-----|----------|---------|--------------|
+| ADR-001 | [chosen tech for layer X] | NFR-[NNN], FR-[NNN] | Phase [N] of `/implement-plan` |
+| ADR-002 | [decision] | [drivers] | [phase impact] |
+
 ---

-## Artifact Files Reference
+## 8. Artifact Files Reference

 ```
 output/[SLUG]/
+├── 00_discovery_[SLUG].md
 ├── 00_principles_[SLUG].md
 ├── 00_analyze_[SLUG].md
+├── 00_glossary_[SLUG].md
+├── 00_risks_[SLUG].md
+├── 00_sprint_[SLUG].md
+├── 00_trace_[SLUG].md
 ├── 01_brief_[SLUG].md
 ├── 02_srs_[SLUG].md
 ├── 03_stories_[SLUG].md

@@ -98,5 +143,20 @@ output/[SLUG]/
 ├── 08_apicontract_[SLUG].md
 ├── 09_wireframes_[SLUG].md
 ├── 10_scenarios_[SLUG].md
-
+├── 11_handoff_[SLUG].md ← this document
+└── 12_implplan_[SLUG].md
 ```
+
+---
+
+## 9. Sign-off
+
+Formal acceptance of the handoff package. Signing this section means the development team agrees the BA package is sufficient to begin implementation. Outstanding open items (§4) and uncovered traceability chains (§3) must be acknowledged or resolved before sign-off.
+
+| Role | Name | Sign-off Date | Notes |
+|------|------|---------------|-------|
+| Business Analyst | [name] | [YYYY-MM-DD] | [notes] |
+| Product Manager | [name] | [YYYY-MM-DD] | [notes] |
+| Tech Lead | [name] | [YYYY-MM-DD] | [notes] |
+| QA Lead | [name] | [YYYY-MM-DD] | [notes] |
+| Stakeholder / Sponsor | [name] | [YYYY-MM-DD] | [notes] |
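The handoff template's Traceability Coverage table reports a percentage and a gap count per chain (FR → US, US → WF, and so on). A minimal sketch of how one such figure could be computed — the function name, id list, and link shape are illustrative assumptions, not the toolkit's real code:

```javascript
// Coverage for one traceability chain, e.g. FR → US:
// the share of source ids with at least one outgoing link, plus the orphans.
// Hypothetical link shape: { from: 'FR-001', to: 'US-001' }
function chainCoverage(sourceIds, links) {
  const covered = new Set(links.map(l => l.from));
  const orphans = sourceIds.filter(id => !covered.has(id));
  const pct = sourceIds.length === 0
    ? 100
    : Math.round(((sourceIds.length - orphans.length) / sourceIds.length) * 100);
  return { pct, orphans };
}
```

The `orphans` list is what fills the "Gap" column ("[N orphaned FRs]"), while `pct` fills "Coverage".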
package/skills/references/templates/principles-template.md
CHANGED

@@ -1,6 +1,7 @@
 # Project Principles: [PROJECT_NAME]

 **Version:** 1.0
+**Status:** Draft | In Review | Approved
 **Date:** [DATE]
 **Domain:** [DOMAIN]
 **Slug:** [SLUG]

@@ -23,6 +24,11 @@ All artifacts are generated in: [LANGUAGE]
 | API Endpoints | REST path | POST /users |
 | Wireframes | WF-NNN | WF-001 |
 | Validation Scenarios | SC-NNN | SC-001 |
+| Risks | RISK-NN | RISK-01 |
+| Sprints | SP-NN | SP-01 |
+| Implementation Tasks | T-NN-NNN | T-04-007 |
+| Analyse findings | A-NN | A-01 |
+| Brief Goals | G-N | G-1 |

 ## 3. Traceability Requirements

@@ -46,56 +52,109 @@ Optional links — violations flagged as **MEDIUM**:

 ## 4. Definition of Ready

-An artifact is ready to `/done` when all of the following are true
+An artifact is ready to `/done` when all of the following are true. The baseline below mirrors the v3.7.0+ artifact-template field set; project-specific additions go in §8.

 ### Functional Requirement (FR)
 - [ ] Description present and unambiguous.
 - [ ] Actor identified (not "the system" or "the user" without role).
 - [ ] Priority assigned (MoSCoW).
 - [ ] Input/Output specified.
+- [ ] **Source** field present (which stakeholder, brief goal G-N, regulatory requirement, or parent FR drove this).
+- [ ] **Verification method** specified (Test / Demo / Inspection / Analysis per IEEE 830 §7).
+- [ ] **Rationale** documented (why this requirement exists, not just what).
+- [ ] FR is grouped under a feature area (`### 3.N` in `02_srs_*.md`).

 ### User Story (US)
-- [ ]
-- [ ]
+- [ ] Persona named (named persona with role and one-line context, not bare job title).
+- [ ] Action and Value filled.
+- [ ] Priority assigned (MoSCoW).
+- [ ] **Business Value Score** assigned (1–5 or H/M/L).
+- [ ] **Size** estimate present (XS / S / M / L / XL or Story Points).
 - [ ] Linked FR reference present.
+- [ ] **Depends on** field set (other story IDs or `—`).
+- [ ] **Definition of Ready** checklist or reference to this principles section.
+- [ ] **INVEST self-check** confirms Independent · Negotiable · Valuable · Estimable · Small · Testable.

 ### Use Case (UC)
-- [ ]
-- [ ]
+- [ ] **Goal in Context** present (which Brief goal G-N this UC serves).
+- [ ] **Scope** specified (System / Subsystem / Component) and **Level** specified (User-goal / Summary / Subfunction).
+- [ ] Primary Actor and Supporting Actors listed.
+- [ ] **Stakeholders and Interests** table present (Cockburn discipline — at least 2 stakeholders).
+- [ ] Pre-conditions and Trigger present.
+- [ ] Main Success Scenario as a numbered table.
+- [ ] At least one Exception Flow present.
+- [ ] **Success Guarantees** and **Minimal Guarantees** distinguished in post-conditions.
+- [ ] **Source** field present (linked US/FR).

 ### Acceptance Criterion (AC)
-- [ ] Given / When / Then all present and
-- [ ] Type specified (
+- [ ] Given / When / Then all present, specific, and verifiable (no "the system handles correctly").
+- [ ] **Type** specified (Positive / Negative / Boundary / Performance / Security).
+- [ ] **Source** present (which business rule from `02_srs_*.md` drove this AC).
+- [ ] **Verification** method specified (Automated test / Manual test / Observed in production).
 - [ ] Linked US reference present.
+- [ ] Linked NFR present for performance and security ACs.

 ### NFR
-- [ ]
+- [ ] **ISO/IEC 25010 characteristic** specified (one of the 8: Functional Suitability / Performance Efficiency / Compatibility / Usability / Reliability / Security / Maintainability / Portability).
 - [ ] Measurable metric present (numeric target, not adjective).
+- [ ] **Acceptance threshold** present separately from the metric.
 - [ ] Verification method specified.
+- [ ] **Source** present.
+- [ ] **Rationale** present.
+- [ ] Linked FR or US present.

 ### Data Entity
-- [ ]
-- [ ]
+- [ ] **Source** field present (which FR/US introduced this entity).
+- [ ] **Owner** field present (which team curates this data).
+- [ ] **Sensitivity classification** present (Public / Internal / Confidential / PII / PCI / PHI / Financial).
+- [ ] All attributes have **logical types** (not DBMS-specific) and constraints.
+- [ ] FK references point to existing entities, with cascade rule specified.
+- [ ] **State machine** documented for entities with more than two distinct lifecycle states.

 ### API Endpoint
+- [ ] **Source** present (FR-NNN that drove this endpoint).
 - [ ] Request and Response schemas present.
 - [ ] At least one error code documented.
 - [ ] Linked FR/US present.
+- [ ] **Idempotency** marker present (Idempotent / Not idempotent / Idempotent via `Idempotency-Key` header).
+- [ ] **Required scope** specified (or "public" for unauthenticated paths).
+- [ ] **SLO** linked to an NFR.
+- [ ] **Verification** method specified (contract test / consumer-driven contract test / integration test).

 ### Wireframe (WF)
-- [ ]
+- [ ] **Source** present (US-NNN this screen serves).
+- [ ] All **8 canonical states** described that apply: Default / Loading / Empty / Loaded / Partial / Success / Error / Disabled.
 - [ ] Navigation links (from / to) specified.
 - [ ] Linked US present.
+- [ ] **Linked AC** present (scenarios this screen verifies).
+- [ ] **Linked NFR** present for performance- and accessibility-sensitive screens.
+
+### Risk
+- [ ] **Probability**, **Impact**, **Velocity** scored (per ISO 31000 + PMBOK 7).
+- [ ] **Treatment strategy** classified (Avoid / Reduce / Transfer / Accept).
+- [ ] **Owner** assigned.
+- [ ] **Review cadence** set.
+
+### Implementation Task (T-NN-NNN, from `/implement-plan`)
+- [ ] At least one `references` id present (FR / US / AC / Entity / Endpoint / WF / SC).
+- [ ] `dependsOn` list points only at task ids that exist in the same plan.
+- [ ] `definitionOfDone` checklist present, with at least one hook tied to a linked AC.
+- [ ] Phase assignment matches the canonical 9-phase ladder.

-## 5. NFR Baseline
+## 5. NFR Baseline (ISO/IEC 25010)

-The following
+NFR categories follow **ISO/IEC 25010:2011** Software Quality Model. The following ISO 25010 characteristics are required for this project regardless of domain — `/nfr` reads this list verbatim and treats it as a mandatory checklist:

-- **Security
-- **
-- **
+- **Security** — confidentiality (encryption at rest and in transit), authentication strength, audit trail.
+- **Reliability** — availability SLA with a numeric target, RTO / RPO for disaster recovery.
+- **Compatibility** — applicable laws and data retention policy *(historically labelled "Compliance" but maps to ISO 25010 Compatibility + Functional Suitability sub-characteristics)*.

-[ADDITIONAL_NFR_CATEGORIES
+[ADDITIONAL_NFR_CATEGORIES — list other ISO 25010 characteristics that are mandatory for this project, e.g.:
+- **Performance Efficiency** — required if the project has user-facing latency or throughput targets.
+- **Usability** — required if WCAG 2.1 AA accessibility is mandated.
+- **Maintainability** — required if the project must hand off to a different team post-launch.
+- **Portability** — required if multi-cloud or vendor-neutral hosting is a constraint.
+]

 ## 6. Quality Gates

@@ -113,6 +172,50 @@ For `/analyze` findings:
 - `flat` (default) — all artifacts saved directly in the output directory.
 - `subfolder` — all artifacts saved under `{output_dir}/[SLUG]/`.

-## 8.
+## 8. Testing Strategy
+
+**Strategy:** TDD | Tests-after | Integration-only | Manual-only | None
+
+| Strategy | Means | When `/implement-plan` task templates embed "Tests to write first" |
+|----------|-------|--------------------------------------------------------------------|
+| TDD | Tests written before implementation; red → green → refactor | Yes — every task with linked AC gets a "Tests to write first" sub-block |
+| Tests-after | Implementation first, tests immediately after | No — task DoD just lists the AC scenarios that must pass |
+| Integration-only | No unit tests; integration tests at the API or UI layer | No — integration test harness is set up in Phase 1 |
+| Manual-only | Tests are manual QA scripts run before release | No — task DoD references manual scenario IDs from `/scenarios` |
+| None | Prototype / spike — no automated tests at all | No — explicit `// no test` marker on every task |
+
+`/implement-plan` reads this section to decide whether to embed test specifications in each task. Default is **TDD** for production-grade systems.
+
+## 9. Code Review and Branching
+
+**Branching model:** trunk-based | GitHub flow | GitFlow | other
+**Required reviewers per PR:** [N]
+**Merge gate:** [CI green + N reviews / CODEOWNERS approval / specific reviewer]
+
+## 10. Stakeholder Decision Authority
+
+Who can approve a change to these principles, and to which sections.
+
+| Section | Decision authority | Notes |
+|---------|--------------------|-------|
+| §1 Language | [Role] | |
+| §2 ID Conventions | [Role] | |
+| §3 Traceability | [Role] | |
+| §4 Definition of Ready | [Role] | |
+| §5 NFR Baseline | [Role] | |
+| §6 Quality Gates | [Role] | |
+| §7 Output Structure | [Role] | |
+| §8 Testing Strategy | [Role] | |
+| §9 Branching | [Role] | |
+
+## 11. Project-Specific Notes

 [ADDITIONAL_CONVENTIONS]
+
+---
+
+## Approvals
+
+| Name | Role | Approval Date | Notes |
+|------|------|---------------|-------|
+| [name] | [role] | [YYYY-MM-DD] | [optional notes] |
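The Definition-of-Ready checklists in the principles template above are field-presence rules, so a quality gate like `/done` can mechanically verify them. A minimal sketch for the FR checklist — the record shape and field names are assumptions for illustration, not the package's actual data model:

```javascript
// Check an FR record against the §4 Definition-of-Ready baseline:
// description, actor, priority, input/output, plus the v3.7.0+ additions
// (source, verification method, rationale, feature-area grouping).
const FR_DOR_FIELDS = [
  'description', 'actor', 'priority', 'inputOutput',
  'source', 'verificationMethod', 'rationale', 'featureArea',
];

function frReadyForDone(fr) {
  const missing = FR_DOR_FIELDS.filter(
    f => !fr[f] || String(fr[f]).trim() === ''
  );
  return { ready: missing.length === 0, missing };
}
```

Reporting the `missing` list (rather than a bare boolean) is what lets a gate explain exactly which DoR checkbox failed.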
package/skills/references/templates/research-template.md
CHANGED

@@ -1,57 +1,81 @@
 # Technology Research & Architecture Decisions: [PROJECT_NAME]

+**Version:** 0.1
+**Status:** Draft
 **Domain:** [DOMAIN]
 **Date:** [DATE]
 **Slug:** [SLUG]
+**ADR format:** Michael Nygard format extended with Drivers and Alternatives Considered
 **References:** `02_srs_[SLUG].md`, `06_nfr_[SLUG].md`, `07_datadict_[SLUG].md`

+> The output of this research is the **primary tech-stack source** for `/implement-plan` (added in v3.4.0). The Tech Stack table at the bottom is read by `/implement-plan` to populate its header without re-asking the calibration interview.
+
 ---

 ## Architecture Decision Records (ADRs)

 ### ADR-001: [Decision Title]

-
-
+| Field | Value |
+|-------|-------|
+| **Status** | Proposed / Accepted / Deprecated / Superseded by ADR-[NNN] |
+| **Proposal date** | [YYYY-MM-DD] |
+| **Decision date** | [YYYY-MM-DD when the decision was locked in] |
+| **Decision owner** | [Name + role] |
+
+**Drivers:** *(what forced this decision — reference specific FRs, NFRs, regulatory constraints, cost or time pressure)*
+- NFR-[NNN] — [why this NFR forces the decision]
+- FR-[NNN] — [why this FR forces the decision]
+- [Regulatory / cost / time-to-market constraint]

 **Context:**
-[What situation
+[What situation requires us to make this decision now. Background a future maintainer would need to understand the decision.]

-**
+**Alternatives Considered:**

-| Option | Pros | Cons |
-
-| [Option A] | [pros] | [cons] |
-| [Option B] | [pros] | [cons] |
+| Option | Pros | Cons | Disqualifying factor |
+|--------|------|------|----------------------|
+| [Option A] | [pros] | [cons] | — |
+| [Option B] | [pros] | [cons] | [why ruled out] |
+| [Option C] | [pros] | [cons] | [why ruled out] |

-**Decision:** [Option A / B / other], because [rationale].
+**Decision:** [Option A / B / other], because [rationale anchored in the Drivers above].

 **Consequences:**
-- [Positive consequence]
-- [Trade-off or risk]

-**
+- **Positive:** [what becomes easier]
+- **Negative:** [trade-off or risk]
+- **Neutral:** [side effect that is neither good nor bad but worth noting]

 ---

 ### ADR-002: [Decision Title]

-
-
+| Field | Value |
+|-------|-------|
+| **Status** | [status] |
+| **Proposal date** | [date] |
+| **Decision date** | [date] |
+| **Decision owner** | [Name + role] |

-**
+**Drivers:**
+- [driver]

-**
+**Context:** [Context.]

-
-
-
-
+**Alternatives Considered:**
+
+| Option | Pros | Cons | Disqualifying factor |
+|--------|------|------|----------------------|
+| [Option A] | [pros] | [cons] | — |
+| [Option B] | [pros] | [cons] | [why ruled out] |

 **Decision:** [Chosen option and rationale.]

 **Consequences:**
-
+
+- **Positive:** [consequence]
+- **Negative:** [consequence]

 <!-- Repeat ADR block for each major architectural decision. -->

@@ -95,5 +119,33 @@
 ## Open Questions

 | # | Question | Owner | Target Date |
-
+|---|----------|-------|-------------|
 | 1 | [Unresolved technical question] | [Role] | [Date] |
+
+---
+
+## Tech Stack Summary *(consumed by `/implement-plan`)*
+
+This table is the primary tech-stack source for `/implement-plan`. Every row should have a concrete value (or `[TBD: <slot>]` if the decision is genuinely deferred). `/implement-plan` skips its own calibration interview when this table is complete.
+
+| Layer | Choice | Source ADR |
+|-------|--------|------------|
+| Frontend | [framework + language + build tool] | ADR-[NNN] |
+| Backend | [framework + language + runtime] | ADR-[NNN] |
+| Database | [engine + version + hosting] | ADR-[NNN] |
+| Hosting / deployment | [cloud + region + container model] | ADR-[NNN] |
+| Auth / identity | [in-house / SSO / managed service] | ADR-[NNN] |
+| Observability | [logs + metrics + traces platform] | ADR-[NNN] |
+| Mandatory integrations | [list from §Integration Map] | — |
+
+---
+
+## NFR → ADR Traceability
+
+Forward traceability from each Non-functional Requirement in `06_nfr_[SLUG].md` to the ADR(s) that satisfy it. NFRs without a linked ADR are flagged — every Must-priority NFR should drive at least one architectural decision.

+| NFR ID | ISO 25010 Characteristic | Linked ADRs | Coverage Status |
+|--------|--------------------------|-------------|-----------------|
+| NFR-001 | Performance Efficiency | ADR-002, ADR-005 | ✓ |
+| NFR-003 | Reliability | ADR-004 | ✓ |
+| NFR-005 | Security | (uncovered) | ✗ |
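The research template says the Tech Stack Summary table is read by `/implement-plan` to populate its header, with `[TBD: <slot>]` rows deferred to the calibration interview. A minimal sketch of such a markdown-table parse — the function and return shape are assumptions for illustration, not the package's real implementation:

```javascript
// Parse a "Tech Stack Summary" markdown table into { layer: { choice, adr } },
// collecting layers whose choice is still a [TBD: ...] slot.
function parseTechStackTable(markdown) {
  const stack = {};
  const tbd = [];
  for (const line of markdown.split('\n')) {
    const cells = line.split('|').map(c => c.trim()).filter(Boolean);
    // Skip the header row and the |---| separator row.
    if (cells.length < 2 || cells[0] === 'Layer' || /^-+$/.test(cells[0])) continue;
    const [layer, choice, adr = '—'] = cells;
    stack[layer] = { choice, adr };
    if (/^\[TBD:/.test(choice)) tbd.push(layer);
  }
  return { stack, tbd };
}
```

A consumer would run its fallback interview only for the layers listed in `tbd`, keeping the confirmed ADR-backed choices untouched.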
package/skills/research/SKILL.md
CHANGED
@@ -10,6 +10,10 @@ Optional step between `/datadict` and `/apicontract`. Documents technology decis

 Running this step prevents "beautiful but impractical" API contracts by surfacing constraints early.

+**Standard alignment:** ADRs follow the **Michael Nygard format** (the de facto industry standard since 2011) extended with explicit Drivers, Alternatives Considered, and Decision Date fields.
+
+**Downstream consumers.** The output of `/research` is the **primary tech-stack source** for `/implement-plan` (added in v3.4.0). When `/research` is run, `/implement-plan` parses its ADRs and Integration Map to populate the Tech Stack header without asking the calibration interview. Make sure each tech-stack ADR (frontend, backend, database, hosting, auth, observability) has a clear winning decision so `/implement-plan` doesn't fall back to its own interview.
+
 ## Context loading

 0. If `00_principles_*.md` exists in the output directory, load it and apply its conventions.

@@ -31,11 +35,19 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th

 **Required topics:**
 1. Existing infrastructure — is there a current backend, database, or API the new system must integrate with or extend?
-2.
-3.
-4.
-5.
-6.
+2. **Frontend stack** — framework, language, build tool? *(consumed by `/implement-plan` Tech Stack header)*
+3. **Backend stack** — framework, language, runtime? *(consumed by `/implement-plan`)*
+4. **Database** — engine, version, hosting model? *(consumed by `/implement-plan`)*
+5. **Hosting / deployment target** — which cloud, which region, container vs serverless? *(consumed by `/implement-plan`)*
+6. **Auth / identity** — in-house vs SSO vs managed service (Auth0, Clerk, Cognito)? *(consumed by `/implement-plan`)*
+7. **Observability platform** — logging, metrics, traces (Datadog, New Relic, Grafana, OpenTelemetry self-hosted)? *(consumed by `/implement-plan` Phase 8)*
+8. API style preference — REST, GraphQL, gRPC, or a combination? Any existing API gateway or BFF?
+9. Real-time requirements — do any user stories require live updates (WebSocket, SSE, polling)?
+10. Third-party integrations — which external services are confirmed (payment gateway, auth provider, analytics, CDN)?
+11. Data storage constraints — any vendor lock-in restrictions, cloud provider preferences, on-premise requirements?
+12. Compliance constraints — any restrictions on where data can be stored (jurisdiction, residency)?
+13. **Build vs buy** — for each major capability, is custom code worth it or is a vendor / open-source library sufficient?
+14. **Open-source vs proprietary tolerance** — any blanket policy (e.g. "no AGPL", "preferred Apache 2.0 / MIT", "vendor SLAs required")?

 Supplement with domain-specific typical integrations from the reference.

@@ -110,9 +122,10 @@ _(Repeat ADR block for each key decision.)_

 **Rules:**
 - ADR numbering: ADR-001, ADR-002, ...
-- Every ADR must include at least two
-- Decisions must reference FR or NFR that drove the choice.
+- Every ADR must include **Drivers** (FR / NFR / regulatory / cost / time-to-market — what forced the decision), **Alternatives Considered** (at least two), **Decision** (chosen option + rationale), and **Consequences** (positive and negative). This is the Nygard format extended.
+- Decisions must reference FR or NFR that drove the choice in the Drivers field.
 - Open Questions section is mandatory — even if empty (write "None").
+- The artifact carries an **NFR → ADR** traceability matrix at the bottom so a senior reviewer can verify which NFRs drove which architectural decisions.

 ## Iterative refinement
