@kudusov.takhir/ba-toolkit 3.4.1 → 3.6.0

package/CHANGELOG.md CHANGED
@@ -11,6 +11,90 @@ Versions follow [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
  ---
 
+ ## [3.6.0] — 2026-04-10
+
+ ### Highlights
+
+ - **Group A skill audit pass on `/usecases`, `/ac`, `/nfr`, `/datadict`, `/apicontract`, `/wireframes`, `/scenarios`** — 7 pipeline-core skills brought to the same senior-BA rigour as the v3.5.0 pilot. ~32 Critical + High findings applied. The same six systemic patterns that surfaced in the pilot are now fixed across the entire pipeline core: full standards conformance (Cockburn for use cases, ISO 25010 for NFRs, OpenAPI shape for the API contract), explicit "why" fields (Source / Owner / Sensitivity / Verification) on every artifact element, cross-artifact forward-traceability matrices on every shipped artifact, document-control metadata, single-source-of-truth templates with no inline drift, and required-topics lists extended with universal BA inquiries.
+
+ ### Changed
+
+ - **`skills/usecases/SKILL.md` and `skills/references/templates/usecases-template.md`** — full Cockburn alignment:
+ - Inline template removed from SKILL.md; the standalone `usecases-template.md` is now the single source of truth (mirroring the drift fix applied to `/stories` in v3.5.0).
+ - Cockburn-mandated fields added to every UC: **Goal in Context** (which Brief goal G-N this UC serves), **Stakeholders and Interests** table (Cockburn discipline — surfaces non-obvious requirements that the primary actor alone misses), **Scope** (system / subsystem / component, distinct from Level), **Source** (which US/FR drove this UC into existence), **Success Guarantees** vs **Minimal Guarantees** (post-conditions split per Cockburn — what's true after success vs what's true after *any* termination, including failure).
+ - Required-topics list extended from 5 to 8, adding Goal in Context, Stakeholders and Interests, and Scope level.
+ - New US → UC coverage matrix at the bottom of the artifact for forward traceability, with `(uncovered)` flagging.
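Taken together, a use-case header under the new field set might look like this — a sketch with invented IDs and values; the authoritative layout lives in `usecases-template.md`:

```markdown
### UC-001: Register a return

**Goal in Context:** G-2 — cut return-processing time
**Scope:** System | **Level:** User goal
**Source:** US-004, FR-012

| Stakeholder | Interest |
|-------------|----------|
| Ops supervisor (primary actor) | Completes the return in one pass |
| Finance | Refund is booked against the correct ledger |

**Success Guarantees:** Return recorded, stock updated, refund queued.
**Minimal Guarantees:** No partial stock update; audit log entry written even on failure.
```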
+ - **`skills/ac/SKILL.md` and `skills/references/templates/ac-template.md`** — explicit AC-NNN-NN ID scheme (drift fix), provenance fields, US → AC coverage matrix:
+ - Inline template removed; the standalone is the single source of truth. The inline template used `### AC-NNN-NN` IDs while the standalone used `### Scenario N` with no numeric IDs — two different ID schemes for the same artifact.
+ - Every AC carries **Type** (Positive / Negative / Boundary / Performance / Security), **Source** (which business rule from `02_srs_*.md` drove the AC — required, no AC without provenance), **Verification** (Automated test / Manual test / Observed in production), **Linked FR**, and **Linked NFR** (for performance and security ACs).
+ - Required-topics list extended from 5 to 9, adding performance bounds per scenario, idempotency, observability (audit log requirements), and state transitions linking AC to entity state machines.
+ - New US → AC coverage matrix at the bottom — forward traceability with a column per AC type, so `Must` stories with no negative or no boundary AC are flagged at a glance.
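A single AC block under the new scheme might read as follows — IDs, the business rule, and values are invented for illustration:

```markdown
### AC-004-02: Declined card is reported to the user

**Type:** Negative
**Source:** BR-07 in `02_srs_{slug}.md`
**Verification:** Automated test
**Linked US:** US-004 | **Linked FR:** FR-012

- **Given** a saved card that the payment gateway declines
- **When** the user confirms payment
- **Then** response status is 402 and the audit log contains an entry with action=payment_declined
```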
+ - **`skills/nfr/SKILL.md` and `skills/references/templates/nfr-template.md`** — full ISO/IEC 25010 alignment, FR → NFR matrix:
+ - **Standard alignment with ISO/IEC 25010:2011** Software Quality Model — every NFR maps to one of the 8 ISO 25010 characteristics: Functional Suitability, Performance Efficiency, Compatibility, Usability, Reliability, Security, Maintainability, Portability. Project-specific extensions (Compliance, Localisation, Observability) explicitly note their parent ISO characteristic so the audit trail stays consistent. The previous version used ad-hoc category names with no standard backing.
+ - Each NFR template gains an **Acceptance threshold** field separate from the metric — "metric: p95 latency in ms" + "acceptance threshold: < 300 ms" — so verification is unambiguous.
+ - Required-topics list extended from 6 to 13: an ISO 25010 characteristic-by-characteristic walk plus SLO/SLI commitment, observability, disaster recovery runbook, data sovereignty, and deprecation policy.
+ - New FR → NFR coverage matrix flags Must FRs with no linked Performance / Reliability / Security NFR.
+ - New per-characteristic priority summary table at the bottom.
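For example, an NFR under the new template might carry the metric/threshold split like this (ID and values invented for illustration):

```markdown
### NFR-PERF-001: Checkout latency

**ISO 25010 characteristic:** Performance Efficiency
**Metric:** p95 latency of POST /orders, in ms
**Acceptance threshold:** < 300 ms at 200 RPS sustained
**Linked FR:** FR-012 | **Verification:** Load test in staging
```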
+ - **`skills/datadict/SKILL.md` and `skills/references/templates/datadict-template.md`** — logical model only, provenance, state machines, FR → Entity matrix:
+ - **Scope boundary made explicit:** `/datadict` is the **logical** data model (entities, attributes, conceptual types, relationships, state transitions). Physical schema (specific DBMS, indexes, sharding, partitioning) belongs in `/research` (step 7a) as ADRs. The previous version mixed the two by asking about DBMS choice as a required topic.
+ - Each entity carries **Source** (which FR/US introduced it — required, no entity without provenance), **Owner** (which team curates it), **Sensitivity** (Public / Internal / Confidential / PII / PCI / PHI / Financial — feeds /nfr Security NFRs and GDPR compliance), and a **State Machine** section listing states + legal transitions for any entity with more than two distinct lifecycle states.
+ - Required-topics list extended from 6 to 10: PII inventory and retention policy, audit/history requirements, state machines, referential integrity cascade rules, time-zone handling, data ownership.
+ - Logical types replace DBMS-specific types (`String / Integer / Decimal / Boolean / Timestamp / UUID / Enum / FK / JSON / Binary` instead of `ObjectId / VARCHAR / NUMERIC`). Physical type mapping happens in `/research`.
+ - New FR → Entity coverage matrix at the bottom flags data-touching FRs with no linked entity.
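A per-entity block under the new field set might look like this — entity, attributes, and values are invented for illustration; `datadict-template.md` is authoritative:

```markdown
## Entity: Return

**Source:** FR-012 | **Owner:** Ops platform team | **Sensitivity:** Internal

| Attribute | Type | Required | Constraints | Description |
|-----------|------|----------|-------------|-------------|
| id | UUID | yes | PK | Unique identifier |
| status | Enum | yes | see state machine | Lifecycle state |
| refund_amount | Decimal | yes | min 0 | Amount in minor units |

**State Machine:** Draft → Submitted → Approved → Refunded; Submitted → Rejected
```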
+ - **`skills/apicontract/SKILL.md` and `skills/references/templates/apicontract-template.md`** — OpenAPI 3.x shape, idempotency, FR → Endpoint matrix:
+ - **Standard alignment with OpenAPI 3.x** structure (servers, paths, parameters with `in` location, request body, responses keyed by HTTP status, components/schemas, security schemes). Markdown is the format, OpenAPI is the shape.
+ - Per-endpoint metadata table now carries **Source** (FR/US — required), **Required scope** (OAuth2 / JWT scope), **Idempotency** (Idempotent / Not idempotent / Idempotent via `Idempotency-Key` header), per-endpoint **Rate limit**, per-endpoint **SLO** (latency target linked to NFR), and **Verification** method (contract test / consumer-driven contract test / integration test).
+ - Required-topics list extended from 7 to 12: idempotency keys, content negotiation, CORS policy, API deprecation policy with `Sunset` header (RFC 8594), per-endpoint SLO.
+ - New "Idempotency, CORS, and Deprecation" section in the template documents the cross-cutting policies.
+ - New FR → Endpoint coverage matrix flags FRs with no linked endpoint.
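The per-endpoint metadata table might then look like this (endpoint, scope names, and values invented for illustration):

```markdown
#### POST /returns

| Source | Required scope | Idempotency | Rate limit | SLO | Verification |
|--------|----------------|-------------|------------|-----|--------------|
| FR-012 | returns:write | Idempotent via `Idempotency-Key` header | 60/min per tenant | p95 < 300 ms → NFR-PERF-001 | contract test |
```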
+ - **`skills/wireframes/SKILL.md` and `skills/references/templates/wireframes-template.md`** — canonical 8-state list, accessibility-first, US → WF matrix:
+ - **Mandatory state list expanded from 4 to 8** canonical states: Default, Loading, Empty, Loaded, Partial, Success, Error, Disabled. The previous 4-state minimum (default / loading / empty / error) missed half of the states a senior UX BA would catch in review.
+ - Each screen carries **Source** (US — required), **Linked AC** (scenarios this screen verifies), **Linked NFR** (performance and accessibility NFRs that apply to UI), and an **Internationalisation** flag (LTR / RTL / both, long-string accommodation).
+ - Required-topics list extended from 6 to 8: explicit accessibility level (WCAG 2.1 A / AA / AAA — affects design decisions at wireframe stage, not just NFR time), internationalisation (RTL, long-string accommodation, locale-aware formatting).
+ - New US → WF coverage matrix flags Must stories with no linked screen.
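A screen block with the new metadata might read as follows (screen name and links invented for illustration):

```markdown
## WF-03: Returns queue

**Source:** US-004 | **Linked AC:** AC-004-01, AC-004-02 | **Linked NFR:** NFR-PERF-001
**Internationalisation:** LTR + RTL, long-string accommodation

**States:** Default · Loading · Empty · Loaded · Partial · Success · Error · Disabled
```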
+ - **`skills/scenarios/SKILL.md` and `skills/references/templates/scenarios-template.md`** — drift fix, FR/NFR linking, expanded coverage matrix:
+ - Inline template fields and table columns no longer drift from the standalone template. Both now carry the same per-scenario field set.
+ - Each scenario carries **Source — Linked FR**, **Source — Linked NFR**, **Frequency** (how often the scenario runs in production — drives test investment), and **Stakes** (cost of failure — data loss, lost revenue, regulatory breach, reputation).
+ - Required-topics list extended from 5 to 8: frequency, stakes / blast radius, recovery scenarios.
+ - Coverage Summary expanded from 4 columns to 6: User Stories, FRs, NFRs, ACs, API Endpoints, Wireframes — with `(uncovered)` flagging on each axis.
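A per-scenario header under the new field set might look like this (scenario and values invented for illustration):

```markdown
### SC-007: Bulk return import fails mid-batch

**Source — Linked FR:** FR-018 | **Source — Linked NFR:** NFR-REL-002
**Frequency:** ~Weekly | **Stakes:** Partial stock corruption; manual reconciliation cost
```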
+ - **Document-control metadata (`Version`, `Status`) added to all 7 templates** — the same `Draft / In Review / Approved / Superseded` lifecycle introduced for `/brief` and `/srs` in v3.5.0. Pipeline-core artifacts are controlled documents and need to answer "which version did we agree to?" three months in.
+
+ ### Cross-pattern impact
+
+ After Group A and the pilot, **all 12 currently-shipped pipeline-stage skills** (`/discovery`, `/principles`, `/brief`, `/srs`, `/stories`, `/usecases`, `/ac`, `/nfr`, `/datadict`, `/apicontract`, `/wireframes`, `/scenarios`) carry the six systemic improvements: standards conformance, "why" fields, cross-artifact forward traceability, document control, single-source-of-truth templates, and BA-grade required-topics coverage. Group B (cross-cutting utilities: `/trace`, `/analyze`, `/clarify`, `/estimate`, `/glossary`, `/risk`, `/sprint`) and Group C (bookend skills: `/handoff`, `/implement-plan`, `/export`, `/publish`) remain at their current rigour and may receive a lighter sweep in a future session if patterns repeat.
+
+ ---
+
+ ## [3.5.0] — 2026-04-09
+
+ ### Highlights
+
+ - **Pilot skill audit pass on `/brief`, `/srs`, `/stories`** — 14 senior-BA findings (6 Critical + 8 High) applied. Three skills now match the rigour a 25-year BA from a top-tier consultancy would expect: `/brief` separates Constraints from Assumptions and forces an explicit Out of Scope section, `/srs` adds Source / Verification / Rationale per FR plus a Brief-Goal → FR traceability matrix, and `/stories` makes INVEST a quality gate and adds Persona / Business Value / Dependencies fields.
+
+ ### Changed
+
+ - **`skills/brief/SKILL.md` and `skills/references/templates/brief-template.md`** — six findings applied:
+ - **Section numbering bug fixed** — workflow steps no longer jump from §6 to §8 (renumbered §8 → §7, §9 → §8). It was a sloppiness flag for any reviewer opening the file.
+ - **Out of Scope section is now mandatory** — added §4.2 to the artifact template. A brief is a contract; what is excluded matters as much as what is included. Without this, scope creep starts on the first standup.
+ - **Constraints and Assumptions split into two sections** (§6 and §7 in the new template). They have different change-management implications, and BABOK v3 separates them. Constraints carry `Type / Source / Implication`; Assumptions carry `Statement / Owner / Validate by / Risk if false`.
+ - **Required-topics list extended from 7 to 11.** Added the four canonical brief inquiries every senior BA asks: buyer-vs-user separation, decision-making authority (who can sign off, by name and role), regulatory pre-screening (yes/no per regime: GDPR, HIPAA, FDA SaMD, SOC 2, PCI DSS, SOX, KYC/AML, FERPA, COPPA, EU AI Act, accessibility), and explicit failure criteria (asymmetric to success criteria — flushes out red lines).
+ - **Document control metadata added to the template** — `Version`, `Status` (Draft / In Review / Approved / Superseded), and an `Approvals` table at the bottom. A brief is a controlled document and needs to answer "which version did we agree to?" three months in.
+ - **Stakeholder table gains a "Sign-off authority" column** so the brief records not just who is interested but who has veto power.
+ - **`skills/srs/SKILL.md` and `skills/references/templates/srs-template.md`** — four findings applied:
+ - **FR template gains three IEEE 830-mandated fields:** `Source` (which stakeholder, brief goal, regulatory requirement, or parent FR drove this requirement — required for traceability), `Verification` (Test / Demo / Inspection / Analysis — without it an FR is a wish, not a contract), and `Rationale` (*why* this requirement exists, not just *what* — helps future maintainers know what is safe to push back on).
+ - **New §7 "Traceability — Brief Goal → FR" table.** Forward traceability from `01_brief_<slug>.md` §2 business goals to the FRs that satisfy them. Every Brief goal must have at least one linked FR; uncovered goals are flagged so they cannot silently disappear from the project.
+ - **§3 Functional Requirements is now grouped by feature area** as `### 3.N [Area name]` subsections, each with a reserved FR-ID range (FR-001..099 for area 1, FR-100..199 for area 2, …). New FRs inserted later go into their area's free numbers, not the global tail — IDs stay stable. Previously this was a flat list, which became unreadable for any non-trivial system.
+ - **Required-topics list extended from 6 to 11.** Added the five universal SRS inquiries that downstream skills (`/datadict`, `/nfr`, `/apicontract`) inherit gaps from when they're missing: authentication and authorisation model (SSO / SAML / OIDC / RBAC / ABAC / MFA), data ownership and stewardship, audit and logging requirements, data retention and deletion (GDPR right-to-erasure), and reporting needs.
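With the three new fields, a single FR entry might read as follows (ID, goal reference, and wording invented for illustration):

```markdown
### FR-012: Record a return

**Source:** Brief goal G-2; US-004
**Verification:** Test
**Rationale:** Returns are the costliest manual workflow; recording them is the
precondition for the automation FRs in this area.
**Priority:** Must
```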
+ - **`skills/stories/SKILL.md` and `skills/references/templates/stories-template.md`** — five findings applied:
+ - **SKILL.md inline template removed in favour of a single source of truth at `references/templates/stories-template.md`.** Previously the inline template and the standalone file had drifted apart (the inline version had no `Size:` field and an inline AC reference; the standalone had `Size:` and a file-path AC reference). Two agents reading different parts produced different outputs. The SKILL.md now lists the field set and points at the standalone template; the standalone template is the only place that carries the per-story block layout.
+ - **INVEST is now an explicit quality gate.** The template has an `INVEST self-check:` line per story (Independent · Negotiable · Valuable · Estimable · Small · Testable). The `/validate` command description now requires every story to pass INVEST. The `/split` guidance references INVEST's "Small" / "Independent" criteria and Mike Cohn's nine story-splitting patterns instead of the previous arbitrary "more than 3 scenarios" rule.
+ - **Persona field replaces bare role.** Stories now use `**As** [Maria, ops supervisor at a 50-warehouse 3PL handling 200 returns/week]` instead of `**As an** admin`. Personas carry goals, frustrations, and context that drive UX decisions; bare job titles do not.
+ - **Business Value Score field added** (1–5 or High / Med / Low). Captures relative ranking *within* the same MoSCoW priority tier so a PM can sequence among 30 Must stories instead of treating them as interchangeable.
+ - **Depends on field added** so story-to-story dependencies are visible to `/sprint` and `/implement-plan`. A story that can't start until US-007 is done now says so in the template and won't get planned in the wrong sprint.
+ - **New "FR → Story coverage matrix" sub-table in the Coverage Summary.** Forward traceability from FR to US, with `(uncovered)` flag for any FR missing a linked story. Was previously a manual cross-reference task.
+
+ ---
+
  ## [3.4.1] — 2026-04-09
 
  ### Fixed
@@ -511,7 +595,9 @@ CI scripts that relied on the old behaviour (`init` creates files only, `install
 
  ---
 
- [Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.1...HEAD
+ [Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.6.0...HEAD
+ [3.6.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.5.0...v3.6.0
+ [3.5.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.1...v3.5.0
  [3.4.1]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.0...v3.4.1
  [3.4.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.3.0...v3.4.0
  [3.3.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.2.0...v3.3.0
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@kudusov.takhir/ba-toolkit",
- "version": "3.4.1",
+ "version": "3.6.0",
  "description": "AI-powered Business Analyst pipeline — 24 skills from concept discovery to a sequenced implementation plan an AI coding agent can execute, with one-command Notion + Confluence publish. Works with Claude Code, Codex CLI, Gemini CLI, Cursor, and Windsurf.",
  "keywords": [
  "business-analyst",
package/skills/ac/SKILL.md CHANGED
@@ -33,6 +33,10 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
  3. Which boundary values are critical?
  4. Which US need multiple AC (different roles, states)?
  5. Are there data precision requirements (decimal places, formats)?
+ 6. **Performance bounds per scenario** — does any AC carry a response-time / throughput requirement that must be verified at acceptance?
+ 7. **Idempotency** — for action-based scenarios, can the action be safely retried? AC must specify whether duplicate requests produce the same result.
+ 8. **Observability** — what audit log entry, metric, or trace must this scenario produce? Verifiable by inspecting logs, not just the user-facing response.
+ 9. **State transitions** — which entity state changes during the scenario? Links the AC to the entity state machines defined in `/datadict`.
 
  Supplement with domain-specific questions from the reference.
 
@@ -40,26 +44,15 @@ Supplement with domain-specific questions from the reference.
 
  **File:** `05_ac_{slug}.md`
 
- ```markdown
- # Acceptance Criteria: {Name}
-
- ## US-{NNN}: {Short description}
-
- ### AC-{NNN}-{NN}: {Scenario name}
- **Type:** {positive | negative | boundary}
- - **Given** {initial state}
- - **When** {action}
- - **Then** {expected result}
-
- **Links:** US-{NNN}, UC-{NNN}
- ```
+ The full per-AC field set lives at `references/templates/ac-template.md` and is the single source of truth. Each AC carries: ID (`AC-NNN-NN`), Type (positive / negative / boundary / performance / security), Given / When / Then, Linked US, Linked UC, Linked FR, Linked NFR (for performance/security ACs), Source (which business rule from `02_srs_{slug}.md` drove this AC), and Verification method (automated test / manual test / observed in production). The artifact also carries a US → AC coverage matrix at the bottom.
 
  **Rules:**
  - Numbering relative to US: AC-001-01 (first AC for US-001).
  - Every US has at least one positive AC.
- - Must-priority US have at least one negative AC.
+ - Must-priority US have at least one negative AC AND at least one boundary AC.
  - Given = specific state. When = single action. Then = verifiable result.
- - Avoid vague wording — replace "system handles correctly" with concrete behavior.
+ - Avoid vague wording — replace "system handles correctly" with concrete observable behaviour ("response status is 401", "audit log contains entry with action=login_failed", "stock count decremented by exactly the ordered quantity").
+ - Every AC must reference its source business rule via the `Source` field — no AC without provenance.
 
  ## Back-reference update
 
package/skills/apicontract/SKILL.md CHANGED
@@ -27,14 +27,21 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
 
  3–7 topics per round, 2–4 rounds.
 
+ **Standard alignment:** API Contract approximates the **OpenAPI 3.x** structure (servers, paths, parameters with `in` location, request body, responses keyed by HTTP status, components/schemas, security schemes). Markdown is the format, OpenAPI is the shape.
+
  **Required topics:**
- 1. Protocol — REST, WebSocket, GraphQL, combination?
- 2. API versioning — URI-based, header-based?
- 3. Authentication — JWT, API Key, OAuth2?
- 4. Webhook contracts — needed for incoming events from external systems?
- 5. Error format — standard (RFC 7807) or custom?
- 6. Pagination — cursor-based, offset-based? Limits?
- 7. Rate limiting — any restrictions? For which endpoints?
+ 1. Protocol — REST, WebSocket, GraphQL, gRPC, or combination?
+ 2. API versioning — URI-based (`/v1/`), header-based (`Accept-Version`), or media-type versioning?
+ 3. Authentication and authorisation — JWT / OAuth2 / API Key / mTLS? Required scopes per endpoint group?
+ 4. Webhook contracts — incoming events from external systems? Outgoing events to subscribers?
+ 5. Error format — RFC 7807 Problem Details, or custom? Localised error messages?
+ 6. Pagination — cursor-based, offset-based, or key-set? Default and max page size?
+ 7. Rate limiting — global vs per-endpoint vs per-tenant? Headers (RateLimit-*)? 429 response shape?
+ 8. **Idempotency** — how does the API support safe retries on POST? `Idempotency-Key` header? Server-side dedupe window?
+ 9. **Content negotiation** — JSON only, or also CSV / XML / Protobuf? Accept header semantics?
+ 10. **CORS policy** — which origins are allowed for browser callers? Which headers exposed?
+ 11. **API deprecation policy** — how is a breaking change communicated? `Sunset` header? Deprecation grace period?
+ 12. **Per-endpoint SLO** — latency target per endpoint group, links to NFRs?
 
  Supplement with domain-specific questions from the reference.
 
@@ -89,7 +96,13 @@ Supplement with domain-specific questions from the reference.
  **Rules:**
  - Endpoints grouped by domain (Auth, Users, Core, Admin, etc.).
  - JSON schemas are representative examples with types in comments.
- - Attributes consistent with Data Dictionary.
+ - Attributes consistent with Data Dictionary entity field types.
+ - Every endpoint has a **Source** field (FR-NNN) — no endpoint without provenance.
+ - Every endpoint has an **Idempotency** marker (idempotent / not idempotent / idempotent via Idempotency-Key header).
+ - Every endpoint has a **Required scope** (or "public" for unauthenticated paths) when OAuth2 / JWT scopes are in use.
+ - Every endpoint has a **Verification** method (contract test / consumer-driven contract test / integration test).
+ - Every endpoint has an **SLO** field linking to an NFR (latency target, error budget).
+ - The artifact carries an FR → Endpoint coverage matrix at the bottom.
 
  ## Iterative refinement
 
package/skills/brief/SKILL.md CHANGED
@@ -48,10 +48,14 @@ Cover 3–7 topics per round, 2–4 rounds. Do not generate the artifact until s
  1. Product type — what exactly is being built?
  2. Business goal — what problem does the product solve?
  3. Target audience and geography.
- 4. Key stakeholders — who is interested, who makes decisions?
- 5. Known constraints — deadlines, budget, regulatory requirements, mandatory integrations.
- 6. Competitive products or references.
- 7. Success criteria — specific metrics or qualitative goals.
+ 4. **Buyer vs. user separation** — who pays for the product vs. who uses it day-to-day? In B2B SaaS, EdTech, healthcare, and many other domains these are different people with different priorities. If they coincide, record "buyer = user".
+ 5. Key stakeholders — who is interested, who makes decisions?
+ 6. **Decision-making authority** — who can sign off on the brief now, and who can sign off on changes later? Capture by name and role, not just by team.
+ 7. **Regulatory pre-screening** — which of the following regimes apply: GDPR / UK GDPR, HIPAA, FDA SaMD, SOC 2, PCI DSS, SOX, KYC / AML, FERPA / COPPA, EU AI Act, accessibility (Section 508 / WCAG / EN 301 549)? Single yes/no per regime — gates the rest of the pipeline.
+ 8. Known constraints — deadlines, budget, technology mandates, mandatory integrations.
+ 9. Reference products and analogues — products in this space the user admires (and why), products that failed (and why). Anchors the project to real-world references and surfaces unstated requirements.
+ 10. Success criteria — specific metrics or qualitative goals.
+ 11. **Failure criteria** — what would make us call this project a failure? Asymmetric to success criteria; flushes out red lines that "what's success" never reveals.
 
  If a domain reference is loaded, supplement general questions with domain-specific ones. Formulate questions concretely, with example answers in parentheses.
 
@@ -62,17 +66,37 @@ If a domain reference is loaded, supplement general questions with domain-specif
  ```markdown
  # Project Brief: {Project Name}
 
+ **Version:** 0.1
+ **Status:** Draft | In Review | Approved | Superseded
  **Domain:** {saas | fintech | ecommerce | healthcare | logistics | on-demand | social-media | real-estate | igaming | edtech | govtech | ai-ml | custom:{name}}
  **Date:** {date}
+ **Slug:** {slug}
 
  ## 1. Project Summary
  ## 2. Business Goals and Success Metrics
  ## 3. Target Audience
+ ### 3.1 Buyer
+ ### 3.2 User (if different from Buyer)
  ## 4. High-Level Functionality Overview
- ## 5. Stakeholders and Roles
- ## 6. Constraints and Assumptions
- ## 7. Risks
- ## 8. Glossary
+ ### 4.1 In Scope
+ ### 4.2 Out of Scope
+ ## 5. Stakeholders and Decision-Making Authority
+ ## 6. Constraints
+ ## 7. Assumptions
+ ## 8. Risks
+ ## 9. Success and Failure Criteria
+ ### 9.1 Success Criteria
+ ### 9.2 Failure Criteria
+ ## 10. Reference Products and Analogues
+ ## 11. Glossary
+
+ ---
+
+ ## Approvals
+
+ | Name | Role | Approval Date | Notes |
+ |------|------|---------------|-------|
+ | {name} | {role} | {date} | {notes} |
  ```
 
  ### 6. AGENTS.md update
@@ -83,7 +107,7 @@ If a domain reference is loaded, supplement general questions with domain-specif
 
  If you find no `AGENTS.md` at all (neither in cwd nor up the tree), warn the user that the project was likely set up before v3.1 and tell them to run `ba-toolkit init --name "..." --slug {slug}` to scaffold the per-project `AGENTS.md`. Do not create one yourself with arbitrary structure.
 
- ### 8. Iterative refinement
+ ### 7. Iterative refinement
 
  - `/revise [section]` — rewrite a section.
  - `/expand [section]` — add detail.
@@ -91,7 +115,7 @@ If you find no `AGENTS.md` at all (neither in cwd nor up the tree), warn the use
  - `/validate` — check completeness and consistency.
  - `/done` — finalize and update `AGENTS.md`. Next step: `/srs`.
 
- ### 9. Closing message
+ ### 8. Closing message
 
  After saving the artifact, present the following summary to the user (see `references/closing-message.md` for format):
 
package/skills/datadict/SKILL.md CHANGED
@@ -27,13 +27,19 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
 
  3–7 topics per round, 2–4 rounds.
 
+ **Scope boundary:** `/datadict` describes the **logical** data model — entities, attributes, conceptual types, relationships, state transitions. It does **not** prescribe the physical model (specific DBMS, indexes, sharding, partitioning). Physical-model decisions (PostgreSQL vs MongoDB, schema-per-tenant vs shared, index strategy) belong in `/research` (step 7a) as ADRs. If the user mentions a specific DBMS, note it as an existing constraint or as input to `/research`, not as a mandatory topic here.
+
  **Required topics:**
- 1. DBMS — MongoDB, PostgreSQL, MySQL, other?
- 2. Existing schema — is there one to account for or extend?
- 3. Audit entities — which require full audit trail?
- 4. Soft delete — is soft deletion used?
- 5. Amount storage — in minor units (cents) or major currency units?
- 6. Versioning — is change history needed for any entities?
+ 1. **Existing schema or system of record** — is there a legacy schema to account for or extend? Migration path?
+ 2. **PII inventory** — which fields are PII / PCI / PHI / financial? What is the retention policy per class? Which fields must be encrypted at rest? Which masked in UI?
+ 3. **Audit and history** — which entities require a full audit trail (every change logged with actor + timestamp)? Which need temporal / bitemporal storage (point-in-time queries)?
+ 4. **Soft delete** — which entities support soft deletion (`deleted_at`) vs hard delete? Cascade rules on parent deletion?
+ 5. **State machines** — which entities are stateful (Order, Subscription, Application, Case)? List the states and the legal transitions for each. **Mandatory question for any entity that has more than two distinct lifecycle states.**
+ 6. **Referential integrity** — cascade rules on parent deletion (cascade / restrict / set null / prevent). Required for every FK.
+ 7. **Time-zone handling** — are timestamps stored in UTC and converted at the presentation layer, or stored in local time with a TZ field?
+ 8. **Amount storage** — financial amounts in minor units (cents) or major units? Currency code stored alongside?
+ 9. **Versioning** — is change history needed for any entities (e.g. price history, terms-of-service version)?
+ 10. **Data ownership** — which team / role owns each entity for ongoing curation, schema changes, and incident response?
 
  Supplement with domain-specific questions and mandatory entities from the reference.
 
@@ -41,39 +47,16 @@ Supplement with domain-specific questions and mandatory entities from the refere
 
  **File:** `07_datadict_{slug}.md`
 
- ```markdown
- # Data Dictionary: {Name}
-
- ## General Information
- - **DBMS:** {type}
- - **Naming Conventions:** {camelCase | snake_case}
- - **Common Fields:** {createdAt, updatedAt, deletedAt}
-
- ## Entity: {Name} ({English collection/table name})
-
- **Description:** {purpose in the system.}
- **Linked FR/US:** {references.}
-
- | Attribute | Type | Required | Constraints | Description |
- |-----------|------|----------|-------------|-------------|
- | _id / id | ObjectId / UUID | yes | PK | Unique identifier |
- | {name} | {type} | {yes/no} | {constraints} | {description} |
-
- **Indexes:**
- - {name}: {fields}, {type}
-
- ---
-
- ## Entity Relationships
-
- | Entity A | Relationship | Entity B | Description |
- |----------|-------------|----------|-------------|
- ```
+ The full per-entity field set lives at `references/templates/datadict-template.md` and is the single source of truth. Each entity carries: name, **Source** (which FR/US introduced this entity), **Owner** (which team / role curates it), **Sensitivity** (Public / Internal / Confidential / PII / PCI / PHI), description, attribute table (with logical types, not DBMS-specific), relationships with cascade rules, **state machine** (if applicable), indexes (logical, not physical), retention policy, and notes. The artifact carries an FR → Entity coverage matrix at the bottom.
 
  **Rules:**
- - Data types match the chosen DBMS.
- - Constraints include: PK, FK, unique, not null, enum, min/max, regex.
- - Attribute and entity names in English; descriptions in the user's language.
+ - Data types are **logical** (String, Integer, Decimal, Boolean, Timestamp, UUID, Enum, FK, JSON, Binary), not DBMS-specific. Physical type mapping happens in `/research` ADRs.
+ - Constraints include: PK, FK with cascade rule, unique, not null, enum, min/max, regex, default.
+ - Attribute and entity names in English (or per the project ID convention from `00_principles_*.md`); descriptions in the user's language.
+ - Every entity has a **Source** field (FR-NNN or US-NNN) — no entity without provenance.
+ - Every entity has an **Owner** field — accountability for schema changes and data quality.
+ - Every entity has a **Sensitivity** classification — feeds /nfr Security NFRs and GDPR compliance.
+ - Stateful entities (Order, Subscription, Application, Case, Account, …) **must** include a state machine section listing states and legal transitions.
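The state-machine rule above can be sketched as a small transition validator. The `Order` states and transitions below are illustrative examples, not part of the template:

```python
# Hypothetical state machine for an Order entity, listing states and
# legal transitions as the rule above requires. Names are illustrative.
LEGAL_TRANSITIONS = {
    "draft": {"placed", "cancelled"},
    "placed": {"paid", "cancelled"},
    "paid": {"shipped", "refunded"},
    "shipped": {"delivered"},
    "delivered": set(),
    "cancelled": set(),
    "refunded": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True only if the transition is declared legal."""
    return target in LEGAL_TRANSITIONS.get(current, set())

print(can_transition("placed", "paid"))     # a legal transition
print(can_transition("delivered", "paid"))  # an illegal transition
```

Declaring the table explicitly makes "legal transitions" reviewable by the BA and testable by QA.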
 
  ## Iterative refinement
 
@@ -27,13 +27,22 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
 
  3–7 topics per round, 2–4 rounds.
 
+ **Standard alignment:** NFR categories follow the **ISO/IEC 25010** Software Quality Model (the international standard for software quality characteristics). Every NFR maps to one of the 8 ISO 25010 characteristics: **Functional Suitability**, **Performance Efficiency**, **Compatibility**, **Usability**, **Reliability**, **Security**, **Maintainability**, **Portability**. Project-specific categories (Compliance, Localisation, Observability) can be added, but each must be marked as an extension and explain which ISO 25010 characteristic it derives from.
+
  **Required topics:**
- 1. Performance — target CCU (Concurrent Users), RPS (Requests Per Second), acceptable response time?
- 2. Availability — required SLA? Acceptable downtime? Maintenance windows?
- 3. Security — encryption, authentication, access audit?
- 4. Compliance — applicable standards and laws? Data retention?
- 5. Scalability — expected growth, horizontal scaling?
- 6. Compatibility — browsers, OS, devices?
+ 1. **Performance Efficiency (ISO 25010)** — time behaviour (response time, throughput), resource utilisation (CPU, memory, network), capacity (CCU, RPS).
+ 2. **Reliability (ISO 25010)** — availability SLA, maturity (defect rate), fault tolerance, recoverability (RTO/RPO).
+ 3. **Security (ISO 25010)** — confidentiality (encryption at rest and in transit), integrity, non-repudiation (audit trail), accountability (per-user attribution), authenticity (authentication strength).
+ 4. **Compatibility (ISO 25010)** — co-existence (with other systems), interoperability (data exchange formats), browser/OS/device support.
+ 5. **Usability (ISO 25010)** — learnability, operability, accessibility (WCAG level), user error protection.
+ 6. **Maintainability (ISO 25010)** — modularity, reusability, analysability, modifiability, testability (code coverage target).
+ 7. **Portability (ISO 25010)** — adaptability (different environments), installability, replaceability.
+ 8. **Functional Suitability (ISO 25010)** — completeness, correctness, appropriateness — usually covered by FRs but worth a sanity check at NFR time.
+ 9. **SLO and SLI** — what service-level objectives do we commit to externally, and which service-level indicators do we measure internally to track them?
+ 10. **Observability** — which metrics, logs, and traces are mandatory? What is the retention period for each?
+ 11. **Disaster recovery** — RTO and RPO are not just numbers; what is the *runbook* and how often is it tested?
+ 12. **Data sovereignty** — where can each data class be stored and processed? Which cloud regions are allowed?
+ 13. **Deprecation policy** — how are NFR thresholds tightened over time, and how are breaking changes communicated?
 
  Supplement with domain-specific questions and mandatory categories from the reference.
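The SLO/SLI topic usually reduces to percentile metrics. A minimal sketch of computing a p95 latency SLI against an assumed 300 ms objective (the sample latencies are made up):

```python
# Minimal p95 latency SLI from raw request durations in milliseconds.
# Uses the nearest-rank percentile method; sample data is illustrative.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest value covering pct% of samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

latencies_ms = [120, 95, 99, 88, 105, 150, 180, 240, 275, 310]
p95 = percentile(latencies_ms, 95)
slo_met = p95 <= 300  # assumed SLO: "p95 latency <= 300 ms"
print(p95, slo_met)
```

The point for the BA interview is that the SLI (how p95 is computed, over which window) must be pinned down alongside the SLO number itself.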
 
@@ -41,23 +50,13 @@ Supplement with domain-specific questions and mandatory categories from the refe
 
  **File:** `06_nfr_{slug}.md`
 
- ```markdown
- # Non-functional Requirements: {Name}
-
- ## NFR-{NNN}: {Category} — {Short Description}
- - **Category:** {performance | security | availability | scalability | compatibility | localization | compliance | audit | ...}
- - **Description:** {detailed description}
- - **Metric:** {measurable criterion}
- - **Verification Method:** {how it will be tested}
- - **Priority:** {Must | Should | Could | Won't}
- - **Linked FR/US:** {references}
- ```
+ The full per-NFR field set lives at `references/templates/nfr-template.md` and is the single source of truth. Each NFR carries: ID (`NFR-NNN`), ISO 25010 characteristic, sub-characteristic, description, measurable metric, **acceptance threshold** (the bar that says "we passed"), verification method, source (which stakeholder, regulation, or FR drove this NFR), rationale, priority, and linked FRs / USs / Brief constraints. The artifact carries an FR → NFR coverage matrix and a per-characteristic priority summary at the bottom.
 
  **Rules:**
  - Numbering: NFR-001, NFR-002, ...
- - Every NFR must have a measurable metric. Avoid "the system should be fast."
- - Group by category.
- - Domain-specific mandatory categories from the reference.
+ - Every NFR must have a measurable metric **and** an acceptance threshold. Avoid "the system should be fast."
+ - Group by ISO 25010 characteristic.
+ - Domain-specific mandatory categories from the reference are also mapped to ISO 25010 characteristics so the audit trail is consistent.
 
  ## Back-reference update
 
@@ -1,58 +1,109 @@
  # Acceptance Criteria: [PROJECT_NAME]
 
+ **Version:** 0.1
+ **Status:** Draft
  **Domain:** [DOMAIN]
  **Date:** [DATE]
  **Slug:** [SLUG]
- **References:** `03_stories_[SLUG].md`, `04_usecases_[SLUG].md`
+ **References:** `03_stories_[SLUG].md`, `04_usecases_[SLUG].md`, `02_srs_[SLUG].md`
 
  ---
 
  ## US-001: [Story Title]
 
- > As a [role], I want to [action], so that [benefit].
+ > As a [persona], I want to [action], so that [benefit].
 
- ### Scenario 1: [Happy path name]
+ ### AC-001-01: [Happy path scenario name]
+
+ **Type:** Positive
+ **Source:** [business rule from `02_srs_[SLUG].md` FR-NNN, or stakeholder name]
+ **Verification:** Automated test | Manual test | Observed in production
+
+ **Given** [precondition — concrete state, not "the system is ready"]
+ **When** [action — single, observable trigger]
+ **Then** [expected result — concrete and verifiable, not "the system handles it"]
+
+ **Links:** US-001, UC-[NNN], FR-[NNN]
+
+ ---
+
+ ### AC-001-02: [Negative scenario name]
+
+ **Type:** Negative
+ **Source:** [business rule]
+ **Verification:** Automated test
 
  **Given** [precondition]
  **When** [action]
- **Then** [expected result]
+ **Then** [expected error response, error code, user-facing message]
 
- ### Scenario 2: [Alternative or edge case name]
+ **Links:** US-001, UC-[NNN], FR-[NNN]
 
- **Given** [precondition]
+ ---
+
+ ### AC-001-03: [Boundary scenario name]
+
+ **Type:** Boundary
+ **Source:** [business rule with the limit]
+ **Verification:** Automated test
+
+ **Given** [precondition at the boundary value]
  **When** [action]
- **Then** [expected result]
+ **Then** [expected behaviour at the boundary — does it allow or reject?]
 
- ### Scenario 3: [Negative / error case name]
+ **Links:** US-001, FR-[NNN]
 
- **Given** [precondition]
+ ---
+
+ ### AC-001-04: [Performance scenario name, if applicable]
+
+ **Type:** Performance
+ **Source:** NFR-[NNN]
+ **Verification:** Load test | APM measurement
+ **Linked NFR:** NFR-[NNN]
+
+ **Given** [load condition — concurrent users, request rate]
  **When** [action]
- **Then** [expected result]
+ **Then** [response time / throughput target, e.g. "p95 latency < 300ms"]
+
+ **Links:** US-001, NFR-[NNN]
+
+ ---
 
- **Definition of Done:**
- - [ ] All scenarios above pass
+ **Definition of Done for US-001:**
+ - [ ] All ACs above pass
  - [ ] Edge case [X] handled
  - [ ] UI matches wireframe WF-[NNN]
+ - [ ] Audit log entry produced for AC-001-01 path
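For orientation, here is how an AC whose verification method is "Automated test" might translate into a test body. `place_order`, its business rule, and its error code are hypothetical stand-ins for the system under test, not part of the template:

```python
# A Given/When/Then AC mapped onto an automated test. The stand-in
# `place_order` enforces an illustrative rule so the test is runnable.
def place_order(quantity: int) -> dict:
    """Hypothetical system under test: rejects non-positive quantities."""
    if quantity < 1:
        return {"status": 422, "code": "BUSINESS_RULE_ERROR"}
    return {"status": 201, "code": "CREATED"}

def test_negative_ac_rejects_zero_quantity():
    # Given: a cart with an invalid quantity (concrete precondition)
    quantity = 0
    # When: the user places the order (single observable trigger)
    response = place_order(quantity)
    # Then: a concrete, verifiable error response
    assert response["status"] == 422
    assert response["code"] == "BUSINESS_RULE_ERROR"

test_negative_ac_rejects_zero_quantity()
```

The one-assertion-per-Then discipline keeps the AC and the test reviewable side by side.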
 
  ---
 
  ## US-002: [Story Title]
 
- > As a [role], I want to [action], so that [benefit].
+ > As a [persona], I want to [action], so that [benefit].
 
- ### Scenario 1: [Happy path name]
+ ### AC-002-01: [Happy path]
+
+ **Type:** Positive
+ **Source:** [business rule]
+ **Verification:** Automated test
 
  **Given** [precondition]
  **When** [action]
  **Then** [expected result]
 
- ### Scenario 2: [Negative case name]
+ **Links:** US-002, FR-[NNN]
 
- **Given** [precondition]
- **When** [action]
- **Then** [expected result]
+ <!-- Repeat AC block per scenario. Each US-NNN section mirrors the stories artifact. -->
+
+ ---
+
+ ## US → AC Coverage Matrix
 
- **Definition of Done:**
- - [ ] All scenarios above pass
+ Forward traceability from each User Story in `03_stories_[SLUG].md` to its Acceptance Criteria. Every Must-priority story needs at least one positive, one negative, and one boundary AC; uncovered combinations are flagged.
 
- <!-- Repeat AC block for each User Story. Each US-NNN section mirrors the stories artifact. -->
+ | US ID | Positive AC | Negative AC | Boundary AC | Performance AC | Coverage Status |
+ |-------|-------------|-------------|-------------|----------------|-----------------|
+ | US-001 | AC-001-01 | AC-001-02 | AC-001-03 | AC-001-04 | ✓ Full |
+ | US-002 | AC-002-01 | (missing) | (missing) | — | ⚠ Positive only |
+ | US-003 | (missing) | (missing) | (missing) | — | ✗ Uncovered |
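The coverage-status column can be derived mechanically. A sketch, using AC records that mirror the sample rows above:

```python
# Derive the Coverage Status column of the US → AC matrix from AC records.
# The AC data is illustrative, matching the sample rows in the template.
acs = [
    {"id": "AC-001-01", "us": "US-001", "type": "Positive"},
    {"id": "AC-001-02", "us": "US-001", "type": "Negative"},
    {"id": "AC-001-03", "us": "US-001", "type": "Boundary"},
    {"id": "AC-002-01", "us": "US-002", "type": "Positive"},
]

def coverage_status(us_id: str) -> str:
    """Full coverage requires at least one positive, negative, and boundary AC."""
    types = {ac["type"] for ac in acs if ac["us"] == us_id}
    if {"Positive", "Negative", "Boundary"} <= types:
        return "Full"
    if types:
        return "Partial: " + ", ".join(sorted(types)) + " only"
    return "Uncovered"

for us in ("US-001", "US-002", "US-003"):
    print(us, coverage_status(us))
```

Keeping the `type` field machine-readable is what makes this matrix auditable rather than hand-maintained.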
@@ -1,12 +1,16 @@
  # API Contract: [PROJECT_NAME]
 
+ **Version:** 0.1
+ **Status:** Draft
  **Domain:** [DOMAIN]
  **Date:** [DATE]
  **Slug:** [SLUG]
  **API Style:** REST | GraphQL | gRPC
  **Base URL:** `https://api.[domain].com/v1`
  **Auth:** Bearer JWT | API Key | OAuth 2.0
- **References:** `02_srs_[SLUG].md`, `07_datadict_[SLUG].md`
+ **Versioning:** URI (`/v1/`) | Header (`Accept-Version`) | Media-type
+ **Standard:** Approximates OpenAPI 3.x structure
+ **References:** `02_srs_[SLUG].md`, `07_datadict_[SLUG].md`, `06_nfr_[SLUG].md`
 
  ---
 
@@ -34,9 +38,16 @@ Token obtained via `POST /auth/token`. Expires in [N] minutes. Refresh via `POST
 
  ### POST /[resource]
 
- **Description:** [What this endpoint does.]
- **Auth required:** Yes | No
- **Linked FR:** FR-[NNN] | **Linked Story:** US-[NNN]
+ | Field | Value |
+ |-------|-------|
+ | **Description** | [What this endpoint does] |
+ | **Source** | FR-[NNN], US-[NNN] *(Required — what drove this endpoint)* |
+ | **Auth required** | Yes / No |
+ | **Required scope** | `resource:write` / public |
+ | **Idempotency** | Idempotent / Not idempotent / Idempotent via `Idempotency-Key` header |
+ | **Rate limit** | [N requests / minute / IP] or "global default" |
+ | **SLO** | p95 < 300ms (NFR-[NNN]) |
+ | **Verification** | Contract test / consumer-driven contract test / integration test |
 
  **Request body:**
 
@@ -175,9 +186,31 @@ Token obtained via `POST /auth/token`. Expires in [N] minutes. Refresh via `POST
  |-----------|-----------|---------|
  | 400 | VALIDATION_ERROR | Request body or params fail validation |
  | 401 | UNAUTHORIZED | Missing, expired, or invalid token |
- | 403 | FORBIDDEN | Authenticated but insufficient permissions |
+ | 403 | FORBIDDEN | Authenticated but insufficient permissions / scope |
  | 404 | NOT_FOUND | Resource does not exist |
  | 409 | CONFLICT | State conflict (duplicate, wrong status) |
  | 422 | BUSINESS_RULE_ERROR | Request is valid but violates a business rule |
  | 429 | RATE_LIMITED | Too many requests |
  | 500 | INTERNAL_ERROR | Unexpected server error |
+
+ ---
+
+ ## Idempotency, CORS, and Deprecation
+
+ **Idempotency:** POST endpoints marked "Idempotent via header" accept an `Idempotency-Key` header (UUID v4 recommended). The server stores the request fingerprint + response for [N] hours. A retried request with the same key returns the cached response without re-executing the side effect.
+
+ **CORS policy:** Allowed origins are listed in `[config file]`. Browsers from allowed origins may use the API directly with a JWT in the `Authorization` header. `OPTIONS` preflight responses are cached for [N] seconds (`Access-Control-Max-Age`).
+
+ **API deprecation policy:** Breaking changes are announced via the `Sunset` HTTP response header (RFC 8594) at least [N] months in advance. The `Deprecation` header is set on every response from a deprecated endpoint. A deprecated endpoint continues to function for the entire grace period; only after the sunset date does it return `410 Gone`.
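The headers a deprecated endpoint carries can be sketched as follows. The `Sunset` value follows the RFC 8594 HTTP-date format; the `Deprecation: true` form is a simplification (later specifications carry a date here instead):

```python
# Build the response headers for a deprecated endpoint per the policy
# above. Uses stdlib HTTP-date formatting; values are illustrative.
from datetime import datetime, timezone
from email.utils import format_datetime

def deprecation_headers(sunset: datetime) -> dict[str, str]:
    """Headers set on every response from a deprecated endpoint."""
    return {
        "Deprecation": "true",
        "Sunset": format_datetime(sunset.astimezone(timezone.utc), usegmt=True),
    }

headers = deprecation_headers(datetime(2026, 12, 31, tzinfo=timezone.utc))
print(headers["Sunset"])
```

Emitting these from middleware keeps the grace-period promise enforceable in one place rather than per endpoint.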
+
+ ---
+
+ ## FR → Endpoint Coverage Matrix
+
+ Forward traceability from each Functional Requirement in `02_srs_[SLUG].md` §3 to the API endpoints that implement it. An FR with no linked endpoint is flagged — every customer- or system-facing FR should have at least one linked endpoint.
+
+ | FR ID | FR Title | Linked Endpoints | Coverage Status |
+ |-------|----------|------------------|-----------------|
+ | FR-001 | [title] | `POST /auth/login`, `POST /auth/refresh` | ✓ |
+ | FR-002 | [title] | `GET /catalog`, `GET /catalog/:id` | ✓ |
+ | FR-003 | [title] | (uncovered) | ✗ |