@kudusov.takhir/ba-toolkit 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (59)
  1. package/CHANGELOG.md +125 -0
  2. package/COMMANDS.md +69 -0
  3. package/LICENSE +21 -0
  4. package/README.md +842 -0
  5. package/README.ru.md +846 -0
  6. package/bin/ba-toolkit.js +468 -0
  7. package/package.json +49 -0
  8. package/skills/ac/SKILL.md +88 -0
  9. package/skills/analyze/SKILL.md +126 -0
  10. package/skills/apicontract/SKILL.md +113 -0
  11. package/skills/brief/SKILL.md +120 -0
  12. package/skills/clarify/SKILL.md +96 -0
  13. package/skills/datadict/SKILL.md +98 -0
  14. package/skills/estimate/SKILL.md +124 -0
  15. package/skills/export/SKILL.md +215 -0
  16. package/skills/glossary/SKILL.md +145 -0
  17. package/skills/handoff/SKILL.md +146 -0
  18. package/skills/nfr/SKILL.md +85 -0
  19. package/skills/principles/SKILL.md +182 -0
  20. package/skills/references/closing-message.md +33 -0
  21. package/skills/references/domains/ecommerce.md +209 -0
  22. package/skills/references/domains/fintech.md +180 -0
  23. package/skills/references/domains/healthcare.md +223 -0
  24. package/skills/references/domains/igaming.md +183 -0
  25. package/skills/references/domains/logistics.md +221 -0
  26. package/skills/references/domains/on-demand.md +231 -0
  27. package/skills/references/domains/real-estate.md +241 -0
  28. package/skills/references/domains/saas.md +185 -0
  29. package/skills/references/domains/social-media.md +234 -0
  30. package/skills/references/environment.md +57 -0
  31. package/skills/references/prerequisites.md +191 -0
  32. package/skills/references/templates/README.md +35 -0
  33. package/skills/references/templates/ac-template.md +58 -0
  34. package/skills/references/templates/analyze-template.md +65 -0
  35. package/skills/references/templates/apicontract-template.md +183 -0
  36. package/skills/references/templates/brief-template.md +51 -0
  37. package/skills/references/templates/datadict-template.md +75 -0
  38. package/skills/references/templates/export-template.md +112 -0
  39. package/skills/references/templates/handoff-template.md +102 -0
  40. package/skills/references/templates/nfr-template.md +97 -0
  41. package/skills/references/templates/principles-template.md +118 -0
  42. package/skills/references/templates/research-template.md +99 -0
  43. package/skills/references/templates/risk-template.md +188 -0
  44. package/skills/references/templates/scenarios-template.md +93 -0
  45. package/skills/references/templates/sprint-template.md +158 -0
  46. package/skills/references/templates/srs-template.md +90 -0
  47. package/skills/references/templates/stories-template.md +60 -0
  48. package/skills/references/templates/trace-template.md +59 -0
  49. package/skills/references/templates/usecases-template.md +51 -0
  50. package/skills/references/templates/wireframes-template.md +96 -0
  51. package/skills/research/SKILL.md +136 -0
  52. package/skills/risk/SKILL.md +163 -0
  53. package/skills/scenarios/SKILL.md +113 -0
  54. package/skills/sprint/SKILL.md +174 -0
  55. package/skills/srs/SKILL.md +124 -0
  56. package/skills/stories/SKILL.md +85 -0
  57. package/skills/trace/SKILL.md +85 -0
  58. package/skills/usecases/SKILL.md +91 -0
  59. package/skills/wireframes/SKILL.md +107 -0
@@ -0,0 +1,215 @@
---
name: ba-export
description: >
  Export BA Toolkit artifacts to external formats for import into issue trackers and project management tools. Use on /export command, or when the user asks to "export to Jira", "create GitHub issues", "export stories", "generate Linear tickets", "export to CSV", "import into tracker". Run after /stories and /ac for full export with acceptance criteria.
---

# /export — Artifact Export

Converts User Stories (and optionally Acceptance Criteria) into structured output files ready for import into issue trackers and project management tools.

## Syntax

```
/export [format] [optional: scope]
```

Examples:
- `/export` — interactive: ask which format
- `/export jira` — export all stories as Jira-compatible JSON
- `/export github` — export all stories as GitHub Issues JSON (via `gh` CLI or API)
- `/export linear` — export all stories as a Linear GraphQL mutation payload
- `/export csv` — export all stories as CSV (universal fallback)
- `/export jira E-01` — export only Epic E-01 stories
- `/export github US-007,US-008` — export specific stories

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it and apply ID conventions and language settings.
1. Load `03_stories_{slug}.md` — required. If it does not exist, stop and ask the user to run `/stories` first.
2. Load `05_ac_{slug}.md` if it exists — AC scenarios will be embedded in issue descriptions.
3. Load `00_estimate_*.md` or read `**Estimate:**` fields from stories — include story points in the export if available.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory.

## Format interview

If the format is not specified, ask:

1. **Target tool:** Jira, GitHub Issues, Linear, CSV, or other?
2. **Scope:** All stories, a specific epic, or specific story IDs?
3. **Include AC?** Embed acceptance criteria in the issue body? (default: yes)
4. **Jira-specific:** Project key (e.g., `PROJ`)? Epic link field name (default: `customfield_10014`)? Story Points field name (default: `customfield_10016`)?
5. **GitHub-specific:** Repository (e.g., `owner/repo`)? Label prefix for epics (e.g., `epic:`)? Milestone name (optional)?

## Export formats

---

### Format: Jira (JSON)

Output file: `export_{slug}_jira.json`

```json
{
  "projects": [
    {
      "key": "{JIRA_PROJECT_KEY}",
      "issues": [
        {
          "summary": "US-001: {Story title}",
          "description": {
            "type": "doc",
            "version": 1,
            "content": [
              {
                "type": "paragraph",
                "content": [{ "type": "text", "text": "As a {role}, I want to {action}, so that {benefit}." }]
              },
              {
                "type": "heading",
                "attrs": { "level": 3 },
                "content": [{ "type": "text", "text": "Acceptance Criteria" }]
              },
              {
                "type": "bulletList",
                "content": [
                  { "type": "listItem", "content": [{ "type": "paragraph", "content": [{ "type": "text", "text": "Given ... When ... Then ..." }] }] }
                ]
              }
            ]
          },
          "issuetype": { "name": "Story" },
          "priority": { "name": "{Must→High | Should→Medium | Could→Low | Won't→Lowest}" },
          "labels": ["{slug}", "{epic-id}"],
          "customfield_10016": {story_points_or_null},
          "customfield_10014": "{epic_link_or_null}"
        }
      ]
    }
  ]
}
```

**Priority mapping:**
- Must → `High`
- Should → `Medium`
- Could → `Low`
- Won't → `Lowest`

**Instructions to include after the file:**
```
Import via: Jira → Project Settings → Issue Import → JSON Import
Or via CLI: jira import --file export_{slug}_jira.json
```

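For reference, the nested ADF `description` object above can be assembled programmatically. A minimal Python sketch, illustrative only and not part of the package; the helper names and sample strings are invented:

```python
# Illustrative helpers (not part of ba-toolkit): assemble the Atlassian
# Document Format (ADF) "description" object shown in the template above.

def adf_text(text):
    return {"type": "text", "text": text}

def adf_paragraph(text):
    return {"type": "paragraph", "content": [adf_text(text)]}

def build_description(story_sentence, ac_lines):
    """Build an ADF doc: story paragraph, AC heading, bullet list of AC."""
    content = [adf_paragraph(story_sentence)]
    if ac_lines:
        content.append({
            "type": "heading",
            "attrs": {"level": 3},
            "content": [adf_text("Acceptance Criteria")],
        })
        content.append({
            "type": "bulletList",
            "content": [
                {"type": "listItem", "content": [adf_paragraph(line)]}
                for line in ac_lines
            ],
        })
    return {"type": "doc", "version": 1, "content": content}

desc = build_description(
    "As a shopper, I want to save items, so that I can buy them later.",
    ["Given a signed-in shopper When she saves an item Then it appears in her list"],
)
```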
---

### Format: GitHub Issues (JSON)

Output file: `export_{slug}_github.json`

An array of issue objects, one per story, suitable for scripted creation with the `gh` CLI.

```json
[
  {
    "title": "US-001: {Story title}",
    "body": "## User Story\n\nAs a **{role}**, I want to **{action}**, so that **{benefit}**.\n\n---\n\n## Acceptance Criteria\n\n**Scenario 1 — {name}**\n- Given {precondition}\n- When {action}\n- Then {result}\n\n---\n\n**FR Reference:** FR-{NNN}\n**Priority:** {Must | Should | Could | Won't}\n**Estimate:** {N SP | —}",
    "labels": ["{epic-label}", "user-story", "{priority-label}"],
    "milestone": "{milestone-name-or-null}"
  }
]
```

**Label strategy:**
- Epic label: `epic:{epic-id}` (e.g., `epic:E-01`)
- Priority label: `priority:must` / `priority:should` / `priority:could`
- Type label: `user-story`

**Instructions to include after the file:**
```bash
# Create issues via GitHub CLI (requires: gh auth login)
cat export_{slug}_github.json | jq -c '.[]' | while IFS= read -r issue; do
  title=$(printf '%s\n' "$issue" | jq -r '.title')
  body=$(printf '%s\n' "$issue" | jq -r '.body')
  labels=$(printf '%s\n' "$issue" | jq -r '.labels | join(",")')
  gh issue create --repo {owner/repo} --title "$title" --body "$body" --label "$labels"
done
```

---

### Format: Linear (JSON)

Output file: `export_{slug}_linear.json`

```json
{
  "issues": [
    {
      "title": "US-001: {Story title}",
      "description": "**As a** {role}, **I want to** {action}, **so that** {benefit}.\n\n### Acceptance Criteria\n\n{AC scenarios in markdown}",
      "priority": {0 = No priority | 1 = Urgent | 2 = High | 3 = Medium | 4 = Low},
      "estimate": {story_points_or_null},
      "labelNames": ["{epic-id}", "user-story"],
      "stateName": "Backlog"
    }
  ]
}
```

**Priority mapping:**
- Must → `2` (High)
- Should → `3` (Medium)
- Could → `4` (Low)
- Won't → `0` (No priority)

**Instructions to include after the file:**
```
Import via Linear's CSV import or use the Linear API:
POST https://api.linear.app/graphql with the issueCreate mutation per issue.
Linear does not have a native bulk JSON import — use the Linear SDK or Zapier.
```

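A sketch of wrapping one exported issue in the `issueCreate` mutation mentioned above. Illustrative only, not package output; `teamId` is required by Linear's API but is not part of the export file, so it is shown here as a user-supplied assumption:

```python
# Illustrative (not part of ba-toolkit): build a GraphQL payload for
# POST https://api.linear.app/graphql, one issueCreate call per issue.
import json

MUTATION = """
mutation IssueCreate($input: IssueCreateInput!) {
  issueCreate(input: $input) { success issue { identifier } }
}
"""

def issue_create_payload(issue, team_id):
    """Wrap one issue object from the export file in a mutation payload."""
    return {
        "query": MUTATION,
        "variables": {
            "input": {
                "teamId": team_id,          # must come from the user's workspace
                "title": issue["title"],
                "description": issue["description"],
                "priority": issue["priority"],   # 0-4, per the mapping above
                "estimate": issue.get("estimate"),
            }
        },
    }

payload = issue_create_payload(
    {"title": "US-001: {Story title}", "description": "As a ...", "priority": 2},
    team_id="TEAM-uuid",  # placeholder
)
body = json.dumps(payload)  # ready to send with any HTTP client
```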
---

### Format: CSV

Output file: `export_{slug}_stories.csv`

```
ID,Title,Epic,Role,Action,Benefit,Priority,Estimate,FR Reference,AC Summary
US-001,"{Story title}",E-01,"{role}","{action}","{benefit}",Must,3 SP,FR-001,"{first AC scenario summary}"
```

Compatible with Jira CSV import, Trello, Asana, Monday.com, and Google Sheets.

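The quoting shown in the sample row matters because titles and summaries routinely contain commas. As an aside (not part of the skill), Python's `csv` module applies it automatically, so a row survives a round trip; the sample data here is invented:

```python
# Illustrative: csv handles the quoting shown in the sample row above,
# so commas inside field values survive a write/read round trip.
import csv
import io

HEADER = ["ID", "Title", "Epic", "Role", "Action", "Benefit",
          "Priority", "Estimate", "FR Reference", "AC Summary"]

rows = [["US-001", "Search, filter and sort the catalog", "E-01", "shopper",
         "filter results", "find items faster", "Must", "3 SP", "FR-001",
         "Given a catalog When filters are applied Then results narrow"]]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(HEADER)
writer.writerows(rows)

# Read it back: the quoted comma-bearing title comes through intact.
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
```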
---

## Output

Save the export file to the output directory alongside the artifacts:

```
output/{slug}/export_{slug}_{format}.json
output/{slug}/export_{slug}_stories.csv (CSV format)
```

## Closing message

After saving, present the following summary (see `references/closing-message.md` for format):

- Saved file path.
- Format exported.
- Number of stories exported (and any skipped — e.g., "Won't" priority excluded by default).
- Whether AC was included.
- Copy-paste import instructions specific to the chosen format.

Available commands: `/export [format]` (export in another format) · `/estimate` · `/handoff`

## Style

Generate only valid JSON or CSV. Do not include comments inside JSON files. Use double quotes for all JSON strings. Escape newlines in description fields as `\n`. Generate output in English regardless of the artifact language (issue trackers expect English field values unless explicitly told otherwise).
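A side note on the newline rule: any standard JSON serializer already produces exactly this escaping. A minimal illustration (the sample body string is invented):

```python
# Illustrative: json.dumps escapes embedded newlines as \n and uses double
# quotes, which satisfies the style rules above without manual escaping.
import json

body = "## User Story\n\nAs a **shopper**, I want to save items."
encoded = json.dumps({"body": body})
```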
@@ -0,0 +1,145 @@
---
name: ba-glossary
description: >
  Unified project glossary extraction and maintenance for BA Toolkit projects. Use on /glossary command, or when the user asks to "build a glossary", "extract terms", "create a glossary", "consolidate terminology", "find terminology drift", "what terms are defined". Cross-cutting command — can run at any pipeline stage once at least one artifact exists.
---

# /glossary — Unified Project Glossary

Cross-cutting command. Scans all existing artifacts and the domain reference file, extracts defined and used terms, detects terminology drift (same concept, different names), and produces or updates a single `00_glossary_{slug}.md` file.

## Syntax

```
/glossary [optional: action]
```

Examples:
- `/glossary` — build or refresh the full project glossary
- `/glossary drift` — only show terminology drift findings, do not regenerate
- `/glossary add [Term]: [Definition]` — manually add a term to the glossary

## Context loading

0. If `00_principles_*.md` exists, load it — apply the language convention (section 1) and the ID naming convention (section 2).
1. Scan the output directory for all existing artifacts. Load each one found:
   - `01_brief_{slug}.md` — Brief Glossary section
   - `02_srs_{slug}.md` — Definitions and Abbreviations section, User Roles
   - `03_stories_{slug}.md` — actor names used in "As a..." statements
   - `04_usecases_{slug}.md` — actor names, system names
   - `05_ac_{slug}.md` — state names, condition terms
   - `06_nfr_{slug}.md` — category names, compliance standard names
   - `07_datadict_{slug}.md` — entity names, field names, enum values
   - `07a_research_{slug}.md` — technology names, ADR decisions
   - `08_apicontract_{slug}.md` — error codes, resource names
   - `09_wireframes_{slug}.md` — screen names, UI component names
   - `10_scenarios_{slug}.md` — persona names, scenario types
   - `11_handoff_{slug}.md` — any additional terms
2. Load `skills/references/domains/{domain}.md` — Domain Glossary section.
3. If `00_glossary_{slug}.md` already exists, load it to merge rather than replace.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory.

## Analysis pass

### Step 1 — Term extraction

Extract terms from:
- Explicit glossary sections in artifacts (Brief § Glossary, SRS § Definitions).
- Entity names from the Data Dictionary.
- Actor / role names used across all artifacts.
- The Domain Glossary from the domain reference file.
- Enum values that represent domain states.

For each term, record:
- The term itself (canonical form).
- Its definition.
- Source artifact(s).
- All variant names found across artifacts (e.g., "User", "Customer", "Account").

### Step 2 — Terminology drift detection

Identify cases where the same concept appears under different names in different artifacts:

```
⚠️ Drift: "Customer" (Brief), "User" (SRS), "Account" (Data Dictionary) — all refer to the registered buyer entity.
→ Recommend canonical name: "Customer"
```

Flag drift as:
- 🔴 **Critical** — a core entity or actor name differs across more than 2 artifacts (impairs traceability).
- 🟡 **Medium** — a synonym used in 1 artifact (easy to harmonise).
- 🟢 **Low** — informal variation in a description field only.

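The severity rules above reduce to a small decision function. A hedged Python sketch, not part of the skill; the usage map shape and names are invented:

```python
# Illustrative: classify drift for one concept from a map of
# {artifact: name_used_for_the_concept}, per the severity rules above.

def classify_drift(variants_by_artifact):
    """Return 'critical', 'medium', or None (no drift)."""
    names = set(variants_by_artifact.values())
    if len(names) == 1:
        return None                  # consistent naming, no drift
    if len(variants_by_artifact) > 2:
        return "critical"            # differs across more than 2 artifacts
    return "medium"                  # one synonym to harmonise
```

For the drift example shown earlier, `classify_drift({"brief": "Customer", "srs": "User", "datadict": "Account"})` would come out critical.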
### Step 3 — Undefined term detection

Identify terms used in requirements but not defined anywhere in the glossary or domain reference:

```
⚠️ Undefined: "Vesting schedule" used in FR-007 — not defined in glossary or domain reference.
```

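Once Step 1 has produced the set of used terms, this check is a set difference. An illustrative sketch, not part of the skill; the sample terms are invented:

```python
# Illustrative: terms used in artifacts (Step 1 output) with no glossary or
# domain-reference entry, compared case-insensitively.

def undefined_terms(used_terms, glossary_terms):
    """Return sorted terms that are used but defined nowhere."""
    defined = {t.lower() for t in glossary_terms}
    return sorted(t for t in used_terms if t.lower() not in defined)

missing = undefined_terms(
    used_terms={"Vesting schedule", "Customer", "Refund"},
    glossary_terms={"Customer", "Refund", "Order"},
)
```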
## Generation

Save `00_glossary_{slug}.md` to the output directory.

```markdown
# Project Glossary: {PROJECT_NAME}

**Domain:** {DOMAIN}
**Date:** {DATE}
**Slug:** {SLUG}
**Sources:** {list of artifacts scanned}

---

## Terms

| Term | Definition | Source | Variants |
|------|------------|--------|----------|
| [Term] | [Definition] | [artifact file] | [synonym1, synonym2 or —] |

---

## Terminology Drift Report

| Severity | Concept | Variants found | Canonical recommendation |
|----------|---------|----------------|--------------------------|
| 🔴 Critical | [concept] | [list] | [recommended term] |
| 🟡 Medium | [concept] | [list] | [recommended term] |

---

## Undefined Terms

| Term | Used in | Recommended action |
|------|---------|--------------------|
| [term] | [FR-NNN / US-NNN / etc.] | Define in glossary or remove |
```

Glossary terms are sorted alphabetically. Domain glossary terms are included but labelled with their source (`domain reference`).

### Merge behaviour

If `00_glossary_{slug}.md` already exists:
- Preserve manually added definitions (do not overwrite if the existing definition is more detailed than the extracted one).
- Add new terms found since the last run.
- Update the Sources and Date fields.
- Re-run drift detection across all artifacts.

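The merge rule above, keeping the more detailed definition, can be sketched with definition length as a crude proxy for detail. Illustrative only, not part of the skill; the sample entries are invented:

```python
# Illustrative: merge newly extracted terms into an existing glossary,
# keeping whichever definition is more detailed (longer), per the rules above.

def merge_glossary(existing, extracted):
    merged = dict(existing)
    for term, definition in extracted.items():
        if term not in merged or len(definition) > len(merged[term]):
            merged[term] = definition
        # otherwise keep the existing (possibly hand-written) definition
    return merged

merged = merge_glossary(
    existing={"Customer": "A registered buyer with an active account."},
    extracted={"Customer": "A buyer.", "Order": "A confirmed purchase."},
)
```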
## Closing message

After saving, present the following summary (see `references/closing-message.md` for format):

- Saved file: `00_glossary_{slug}.md`
- Total terms in glossary.
- Drift findings: N critical, N medium, N low.
- Undefined terms found: N.
- If critical drift found: recommend `/clarify` on affected artifacts to harmonise terminology, or offer to apply the canonical names automatically.

Available commands: `/glossary drift` (drift report only) · `/glossary add [Term]: [Def]` · `/clarify [artifact]` · `/analyze`

## Style

Neutral, precise. Term definitions should be one sentence, domain-specific, and consistent with the artifact language set in `00_principles_{slug}.md`. Do not invent definitions — only use what is stated or clearly implied in the artifacts.
@@ -0,0 +1,146 @@
---
name: ba-handoff
description: >
  Generate a development handoff package summarising the entire BA Toolkit pipeline: artifact inventory, MVP scope, open items, top risks, and recommended next steps. Use on /handoff command, or when the user asks to "prepare handoff", "create handoff document", "summarise the pipeline", "package for developers", "ready for development", "export to Jira", "what is left to do", "pipeline summary". Optional final step — available after /wireframes.
---

# /handoff — Development Handoff Package

Optional final step of the BA Toolkit pipeline. Reads all existing artifacts and generates a single handoff document for the development team. No interview — all information is extracted from the pipeline artifacts.

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it and apply its conventions.
1. Read all pipeline artifacts from the output directory.
2. Minimum required: `01_brief_*.md` and `02_srs_*.md`. Warn about any missing artifacts and note them as incomplete in the handoff.
3. If `00_trace_*.md` exists, use it as the source of coverage statistics. If not, compute coverage from the available artifacts.
4. If `00_analyze_*.md` exists, import its open findings into the handoff.

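The artifact scan in step 1 amounts to checking the output directory against the known file-name patterns. A hedged Python sketch, not part of the package; the directory layout and slug are a made-up demo:

```python
# Illustrative: scan an output directory for pipeline artifacts and derive
# the "{n}/{total} steps completed" figure used in the handoff header.
import tempfile
from pathlib import Path

PIPELINE = ["01_brief", "02_srs", "03_stories", "04_usecases", "05_ac",
            "06_nfr", "07a_research", "07_datadict", "08_apicontract",
            "09_wireframes", "10_scenarios"]

def inventory(output_dir, slug):
    """Return the set of pipeline steps whose artifact file exists."""
    found = {step for step in PIPELINE
             if (Path(output_dir) / f"{step}_{slug}.md").exists()}
    return found, f"{len(found)}/{len(PIPELINE)} steps completed"

# Demo against a throwaway directory with two artifacts present.
demo = Path(tempfile.mkdtemp())
(demo / "01_brief_shop.md").write_text("# Brief")
(demo / "02_srs_shop.md").write_text("# SRS")
found, completion = inventory(demo, "shop")
```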
## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory. If unavailable, apply the default rule.

## Generation

No interview. All content is derived from the existing artifacts.

**File:** `11_handoff_{slug}.md`

````markdown
# Development Handoff: {Project Name}

**Date:** {date}
**Domain:** {domain}
**Pipeline completion:** {n}/{total} steps completed

---

## 1. Artifact Inventory

| Artifact | File | Status | Key numbers |
|----------|------|--------|-------------|
| Project Brief | `01_brief_{slug}.md` | ✓ Complete | {n} goals, {n} risks |
| SRS | `02_srs_{slug}.md` | ✓ Complete | {n} FR ({must}/{should}/{could}/{wont}) |
| User Stories | `03_stories_{slug}.md` | ✓ Complete | {n} stories across {n} epics |
| Use Cases | `04_usecases_{slug}.md` | ✓ / ✗ Missing | {n} UC |
| Acceptance Criteria | `05_ac_{slug}.md` | ✓ / ✗ Missing | {n} AC |
| NFR | `06_nfr_{slug}.md` | ✓ / ✗ Missing | {n} NFR across {n} categories |
| Research | `07a_research_{slug}.md` | ✓ / ✗ Missing / — Not run | {tech decisions} |
| Data Dictionary | `07_datadict_{slug}.md` | ✓ / ✗ Missing | {n} entities, {n} attributes |
| API Contract | `08_apicontract_{slug}.md` | ✓ / ✗ Missing | {n} endpoints |
| Wireframes | `09_wireframes_{slug}.md` | ✓ / ✗ Missing | {n} screens |
| Scenarios | `10_scenarios_{slug}.md` | ✓ / ✗ Missing / — Not run | {n} scenarios |

---

## 2. MVP Scope

Must-priority items confirmed for the first release:

### Functional Requirements (Must)
{List of Must FR from SRS with a one-line description each}

### User Stories (Must)
{List of Must US with Epic grouping}

---

## 3. Traceability Coverage

| Chain | Coverage |
|-------|----------|
| FR → US | {n}% ({uncovered} uncovered) |
| US → UC | {n}% |
| US → AC | {n}% |
| FR → NFR | {n}% |
| Entity → FR/US | {n}% |
| Endpoint → FR/US | {n}% |
| WF → US | {n}% |

{If coverage is below 100% for any CRITICAL chain, list uncovered items explicitly.}

---

## 4. Open Items

{If 00_analyze_{slug}.md or open findings from /validate exist:}

| ID | Severity | Location | Summary |
|----|----------|----------|---------|
| {A1} | CRITICAL | {location} | {summary} |

{If no open items:} No open CRITICAL or HIGH findings. Pipeline is ready for handoff.

---

## 5. Top Risks

{Consolidated from 01_brief risks + any gaps identified during the pipeline:}

| # | Risk | Impact | Source |
|---|------|--------|--------|
| 1 | {risk description} | {High/Medium/Low} | Brief / Analysis |

---

## 6. Recommended Next Steps

1. **Resolve open items** — address any CRITICAL findings listed in section 4 before development begins.
2. **Development environment** — use `07a_research_{slug}.md` (if present) for tech stack decisions and `07_datadict_{slug}.md` for schema initialisation.
3. **Task breakdown** — import Must-priority US from section 2 into your backlog tool (Jira, Linear, GitHub Issues).
4. **Spec-driven implementation** — consider using [Spec Kit](https://github.com/github/spec-kit) with `/speckit.specify` to generate implementation tasks from this handoff.
5. **Validation** — use `10_scenarios_{slug}.md` (if present) for end-to-end acceptance testing scenarios.

---

## 7. Artifact Files Reference

All files are located in: `{output_directory}`

```
{file tree of all generated artifacts}
```
````

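Each row of the Traceability Coverage table reduces to one set comparison over parsed ID links. An illustrative Python sketch, not part of the package; the ID sets are invented examples:

```python
# Illustrative: compute one coverage row (e.g. FR → US) for section 3
# of the handoff from the set of IDs that should be linked and the set
# that actually is.

def coverage(required_ids, covered_ids):
    """Return (percentage, sorted list of uncovered IDs)."""
    uncovered = sorted(set(required_ids) - set(covered_ids))
    pct = round(100 * (len(required_ids) - len(uncovered)) / len(required_ids))
    return pct, uncovered

pct, uncovered = coverage(
    required_ids={"FR-001", "FR-002", "FR-003", "FR-004"},
    covered_ids={"FR-001", "FR-002", "FR-004"},
)
row = f"| FR → US | {pct}% ({len(uncovered)} uncovered) |"
```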
## Iterative refinement

- `/revise [section]` — update a section.
- `/analyze` — run quality analysis before finalising.
- `/trace` — rebuild traceability matrix before finalising.

## Closing message

After saving the artifact, present the following summary (see `references/closing-message.md` for format):

- Saved file path.
- Pipeline completion percentage.
- Count of open CRITICAL/HIGH items.
- Whether the package is ready for handoff or has blockers.

Available commands: `/revise [section]` · `/analyze` · `/trace`

Pipeline complete. This document is the development handoff package.

## Style

Formal, neutral. No emoji in the saved file. Generate in the artifact language. English for IDs, file names, table column headers, and code.
@@ -0,0 +1,85 @@
---
name: ba-nfr
description: >
  Generate Non-functional Requirements (NFR): performance, security, availability, scalability, compliance, localization. Use on /nfr command, or when the user asks for "non-functional requirements", "NFR", "performance requirements", "security requirements", "SLA", "compliance requirements", "load requirements", "uptime requirements", "regulatory requirements", "GDPR". Sixth step of the BA Toolkit pipeline.
---

# /nfr — Non-functional Requirements

Sixth step of the BA Toolkit pipeline. Generates NFR with measurable metrics.

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it and apply its conventions (artifact language, ID format, traceability requirements, Definition of Ready, quality gate threshold). Pay special attention to section 5 (NFR Baseline) — all listed categories are mandatory for this project.
1. Read `01_brief_*.md`, `02_srs_*.md`, `03_stories_*.md`. The SRS is the minimum requirement.
2. Extract: slug, domain, integrations, roles, FR list.
3. If the domain is supported, load `references/domains/{domain}.md`, section `6. /nfr`. Use the mandatory NFR categories for the domain.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory for the current platform. If the file is unavailable, apply the default rule: if `/mnt/user-data/outputs/` exists and is writable, save there (Claude.ai); otherwise save to the current working directory.

## Interview

3–7 questions per round, 2–4 rounds.

**Required topics:**
1. Performance — target CCU (Concurrent Users), RPS (Requests Per Second), acceptable response time?
2. Availability — required SLA? Acceptable downtime? Maintenance windows?
3. Security — encryption, authentication, access audit?
4. Compliance — applicable standards and laws? Data retention?
5. Scalability — expected growth, horizontal scaling?
6. Compatibility — browsers, OS, devices?

Supplement with domain-specific questions and the mandatory categories from the reference.

## Generation

**File:** `06_nfr_{slug}.md`

```markdown
# Non-functional Requirements: {Name}

## NFR-{NNN}: {Category} — {Short Description}
- **Category:** {performance | security | availability | scalability | compatibility | localization | compliance | audit | ...}
- **Description:** {detailed description}
- **Metric:** {measurable criterion}
- **Verification Method:** {how it will be tested}
- **Priority:** {Must | Should | Could | Won't}
- **Linked FR/US:** {references}
```

**Rules:**
- Numbering: NFR-001, NFR-002, ...
- Every NFR must have a measurable metric. Avoid "the system should be fast."
- Group by category.
- Include the domain-specific mandatory categories from the reference.

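The "measurable metric" rule lends itself to a crude lint: a Metric field with no numeral is usually unmeasurable. An illustrative sketch, not part of the skill; the sample NFRs are invented:

```python
# Illustrative: flag NFR entries whose Metric field contains no number,
# which usually signals an unmeasurable requirement like "should be fast".
import re

def unmeasurable(nfrs):
    """nfrs: {id: metric_text}. Return IDs whose metric has no numeral."""
    return sorted(nid for nid, metric in nfrs.items()
                  if not re.search(r"\d", metric))

flagged = unmeasurable({
    "NFR-001": "p95 API response time under 300 ms at 1,000 CCU",
    "NFR-002": "The system should be fast",
})
```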
## Back-reference update

After generation, update section 5 of `02_srs_{slug}.md` with links to specific NFR-{NNN}.

## Iterative refinement

- `/revise [NFR-NNN]` — rewrite.
- `/expand [category]` — add NFR.
- `/clarify [focus]` — targeted ambiguity pass (especially useful for surfacing NFR without measurable metrics).
- `/validate` — mandatory categories covered; every NFR has a metric; links correct.
- `/done` — finalize. Next step: `/datadict`.

## Closing message

After saving the artifact, present the following summary to the user (see `references/closing-message.md` for format):

- Saved file path.
- Total number of NFR generated, grouped by category.
- Confirmation that section 5 of `02_srs_{slug}.md` was updated with NFR links.
- Any categories flagged as missing or lacking measurable metrics.

Available commands: `/clarify [focus]` · `/revise [NFR-NNN]` · `/expand [category]` · `/validate` · `/done`

Next step: `/datadict`

## Style

Formal, neutral. No emoji or slang. Terms explained on first use. Generate the artifact in the language of the user's request.