@kudusov.takhir/ba-toolkit 3.0.0 → 3.1.1

package/CHANGELOG.md CHANGED
@@ -11,6 +11,49 @@ Versions follow [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
  ---
 
+ ## [3.1.1] — 2026-04-09
+
+ ### Changed
+
+ - **Interview options table is now capped at 5 rows total** (`skills/references/interview-protocol.md` rule 3). Previously rule 3 allowed "3–5 variants per question" with the free-text "Other" row on top, so a single question could surface up to 6 options and overwhelm the user. The new cap is **up to 4 predefined variants + exactly 1 free-text "Other" row = 5 rows max**, no exceptions. Fewer than 4 predefined rows is still fine when the topic only has 2–3 sensible options. The one-line protocol summary in all 12 interview-phase skills (`brief`, `srs`, `stories`, `usecases`, `ac`, `nfr`, `datadict`, `apicontract`, `wireframes`, `scenarios`, `research`, `principles`) was updated to match. New regression test in `test/cli.test.js` walks every shipped SKILL.md with an Interview heading and fails if any of them carry the legacy `3–5 domain-appropriate options` wording or omit the new `5 rows max` reminder.
+ - **Exactly one variant per question is now marked `**Recommended**`** (`skills/references/interview-protocol.md` new rule 10). The AI picks the row using, in priority order, (a) the loaded `references/domains/{domain}.md` for the current skill, (b) the user's prior interview answers, (c) the inline context from rule 9, (d) widely-accepted industry default. The free-text "Other" row is never recommended. If none of (a)–(d) gives a defensible choice the AI omits the marker entirely rather than guessing — a missing recommendation is better than a misleading one. Rendered as `**Recommended**` appended to the end of the `Variant` cell so it stays visible even when the table wraps. Translated together with the variant text per rule 11 (e.g. `**Рекомендуется**`, `**Recomendado**`). All 12 interview-phase SKILL.md summaries point at rule 10. New regression test in `test/cli.test.js` enforces that every Interview-section SKILL.md mentions the marker.
+ - **Variant text and the `Variant` column header are now rendered in the user's language** (`skills/references/interview-protocol.md` new rule 11), matching the rule the generated artifacts already follow (`skills/brief/SKILL.md:107`). The `ID` column header and the letter IDs (`a`, `b`, …) stay ASCII. Domain reference files in `skills/references/domains/` remain English-only by design (per the project's English-only convention) — the AI translates the variants on the fly when rendering the table for a non-English-speaking user, instead of pasting the English source verbatim or asking the user which language to use. Updated example block in the protocol now shows both an English question with `**Recommended**` and a Russian rendering of the same question to make the rule concrete.
+ - **Replaced `example/dragon-fortune/` with `example/lumen-goods/`**, a sustainable home-goods D2C e-commerce walkthrough. The new example is more universally relatable than the iGaming-themed predecessor: 15 cross-referenced artifacts (Brief, Principles, SRS, Stories, Use Cases, AC, NFRs, Data Dictionary, Tech Research, API Contract, Wireframes, Scenarios, Risk Register, Sprint Plan, Handoff) for a fictional D2C online store selling lighting, kitchenware, and textiles to eco-conscious EU/UK buyers. Stack covers Next.js storefront, Stripe (cards / Apple Pay / Klarna / SEPA), Stripe Tax for destination-based VAT, hybrid stock sync between an NL warehouse and a UK 3PL, GDPR/DSAR/cookie consent flows, and a paid Lumen Circle loyalty tier. CLAUDE.md "Do NOT touch" entry, the placeholder warning, the repo-layout block, and the README example table all updated to reference `lumen-goods`. The old `example/dragon-fortune/` folder has been removed.
+
+ ---
+
+ ## [3.1.0] — 2026-04-09
+
+ ### Highlights
+
+ - **Multi-project in one repo.** `ba-toolkit init` now writes `AGENTS.md` inside `output/<slug>/`, scoped to that project. Two agent windows in the same repo can `cd output/alpha && claude` and `cd output/beta && claude` independently — no AGENTS.md collision, no shared state.
+ - **Interview options as a 2-column markdown table with letter IDs** (`a`, `b`, `c`, …) instead of a numbered list. Same change cascades through every interview-phase skill via the protocol link.
+ - **Inline context after slash commands**: `/brief I want to build an online store for construction materials...` is parsed as a lead-in answer; the skill skips redundant questions and jumps straight to what's missing. Works for all 12 interview-phase skills.
+ - **Open-ended lead-in question** for `/brief` and `/principles` when there's no inline context — `Tell me about the project in your own words` — instead of dumping a structured table on the first turn.
+ - **Detailed next-step closing block** driven by a 13-row pipeline lookup table in `closing-message.md`, replacing per-skill hardcoded `Next step: /xxx` lines. Locked in by two regression tests.
+
+ ### Changed
+
+ - **Closing message of every skill is now driven by a single source of truth** — `skills/references/closing-message.md` got a full rewrite that includes (a) a richer closing-block format with an `Available commands` table explaining when to use each subcommand, (b) a 4-line `Next step:` block detailing what the next skill produces, the output filename, the time estimate, and what comes after that, and (c) a 13-row pipeline next-step lookup table that every skill reads from instead of hardcoding `Next step: /xxx` in its own SKILL.md. 10 pipeline-phase SKILL.md files (`brief`, `srs`, `stories`, `usecases`, `ac`, `nfr`, `datadict`, `research`, `apicontract`, `principles`) had their hardcoded `Next step:` lines removed and replaced with an instruction to copy the row from the lookup table. Cross-cutting skills (`/clarify`, `/analyze`, `/trace`, `/estimate`, `/glossary`, `/export`, `/risk`, `/sprint`) keep their per-skill `Available commands` lines but skip the next-step block entirely (as documented in the template). New "If you're stuck" nudge in the closing format suggests `/clarify` and `/validate` for users who don't know what to do next.
+ - **Two new regression tests in `test/cli.test.js`**: `closing message: every SKILL.md references the closing-message.md template` (mirror of the existing interview-protocol test, walks every shipped SKILL.md and asserts it points at the template) and `closing message: no SKILL.md hardcodes a next-step line` (greps every shipped SKILL.md for a `Next step: /xxx` pattern and fails if any skill ships its own roll-your-own next-step block, which would silently bypass the lookup table). Locks in the rule for future contributors and future skills.
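The second guard can be sketched as a pure function over SKILL.md contents. `findHardcodedNextSteps` is a hypothetical name; the real test greps shipped files on disk:

```javascript
// Sketch of the "no SKILL.md hardcodes a next-step line" guard: given a
// map of skill name → SKILL.md contents, return the skills that still
// ship their own "Next step: /xxx" line and thereby bypass the
// closing-message.md lookup table. Illustrative only.
function findHardcodedNextSteps(skillTexts) {
  const pattern = /Next step:\s*\/[a-z]+/; // e.g. "Next step: /stories"
  return Object.keys(skillTexts).filter((name) => pattern.test(skillTexts[name]));
}
```

An empty result means every skill defers to the lookup table.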
+
+ ### Added
+
+ - **Multi-project support: each `ba-toolkit init` writes `AGENTS.md` inside `output/<slug>/`**, scoped to that project, instead of a single repo-root `AGENTS.md` shared by all projects in the repo. The user opens their AI agent inside the project folder (`cd output/alpha-shop && claude`) — cwd becomes the project root, every skill that looks for `AGENTS.md` or for prior artifacts via `01_brief_*.md` glob sees only that project's files. Two agent windows in the same repo, each `cd`-ed into a different `output/<slug>/`, work on two completely independent projects with zero cross-talk and no shared "active project" pointer to get out of sync. The merge-on-reinit behaviour from v3.0 (managed-block anchors) still applies, just at per-project scope. Closing message of `ba-toolkit init` now points the user at the new `cd output/<slug>` step. Pre-existing repo-root `AGENTS.md` files are never touched by v3.1+ init — covered by an integration regression test. The `skills/references/environment.md` reference file documents both the v3.1+ per-project layout (default) and the legacy v3.0 single-project fallback for backward compat.
+ - **`bin/ba-toolkit.js` `cmdInit`** now writes `AGENTS.md` to `output/<slug>/AGENTS.md` and updates the closing "Next steps" message to instruct the user to `cd` into the project folder before opening their AI agent. The `mergeAgentsMd` helper is unchanged — the path move alone gives per-project isolation.
+ - **`skills/brief/SKILL.md` and `skills/srs/SKILL.md`** updated their AGENTS.md handling instructions: skills now find AGENTS.md by checking cwd first, falling back to walking up the tree for legacy v3.0 single-project layouts, and only update the `## Pipeline Status` row for their stage — never recreate the file or add legacy `## Artifacts` / `## Key context` sections that aren't part of the v3.1+ template.
+ - **Two new integration tests** in `test/cli.integration.test.js`: a `multi-project` test that runs two consecutive `ba-toolkit init` runs with different slugs in the same cwd and asserts that both `output/<slug>/AGENTS.md` files exist independently with the correct project metadata, plus an `init does not touch a pre-existing repo-root AGENTS.md` test that asserts legacy v3.0 root files are preserved byte-for-byte.
+ - **Interview skills now accept inline context after the slash command** (interview-protocol rule 9). `/brief I want to build an online store for construction materials targeting B2B buyers in LATAM` is parsed as the lead-in answer; the skill acknowledges it once, skips the open-ended lead-in question, and jumps straight to the first structured-table question that the inline text didn't already cover. Works for every interview-phase skill, not just `/brief` — `/srs focus on the payments module first`, `/stories plan the onboarding epic`, `/nfr emphasise security and compliance`, etc. Each scope hint narrows what the skill asks about. All 12 interview SKILL.md files updated with a one-line pointer to rule 9 in their Interview section; the regression test that walks every shipped SKILL.md still passes (it just checks the protocol link, which all 12 already had).
+ - **Open-ended lead-in question for entry-point skills** (interview-protocol rule 8). When `/brief` (or `/principles` for principles-first projects) is invoked with no inline context and no prior brief artifact exists, the very first interview question is now an open-ended free-text prompt: `Tell me about the project in your own words: one or two sentences are enough. What are you building, who is it for, and what problem does it solve?`. The agent extracts whatever it can from the user's reply (product type, target audience, business goal, domain hints) and uses it to pre-fill subsequent structured questions. Subsequent questions follow the regular options-table protocol from rule 2. Non-entry-point skills (`/srs`, `/stories`, …) skip the lead-in entirely because prior artifacts already supply the project context.
+ - **Two new walkthrough examples in `docs/USAGE.md` §1** — one showing the plain `/brief` flow with the open-ended lead-in, one showing the `/brief <text>` inline-context flow. Both examples use the new option-table format with letter IDs.
+
+ ### Changed
+
+ - **Interview options are now presented as a 2-column markdown table with letter IDs** instead of a numbered list. Every interview-phase skill (12 SKILL.md files) inherits this automatically through the protocol — the change lives in `skills/references/interview-protocol.md` rule 2, no SKILL.md was touched. Each question now carries `| ID | Variant |` columns where ID is `a`, `b`, `c`, … (lowercase letters); the last row is always the free-text "Other" option. Tables render cleanly in Claude Code, Codex CLI, Gemini CLI, Cursor, and Windsurf, scan faster than a numbered list, and let users reply with the letter ID, the verbatim variant text, or — for the free-text row — any text of their own.
+ - **CLI domain/agent menus now use letter IDs** to match the interview-protocol convention. Arrow-key navigation, vim-bindings (`j/k`), Enter, and Esc/Ctrl+C are unchanged; the jump key is now `a-z` instead of `1-9`. The non-TTY numbered fallback (CI, piped input) accepts a letter ID as the primary input, with digit and verbatim id-string kept as backward-compat fallbacks so existing scripts and muscle memory still work. New regression test asserts both letter and digit input paths through the fallback. `menuStep`, `renderMenu`, and `runMenuTty` were updated; the keypress handler accepts `[a-z]` as primary and `[0-9]` as fallback.
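The fallback input order described above (letter first, digit second, verbatim id last) boils down to a validator of roughly this shape. `parseChoice` is a hypothetical standalone name mirroring the validator the `bin/ba-toolkit.js` diff further down passes to `promptUntilValid`:

```javascript
// Sketch of the non-TTY fallback parser: letter ID is primary,
// digit is the backward-compat path, a verbatim id string is last.
// Mirrors the selectMenu validator shown in the bin diff below.
function parseChoice(raw, items) {
  const trimmed = String(raw || '').toLowerCase().trim();
  if (!trimmed) return null;
  // Letter ID is the primary input (a → 0, b → 1, …).
  if (/^[a-z]$/.test(trimmed)) {
    const idx = trimmed.charCodeAt(0) - 'a'.charCodeAt(0);
    return idx < items.length ? items[idx] : null;
  }
  // Digit ID stays as a fallback (1 → 0, 2 → 1, …).
  if (/^\d+$/.test(trimmed)) {
    const n = parseInt(trimmed, 10);
    return n >= 1 && n <= items.length ? items[n - 1] : null;
  }
  // Verbatim id-string fallback (e.g. 'saas', 'claude-code').
  return items.find((it) => it.id === trimmed) || null;
}
```

Note the ordering: a one-character input is always read as a letter ID first, so digits and id strings only apply when the input is longer or numeric.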
+
+ ---
+
  ## [3.0.0] — 2026-04-09
 
  ### ⚠️ BREAKING — Cursor and Windsurf install paths moved to native Agent Skills
@@ -408,7 +451,9 @@ CI scripts that relied on the old behaviour (`init` creates files only, `install
 
  ---
 
- [Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.0.0...HEAD
+ [Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.1.1...HEAD
+ [3.1.1]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.1.0...v3.1.1
+ [3.1.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.0.0...v3.1.0
  [3.0.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v2.0.0...v3.0.0
  [2.0.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v1.5.0...v2.0.0
  [1.5.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v1.4.0...v1.5.0
package/COMMANDS.md CHANGED
@@ -11,7 +11,7 @@ Run these in order. Each skill reads the output of all previous steps.
  | # | Command | Output file | What it generates |
  |:---:|---------|-------------|-------------------|
  | 0 | `/principles` | `00_principles_{slug}.md` | Project constitution: language, ID conventions, DoR, traceability rules, NFR baseline |
- | 1 | `/brief` | `01_brief_{slug}.md` | Project Brief: goals, audience, stakeholders, constraints, risks. Creates `AGENTS.md` |
+ | 1 | `/brief` | `01_brief_{slug}.md` | Project Brief: goals, audience, stakeholders, constraints, risks. Updates the `AGENTS.md` Pipeline Status table (which `ba-toolkit init` already created) |
  | 2 | `/srs` | `02_srs_{slug}.md` | Requirements Specification (IEEE 830): scope, FRs, constraints, assumptions |
  | 3 | `/stories` | `03_stories_{slug}.md` | User Stories grouped by Epics, with priority and FR references |
  | 4 | `/usecases` | `04_usecases_{slug}.md` | Use Cases with main, alternative, and exception flows |
package/README.md CHANGED
@@ -48,6 +48,20 @@ ba-toolkit init
 
  Supported agents: `claude-code`, `codex`, `gemini`, `cursor`, `windsurf`. All five use the native Agent Skills format (folder-per-skill with `SKILL.md`) — Claude Code at `.claude/skills/`, Codex at `~/.codex/skills/`, Gemini at `.gemini/skills/`, Cursor at `.cursor/skills/`, Windsurf at `.windsurf/skills/`. Pass `--dry-run` to preview the install step without writing files, or `--no-install` to create only the project structure and install skills later with `ba-toolkit install --for <agent>`.
 
+ **New in v3.1** — multi-project + interview UX:
+
+ - **Multi-project: each `ba-toolkit init` creates `output/<slug>/AGENTS.md`**, scoped to that project. Two agent windows in the same repo can `cd output/alpha && claude` and `cd output/beta && claude` independently — no AGENTS.md collision, no shared state.
+ - **Interview options as a 2-column table with letter IDs** (`a`, `b`, `c`, …) instead of a numbered list. Renders cleanly across all 5 supported agents, easier to scan, last row is always free-text "Other".
+ - **Inline context after slash commands**: `/brief I want to build an online store for construction materials...` is parsed as a lead-in answer; the skill skips redundant questions and jumps straight to what's missing. Works for all 12 interview-phase skills.
+ - **Open-ended lead-in question** for `/brief` and `/principles` when there's no inline context — `Tell me about the project in your own words` — instead of dumping a structured table on the first turn.
+
+ **New in v3.0** — `ba-toolkit init` UX improvements:
+
+ - **Arrow-key menu navigation** for the domain and agent prompts in real terminals (`↑/↓` or `j/k` to move, `a-z` to jump, `Enter` to select, `Esc`/`Ctrl+C` to cancel). CI / piped input automatically falls back to a numbered prompt.
+ - **Re-prompt on invalid input** instead of crashing on the first typo. Three attempts before aborting, so a piped input can't infinite-loop.
+ - **`AGENTS.md` is merged on re-init**, not overwritten. Pipeline Status edits, Key Constraints, Open Questions, and any user notes outside the managed block are preserved byte-for-byte. See [docs/USAGE.md §8](docs/USAGE.md#8-agentsmd--persistent-project-context).
+ - **Native Cursor and Windsurf skills** at `.cursor/skills/` and `.windsurf/skills/` — finally registered as actual Agent Skills instead of `.mdc` rules. v2.x users: see the v3.0 migration recipe in [CHANGELOG.md](CHANGELOG.md).
+
  `ba-toolkit --help` shows the full CLI reference. Zero runtime dependencies — only Node.js ≥ 18.
 
  <details>
@@ -55,7 +69,7 @@ Supported agents: `claude-code`, `codex`, `gemini`, `cursor`, `windsurf`. All fi
 
  Use these if you can't use npm or want to track a specific git commit.
 
- **Important v2.0 layout:** the skills go directly under the agent's skills root, not nested under a `ba-toolkit/` wrapper folder. Versions before v2.0 used a wrapper, which made every skill invisible to the agent. If you're upgrading from v1.x, remove the legacy wrapper folder first.
+ **Current install layout (v3.0+):** all five supported agents use the native Agent Skills format — the BA Toolkit skills go directly under each agent's skills root (`.claude/skills/`, `~/.codex/skills/`, `.gemini/skills/`, `.cursor/skills/`, `.windsurf/skills/`), one folder per skill, each containing its own `SKILL.md`. No `.mdc` conversion, no `ba-toolkit/` wrapper. Versions before v2.0 nested everything under a `ba-toolkit/` wrapper folder, which made the skills invisible to the agent; remove that wrapper if you're upgrading from v1.x. Versions v2.0–v2.x installed Cursor and Windsurf as `.mdc` rules under `.cursor/rules/` and `.windsurf/rules/`, which were the wrong feature entirely (Cursor and Windsurf loaded them as Rules, never as Skills) — see the v3.0 entry in [CHANGELOG.md](CHANGELOG.md) for the migration steps if you're upgrading from v2.x.
 
  ### Claude Code CLI
 
@@ -138,25 +152,25 @@ Your generated artifacts (`01_brief_*.md`, `02_srs_*.md`, …) are untouched by
 
  ## Example output
 
- A complete example project — **Dragon Fortune** (iGaming Telegram Mini App) — lives in [`example/dragon-fortune/`](example/dragon-fortune/). All 15 artifacts are realistic, cross-referenced, and generated by running the full BA Toolkit pipeline.
+ A complete example project — **Lumen Goods** (sustainable home-goods D2C online store) — lives in [`example/lumen-goods/`](example/lumen-goods/). All 15 artifacts are realistic, cross-referenced, and generated by running the full BA Toolkit pipeline.
 
  | Artifact | File |
  |---------|------|
- | Project Principles | [`00_principles_dragon-fortune.md`](example/dragon-fortune/00_principles_dragon-fortune.md) |
- | Project Brief | [`01_brief_dragon-fortune.md`](example/dragon-fortune/01_brief_dragon-fortune.md) |
- | Requirements (SRS) | [`02_srs_dragon-fortune.md`](example/dragon-fortune/02_srs_dragon-fortune.md) |
- | User Stories | [`03_stories_dragon-fortune.md`](example/dragon-fortune/03_stories_dragon-fortune.md) |
- | Use Cases | [`04_usecases_dragon-fortune.md`](example/dragon-fortune/04_usecases_dragon-fortune.md) |
- | Acceptance Criteria | [`05_ac_dragon-fortune.md`](example/dragon-fortune/05_ac_dragon-fortune.md) |
- | Non-functional Requirements | [`06_nfr_dragon-fortune.md`](example/dragon-fortune/06_nfr_dragon-fortune.md) |
- | Data Dictionary | [`07_datadict_dragon-fortune.md`](example/dragon-fortune/07_datadict_dragon-fortune.md) |
- | Technology Research | [`07a_research_dragon-fortune.md`](example/dragon-fortune/07a_research_dragon-fortune.md) |
- | API Contract | [`08_apicontract_dragon-fortune.md`](example/dragon-fortune/08_apicontract_dragon-fortune.md) |
- | Wireframes | [`09_wireframes_dragon-fortune.md`](example/dragon-fortune/09_wireframes_dragon-fortune.md) |
- | Validation Scenarios | [`10_scenarios_dragon-fortune.md`](example/dragon-fortune/10_scenarios_dragon-fortune.md) |
- | Development Handoff | [`11_handoff_dragon-fortune.md`](example/dragon-fortune/11_handoff_dragon-fortune.md) |
- | Risk Register | [`00_risks_dragon-fortune.md`](example/dragon-fortune/00_risks_dragon-fortune.md) |
- | Sprint Plan | [`00_sprint_dragon-fortune.md`](example/dragon-fortune/00_sprint_dragon-fortune.md) |
+ | Project Principles | [`00_principles_lumen-goods.md`](example/lumen-goods/00_principles_lumen-goods.md) |
+ | Project Brief | [`01_brief_lumen-goods.md`](example/lumen-goods/01_brief_lumen-goods.md) |
+ | Requirements (SRS) | [`02_srs_lumen-goods.md`](example/lumen-goods/02_srs_lumen-goods.md) |
+ | User Stories | [`03_stories_lumen-goods.md`](example/lumen-goods/03_stories_lumen-goods.md) |
+ | Use Cases | [`04_usecases_lumen-goods.md`](example/lumen-goods/04_usecases_lumen-goods.md) |
+ | Acceptance Criteria | [`05_ac_lumen-goods.md`](example/lumen-goods/05_ac_lumen-goods.md) |
+ | Non-functional Requirements | [`06_nfr_lumen-goods.md`](example/lumen-goods/06_nfr_lumen-goods.md) |
+ | Data Dictionary | [`07_datadict_lumen-goods.md`](example/lumen-goods/07_datadict_lumen-goods.md) |
+ | Technology Research | [`07a_research_lumen-goods.md`](example/lumen-goods/07a_research_lumen-goods.md) |
+ | API Contract | [`08_apicontract_lumen-goods.md`](example/lumen-goods/08_apicontract_lumen-goods.md) |
+ | Wireframes | [`09_wireframes_lumen-goods.md`](example/lumen-goods/09_wireframes_lumen-goods.md) |
+ | Validation Scenarios | [`10_scenarios_lumen-goods.md`](example/lumen-goods/10_scenarios_lumen-goods.md) |
+ | Development Handoff | [`11_handoff_lumen-goods.md`](example/lumen-goods/11_handoff_lumen-goods.md) |
+ | Risk Register | [`00_risks_lumen-goods.md`](example/lumen-goods/00_risks_lumen-goods.md) |
+ | Sprint Plan | [`00_sprint_lumen-goods.md`](example/lumen-goods/00_sprint_lumen-goods.md) |
 
  Full traceability: FR → US → UC → AC → NFR → Entity → ADR → API → WF → Scenario, plus risk register and sprint plan.
 
@@ -188,7 +202,7 @@ Full traceability: FR → US → UC → AC → NFR → Entity → ADR → API
  | — | `/risk` | Risk register — probability × impact matrix, mitigation per risk | `00_risks_{slug}.md` |
  | — | `/sprint` | Sprint plan — stories grouped by velocity and capacity with sprint goals | `00_sprint_{slug}.md` |
 
- The project **slug** (e.g., `nova-analytics`) is set at `/brief` and reused across all files automatically.
+ The project **slug** (e.g., `nova-analytics`) is set at `ba-toolkit init` (derived from the project name) and reused across all files automatically — every skill reads it from `AGENTS.md`.
 
  ---
 
@@ -213,7 +227,7 @@ Skills do not hardcode platform paths — they reference `skills/references/envi
 
  ## Domain support
 
- The pipeline is domain-agnostic by default. At `/brief`, you pick a domain, and every subsequent skill loads domain-specific interview questions, mandatory entities, NFR categories, and glossary terms.
+ The pipeline is domain-agnostic by default. At `ba-toolkit init` you pick a domain, and every subsequent skill loads domain-specific interview questions, mandatory entities, NFR categories, and glossary terms from `skills/references/domains/{domain}.md`.
 
  | Domain | Industries covered |
  |--------|-------------------|
package/bin/ba-toolkit.js CHANGED
@@ -277,7 +277,17 @@ function menuStep(state, key) {
  return { ...state, done: true, choice: state.items[state.index] };
  case 'cancel':
  return { ...state, done: true, choice: null };
- default:
+ default: {
+ // Letter jump: 'a' → 0, 'b' → 1, …, 'z' → 25.
+ if (/^[a-z]$/.test(key)) {
+ const idx = key.charCodeAt(0) - 'a'.charCodeAt(0);
+ if (idx < len) {
+ return { ...state, index: idx };
+ }
+ return state;
+ }
+ // Digit jump kept as a backward-compat fallback so existing
+ // CI scripts and muscle memory keep working: '1' → 0, '9' → 8.
  if (/^[0-9]$/.test(key)) {
  const n = parseInt(key, 10);
  if (n >= 1 && n <= len) {
@@ -285,6 +295,7 @@ function menuStep(state, key) {
  }
  }
  return state;
+ }
  }
  }
 
@@ -298,13 +309,16 @@ function renderMenu(state, { title } = {}) {
  state.items.forEach((item, i) => {
  const selected = i === state.index;
  const marker = selected ? cyan('>') : ' ';
- const idx = String(i + 1).padStart(2);
+ // Letter ID `a`, `b`, `c`, … — matches the interview-protocol
+ // table format and works for menus up to 26 items (we currently
+ // ship 10 domains and 5 agents, so this is plenty).
+ const id = String.fromCharCode('a'.charCodeAt(0) + i);
  const label = selected ? bold(item.label.padEnd(labelWidth)) : item.label.padEnd(labelWidth);
  const desc = item.desc ? ' ' + gray('— ' + item.desc) : '';
- lines.push(` ${marker} ${idx}) ${label}${desc}`);
+ lines.push(` ${marker} ${id}) ${label}${desc}`);
  });
  lines.push('');
- lines.push(' ' + gray('↑/↓ navigate · Enter select · 1-9 jump · Esc cancel'));
+ lines.push(' ' + gray('↑/↓ navigate · Enter select · a-z jump · Esc cancel'));
  return lines.join('\n') + '\n';
  }
 
@@ -362,6 +376,12 @@ function runMenuTty(items, { title } = {}) {
  else if (key.name === 'up' || key.name === 'k') action = 'up';
  else if (key.name === 'down' || key.name === 'j') action = 'down';
  else if (key.name === 'return') action = 'enter';
+ // Letter jump (a-z) is the new primary; digit (0-9) stays as a
+ // backward-compat fallback for users who still jump by digit from
+ // muscle memory. menuStep parses both — see its switch statement.
+ // Note: 'j' and 'k' are intercepted above as down/up (vim-bindings),
+ // so they never reach the letter-jump path.
+ else if (key.sequence && /^[a-z]$/.test(key.sequence)) action = key.sequence;
  else if (key.sequence && /^[0-9]$/.test(key.sequence)) action = key.sequence;
  if (!action) return;
  state = menuStep(state, action);
@@ -389,15 +409,15 @@ async function selectMenu(items, { title, fallbackPrompt }) {
  if (isInteractiveTerminal()) {
  return await runMenuTty(items, { title });
  }
- // Non-TTY fallback: print the numbered list once, then prompt with
+ // Non-TTY fallback: print the lettered list once, then prompt with
  // promptUntilValid so a single typo doesn't kill the wizard.
  log('');
  if (title) log(' ' + yellow(title));
  const labelWidth = Math.max(...items.map((it) => it.label.length));
  items.forEach((item, i) => {
- const idx = String(i + 1).padStart(2);
+ const id = String.fromCharCode('a'.charCodeAt(0) + i);
  const desc = item.desc ? ' ' + gray('— ' + item.desc) : '';
- log(` ${idx}) ${bold(item.label.padEnd(labelWidth))}${desc}`);
+ log(` ${id}) ${bold(item.label.padEnd(labelWidth))}${desc}`);
  });
  log('');
  return await promptUntilValid(
@@ -405,13 +425,21 @@ async function selectMenu(items, { title, fallbackPrompt }) {
  (raw) => {
  const trimmed = String(raw || '').toLowerCase().trim();
  if (!trimmed) return null;
+ // Letter ID is the primary input (a → 0, b → 1, …).
+ if (/^[a-z]$/.test(trimmed)) {
+ const idx = trimmed.charCodeAt(0) - 'a'.charCodeAt(0);
+ return idx < items.length ? items[idx] : null;
+ }
+ // Digit ID stays as a fallback so legacy CI scripts and pasted
+ // numbers still work (1 → 0, 2 → 1, …).
  if (/^\d+$/.test(trimmed)) {
  const n = parseInt(trimmed, 10);
  return n >= 1 && n <= items.length ? items[n - 1] : null;
  }
+ // Verbatim id-string fallback (e.g., 'saas', 'claude-code').
  return items.find((it) => it.id === trimmed) || null;
  },
- { invalidMessage: `Invalid selection — pick a number between 1 and ${items.length} or an id.` },
+ { invalidMessage: `Invalid selection — pick a letter (a–${String.fromCharCode('a'.charCodeAt(0) + items.length - 1)}), a number, or an id.` },
  );
  }
 
@@ -881,7 +909,7 @@ async function cmdInit(args) {
  DOMAINS.map((d) => ({ id: d.id, label: d.name, desc: d.desc })),
  {
  title: 'Pick a domain:',
- fallbackPrompt: ` Select [1-${DOMAINS.length}]: `,
+ fallbackPrompt: ` Select [a-${String.fromCharCode('a'.charCodeAt(0) + DOMAINS.length - 1)}]: `,
  },
  );
  if (chosen == null) {
@@ -909,7 +937,7 @@ async function cmdInit(args) {
  agentEntries.map(([id, a]) => ({ id, label: a.name, desc: '(' + id + ')' })),
  {
  title: 'Pick your AI agent:',
- fallbackPrompt: ` Select [a-${String.fromCharCode('a'.charCodeAt(0) + agentEntries.length - 1)}]: `,
  },
  );
  if (chosen == null) {
@@ -932,11 +960,14 @@ async function cmdInit(args) {
  log(` exists ${outputDir}`);
  }
 
- // AGENTS.md: merge-on-reinit instead of overwrite. Everything outside
- // the managed block (Pipeline Status, Key Constraints, user notes) is
- // preserved. See mergeAgentsMd for the three branches (created,
- // merged, preserved).
- const agentsPath = 'AGENTS.md';
+ // AGENTS.md: per-project, lives inside output/<slug>/. Two agent
+ // windows can now work on two different projects in the same repo
+ // without colliding: each cd's into its own output/<slug>/ folder
+ // and finds its own AGENTS.md there. The merge-on-reinit behaviour
+ // (managed-block anchors) still applies, just at per-project scope.
+ // See mergeAgentsMd for the three branches (created, merged,
+ // preserved).
+ const agentsPath = path.join(outputDir, 'AGENTS.md');
  const existingAgents = fs.existsSync(agentsPath)
  ? fs.readFileSync(agentsPath, 'utf8')
  : null;
@@ -945,10 +976,10 @@ async function cmdInit(args) {
  { name, slug, domain },
  );
  if (agentsAction === 'preserved') {
- log(' ' + gray('preserved AGENTS.md (no ba-toolkit managed block — left untouched)'));
+ log(' ' + gray(`preserved ${agentsPath} (no ba-toolkit managed block — left untouched)`));
  } else {
  fs.writeFileSync(agentsPath, agentsContent);
- log(` ${agentsAction === 'merged' ? 'updated ' : 'created '} AGENTS.md`);
+ log(` ${agentsAction === 'merged' ? 'updated ' : 'created '} ${agentsPath}`);
  }
 
  // --- 6. Install skills for the selected agent ---
@@ -969,28 +1000,31 @@ async function cmdInit(args) {
 
  // --- 7. Final message ---
  log('');
- log(' ' + cyan(`Project '${name}' (${slug}) is ready.`));
+ log(' ' + cyan(`Project '${name}' (${slug}) is ready in ${outputDir}/.`));
  log('');
  log(' ' + yellow('Next steps:'));
  if (installed === true) {
  log(' 1. ' + AGENTS[agentId].restartHint);
- log(' 2. Optional: run /principles to define project-wide conventions');
- log(' 3. Run /brief to start the BA pipeline');
+ log(' 2. ' + bold(`cd ${outputDir}`) + ' — open your AI agent in this folder.');
+ log(' Each project has its own AGENTS.md, so two agent windows');
+ log(' can work on two different projects in the same repo.');
+ log(' 3. Optional: run /principles to define project-wide conventions');
+ log(' 4. Run /brief to start the BA pipeline');
  } else if (installed === false) {
  log(' 1. Skill install was cancelled. To install later, run:');
  log(' ' + gray(`ba-toolkit install --for ${agentId}`));
- log(' 2. Open your AI assistant (Claude, Cursor, etc.)');
+ log(' 2. ' + bold(`cd ${outputDir}`) + ' and open your AI agent there.');
  log(' 3. Optional: run /principles to define project-wide conventions');
  log(' 4. Run /brief to start the BA pipeline');
  } else {
  log(' 1. Install skills for your agent:');
  log(' ' + gray('ba-toolkit install --for claude-code'));
- log(' 2. Open your AI assistant (Claude, Cursor, etc.)');
+ log(' 2. ' + bold(`cd ${outputDir}`) + ' and open your AI agent there.');
  log(' 3. Optional: run /principles to define project-wide conventions');
  log(' 4. Run /brief to start the BA pipeline');
  }
  log('');
- log(' ' + gray(`Artifacts will be saved to: ${outputDir}/`));
+ log(' ' + gray(`Artifacts and AGENTS.md live in: ${outputDir}/`));
  log('');
  }
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@kudusov.takhir/ba-toolkit",
- "version": "3.0.0",
+ "version": "3.1.1",
  "description": "AI-powered Business Analyst pipeline — 21 skills from project brief to development handoff. Works with Claude Code, Codex CLI, Gemini CLI, Cursor, and Windsurf.",
  "keywords": [
  "business-analyst",
@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th

  ## Interview

- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
+ >
+ > **Inline context (protocol rule 9):** if the user wrote text after `/ac` (e.g., `/ac focus on US-007 and US-011`), use it as a story-id filter for which acceptance criteria to draft first.

  3–7 topics per round, 2–4 rounds.

@@ -81,9 +83,9 @@ After saving the artifact, present the following summary to the user (see `refer
  - Count of user stories covered.
  - Confirmation that back-references in `03_stories_{slug}.md` were updated.

- Available commands: `/clarify [focus]` · `/revise [AC-NNN-NN]` · `/expand [US-NNN]` · `/split [AC-NNN-NN]` · `/validate` · `/done`
+ Available commands for this artifact: `/clarify [focus]` · `/revise [AC-NNN-NN]` · `/expand [US-NNN]` · `/split [AC-NNN-NN]` · `/validate` · `/done`

- Next step: `/nfr`
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /ac`). Do not hardcode `/nfr` here.

  ## Style

@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th

  ## Interview

- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
+ >
+ > **Inline context (protocol rule 9):** if the user wrote text after `/apicontract` (e.g., `/apicontract REST with JWT auth, OpenAPI 3.1`), use it as a style and protocol hint for the API design.

  3–7 topics per round, 2–4 rounds.

@@ -106,9 +108,9 @@ After saving the artifact, present the following summary to the user (see `refer
  - Protocol and authentication method confirmed.
  - WebSocket events and Webhook contracts included (if applicable).

- Available commands: `/clarify [focus]` · `/revise [endpoint]` · `/expand [endpoint]` · `/validate` · `/done`
+ Available commands for this artifact: `/clarify [focus]` · `/revise [endpoint]` · `/expand [endpoint]` · `/validate` · `/done`

- Next step: `/wireframes`
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /apicontract`). Do not hardcode `/wireframes` here.

  ## Style

@@ -34,7 +34,11 @@ The domain is written into the brief metadata and passed to all subsequent pipel

  ### 4. Interview

- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit) plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
+ >
+ > **Inline context (protocol rule 9):** if the user wrote text after `/brief` (e.g., `/brief I want to build an online store for construction materials`), parse that as the lead-in answer, acknowledge it in one line, and skip directly to the first structured question that the inline text doesn't already cover.
+ >
+ > **Open-ended lead-in (protocol rule 8):** if there is NO inline text after `/brief`, your very first interview question is open-ended and free-text, not a table — `Tell me about the project in your own words: one or two sentences are enough. What are you building, who is it for, and what problem does it solve?`. After the user answers, switch to the structured table protocol for all subsequent questions and use the lead-in answer to pre-fill what you can.

  Cover 3–7 topics per round, 2–4 rounds. Do not generate the artifact until sufficient information is collected.

@@ -71,30 +75,11 @@ If a domain reference is loaded, supplement general questions with domain-specif

  ### 6. AGENTS.md update

- After saving `01_brief_{slug}.md`, create or update `AGENTS.md` in the current working directory (project root). This file helps AI agents in future sessions quickly understand the project context without re-reading all artifacts.
+ `ba-toolkit init` already created `AGENTS.md` next to where the artifact lives — typically in the current working directory (the user is expected to `cd output/<slug>` after running init). After saving `01_brief_{slug}.md`, find the project's `AGENTS.md` (look in cwd first; fall back to walking up the directory tree if cwd has none, for legacy v3.0 single-project layouts).

- ```markdown
- # BA Toolkit — Project Context
-
- **Project:** {Project Name}
- **Slug:** {slug}
- **Domain:** {domain}
- **Language:** {artifact language}
- **Pipeline stage:** Brief complete
-
- ## Artifacts
- - `{output_dir}/01_brief_{slug}.md` — Project Brief
-
- ## Key context
- - **Business goal:** {one-line summary}
- - **Target audience:** {one-line summary}
- - **Key constraints:** {comma-separated list}
-
- ## Next step
- Run `/srs` to generate the Requirements Specification.
- ```
+ **Update only the `## Pipeline Status` row for `/brief`** — toggle its status from `⬜ Not started` to `✅ Done` and fill in the artifact filename in the `File` column. **Do not touch the managed block** (`<!-- ba-toolkit:begin managed -->` … `<!-- ba-toolkit:end managed -->`) — that's owned by `ba-toolkit init`. **Do not recreate the file at the repo root.** **Do not add `## Artifacts` / `## Key context` sections** — those are not part of the v3.1+ template and would be ignored by future runs.

- If `AGENTS.md` already exists and was created by BA Toolkit, update only the "Pipeline stage" and "Artifacts" sections do not overwrite custom content added by the user.
+ If you find no `AGENTS.md` at all (neither in cwd nor up the tree), warn the user that the project was likely set up before v3.1 and tell them to run `ba-toolkit init --name "..." --slug {slug}` to scaffold the per-project `AGENTS.md`. Do not create one yourself with arbitrary structure.

  ### 8. Iterative refinement

@@ -113,9 +98,9 @@ After saving the artifact, present the following summary to the user (see `refer
  - Count of business goals documented and key constraints captured.
  - List of identified risks.

- Available commands: `/clarify [focus]` · `/revise [section]` · `/expand [section]` · `/validate` · `/done`
+ Available commands for this artifact: `/clarify [focus]` · `/revise [section]` · `/expand [section]` · `/validate` · `/done`

- Next step: `/srs`
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (look up the row where `Current` is `/brief`). Do not hardcode `/srs` here — that table is the single source of truth and includes the four `→` lines (what the next skill produces, output file, time estimate, what comes after).

  ## Style

@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th

  ## Interview

- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
+ >
+ > **Inline context (protocol rule 9):** if the user wrote text after `/datadict` (e.g., `/datadict the user and order entities are critical`), use it as a hint for which entities to model first.

  3–7 topics per round, 2–4 rounds.

@@ -91,9 +93,9 @@ After saving the artifact, present the following summary to the user (see `refer
  - DBMS chosen and naming convention confirmed.
  - Entities flagged for audit trail or versioning.

- Available commands: `/clarify [focus]` · `/revise [entity]` · `/expand [entity]` · `/split [entity]` · `/validate` · `/done`
+ Available commands for this artifact: `/clarify [focus]` · `/revise [entity]` · `/expand [entity]` · `/split [entity]` · `/validate` · `/done`

- Next step: `/apicontract`
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /datadict`). Do not hardcode `/research` (or `/apicontract` if research is skipped) here — the lookup table is the canonical source.

  ## Style

@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th

  ## Interview

- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
+ >
+ > **Inline context (protocol rule 9):** if the user wrote text after `/nfr` (e.g., `/nfr emphasise security and compliance`), use it as a category hint for which NFR areas to prioritise.

  3–7 topics per round, 2–4 rounds.

@@ -78,9 +80,9 @@ After saving the artifact, present the following summary to the user (see `refer
  - Confirmation that section 5 of `02_srs_{slug}.md` was updated with NFR links.
  - Any categories flagged as missing or lacking measurable metrics.

- Available commands: `/clarify [focus]` · `/revise [NFR-NNN]` · `/expand [category]` · `/validate` · `/done`
+ Available commands for this artifact: `/clarify [focus]` · `/revise [NFR-NNN]` · `/expand [category]` · `/validate` · `/done`

- Next step: `/datadict`
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /nfr`). Do not hardcode `/datadict` here.

  ## Style

@@ -27,7 +27,11 @@ If `01_brief_*.md` already exists, extract the slug and domain from it. Otherwis

  ### 3. Interview

- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
+ >
+ > **Inline context (protocol rule 9):** if the user wrote text after `/principles`, parse it as the lead-in answer and skip directly to the first structured question it doesn't already cover.
+ >
+ > **Open-ended lead-in (protocol rule 8):** if there is NO inline text and no prior `01_brief_*.md` exists, your very first interview question is open-ended free-text — `What kind of project is this and what conventions matter most to you?`. Otherwise jump straight to the structured questions.

  1–2 rounds, 3–5 topics each. Do not ask about topics the user can accept as defaults.

@@ -175,9 +179,9 @@ After saving the artifact, present the following summary to the user (see `refer
  - Quality gate threshold confirmed (which severity blocks `/done`).
  - NFR baseline categories listed.

- Available commands: `/revise [section]` · `/expand [section]`
+ Available commands for this artifact: `/revise [section]` · `/expand [section]`

- Next step: `/brief` (if not yet started) or continue from where the pipeline left off — all skills now load `00_principles_{slug}.md` automatically.
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /principles`). The lookup table points at `/brief` as the typical next step. If the user has already started `/brief` for this project, instead suggest continuing from wherever the pipeline left off — all skills now load `00_principles_{slug}.md` automatically.

  ## Style

@@ -1,6 +1,6 @@
  # Closing Message Template

- After saving an artifact, every BA Toolkit skill presents a short summary block to the user in the chat (not inside the saved file). This ensures a consistent pipeline experience across all steps.
+ After saving an artifact, every BA Toolkit skill presents a closing summary block to the user in the chat (not inside the saved file). The block ends the current step on a clear note: what was generated, what's available next, and why the next pipeline step matters.

  ## Format

@@ -14,20 +14,69 @@ Artifact saved: `{file_path}`
  main decisions captured during the interview,
  any back-references updated in prior artifacts.}

- Available commands:
- /clarify [focus] — targeted ambiguity pass: vague terms, missing metrics, conflicting rules
- /revise [section] — rewrite a section with your feedback
- /expand [section] — add more detail to a section
- /validate — check completeness and cross-artifact consistency
- /done — finalize this artifact
+ Available commands for the current artifact:
+
+ | Command | When to use it |
+ |--------------------|-------------------------------------------------------------|
+ | /clarify [focus] | Surface vague terms, missing metrics, conflicting rules |
+ | /revise [section] | Rewrite a specific section with new feedback |
+ | /expand [section] | Add more depth to an under-developed section |
+ | /validate | Check completeness and consistency across this artifact |
+ | /done | Lock this artifact and move on (skill marks it ✅ in AGENTS.md) |

  Next step: /{next_command}
+
+ → What it produces: {one-line description, e.g. "IEEE 830 SRS — scope, FRs, constraints"}
+ → Output file: {NN}_{name}_{slug}.md
+ → Time estimate: {min}–{max} minutes
+ → After that: /{step_after_next} ({one-line summary of what it does})
+
+ If you're stuck on the next step:
+ - Run `/clarify` here first to surface ambiguities while context is fresh.
+ - Run `/validate` to confirm this artifact is internally consistent.
+ - The next skill reads this artifact automatically, so you don't need to paste anything.
  ```

+ ## Pipeline next-step lookup table
+
+ Skills use this table as the single source of truth for the `Next step:` block. When a skill closes, look up its row by the `Current` column and copy the four `→` lines verbatim (substituting the slug). **Do not hardcode next-step text in individual SKILL.md files** — always reference this table so future pipeline reorganisations stay consistent.
+
+ | Current | Next | What it produces | Time | After that |
+ |--------------|-----------------|-----------------------------------------------------------------|--------------|------------------------------------------------------|
+ | /principles | /brief | Project Brief — goals, audience, stakeholders, constraints | 20–35 min | /srs — Requirements Specification (IEEE 830) |
+ | /brief | /srs | Requirements Specification (IEEE 830) — scope, FRs, MoSCoW | 25–40 min | /stories — User Stories grouped by Epics |
+ | /srs | /stories | User Stories grouped by Epics, with priority and FR refs | 20–30 min | /usecases — Use Cases with main/alt/exception flows |
+ | /stories | /usecases | Use Cases with main, alternative, and exception flows | 20–35 min | /ac — Acceptance Criteria (Given / When / Then) |
+ | /usecases | /ac | Acceptance Criteria (Given / When / Then) linked to stories | 20–35 min | /nfr — Non-functional Requirements with metrics |
+ | /ac | /nfr | Non-functional Requirements with measurable metrics | 15–25 min | /datadict — Data Dictionary (entities, fields) |
+ | /nfr | /datadict | Data Dictionary — entities, fields, types, relationships | 15–30 min | /research — Technology Research (ADRs, integrations) |
+ | /datadict | /research | Technology Research — ADRs, integration map, storage decisions | 15–25 min | /apicontract — API Contract (endpoints, schemas) |
+ | /research | /apicontract | API Contract — endpoints, request/response schemas, errors | 20–35 min | /wireframes — Textual Wireframe Descriptions |
+ | /apicontract | /wireframes | Textual Wireframe Descriptions — screens, components, nav | 25–40 min | /scenarios — End-to-end Validation Scenarios |
+ | /wireframes | /scenarios | End-to-end Validation Scenarios linking US, AC, WF, API | 15–25 min | /trace + /analyze — coverage + cross-artifact QA |
+ | /scenarios | /handoff | Development Handoff Package — inventory, MVP scope, open items | 5–10 min | (pipeline complete) |
+ | /handoff | (none) | Pipeline complete | — | Run /trace and /analyze for final coverage check |
+
+ ## Cross-cutting commands (no Next step line)
+
+ These skills do not advance the pipeline — they update or report on existing artifacts. Their closing block omits the `Next step:` block entirely (omit it cleanly — don't write "Next step: none"):
+
+ - `/trace` — coverage report (FR → US → UC → AC → NFR → API)
+ - `/clarify` — targeted ambiguity resolution; updates the artifact in place
+ - `/analyze` — cross-artifact quality report
+ - `/estimate` — effort estimation
+ - `/glossary` — glossary maintenance
+ - `/export` — export to Jira / GitHub Issues / Linear / CSV
+ - `/risk` — risk register
+ - `/sprint` — sprint plan
+
+ Their closing block ends after the "Available commands" table. Optionally, they can add a one-line "Re-run after fixes" hint (e.g., `/analyze` says "Re-run /analyze after each fix to track progress").
+
  ## Rules

- - `{file_path}` is the full path where the artifact was saved.
- - The summary is generated dynamically — do not repeat boilerplate; mention actual numbers and decisions.
- - The "Next step" line is omitted for cross-cutting commands (/trace, /clarify, /analyze) that do not advance the pipeline.
- - For `/wireframes` (last pipeline step), replace "Next step" with: `Pipeline complete. Run /trace to check full coverage.`
+ - `{file_path}` is the full path where the artifact was saved (typically `output/<slug>/{NN}_{name}_{slug}.md`).
+ - The summary line is generated dynamically — do not repeat boilerplate; mention actual numbers and decisions ("18 FRs across 3 roles, 4 risks captured", not "the artifact was generated").
+ - The "Available commands" table is fixed (5 rows for pipeline skills). Cross-cutting skills omit `/done` from the table since they don't have a "finalize" state.
+ - The "Next step" block is built from the lookup table above. Do not hardcode it in individual SKILL.md files.
+ - The "If you're stuck" section is a 2–3-line nudge for users who don't know what to do next. Keep it short.
  - The block is a chat message, not part of the saved Markdown file.
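To illustrate the lookup rule, a skill resolving its `Next step:` block from the table might work roughly like this. The `nextStepFor` helper is hypothetical and the table is abbreviated to two rows; the real template also carries the "Output file" line, which this sketch omits:

```javascript
// Sketch: resolve the Next-step block from the lookup table instead of
// hardcoding it. Hypothetical helper; table abbreviated for the example.
const lookupTable = `
| Current | Next | What it produces | Time | After that |
|---|---|---|---|---|
| /brief | /srs | Requirements Specification (IEEE 830) — scope, FRs, MoSCoW | 25–40 min | /stories — User Stories grouped by Epics |
| /ac | /nfr | Non-functional Requirements with measurable metrics | 15–25 min | /datadict — Data Dictionary (entities, fields) |
`;

function nextStepFor(current) {
  for (const line of lookupTable.split('\n')) {
    const cells = line.split('|').map((c) => c.trim());
    // cells[0] is '' (text before the leading pipe); cells[1] is Current.
    if (cells[1] === current) {
      const [, , next, produces, time, after] = cells;
      return `Next step: ${next}\n→ What it produces: ${produces}\n→ Time estimate: ${time}\n→ After that: ${after}`;
    }
  }
  return null; // cross-cutting command: omit the Next step block entirely
}
```

Returning `null` for unknown commands mirrors the cross-cutting rule above: no row, no `Next step:` block.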
@@ -31,27 +31,46 @@ This file defines how skills determine the output directory. Each platform has i

  Skills should not hardcode paths. They reference this file and apply the detection logic above.

- ## Output folder structure (optional)
+ ## Output folder structure

- By default, all artifacts are saved flat in the output directory:
+ **v3.1+ default per-project subfolder.** `ba-toolkit init` creates `output/<slug>/` and writes the project's `AGENTS.md` inside it. All artifacts for that project also live there:

  ```
- output_dir/
- 00_principles_my-app.md
- 01_brief_my-app.md
- 02_srs_my-app.md
- ...
+ repo/
+   output/
+     my-app/                    ← project A
+       AGENTS.md
+       00_principles_my-app.md
+       01_brief_my-app.md
+       02_srs_my-app.md
+       ...
+     other-project/             ← project B
+       AGENTS.md
+       01_brief_other-project.md
+       ...
  ```

- If the user prefers a project-scoped subfolder (useful when managing multiple projects in the same directory), set `output_mode: subfolder` in `00_principles_{slug}.md` section 7. In this mode, all artifacts are saved under `output_dir/{slug}/`:
+ The user opens their AI agent inside one of those subfolders (`cd output/my-app && claude` or equivalent for the agent of choice). With cwd set to the project subfolder, every skill — including those that look for prior artifacts via `01_brief_*.md` glob — sees only that project's files. Two agent windows in the same repo, each `cd`-ed into a different `output/<slug>/`, work on two completely independent projects with zero cross-talk.
+
+ **Legacy v3.0 single-project layout — still supported.** Projects scaffolded before v3.1 have a single `AGENTS.md` at the repo root and artifacts saved flat under `output/`:

  ```
- output_dir/
- my-app/
- 00_principles_my-app.md
+ repo/
+   AGENTS.md                    ← legacy single-project
+   output/
+     01_brief_my-app.md
+     02_srs_my-app.md
+     ...
  ```

- Skills check `00_principles_{slug}.md` for this setting. If principles do not exist or the setting is absent, the default (flat) layout is used.
+ When a skill runs in the legacy layout and finds no `AGENTS.md` next to the artifacts, it walks up the directory tree to find the legacy root `AGENTS.md`. New projects scaffolded with v3.1+ should always use the per-project subfolder layout.
+
+ ## Detection rule for skills
+
+ When a skill needs to find the project's `AGENTS.md` (for example, to update its `## Pipeline Status` table after generating an artifact):
+
+ 1. Check `cwd/AGENTS.md` first. If present, that's the project's file.
+ 2. Otherwise walk up the directory tree until you find an `AGENTS.md` (legacy v3.0 fallback).
+ 3. If neither check finds one, warn the user that the project was likely not scaffolded with `ba-toolkit init` and tell them to run it.
+
+ Never create `AGENTS.md` at the repo root from inside a skill — that file is owned by `ba-toolkit init`.
@@ -6,16 +6,16 @@ Every BA Toolkit skill that gathers information from the user MUST follow this p

  1. **One question at a time.** Never send a numbered list of 5+ questions in a single message. Ask one question, wait for the answer, acknowledge it in one line, then ask the next.

- 2. **Offer 3–5 answer options per question.** For every question, present a short numbered list of the most likely answers based on:
+ 2. **Present options as a 2-column markdown table.** Every question carries a short table with columns `| ID | Variant |`. The IDs are lowercase letters starting at `a` (`a`, `b`, `c`, `d`, `e`, …). Tables render cleanly in every supported AI agent (Claude Code, Codex CLI, Gemini CLI, Cursor, Windsurf) and scan faster than a numbered list. Pull the variants from:
  - The project domain (load `references/domains/{domain}.md` and reuse its vocabulary, typical entities, and business goals verbatim when they fit — do not invent domain-specific options when the reference file already lists them).
  - What the user has already said earlier in the interview.
  - Industry conventions for the artifact being built.

- Options should be **concrete**, not abstract — e.g. for "Who is your primary user?" in a SaaS project, offer "Product Manager at a 50–500-person SaaS startup", "Engineering Lead", "Ops/Support team", not "End user", "Customer", "User".
+ Variants should be **concrete**, not abstract — e.g. for "Who is your primary user?" in a SaaS project, offer "Product Manager at a 50–500-person SaaS startup", "Engineering Lead", "Ops/Support team", not "End user", "Customer", "User".

- 3. **Always include a free-text option.** The last numbered option must always be something like `5. Other — type your own answer`. If the user picks it, accept arbitrary text. Never force the user into one of the predefined options.
+ 3. **At most 5 rows per question, last row is always free-text.** Hard cap: **5 rows total = up to 4 predefined variants + exactly 1 free-text "Other" row**. Never render a 6th row. The last row is always something like `e | Other — type your own answer` (or whatever letter follows the last predefined variant). If the user picks the free-text row, accept arbitrary text. Never force the user into one of the predefined variants. Fewer than 4 predefined rows is fine when the topic genuinely has only 2–3 sensible options — keep the table short rather than inventing filler variants.

- 4. **Wait for the answer.** Do not generate the next question or any part of the artifact until the user has replied. A non-answer (e.g. "I don't know", "skip") is a valid answer — record it as "unknown" and move on.
+ 4. **Wait for the answer.** Do not generate the next question or any part of the artifact until the user has replied. A non-answer (e.g. "I don't know", "skip") is a valid answer — record it as "unknown" and move on. The user can respond with the letter ID (`a`, `b`, …), the verbatim variant text, or — for the free-text row — any text of their own.

  5. **Acknowledge, then proceed.** After each answer, reflect it back in one line (e.g. "Got it — primary user is the Ops team at mid-size logistics companies.") before asking the next question. This catches misunderstandings early.

@@ -23,9 +23,17 @@ Every BA Toolkit skill that gathers information from the user MUST follow this p
23
23
 
24
24
  7. **Stop when you have enough.** Each skill specifies a required set of topics. Once every required topic has a recorded answer, stop asking and move to the Generation phase. Do not pad the interview with "nice-to-have" questions.
25
25
 
26
+ 8. **Lead-in question for entry-point skills.** For the first skill in a fresh project (typically `/brief`), the very first interview question is **open-ended free-text**, not a structured options table — `Tell me about the project in your own words: one or two sentences are enough. What are you building, who is it for, and what problem does it solve?`. The user replies in free form. From that reply, extract whatever you can (product type, target audience, business goal, domain hints) and use it to pre-fill the structured questions that follow rule 2. Only ask the structured questions for topics the lead-in answer didn't cover. Non-entry-point skills (`/srs`, `/stories`, …) skip the lead-in and go straight to structured questions, because the prior artifacts already supply the project context.
27
+
28
+ 9. **Inline command context.** If the user invokes the skill with text after the slash command — for example `/brief I want to build an online store for construction materials targeting B2B buyers in LATAM` or `/srs the SRS should focus on the payments module first` — parse that text as if it were the answer to the lead-in question (rule 8). Skip the open-ended lead-in and use the inline text to pre-fill any structured questions you can. Only ask the user for what's still missing. Acknowledge the inline context once at the start (`Got it — online store for construction materials, B2B buyers, LATAM market.`) so the user knows their context was understood, then jump straight into the first structured question that the inline text didn't already answer. This rule applies to **every** skill that has an Interview phase, not just entry-point skills.
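Splitting an invocation into the command and its inline context is a one-regex job. An illustrative sketch (`parseInvocation` is a hypothetical name, not part of the toolkit's CLI):

```javascript
// Hypothetical helper for rule 9: separate the slash command from any
// trailing inline context the user typed after it.
function parseInvocation(input) {
  const match = input.trim().match(/^(\/[a-z]+)(?:\s+([\s\S]+))?$/);
  if (!match) return null; // not a slash-command invocation
  return { command: match[1], inlineContext: match[2] ? match[2].trim() : null };
}
```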
29
+
30
+ 10. **Mark exactly one row as Recommended.** In every options table, append `**Recommended**` to the end of the `Variant` cell of the single row that best fits the project context. Pick the row using, in order: (a) the loaded `references/domains/{domain}.md` for the current skill — what the reference treats as the typical default; (b) what the user has already said earlier in this interview; (c) the inline context from rule 9; (d) a widely accepted industry default. Never recommend the free-text "Other" row. Never recommend more than one row. If none of (a)–(d) gives you a defensible choice, omit the marker entirely for that question rather than guessing — a missing recommendation is better than a misleading one. Translate the marker together with the variant text per rule 11 (e.g. `**Рекомендуется**`, `**Recomendado**`).
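Rule 10's fallback chain can be sketched as a first-match scan over the four sources. The shapes here are assumptions for illustration: each source is a function returning a variant ID or `null`, and a `null` result means the marker is omitted rather than guessed.

```javascript
// Sketch of rule 10's priority chain (domain reference, prior answers,
// inline context, industry default). pickRecommended is a hypothetical name.
function pickRecommended(variants, sources) {
  const eligible = new Set(variants.filter((v) => !v.isOther).map((v) => v.id)); // never the "Other" row
  for (const source of sources) {
    const id = source();
    if (id !== null && eligible.has(id)) return id; // first defensible hit wins
  }
  return null; // no defensible choice: emit no **Recommended** marker
}
```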
31
+
32
+ 11. **Variant text in the user's language.** The `Variant` column header and every variant string must be written in the same language as the user's first message in this conversation — the same rule the generated artifact already follows. The `ID` column header and the letter IDs (`a`, `b`, …) stay ASCII, unchanged. The `**Recommended**` marker is also translated. Domain reference files in `references/domains/` are English-only by design; when the interview language is not English, translate the variants on the fly as you render the table — do not paste the English source verbatim and do not ask the user which language to use.
33
+
26
34
  ## Example
27
35
 
28
- Bad (old style):
36
+ Bad (old style — questionnaire dump):
29
37
 
30
38
  > Please answer the following questions:
31
39
  > 1. What is the product?
@@ -34,20 +42,45 @@ Bad (old style):
34
42
  > 4. What are the success metrics?
35
43
  > 5. What are the key constraints?
36
44
 
37
- Good (protocol style):
45
+ Good (protocol style — one question, table of variants, 5 rows max, one Recommended):
38
46
 
39
47
  > Let's start with the product itself. What are you building?
40
48
  >
41
- > 1. A B2B SaaS tool for internal teams (dashboards, automation, reporting)
42
- > 2. A customer-facing web application (marketplace, portal, community)
43
- > 3. A mobile app (consumer or B2B)
44
- > 4. An API / developer platform
45
- > 5. Other type your own answer
49
+ > | ID | Variant |
50
+ > |----|-------------------------------------------------------------------------------|
51
+ > | a | A B2B SaaS tool for internal teams (dashboards, automation) **Recommended** |
52
+ > | b | A customer-facing web application (marketplace, portal) |
53
+ > | c | A mobile app (consumer or B2B) |
54
+ > | d | An API / developer platform |
55
+ > | e | Other — type your own answer |
56
+
57
+ *User replies with `a`, types the verbatim variant text, or picks `e` and types their own description. The `**Recommended**` marker on row `a` reflects the loaded SaaS domain reference plus the inline context the user gave with `/brief`.*
58
+
59
+ > Got it — internal B2B SaaS tool. Who is the primary user?
60
+ >
61
+ > | ID | Variant |
62
+ > |----|--------------------------------------------------------------------------|
63
+ > | a | Product Manager at a 50–500-person SaaS startup **Recommended** |
64
+ > | b | Engineering Lead at a B2B company |
65
+ > | c | Operations / Support team at a mid-size SaaS |
66
+ > | d | Other — type your own answer |
67
+
68
+ *…and so on, one question at a time, until every required topic for the current skill has an answer. Tables stay at 5 rows or fewer; exactly one predefined row is marked Recommended, never the "Other" row.*
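Tables like the two above can be produced mechanically. A minimal sketch, assuming variants arrive as plain strings and labels are supplied per rule 11; `renderOptionsTable` is a hypothetical name:

```javascript
// Hypothetical renderer for the options tables shown above: caps predefined
// variants at 4, appends the free-text row last (5 rows max), and marks at
// most one predefined row Recommended. Pass recommendedIndex = -1 to omit
// the marker; labels default to English and are swapped per rule 11.
function renderOptionsTable(variants, recommendedIndex, labels) {
  const l = labels || { variant: 'Variant', recommended: 'Recommended', other: 'Other — type your own answer' };
  const rows = variants.slice(0, 4).map((text, i) => {
    const id = String.fromCharCode(97 + i); // a, b, c, d
    const marker = i === recommendedIndex ? ` **${l.recommended}**` : '';
    return `| ${id} | ${text}${marker} |`;
  });
  rows.push(`| ${String.fromCharCode(97 + rows.length)} | ${l.other} |`); // free-text row, never recommended
  return [`| ID | ${l.variant} |`, '|----|---------|', ...rows].join('\n');
}
```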
46
69
 
47
- *User picks 1 or types custom.*
70
+ ### Variant translation example (rule 11)
48
71
 
49
- > Got it internal B2B SaaS tool. Who is the primary user? [next question with 3–5 options tailored to B2B SaaS internal tooling]
72
+ If the user's first message was in Russian, the same question is rendered with Russian variants and a translated `Variant` header / `Recommended` marker; the `ID` column and letter IDs stay ASCII:
73
+
74
+ > Давайте начнём с самого продукта. Что вы создаёте?
75
+ >
76
+ > | ID | Вариант |
77
+ > |----|----------------------------------------------------------------------------------|
78
+ > | a | B2B SaaS-инструмент для внутренних команд (дашборды, автоматизация) **Рекомендуется** |
79
+ > | b | Клиентское веб-приложение (маркетплейс, портал) |
80
+ > | c | Мобильное приложение (для потребителей или B2B) |
81
+ > | d | API / платформа для разработчиков |
82
+ > | e | Другое — введите свой вариант |
50
83
 
51
84
  ## When this protocol applies
52
85
 
53
- This protocol applies to every skill that has an `### Interview` (or `## Interview`) section in its SKILL.md — currently: `brief`, `srs`, `stories`, `usecases`, `ac`, `nfr`, `datadict`, `apicontract`, `wireframes`, `scenarios`, `research`, `principles`. Each of those skills MUST link to this file from its Interview section and follow the rules above.
86
+ This protocol applies to every skill that has an `### Interview` (or `## Interview`) section in its SKILL.md — currently: `brief`, `srs`, `stories`, `usecases`, `ac`, `nfr`, `datadict`, `apicontract`, `wireframes`, `scenarios`, `research`, `principles`. Each of those skills MUST link to this file from its Interview section and follow rules 1–7 and rules 9–11. Rule 8 (open-ended lead-in question) applies only to entry-point skills — currently `/brief` and `/principles` when no `01_brief_*.md` or `00_principles_*.md` is present in the output directory yet.
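The entry-point condition reduces to a filename test. A sketch under the assumption that the caller already has the output directory's file list; `isFreshProject` is a hypothetical name:

```javascript
// Hypothetical check for the entry-point condition above: rule 8 applies
// only when neither a brief nor a principles artifact exists yet.
function isFreshProject(files) {
  return !files.some((f) => /^(01_brief_|00_principles_).+\.md$/.test(f));
}
```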
@@ -23,7 +23,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
23
23
 
24
24
  ## Interview
25
25
 
26
- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
26
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
27
+ >
28
+ > **Inline context (protocol rule 9):** if the user wrote text after `/research` (e.g., `/research compare PostgreSQL vs DynamoDB for our event store`), use it as the focus question to research instead of asking for one.
27
29
 
28
30
  1–2 rounds, 4–6 topics.
29
31
 
@@ -129,9 +131,9 @@ After saving the artifact, present the following summary (see `references/closin
129
131
  - Number of confirmed external integrations.
130
132
  - Any open questions that must be resolved before `/apicontract`.
131
133
 
132
- Available commands: `/clarify [focus]` · `/revise [ADR-NNN]` · `/expand [ADR-NNN]` · `/validate` · `/done`
134
+ Available commands for this artifact: `/clarify [focus]` · `/revise [ADR-NNN]` · `/expand [ADR-NNN]` · `/validate` · `/done`
133
135
 
134
- Next step: `/apicontract`
136
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /research`). Do not hardcode `/apicontract` here.
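A minimal sketch of that lookup, assuming the pipeline table in `references/closing-message.md` keeps `Current` in its first column and the next command in the second; `lookupNextStep` and the column layout are assumptions:

```javascript
// Hedged sketch: scan a markdown pipeline table for the row whose first
// (Current) cell matches the active skill and return the second cell.
function lookupNextStep(tableMarkdown, current) {
  for (const line of tableMarkdown.split('\n')) {
    const cells = line.split('|').map((c) => c.trim()).filter((c) => c.length > 0);
    if (cells[0] === current) return cells[1] || null;
  }
  return null; // unknown skill: emit no Next step rather than a wrong one
}
```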
135
137
 
136
138
  ## Style
137
139
 
@@ -23,7 +23,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
23
23
 
24
24
  ## Interview
25
25
 
26
- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
26
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
27
+ >
28
+ > **Inline context (protocol rule 9):** if the user wrote text after `/scenarios` (e.g., `/scenarios focus on the new-user onboarding journey`), use it to scope which end-to-end scenarios to draft.
27
29
 
28
30
  1 round, 3–5 topics.
29
31
 
@@ -23,7 +23,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
23
23
 
24
24
  ## Interview
25
25
 
26
- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
26
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
27
+ >
28
+ > **Inline context (protocol rule 9):** if the user wrote text after `/srs` (e.g., `/srs focus on the payments module first`), parse it as a scope hint and use it to prioritise which functional areas you ask about.
27
29
 
28
30
  3–7 topics per round, 2–4 rounds. Do not re-ask information already known from the brief.
29
31
 
@@ -80,24 +82,9 @@ FR numbering: sequential, three-digit (FR-001, FR-002, ...).
80
82
 
81
83
  ## AGENTS.md update
82
84
 
83
- After `/done`, update `AGENTS.md` in the project root with SRS-level context:
85
+ After `/done`, find the project's `AGENTS.md` (look in cwd first; fall back to walking up the directory tree for legacy v3.0 single-project layouts). **Update only the `## Pipeline Status` row for `/srs`** — toggle its status from `⬜ Not started` to `✅ Done` and fill in the artifact filename (`02_srs_{slug}.md`) in the `File` column. **Do not touch the managed block** (`<!-- ba-toolkit:begin managed -->` … `<!-- ba-toolkit:end managed -->`) — that's owned by `ba-toolkit init`. **Do not add `## Artifacts` / `## Key context` sections** — those are not part of the v3.1+ template.
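The row update can be sketched as a pure string transform that touches only the matching table row. Column order `| command | status | file |` is an assumption here, and `markDone` is a hypothetical name:

```javascript
// Illustrative sketch: flip one Pipeline Status row to Done while leaving
// every other line, including the ba-toolkit managed block, byte-for-byte intact.
function markDone(agentsMd, command, filename) {
  return agentsMd.split('\n').map((line) => {
    if (!line.startsWith(`| ${command} `)) return line; // only the matching row changes
    const cells = line.split('|'); // assumed columns: | command | status | file |
    cells[2] = ' ✅ Done ';
    cells[3] = ` \`${filename}\` `;
    return cells.join('|');
  }).join('\n');
}
```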
84
86
 
85
- ```markdown
86
- ## Artifacts
87
- ...
88
- - `{output_dir}/02_srs_{slug}.md` — SRS ({n} FR, MoSCoW breakdown)
89
-
90
- ## Key context
91
- ...
92
- - **User roles:** {comma-separated list}
93
- - **External integrations:** {comma-separated list}
94
- - **Must-priority FR count:** {n}
95
-
96
- ## Next step
97
- Run `/stories` to generate User Stories.
98
- ```
99
-
100
- Only update the "Pipeline stage", "Artifacts", and "Key context" sections. Preserve any custom content.
87
+ If you find no `AGENTS.md` at all, warn the user that the project was likely set up before v3.1 and tell them to run `ba-toolkit init --name "..." --slug {slug}` to scaffold the per-project file. Do not create one yourself with arbitrary structure.
101
88
 
102
89
  ## Iterative refinement
103
90
 
@@ -117,9 +104,9 @@ After saving the artifact, present the following summary to the user (see `refer
117
104
  - User roles identified.
118
105
  - External integrations and regulatory requirements captured.
119
106
 
120
- Available commands: `/clarify [focus]` · `/revise [section]` · `/expand [section]` · `/split [FR-NNN]` · `/validate` · `/done`
107
+ Available commands for this artifact: `/clarify [focus]` · `/revise [section]` · `/expand [section]` · `/split [FR-NNN]` · `/validate` · `/done`
121
108
 
122
- Next step: `/stories`
109
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (look up the row where `Current` is `/srs`). Do not hardcode `/stories` here — copy the four `→` lines verbatim from the lookup table row.
123
110
 
124
111
  ## Style
125
112
 
@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
21
21
 
22
22
  ## Interview
23
23
 
24
- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
24
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
25
+ >
26
+ > **Inline context (protocol rule 9):** if the user wrote text after `/stories` (e.g., `/stories focus on the onboarding epic`), parse it as a scope hint and use it to narrow which areas you draft user stories for.
25
27
 
26
28
  3–7 topics per round, 2–4 rounds.
27
29
 
@@ -78,9 +80,9 @@ After saving the artifact, present the following summary to the user (see `refer
78
80
  - Count of Must-priority FR covered.
79
81
  - Any stories flagged for `/split` due to complexity.
80
82
 
81
- Available commands: `/clarify [focus]` · `/revise [US-NNN]` · `/expand [US-NNN]` · `/split [US-NNN]` · `/validate` · `/done`
83
+ Available commands for this artifact: `/clarify [focus]` · `/revise [US-NNN]` · `/expand [US-NNN]` · `/split [US-NNN]` · `/validate` · `/done`
82
84
 
83
- Next step: `/usecases`
85
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /stories`). Do not hardcode `/usecases` here.
84
86
 
85
87
  ## Style
86
88
 
@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
21
21
 
22
22
  ## Interview
23
23
 
24
- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
24
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
25
+ >
26
+ > **Inline context (protocol rule 9):** if the user wrote text after `/usecases` (e.g., `/usecases focus on admin flows`), use it as a scope hint for which use cases to draft.
25
27
 
26
28
  3–7 topics per round, 2–4 rounds.
27
29
 
@@ -84,9 +86,9 @@ After saving the artifact, present the following summary to the user (see `refer
84
86
  - Count of alternative and exceptional flows documented.
85
87
  - External system actors identified.
86
88
 
87
- Available commands: `/clarify [focus]` · `/revise [UC-NNN]` · `/expand [UC-NNN]` · `/split [UC-NNN]` · `/validate` · `/done`
89
+ Available commands for this artifact: `/clarify [focus]` · `/revise [UC-NNN]` · `/expand [UC-NNN]` · `/split [UC-NNN]` · `/validate` · `/done`
88
90
 
89
- Next step: `/ac`
91
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (row `Current = /usecases`). Do not hardcode `/ac` here.
90
92
 
91
93
  ## Style
92
94
 
@@ -21,7 +21,9 @@ Read `references/environment.md` from the `ba-toolkit` directory to determine th
21
21
 
22
22
  ## Interview
23
23
 
24
- > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, offer 3–5 domain-appropriate options (load `references/domains/{domain}.md` for the ones that fit), always include a free-text "Other" option as the last choice, and wait for an answer before asking the next question.
24
+ > **Follow the [Interview Protocol](../references/interview-protocol.md):** ask one question at a time, present a 2-column `| ID | Variant |` markdown table of up to 4 domain-appropriate options plus a free-text "Other" row last (5 rows max), mark exactly one row **Recommended** based on the loaded domain reference and prior answers, render variants in the user's language (rule 11), and wait for an answer before asking the next question.
25
+ >
26
+ > **Inline context (protocol rule 9):** if the user wrote text after `/wireframes` (e.g., `/wireframes mobile-first, focus on the checkout flow`), use it as a layout and scope hint for which screens to draft first.
25
27
 
26
28
  3–7 topics per round, 2–4 rounds.
27
29