@kudusov.takhir/ba-toolkit 3.4.0 → 3.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md
CHANGED
@@ -11,6 +11,19 @@ Versions follow [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

---

+ ## [3.4.1] — 2026-04-09
+
+ ### Fixed
+
+ - **Content-hygiene audit pass after the v3.4.0 ship.** Five drift / staleness fixes flagged by a cross-file consistency check, all behaviour-neutral:
+   - **`skills/references/interview-protocol.md` "When this protocol applies"** — the enumerated list of interview-phase skills had drifted out of sync. `/discovery` (added in v3.2.0 with a literal `### 4. Interview` heading) was missing from the canonical list. Reordered the list to match the actual pipeline order (`discovery → principles → brief → srs → …`) and added `/discovery` to rule 8's "entry-point skills" sub-list alongside `/brief` and `/principles`. Added a follow-up paragraph documenting that `/publish` and `/implement-plan` also follow the protocol via differently-named sections (`### Format selection` and embedded calibration interview inside `### Tech stack resolution` respectively) — the existing `interview-protocol-link` regression test only matches a literal `Interview` heading, so these two skills are not auto-enforced and have to be kept in sync by hand.
+   - **`skills/discovery/SKILL.md` domain count** — two locations (the "Domain catalog" paragraph and required-topic 3) said "9 supported domain references" with the pre-v3.3.0 enumeration. Bumped to 12 and added the `edtech`, `govtech`, `ai-ml` references that shipped in v3.3.0.
+   - **`skills/wireframes/SKILL.md` closing message** — two places hardcoded "Pipeline complete" against the v3.1.0 single-source-of-truth rule (the `/done` description and the trailing line under "Available commands"). Wireframes is stage 9, not the terminal stage. Replaced with the standard "Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md`" instruction every other pipeline-stage skill uses.
+   - **`skills/scenarios/SKILL.md` closing message** — same pattern: hardcoded "Pipeline complete. Proceed to `/handoff`" line that bypassed the lookup table and was logically self-contradictory ("complete" + "proceed"). Replaced with the standard delegation. Existing `closing-message.md` lookup table already had the correct row, so the user-facing behaviour was already right when an AI agent followed the rules — the fix is purely about bringing the SKILL.md text into the same single-source-of-truth pattern.
+   - **Existing regression tests still pass without modification** — the `interview-protocol-link`, `closing-message-link`, `Recommended marker`, `5-rows-cap`, frontmatter-parser, and skill-folder-count tests all auto-cover the edits, since the edits changed neither the public surface area nor the literal patterns those tests look for. 188/188 still green.
+
+ ---
+
## [3.4.0] — 2026-04-09
### Added
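The cross-file consistency checks named in the 3.4.1 notes above hinge on literal pattern matching. As a rough sketch (the helper names here are hypothetical; the real tests live in `test/cli.test.js` and are not part of this diff), a literal-heading check of the `interview-protocol-link` kind could look like:

```javascript
// Sketch of a literal-heading consistency check in the spirit of the
// `interview-protocol-link` test described above. Helper names are
// hypothetical assumptions, not the toolkit's actual test code.

// True when the SKILL.md body contains a literal `Interview` heading
// (either `## Interview` or a numbered `### 4. Interview` style).
function hasLiteralInterviewHeading(markdown) {
  return /^#{2,3}\s+(?:\d+\.\s+)?Interview\s*$/m.test(markdown);
}

// True when the body links to the shared protocol file.
function linksToProtocol(markdown) {
  return markdown.includes("references/interview-protocol.md");
}

// A skill passes when it has both the heading and the link.
function passesInterviewProtocolLink(markdown) {
  return hasLiteralInterviewHeading(markdown) && linksToProtocol(markdown);
}

const discovery = "### 4. Interview\nSee [protocol](references/interview-protocol.md).";
const publish = "### Format selection\nSee [protocol](references/interview-protocol.md).";

console.log(passesInterviewProtocolLink(discovery)); // true: literal Interview heading present
console.log(passesInterviewProtocolLink(publish));   // false: differently named section, not auto-enforced
```

Because the match is literal, a section such as `### Format selection` slips through even when it links to the protocol, which is exactly why the changelog flags `/publish` and `/implement-plan` as needing manual sync.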
@@ -498,7 +511,8 @@ CI scripts that relied on the old behaviour (`init` creates files only, `install

---

- [Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.
+ [Unreleased]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.1...HEAD
+ [3.4.1]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.4.0...v3.4.1
[3.4.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.3.0...v3.4.0
[3.3.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.2.0...v3.3.0
[3.2.0]: https://github.com/TakhirKudusov/ba-toolkit/compare/v3.1.1...v3.2.0
package/package.json
CHANGED
@@ -1,6 +1,6 @@
{
"name": "@kudusov.takhir/ba-toolkit",
- "version": "3.4.
+ "version": "3.4.1",
"description": "AI-powered Business Analyst pipeline — 24 skills from concept discovery to a sequenced implementation plan an AI coding agent can execute, with one-command Notion + Confluence publish. Works with Claude Code, Codex CLI, Gemini CLI, Cursor, and Windsurf.",
"keywords": [
"business-analyst",

package/skills/discovery/SKILL.md
CHANGED
@@ -27,9 +27,9 @@ If `01_brief_*.md` already exists in the same output directory, the project has
### 3. Domain catalog

- The
+ The 12 supported domain references live in `references/domains/` (`saas`, `fintech`, `ecommerce`, `healthcare`, `logistics`, `on-demand`, `social-media`, `real-estate`, `igaming`, `edtech`, `govtech`, `ai-ml`). Discovery does **not** load a single domain reference up front — instead, it offers a shortlist of candidate domains during the interview and lets the user pick one. The chosen domain is recorded in section 8 of the artifact and becomes the working domain for `/brief`.

- If none of the
+ If none of the 12 fits, record the recommended domain as `custom:{name}` and let `/brief` continue with general questions only.
### 4. Interview
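The selection rule in the domain-catalog paragraph above reduces to a membership test with a `custom:{name}` fallback. A minimal sketch, assuming a hypothetical `resolveDomain` helper and slug normalisation (the toolkit itself expresses this rule in prose, not code):

```javascript
// Illustrative sketch of the rule described above: pick one of the 12
// supported domain references, or fall back to `custom:{name}`.
// Function name and slug rule are assumptions, not toolkit API.

const SUPPORTED_DOMAINS = [
  "saas", "fintech", "ecommerce", "healthcare", "logistics", "on-demand",
  "social-media", "real-estate", "igaming", "edtech", "govtech", "ai-ml",
];

function resolveDomain(choice) {
  const slug = choice.trim().toLowerCase();
  if (SUPPORTED_DOMAINS.includes(slug)) {
    return slug; // maps to references/domains/<slug>
  }
  // No match: record as custom so /brief continues with general questions only.
  return `custom:${slug.replace(/\s+/g, "-")}`;
}

console.log(resolveDomain("fintech"));       // "fintech"
console.log(resolveDomain("Space Tourism")); // "custom:space-tourism"
```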
@@ -44,7 +44,7 @@ Cover 5–7 topics in 2 rounds. Do not generate the artifact until you can recom
**Required topics:**
1. Problem space — what pain point, gap, or opportunity is being explored.
2. Target audience hypothesis — 1–2 candidate user segments, as concrete as possible.
- 3. Domain shortlist — narrow to 1 of the
+ 3. Domain shortlist — narrow to 1 of the 12 supported domains (or `custom`) with a one-line rationale; mark one row **Recommended** based on the problem/audience answers so far.
4. Reference products and analogues — what already exists in this space (3–5 examples), and what's missing or done badly.
5. MVP feature hypotheses — 5–10 candidate features for a first version. Bullet list, not committed scope.
6. Differentiation angle — why this idea would beat the existing analogues (one or two sentences).

package/skills/references/interview-protocol.md
CHANGED
@@ -83,4 +83,6 @@ If the user's first message was in Russian, the same question is rendered with R
## When this protocol applies

- This protocol applies to every skill that has an `### Interview` (or `## Interview`) section in its SKILL.md — currently: `brief`, `srs`, `stories`, `usecases`, `ac`, `nfr`, `datadict`, `apicontract`, `wireframes`, `scenarios
+ This protocol applies to every skill that has an `### Interview` (or `## Interview`) section in its SKILL.md — currently: `discovery`, `principles`, `brief`, `srs`, `stories`, `usecases`, `ac`, `nfr`, `datadict`, `research`, `apicontract`, `wireframes`, `scenarios`. Each of those skills MUST link to this file from its Interview section and follow rules 1–7 + rules 9–11. Rule 8 (open-ended lead-in question) applies only to entry-point skills — currently `/discovery`, `/brief`, and `/principles` when no `00_discovery_*.md`, `01_brief_*.md`, or `00_principles_*.md` is present in the output directory yet.
+
+ Two additional skills follow rules 1–11 via differently-named sections rather than a literal `Interview` heading: `/publish` uses a `### Format selection` section to ask which target (Notion / Confluence / both), and `/implement-plan` uses an embedded calibration interview inside its `### Tech stack resolution` step to gather frontend / backend / database / hosting / auth choices when `07a_research_*.md` is absent or incomplete. Both carry the same one-line protocol summary block as the canonical interview-phase skills and must be kept in sync with rules 1–11 even though the existing `interview-protocol-link` regression test in `test/cli.test.js` only matches skills with a literal `Interview` heading and therefore does not enforce them automatically.
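The rule-8 gate added above (open-ended lead-in question only for entry-point skills, and only when no entry artifact exists in the output directory yet) can be sketched as follows; the helper name is hypothetical, since the skills express this rule in prose:

```javascript
// Sketch of the rule-8 applicability gate described above.
// `rule8Applies` is an illustrative name, not toolkit API.

const ENTRY_ARTIFACT_PREFIXES = ["00_discovery_", "01_brief_", "00_principles_"];
const ENTRY_POINT_SKILLS = ["discovery", "brief", "principles"];

function rule8Applies(skill, outputDirFiles) {
  // Only entry-point skills ever ask the open-ended lead-in question.
  if (!ENTRY_POINT_SKILLS.includes(skill)) return false;
  // If any entry artifact already exists, the pipeline has been entered.
  const hasEntryArtifact = outputDirFiles.some((name) =>
    ENTRY_ARTIFACT_PREFIXES.some((prefix) => name.startsWith(prefix))
  );
  return !hasEntryArtifact;
}

console.log(rule8Applies("brief", []));                        // true: fresh project
console.log(rule8Applies("brief", ["00_discovery_myapp.md"])); // false: pipeline already entered
console.log(rule8Applies("srs", []));                          // false: not an entry-point skill
```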

package/skills/scenarios/SKILL.md
CHANGED
@@ -110,7 +110,7 @@ After saving the artifact, present the following summary (see `references/closin
Available commands: `/clarify [focus]` · `/revise [SC-NNN]` · `/expand [SC-NNN]` · `/split [SC-NNN]` · `/validate` · `/done`

-
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (look up the row where `Current` is `/scenarios`). Do not hardcode the next step here — that table is the single source of truth.
## Style
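The closing-message delegation shown above is a plain table lookup. A minimal sketch: the `/scenarios` → `/handoff` row mirrors the transition the changelog says was already correct in `closing-message.md`, while the function and table shape are illustrative assumptions:

```javascript
// Sketch of the single-source-of-truth lookup described above. The
// authoritative table lives in references/closing-message.md; the row
// and function here are illustrative, not the toolkit's actual data.

const PIPELINE_TABLE = [
  { current: "/scenarios", next: "/handoff" }, // row documented as already correct
];

function nextStepBlock(current, table) {
  const row = table.find((r) => r.current === current);
  if (!row) throw new Error(`No pipeline row for ${current}`);
  return `Next step: ${row.next}`;
}

console.log(nextStepBlock("/scenarios", PIPELINE_TABLE)); // "Next step: /handoff"
```

Keeping the transition in one table means a skill like wireframes (stage 9, not terminal) can never drift into claiming "Pipeline complete" again.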

package/skills/wireframes/SKILL.md
CHANGED
@@ -91,7 +91,7 @@ Overall navigation structure: sections, hierarchy, transitions.
- `/split [WF-NNN]` — extract modals etc.
- `/clarify [focus]` — targeted ambiguity pass.
- `/validate` — all Must-US have a screen; API links correct; 4 states described; navigation connected.
- - `/done` —
+ - `/done` — finalize and move to the next pipeline step.
## Closing message
@@ -104,7 +104,7 @@ After saving the artifact, present the following summary to the user (see `refer
Available commands: `/clarify [focus]` · `/revise [WF-NNN]` · `/expand [WF-NNN]` · `/split [WF-NNN]` · `/validate` · `/done`

-
+ Build the `Next step:` block from the pipeline lookup table in `references/closing-message.md` (look up the row where `Current` is `/wireframes`). Do not hardcode the next step here — that table is the single source of truth.
## Style