@kudusov.takhir/ba-toolkit 1.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +125 -0
- package/COMMANDS.md +69 -0
- package/LICENSE +21 -0
- package/README.md +842 -0
- package/README.ru.md +846 -0
- package/bin/ba-toolkit.js +468 -0
- package/package.json +49 -0
- package/skills/ac/SKILL.md +88 -0
- package/skills/analyze/SKILL.md +126 -0
- package/skills/apicontract/SKILL.md +113 -0
- package/skills/brief/SKILL.md +120 -0
- package/skills/clarify/SKILL.md +96 -0
- package/skills/datadict/SKILL.md +98 -0
- package/skills/estimate/SKILL.md +124 -0
- package/skills/export/SKILL.md +215 -0
- package/skills/glossary/SKILL.md +145 -0
- package/skills/handoff/SKILL.md +146 -0
- package/skills/nfr/SKILL.md +85 -0
- package/skills/principles/SKILL.md +182 -0
- package/skills/references/closing-message.md +33 -0
- package/skills/references/domains/ecommerce.md +209 -0
- package/skills/references/domains/fintech.md +180 -0
- package/skills/references/domains/healthcare.md +223 -0
- package/skills/references/domains/igaming.md +183 -0
- package/skills/references/domains/logistics.md +221 -0
- package/skills/references/domains/on-demand.md +231 -0
- package/skills/references/domains/real-estate.md +241 -0
- package/skills/references/domains/saas.md +185 -0
- package/skills/references/domains/social-media.md +234 -0
- package/skills/references/environment.md +57 -0
- package/skills/references/prerequisites.md +191 -0
- package/skills/references/templates/README.md +35 -0
- package/skills/references/templates/ac-template.md +58 -0
- package/skills/references/templates/analyze-template.md +65 -0
- package/skills/references/templates/apicontract-template.md +183 -0
- package/skills/references/templates/brief-template.md +51 -0
- package/skills/references/templates/datadict-template.md +75 -0
- package/skills/references/templates/export-template.md +112 -0
- package/skills/references/templates/handoff-template.md +102 -0
- package/skills/references/templates/nfr-template.md +97 -0
- package/skills/references/templates/principles-template.md +118 -0
- package/skills/references/templates/research-template.md +99 -0
- package/skills/references/templates/risk-template.md +188 -0
- package/skills/references/templates/scenarios-template.md +93 -0
- package/skills/references/templates/sprint-template.md +158 -0
- package/skills/references/templates/srs-template.md +90 -0
- package/skills/references/templates/stories-template.md +60 -0
- package/skills/references/templates/trace-template.md +59 -0
- package/skills/references/templates/usecases-template.md +51 -0
- package/skills/references/templates/wireframes-template.md +96 -0
- package/skills/research/SKILL.md +136 -0
- package/skills/risk/SKILL.md +163 -0
- package/skills/scenarios/SKILL.md +113 -0
- package/skills/sprint/SKILL.md +174 -0
- package/skills/srs/SKILL.md +124 -0
- package/skills/stories/SKILL.md +85 -0
- package/skills/trace/SKILL.md +85 -0
- package/skills/usecases/SKILL.md +91 -0
- package/skills/wireframes/SKILL.md +107 -0
package/skills/analyze/SKILL.md
@@ -0,0 +1,126 @@

---
name: ba-analyze
description: >
  Cross-artifact quality analysis across all BA Toolkit pipeline artifacts. Use on /analyze command, or when the user asks for "quality check", "analyze artifacts", "find inconsistencies", "cross-artifact check", "check quality", "find duplicates", "terminology check", "coverage analysis", "what is inconsistent". Available at any pipeline stage after /srs. Generates a structured finding report with severity levels.
---

# /analyze — Cross-Artifact Quality Analysis

Cross-cutting command. Performs a read-only analysis across all existing pipeline artifacts and generates a structured finding report. Does not modify artifacts — use `/clarify` or `/revise` to act on findings.

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it. Use its traceability requirements (section 3) to set CRITICAL/HIGH/MEDIUM thresholds, its Definition of Ready (section 4) to check mandatory fields, its NFR Baseline (section 5) to flag missing categories, and its quality gate (section 6) to determine which severity blocks `/done`.
1. Read all pipeline artifacts present in the output directory.
2. Minimum required: `02_srs_*.md`. If only `01_brief_*.md` is available, warn that coverage analysis is limited and proceed with what is available.
3. Domain reference is not needed — all information comes from the artifacts themselves.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory. If unavailable, apply the default rule.
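The default rule referenced above (prefer the platform output directory when present and writable, otherwise the working directory) can be sketched as follows. This is a minimal illustrative sketch — the skill states the rule in prose, and the function name is hypothetical:

```python
import os

def resolve_output_dir():
    """Default rule: prefer /mnt/user-data/outputs/ when it exists and is
    writable (Claude.ai); otherwise fall back to the current directory."""
    candidate = "/mnt/user-data/outputs/"
    if os.path.isdir(candidate) and os.access(candidate, os.W_OK):
        return candidate
    return os.getcwd()
```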
## Analysis categories

### 1. Duplication (DUP)
- Functionally equivalent or near-duplicate FRs within the SRS.
- User stories that describe the same action for the same role.
- Repeated business rules across multiple artifacts.

### 2. Ambiguity (AMB)
- Requirements containing metrics-free adjectives: "fast", "reliable", "scalable", "secure", "simple", "efficient" without a measurable criterion.
- Underspecified scope: "the system" or "the user" without a concrete actor.
- Conditional requirements without defined conditions.

### 3. Coverage Gap (GAP)
- FR without at least one linked US.
- US without a linked UC (if the Use Cases artifact exists).
- US without AC (if the AC artifact exists).
- FR without NFR coverage for cross-cutting concerns (security, performance) where domain-expected.
- Data entities not linked to any FR or US.
- API endpoints not linked to any FR or US.
- Wireframes not linked to any US.

### 4. Terminology Drift (TERM)
- The same concept referred to by different names across artifacts (e.g., "Wallet" in the SRS vs "Balance" in Stories vs "Account" in the Data Dictionary).
- Abbreviations expanded differently in different artifacts.

### 5. Invalid Reference (REF)
- Links to IDs that do not exist (FR-NNN, US-NNN, UC-NNN, etc.).
- References to roles not defined in the SRS roles section.
- API endpoint links in wireframes pointing to non-existent endpoints.

## Generation

**File:** `00_analyze_{slug}.md`

```markdown
# Cross-Artifact Quality Analysis: {Name}

**Date:** {date}
**Artifacts analyzed:** {list of files found}

## Finding Report

| ID | Category | Severity | Location | Summary | Recommendation |
|----|----------|----------|----------|---------|----------------|
| A1 | Duplication | HIGH | srs:FR-003, srs:FR-017 | Both describe user authentication | Merge into a single FR |
| A2 | Ambiguity | MEDIUM | srs:NFR-002 | "Low latency" has no metric | Add target value in ms |
| A3 | Coverage Gap | CRITICAL | srs:FR-008 | No linked user story | Create US or remove FR |
| A4 | Terminology | HIGH | srs, stories | "Wallet" vs "Balance" used interchangeably | Standardize in glossary |
| A5 | Invalid Ref | CRITICAL | ac:AC-003-01 | Links US-099, which does not exist | Fix reference |

## Coverage Summary

| Artifact pair | Total source | Covered | Uncovered | Coverage % |
|---------------|--------------|---------|-----------|------------|
| FR → US | {n} | {n} | {n} | {%} |
| US → UC | {n} | {n} | {n} | {%} |
| US → AC | {n} | {n} | {n} | {%} |
| FR → NFR | {n} | {n} | {n} | {%} |
| Entity → FR/US | {n} | {n} | {n} | {%} |
| Endpoint → FR/US | {n} | {n} | {n} | {%} |
| WF → US | {n} | {n} | {n} | {%} |

_(Rows for artifact pairs where the second artifact has not yet been created are omitted.)_

## Priority Actions

Top 5 highest-severity items to address first, with recommended commands:
1. {Finding ID} — {summary} → `/clarify FR-NNN` or `/revise [section]`
```

**Severity scale:**
- **CRITICAL** — blocks pipeline advancement or handoff (missing mandatory link, non-existent ID).
- **HIGH** — significant quality risk (duplication, key term drift, missing metric for a Must-priority item).
- **MEDIUM** — quality concern, does not block (missing metric for a Should-priority item, minor overlap).
- **LOW** — style or completeness suggestion.

**Rules:**
- Finding IDs are sequential: A1, A2, ...
- Each finding references the exact artifact file and element ID.
- Coverage rows are generated only for artifact pairs where both files exist.
- The report is read-only — no artifacts are modified.
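A Coverage Summary row reduces to simple set arithmetic. The sketch below is illustrative (the function name is hypothetical, not part of the toolkit); "covered" means at least one link to the source element exists:

```python
def coverage_row(source_ids, covered_ids):
    """One Coverage Summary row: total, covered, uncovered, percent."""
    total = len(source_ids)
    covered = len(set(source_ids) & set(covered_ids))
    uncovered = total - covered
    percent = round(100 * covered / total) if total else 100
    return total, covered, uncovered, percent

# e.g. FR → US: FR-008 has no linked story (finding A3 above)
print(coverage_row({"FR-001", "FR-003", "FR-008"}, {"FR-001", "FR-003"}))
# → (3, 2, 1, 67)
```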
## Iterative refinement

- `/analyze` again — regenerate after fixes to track progress.
- `/clarify [FR-NNN or focus]` — resolve specific ambiguities from the report.
- `/revise [section in artifact]` — fix a specific finding.
- `/trace` — rebuild the full traceability matrix after fixes.

## Closing message

After saving the report, present the following summary (see `references/closing-message.md` for format):

- Saved file path.
- Total findings by severity: CRITICAL / HIGH / MEDIUM / LOW.
- Overall coverage percentage (lowest across all artifact pairs).
- Top 3 recommended actions with specific commands.

Available commands: `/clarify [focus]` · `/revise [section]` · `/trace` · `/analyze` (re-run after fixes)

No pipeline advancement — this is a quality checkpoint, not a pipeline step.

## Style

Formal, neutral. No emoji or slang. Generate output in the language of the artifacts. Element IDs, file names, and table column headers remain in English.
package/skills/apicontract/SKILL.md
@@ -0,0 +1,113 @@

---
name: ba-apicontract
description: >
  Generate API contracts: endpoints, methods, parameters, request/response schemas, error codes. Markdown format approximating OpenAPI. Use on /apicontract command, or when the user asks for "API contract", "describe API", "endpoints", "REST API", "WebSocket API", "integration contract", "swagger", "describe requests and responses", "webhook contract", "API specification". Eighth step of the BA Toolkit pipeline.
---

# /apicontract — API Contract

Eighth step of the BA Toolkit pipeline. Generates API contracts in Markdown format approximating OpenAPI.

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it and apply its conventions (artifact language, ID format, traceability requirements, Definition of Ready, quality gate threshold).
1. Read `02_srs_*.md`, `03_stories_*.md`, `07_datadict_*.md`. The SRS and Data Dictionary are the minimum.
2. Extract: slug, domain, FR list, entities and attributes, integrations, roles.
3. If the domain is supported, load `references/domains/{domain}.md`, section `8. /apicontract`. Use its typical endpoint groups.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory for the current platform. If the file is unavailable, apply the default rule: if `/mnt/user-data/outputs/` exists and is writable, save there (Claude.ai); otherwise save to the current working directory.
## Interview

3–7 questions per round, 2–4 rounds.

**Required topics:**
1. Protocol — REST, WebSocket, GraphQL, or a combination?
2. API versioning — URI-based or header-based?
3. Authentication — JWT, API Key, OAuth2?
4. Webhook contracts — needed for incoming events from external systems?
5. Error format — standard (RFC 7807) or custom?
6. Pagination — cursor-based or offset-based? Limits?
7. Rate limiting — any restrictions? For which endpoints?

Supplement with domain-specific questions from the reference.

## Generation

**File:** `08_apicontract_{slug}.md`

```markdown
# API Contract: {Name}

## General Information
- **Base URL:** {url}
- **Authentication:** {method}
- **Versioning:** {strategy}
- **Data Format:** JSON, UTF-8

## Error Format
{JSON example}

## Common Error Codes
| HTTP Code | Error Code | Description |

## Endpoints

### {GROUP}: {Group Name}

#### {METHOD} {/path}
- **Description:** ...
- **Linked FR/US:** ...
- **Authentication:** {required | not required}
- **Roles:** ...

**Parameters:**
| Parameter | Type | Required | Description |

**Request Body:** {JSON}
**Response (200):** {JSON}
**Error Codes:**
| Code | Error Code | Condition |

## WebSocket Events (if applicable)
### Event: {name}
- **Direction:** {client → server | server → client}
- **Payload:** {JSON}

## Webhook Contracts (if applicable)
### Webhook: {name}
- **Source:** {external system}
- **Payload:** {JSON}
```

**Rules:**
- Endpoints are grouped by domain area (Auth, Users, Core, Admin, etc.).
- JSON schemas are representative examples with types in comments.
- Attributes are consistent with the Data Dictionary.
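If the interview settles on the standard RFC 7807 error format, the `{JSON example}` placeholder might be filled with a "problem details" body like the one below. The field names follow the RFC; the values and URI are hypothetical:

```python
import json

# Illustrative RFC 7807 problem-details body; values are made up.
problem = {
    "type": "https://example.com/errors/insufficient-funds",  # hypothetical URI
    "title": "Insufficient funds",
    "status": 422,
    "detail": "Balance 30.00 is below the requested amount 50.00.",
    "instance": "/transactions/abc123",
}
print(json.dumps(problem, indent=2))
```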
## Iterative refinement

- `/revise [endpoint or group]` — rewrite.
- `/expand [endpoint]` — add parameters, errors, examples.
- `/clarify [focus]` — targeted ambiguity pass.
- `/validate` — all FRs with interface requirements covered; types match the Data Dictionary; HTTP methods correct.
- `/done` — finalize. Next step: `/wireframes`.

## Closing message

After saving the artifact, present the following summary to the user (see `references/closing-message.md` for format):

- Saved file path.
- Total number of endpoints, grouped by HTTP method and domain group.
- Protocol and authentication method confirmed.
- WebSocket events and webhook contracts included (if applicable).

Available commands: `/clarify [focus]` · `/revise [endpoint]` · `/expand [endpoint]` · `/validate` · `/done`

Next step: `/wireframes`

## Style

Formal, neutral. No emoji or slang. Terms explained on first use. Generate the artifact in the language of the user's request. Endpoint names, field names, and error codes remain in English.
package/skills/brief/SKILL.md
@@ -0,0 +1,120 @@

---
name: ba-brief
description: >
  Generate a high-level Project Brief for projects in any domain (iGaming, Fintech, SaaS, and others). Use this skill when the user enters /brief, or asks to "create a project brief", "describe the project", "start a new project", "project brief", or mentions the starting stage of the analytical pipeline. Also triggers on requests like "begin with a brief", "describe the product", "form a product description". First step of the BA Toolkit pipeline.
---

# /brief — Project Brief

Starting point of the BA Toolkit pipeline. Generates a structured Project Brief in Markdown. The pipeline is domain-parameterized: the domain is determined at this step and carried through all subsequent skills.

## Loading domain reference

Domain references are located in `references/domains/` relative to the `ba-toolkit` directory. Supported domains: `igaming`, `fintech`, `saas`. For other domains, work without a reference file.

## Workflow

### 1. Environment detection

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory for the current platform. If the file is unavailable, apply the default rule: if `/mnt/user-data/outputs/` exists and is writable, save there (Claude.ai); otherwise save to the current working directory.

### 2. Pipeline check

No prior artifacts are required. If `01_brief_*.md` already exists, warn the user and offer to overwrite it or create a new project.

If `00_principles_*.md` exists in the output directory, load it and apply its conventions for this and all subsequent pipeline steps (artifact language, ID format, traceability requirements, Definition of Ready, quality gate threshold).

### 3. Domain selection

Ask the user about the project domain. If the domain matches `igaming`, `fintech`, or `saas`, load the corresponding `references/domains/{domain}.md`. Use the domain-specific interview questions (section `1. /brief`), typical business goals, risks, and glossary from that file.

If the domain does not match any supported one, record it as `custom:{name}` and use general questions only.

The domain is written into the brief metadata and passed to all subsequent pipeline skills.
### 4. Interview

3–7 questions per round, 2–4 rounds. Do not generate the artifact until sufficient information is collected.

**Required topics (all domains):**
1. Product type — what exactly is being built?
2. Business goal — what problem does the product solve?
3. Target audience and geography.
4. Key stakeholders — who is interested, who makes decisions?
5. Known constraints — deadlines, budget, regulatory requirements, mandatory integrations.
6. Competitive products or references.
7. Success criteria — specific metrics or qualitative goals.

If a domain reference is loaded, supplement the general questions with domain-specific ones. Formulate questions concretely, with example answers in parentheses.

### 5. Generation

**File:** `01_brief_{slug}.md` (slug — kebab-case, fixed here for the entire pipeline).

```markdown
# Project Brief: {Project Name}

**Domain:** {igaming | fintech | saas | custom:{name}}
**Date:** {date}

## 1. Project Summary
## 2. Business Goals and Success Metrics
## 3. Target Audience
## 4. High-Level Functionality Overview
## 5. Stakeholders and Roles
## 6. Constraints and Assumptions
## 7. Risks
## 8. Glossary
```
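The kebab-case slug fixed at this step names every later artifact (`01_brief_{slug}.md`, `02_srs_{slug}.md`, and so on). A minimal sketch of such a conversion, assuming ASCII project names (the function name is hypothetical):

```python
import re

def to_slug(name):
    """Kebab-case slug: lowercase, runs of non-alphanumerics become '-'."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")

print(to_slug("Payment Gateway 2.0"))  # → payment-gateway-2-0
```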
### 6. AGENTS.md update

After saving `01_brief_{slug}.md`, create or update `AGENTS.md` in the current working directory (project root). This file helps AI agents in future sessions quickly understand the project context without re-reading all artifacts.

```markdown
# BA Toolkit — Project Context

**Project:** {Project Name}
**Slug:** {slug}
**Domain:** {domain}
**Language:** {artifact language}
**Pipeline stage:** Brief complete

## Artifacts
- `{output_dir}/01_brief_{slug}.md` — Project Brief

## Key context
- **Business goal:** {one-line summary}
- **Target audience:** {one-line summary}
- **Key constraints:** {comma-separated list}

## Next step
Run `/srs` to generate the Requirements Specification.
```

If `AGENTS.md` already exists and was created by BA Toolkit, update only the "Pipeline stage" and "Artifacts" sections — do not overwrite custom content added by the user.
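The update-in-place rule (touch only the toolkit-managed fields, leave user content alone) can be illustrated by updating just the `**Pipeline stage:**` line and nothing else. A sketch under the assumption that the file follows the template above; `bump_stage` is a hypothetical helper, not part of the toolkit:

```python
import re

AGENTS = """# BA Toolkit — Project Context

**Pipeline stage:** Brief complete

## Artifacts
- docs/01_brief_demo.md — Project Brief

## Key context
- **Business goal:** demo
"""

def bump_stage(text, stage):
    # Rewrite only the Pipeline stage line; every other line survives as-is.
    return re.sub(r"\*\*Pipeline stage:\*\* .*", f"**Pipeline stage:** {stage}", text)

updated = bump_stage(AGENTS, "SRS complete")
```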
### 7. Iterative refinement

- `/revise [section]` — rewrite a section.
- `/expand [section]` — add detail.
- `/clarify [focus]` — targeted ambiguity pass.
- `/validate` — check completeness and consistency.
- `/done` — finalize and update `AGENTS.md`. Next step: `/srs`.

### 8. Closing message

After saving the artifact, present the following summary to the user (see `references/closing-message.md` for format):

- Saved file path.
- Project name, domain, and slug confirmed for the pipeline.
- Count of business goals documented and key constraints captured.
- List of identified risks.

Available commands: `/clarify [focus]` · `/revise [section]` · `/expand [section]` · `/validate` · `/done`

Next step: `/srs`

## Style

Formal, neutral. No emoji, slang, or evaluative language. Terms and abbreviations are explained in parentheses on first use. Generate the artifact in the language of the user's request. Section headings, table headers, and labels are also in the user's language.
package/skills/clarify/SKILL.md
@@ -0,0 +1,96 @@

---
name: ba-clarify
description: >
  Targeted ambiguity-resolution pass over a BA Toolkit artifact. Use on /clarify command, or when the user asks to "clarify requirements", "find ambiguities", "what is unclear", "check vague terms", "resolve ambiguities", "what needs clarification". Can be focused on a specific area: /clarify security, /clarify FR-012. Cross-cutting command available at any pipeline stage after the first artifact exists.
---

# /clarify — Targeted Ambiguity Resolution

Cross-cutting command. Performs a post-generation scan of the target artifact, surfaces specific ambiguities as questions, collects answers from the user, and updates the artifact.

## Syntax

```
/clarify [optional: focus area or element ID]
```

Examples:
- `/clarify` — scan the most recent artifact entirely
- `/clarify security` — focus on security-related requirements
- `/clarify FR-012` — focus on a single requirement
- `/clarify NFR` — focus on non-functional requirements

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it. Use its Definition of Ready (section 4) to identify missing mandatory fields, and its ID conventions (section 2) to validate references.
1. Identify the target artifact:
   - If the user specifies an element ID (FR-NNN, US-NNN, etc.) or a section keyword, use the artifact that contains it.
   - Otherwise, use the most recently generated artifact in the output directory.
2. Read the target artifact fully.
3. Read prior artifacts for cross-reference checks (roles, terms, IDs).

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory. If unavailable, apply the default rule.
## Analysis pass

Scan the target artifact (or the specified section) for the following ambiguity categories:

### A. Metrics-free adjectives
Requirements containing words like "fast", "reliable", "scalable", "secure", "user-friendly", "simple", "efficient", "high performance", "low latency" without a measurable criterion.

### B. Undefined terms
Terms, abbreviations, or roles used in the artifact but not defined in the glossary or the SRS roles section.

### C. Conflicting rules
Business rules or constraints that contradict each other or contradict rules in prior artifacts.

### D. Missing mandatory fields
Elements that lack required fields per the artifact's own template (e.g., an FR without Priority, a US without a Linked FR, an AC without a Then clause).

### E. Ambiguous actors or scope
Actions assigned to "the system", "the user", or "admin" where the specific role or external system is unclear.

### F. Duplicate or overlapping requirements
Functionally equivalent or near-duplicate elements within the artifact or across artifacts.
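Category A lends itself to a simple word-boundary scan. The sketch below is illustrative only — a hit is merely a candidate for a question, since a nearby number may already make the phrase measurable, and the word list and function name are assumptions:

```python
import re

VAGUE = ["fast", "reliable", "scalable", "secure", "user-friendly",
         "simple", "efficient", "high performance", "low latency"]

def find_vague(requirement_text):
    """Return the metrics-free adjectives present in a requirement (case-insensitive)."""
    hits = []
    for word in VAGUE:
        if re.search(rf"\b{re.escape(word)}\b", requirement_text, re.I):
            hits.append(word)
    return hits

print(find_vague("The system must respond quickly and be reliable"))  # → ['reliable']
```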
## Output to the user

Present findings as a numbered list of targeted questions. Each item references the exact location (section, element ID, line summary):

```
Ambiguities found in {artifact_file}:

1. [FR-003] "The system must respond quickly" — what is the acceptable response time in milliseconds under normal load?
2. [US-007] Role "Manager" is used here but not defined in the SRS Roles section — is this equivalent to "Admin", or a separate role?
3. [NFR-002 vs FR-015] NFR-002 requires TLS 1.3 only, but FR-015 mentions "standard encryption" — should FR-015 explicitly reference NFR-002?
4. [AC-005-02] The "Then" clause states "the system handles the error correctly" — what is the specific expected behavior (message shown, state reset, redirect)?
```

If no ambiguities are found, state that clearly and suggest `/analyze` for a broader cross-artifact check.
## Resolution

After presenting the list, wait for the user to answer. Accept answers inline (the user can reply to all questions in one message or answer one by one). After all answers are received:

1. Apply the answers to update the artifact (rewrite the affected elements).
2. If an answer affects a prior artifact (e.g., a role definition belongs in the SRS), flag it and offer to update that artifact as well.
3. Save the updated artifact.

## Closing message

After saving the updated artifact, present the following summary (see `references/closing-message.md` for format):

- Saved file path.
- Number of ambiguities found and resolved.
- Any items deferred (user did not answer or flagged for later).
- If prior artifacts were also updated, list them.

Available commands: `/clarify [focus]` (run again) · `/validate` · `/done`

No pipeline advancement — return to the current artifact's workflow.

## Style

Formal, neutral. Questions must be specific and reference exact element IDs. Generate output in the language of the artifact.
package/skills/datadict/SKILL.md
@@ -0,0 +1,98 @@

---
name: ba-datadict
description: >
  Generate a Data Dictionary: entities, attributes, types, constraints, relationships. Use on /datadict command, or when the user asks for "data dictionary", "data model", "data schema", "describe entities", "ER model", "database structure", "describe tables", "entity attributes", "entity relationships", "domain model". Seventh step of the BA Toolkit pipeline.
---

# /datadict — Data Dictionary

Seventh step of the BA Toolkit pipeline. Generates a data dictionary: entities, attributes, data types, constraints, and relationships.

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it and apply its conventions (artifact language, ID format, entity naming convention, Definition of Ready, quality gate threshold).
1. Read `01_brief_*.md`, `02_srs_*.md`, `03_stories_*.md`. The SRS is the minimum requirement.
2. Extract: slug, domain, entities (mentioned in FRs and USs), business rules, roles.
3. If the domain is supported, load `references/domains/{domain}.md`, section `7. /datadict`. Use its mandatory entities and domain-specific questions.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory for the current platform. If the file is unavailable, apply the default rule: if `/mnt/user-data/outputs/` exists and is writable, save there (Claude.ai); otherwise save to the current working directory.
## Interview

3–7 questions per round, 2–4 rounds.

**Required topics:**
1. DBMS — MongoDB, PostgreSQL, MySQL, other?
2. Existing schema — is there one to account for or extend?
3. Audit entities — which require a full audit trail?
4. Soft delete — is soft deletion used?
5. Amount storage — in minor units (cents) or major currency units?
6. Versioning — is change history needed for any entities?

Supplement with domain-specific questions and mandatory entities from the reference.

## Generation

**File:** `07_datadict_{slug}.md`

```markdown
# Data Dictionary: {Name}

## General Information
- **DBMS:** {type}
- **Naming Conventions:** {camelCase | snake_case}
- **Common Fields:** {createdAt, updatedAt, deletedAt}

## Entity: {Name} ({English collection/table name})

**Description:** {purpose in the system.}
**Linked FR/US:** {references.}

| Attribute | Type | Required | Constraints | Description |
|-----------|------|----------|-------------|-------------|
| _id / id | ObjectId / UUID | yes | PK | Unique identifier |
| {name} | {type} | {yes/no} | {constraints} | {description} |

**Indexes:**
- {name}: {fields}, {type}

---

## Entity Relationships

| Entity A | Relationship | Entity B | Description |
|----------|--------------|----------|-------------|
```

**Rules:**
- Data types match the chosen DBMS.
- Constraints include: PK, FK, unique, not null, enum, min/max, regex.
- Attribute and entity names in English; descriptions in the user's language.
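The amount-storage interview question (minor vs major units) matters because binary floating point cannot represent most decimal fractions exactly, while integer minor units can. A minimal illustration:

```python
# Floats accumulate representation error; integer cents do not.
price_float = 0.1 + 0.2   # 0.30000000000000004, not 0.3
price_minor = 10 + 20     # 30 cents, exact

assert price_float != 0.3
assert price_minor == 30
print(price_minor / 100)  # convert to major units for display only
```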

## Iterative refinement

- `/revise [entity]` — rewrite.
- `/expand [entity]` — add attributes, indexes.
- `/split [entity]` — separate an entity.
- `/clarify [focus]` — targeted ambiguity pass.
- `/validate` — all entities from SRS/stories described; FK references correct; types match DBMS.
- `/done` — finalize. Next step: `/apicontract`.

## Closing message

After saving the artifact, present the following summary to the user (see `references/closing-message.md` for format):

- Saved file path.
- Total number of entities documented and total attribute count.
- DBMS chosen and naming convention confirmed.
- Entities flagged for audit trail or versioning.

Available commands: `/clarify [focus]` · `/revise [entity]` · `/expand [entity]` · `/split [entity]` · `/validate` · `/done`

Next step: `/apicontract`

## Style

Formal, neutral. No emoji or slang. Terms explained on first use. Generate the artifact in the language of the user's request.

---
name: ba-estimate
description: >
  Effort estimation for BA Toolkit User Stories. Use on /estimate command, or when the user asks to "estimate stories", "add story points", "size the backlog", "estimate effort", "T-shirt sizing", "planning poker". Can target all stories or a specific epic/story. Run after /stories or /ac for best accuracy.
---

# /estimate — Effort Estimation

Analyses User Stories, assigns effort estimates using the chosen scale, and produces an estimation table with rationale. Can update the stories artifact in-place or output a standalone estimation report.

## Syntax

```
/estimate [optional: scale] [optional: scope]
```

Examples:

- `/estimate` — estimate all stories using Fibonacci Story Points (default)
- `/estimate t-shirt` — use T-shirt sizes (XS / S / M / L / XL)
- `/estimate days` — estimate in ideal person-days
- `/estimate E-01` — estimate only Epic E-01
- `/estimate US-007` — re-estimate a single story

## Context loading

0. If `00_principles_*.md` exists in the output directory, load it and apply any estimation conventions defined in section 8 (Additional Conventions).
1. Load `03_stories_{slug}.md` — primary input. Required.
2. Load `05_ac_{slug}.md` if it exists — AC scenario count per story is a key complexity signal.
3. Load `02_srs_{slug}.md` — to understand integration points and technical constraints.
4. Load `07a_research_{slug}.md` if it exists — for ADR-driven complexity signals.

## Environment

Read `references/environment.md` from the `ba-toolkit` directory to determine the output directory.

## Calibration interview

Before estimating, ask the following (skip questions where the answer is already clear from context):

1. **Scale:** Story Points (Fibonacci: 1 / 2 / 3 / 5 / 8 / 13 / 21), T-shirt sizes (XS / S / M / L / XL), or ideal person-days? _(default: Story Points Fibonacci)_
2. **Reference stories:** Do you have known reference stories to anchor the scale? (e.g. "US-003 is a 3" or "login is an S") If yes, calibrate against those.
3. **Team assumptions:** Full-stack developer pair? Or separate frontend/backend? _(affects day estimates only)_
4. **Out of scope for this estimate:** Any stories to skip (e.g., already estimated, on ice)?

If the user types `/estimate` without additional input and prior context is sufficient, proceed with defaults and state assumptions clearly.

## Estimation model

For each User Story, assess the following complexity factors:

| Factor | Signal | Weight |
|--------|--------|--------|
| **Scope** | Number of distinct user interactions described | Medium |
| **AC count** | More scenarios = higher complexity | Medium |
| **Integration** | External API calls, third-party services, webhooks | High |
| **UI complexity** | Multi-state screens, real-time updates, complex forms | Medium |
| **Data complexity** | New entities, complex relationships, migrations | Medium |
| **Newness** | Team has no prior experience with this area | High |
| **Uncertainty** | Requirements are unclear or flagged with open questions | High |

Apply the reference stories as anchors. If no references are given, calibrate internally: the simplest story in the set = 1 SP or XS.

Do not pad estimates — model what a reasonably skilled team would take, not a worst-case scenario.
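As one illustrative way the factor table could feed a first-pass number (the weights, 0–3 scores, and scaling below are assumptions; anchor calibration still decides the final estimate):

```python
# Illustrative scoring: each factor gets 0-3, weighted roughly per the table
# (High = 3, Medium = 2), and the weighted total is snapped up to a Fibonacci value.
WEIGHTS = {"scope": 2, "ac_count": 2, "integration": 3, "ui": 2,
           "data": 2, "newness": 3, "uncertainty": 3}
FIBONACCI = [1, 2, 3, 5, 8, 13, 21]

def draft_story_points(scores: dict[str, int]) -> int:
    """Map weighted 0-3 factor scores onto the 1-21 SP Fibonacci scale."""
    total = sum(WEIGHTS[f] * s for f, s in scores.items())
    scaled = max(1, round(total * 21 / 51))  # 51 = maximum weighted total
    return next(sp for sp in FIBONACCI if sp >= scaled)

simple = {"scope": 1, "ac_count": 1, "integration": 0, "ui": 1,
          "data": 0, "newness": 0, "uncertainty": 0}
print(draft_story_points(simple))  # → 2
```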

## Output

### If scope ≤ 20 stories — update `03_stories_{slug}.md` in-place

Add a `**Estimate:**` field to each US block and append the Estimation Summary table at the end of the file.

### If scope > 20 stories — create `03_stories_estimates_{slug}.md`

Standalone estimation report with the full table.

---

### Estimation Summary table (always produced in chat as closing output)

```
## Estimation Summary — [PROJECT_NAME]
Scale: [Story Points / T-shirt / Days]

| US | Title | Epic | Estimate | Key drivers |
|----|-------|------|----------|-------------|
| US-001 | [Title] | E-01 | 3 SP | 2 AC scenarios, simple UI |
| US-002 | [Title] | E-01 | 8 SP | external payment API, 4 AC scenarios |
| US-003 | [Title] | E-02 | 13 SP | new domain area, complex state machine |
...

| Metric | Value |
|--------|-------|
| Total stories estimated | [N] |
| Total Story Points | [N] SP |
| Largest story | US-[NNN] ([N] SP) — [reason] |
| Stories ≥ 13 SP (consider splitting) | [N]: US-[NNN], … |
| Average per story | [N] SP |
```
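The metrics rows are derivable from the estimates themselves; a sketch, assuming a `{story_id: points}` mapping (the `summary` helper is illustrative, not toolkit API):

```python
def summary(estimates: dict[str, int]) -> dict:
    """Compute the Estimation Summary metrics from story point estimates."""
    total = sum(estimates.values())
    largest = max(estimates, key=estimates.get)
    return {
        "stories": len(estimates),
        "total_sp": total,
        "largest": (largest, estimates[largest]),
        "split_candidates": sorted(us for us, sp in estimates.items() if sp >= 13),
        "avg_sp": round(total / len(estimates), 1),
    }

print(summary({"US-001": 3, "US-002": 8, "US-003": 13}))
```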

### Splitting recommendations

For any story estimated at 13 SP or higher (or XL), suggest a concrete split:

```
⚠️ US-[NNN] ([N] SP) is large. Consider splitting:
→ US-[NNN]a: [sub-story suggestion]
→ US-[NNN]b: [sub-story suggestion]
Run /split US-[NNN] to perform the split.
```

## Closing message

After saving, present the following summary (see `references/closing-message.md` for format):

- Saved file path (if the stories artifact was updated) or new estimate file created.
- Scale used and total estimate.
- Number of stories estimated.
- Stories flagged for splitting (if any).
- Assumptions made (if defaults were applied without a calibration interview).

Available commands: `/estimate [US-NNN]` (re-estimate a story) · `/split [US-NNN]` (split a large story) · `/analyze` · `/done`

## Style

Be explicit about the rationale for each estimate. Use concise bullet drivers, not paragraphs. Generate output in the language of the artifact.