specrails-hub 1.27.0 → 1.28.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/commands/specrails/auto-propose-backlog-specs.md +214 -0
- package/.claude/commands/specrails/batch-implement.md +282 -0
- package/.claude/commands/specrails/compat-check.md +132 -0
- package/.claude/commands/specrails/doctor.md +62 -0
- package/.claude/commands/specrails/get-backlog-specs.md +106 -0
- package/.claude/commands/specrails/implement.md +1163 -0
- package/.claude/commands/specrails/propose-spec.md +92 -0
- package/.claude/commands/specrails/refactor-recommender.md +134 -0
- package/.claude/commands/specrails/why.md +96 -0
- package/README.md +34 -2
- package/client/dist/assets/{ActivityFeedPage-tERWyUTH.js → ActivityFeedPage-D61qfhLC.js} +1 -1
- package/client/dist/assets/{AnalyticsPage-DUCp-pHx.js → AnalyticsPage-Dyh7XrH1.js} +2 -2
- package/client/dist/assets/{DocsDialog-C-EhPPrX.js → DocsDialog-QtxIY5rX.js} +2 -2
- package/client/dist/assets/{DocsPage-BA5sORSV.js → DocsPage-ySlUs5-7.js} +2 -2
- package/client/dist/assets/{HubAnalyticsPage-Dao1HNci.js → HubAnalyticsPage-DgcahZrn.js} +1 -1
- package/client/dist/assets/JobDetailPage-q0iD06CJ.js +16 -0
- package/client/dist/assets/JobsPage-CdZXbAgw.js +1 -0
- package/client/dist/assets/dist-js-DPjB3Utf.js +1 -0
- package/client/dist/assets/{dracula-colors-C67D9ngQ.js → dracula-colors-BeO_58_b.js} +1 -1
- package/client/dist/assets/index-DFCOMbSe.js +112 -0
- package/client/dist/assets/index-ZK8zAH1Z.css +2 -0
- package/client/dist/assets/{lib-T3dOmNkJ.js → lib-DylTA75x.js} +1 -1
- package/client/dist/assets/{useHub-D5e-JfkR.js → useHub-DWD4MWhf.js} +1 -1
- package/client/dist/index.html +3 -3
- package/docs/engineering/architecture.md +33 -4
- package/docs/general/getting-started.md +15 -0
- package/docs/general/platform-overview.md +7 -5
- package/docs/operations/runbook.md +30 -0
- package/docs/product/features.md +27 -1
- package/package.json +7 -1
- package/server/dist/chat-manager.js +6 -0
- package/server/dist/index.js +56 -1
- package/client/dist/assets/JobDetailPage-CdbcXcfe.js +0 -16
- package/client/dist/assets/JobsPage-CmgiX4PP.js +0 -1
- package/client/dist/assets/index-Bq1AywmF.css +0 -2
- package/client/dist/assets/index-CGeGMvIX.js +0 -112
@@ -0,0 +1,214 @@
---
name: "Update Product-Driven Backlog"
description: "Generate new feature ideas through product discovery, create Local Tickets"
category: Workflow
tags: [workflow, explore, priorities, backlog, product-discovery]
model: opus
---

Analyze the project from a **product perspective** to generate new feature ideas. Syncs results to Local Tickets. Use `/specrails:get-backlog-specs` to view current ideas.

**Input:** $ARGUMENTS (optional: comma-separated areas to focus on. If empty, analyze all areas.)

**IMPORTANT: This command only creates tickets.** You may read files and search code to understand current capabilities, but you must NEVER write application code.

---

## Areas

| Area | Description | Key Files |
|------|-------------|-----------|
| backend | Express server, API routes, SQLite, WebSocket | server/ |
| frontend | React dashboard, components, pages | client/src/ |
| cli | CLI bridge commands | cli/ |
| analytics | Job cost/duration/token metrics | server/analytics.ts, client/src/pages/AnalyticsPage.tsx |
| tickets | Local ticket management, kanban views | .specrails/local-tickets.json, client/src/components/ |
| pipeline | AI pipeline phases (Architect/Developer/Reviewer) | server/queue-manager.ts |

---

## Execution

Launch a **single** explorer subagent (`subagent_type: Explore`, `run_in_background: true`) for product discovery.

The Explore agent receives this prompt:

> You are a product strategist analyzing the **specrails-hub** project to generate new feature ideas using the **Value Proposition Canvas** framework.
>
> **Your goal:** For each area, propose 2-4 new features that would significantly improve the user experience. Every feature MUST be evaluated against the project's personas.
>
> **Areas to analyze:** {all areas or filtered by user input}
>
> ### Step 0: Read Personas
>
> **Before anything else**, read all persona files:
> - Read `.claude/agents/personas/the-multi-project-developer.md`
> - Read `.claude/agents/personas/the-solo-dev.md`
> - Read `.claude/agents/personas/the-tech-lead.md`
> - Read `.claude/agents/personas/the-maintainer.md`
>
> These contain full Value Proposition Canvas profiles (jobs, pains, gains).
>
> ### Research steps
>
> 1. **Understand current capabilities** — Read codebase structure
> 2. **Check existing backlog** — Avoid duplicating existing issues
> 3. **Think through each persona's day** — For each area:
>    - What does each persona need here?
>    - What would a competitive tool offer?
>    - What data is available but not surfaced?
> 4. **For each idea, produce a VPC evaluation:**
>    - **Feature name** (short, descriptive)
>    - **User story** ("As a [user type], I want to [action] so that [benefit]")
>    - **Feature description** (2-3 sentences)
>    - **VPC Fit** per persona: Jobs, Pains relieved, Gains created, Score (0-5)
>    - **Total Persona Score**: sum of all persona scores / 20
>    - **Effort** (High/Medium/Low)
>    - **Inspiration** (competitor or product pattern)
>    - **Prerequisites**
>    - **Area**

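The **Total Persona Score** arithmetic above can be sketched as a small helper. The function name and dict shape are illustrative, not part of the command:

```python
def total_persona_score(scores: dict) -> float:
    """Aggregate per-persona VPC scores (0-5 each) into a 0-1 fit ratio.

    With four personas the maximum total is 20, hence the division by 20.
    """
    for persona, s in scores.items():
        if not 0 <= s <= 5:
            raise ValueError(f"{persona}: score {s} is outside 0-5")
    return sum(scores.values()) / 20


ratio = total_persona_score({"Alex": 4, "Sam": 3, "Morgan": 5, "Kai": 2})
print(ratio)  # 14 / 20 = 0.7
```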
---

## Assembly — Backlog Sync

After the Explore agent completes:

1. **Display** results to the user.

2. Read `.claude/backlog-config.json` and extract:
   - `BACKLOG_PROVIDER` (`local`, `github`, `jira`, or `none`)
   - `BACKLOG_WRITE` (from `write_access`)

### If `BACKLOG_WRITE=false` — Display only (no sync)

Display all proposed features in a structured format. Do NOT create any tickets.

```
## Product Discovery Results (not synced)

Backlog access is set to **read-only**. The following features were discovered
but NOT created. Create them manually if desired.

### Feature 1: {name}
- **Area:** {area}
- **Persona Fit:** Alex: X/5, Sam: X/5, Morgan: X/5, Kai: X/5
- **Effort:** {level}
- **User Story:** As a {user}, I want to {action} so that {benefit}
- **Description:** {2-3 sentences}
```

### If provider=local — Sync to Local Tickets

Local tickets are always read-write.

3. **Fetch existing local tickets** to avoid duplicates:
   Read `.specrails/local-tickets.json`. Parse the `tickets` map and return all entries regardless of status.
   Collect all ticket titles into a duplicate-check set.

4. **Initialize labels** (idempotent):
   No label initialization required. Local tickets use freeform label strings. Standard label conventions: `area:frontend`, `area:backend`, `area:api`, `effort:low`, `effort:medium`, `effort:high`.

5. **For each proposed feature, create a local ticket** (skip if title matches an existing ticket):
   Write to `.specrails/local-tickets.json` using the advisory locking protocol:
   acquire lock → read file → set `id = next_id`, increment `next_id`, set all ticket fields, set `created_at` and `updated_at` to now, bump `revision`, update `last_updated` → write → release lock.

   Set the following fields:
   - `title`: Feature name
   - `description`: Full VPC body markdown
   - `status`: `"todo"`
   - `priority`: Map effort — Low → `"high"`, Medium → `"medium"`, High → `"low"`
   - `labels`: `["product-driven-backlog", "area:{area}"]`
   - `metadata.vpc_scores`: Per-persona scores from VPC evaluation
   - `metadata.effort_level`: `"High"`, `"Medium"`, or `"Low"`
   - `metadata.user_story`: The user story text
   - `metadata.area`: The area name
   - `prerequisites`: Array of ticket IDs for dependencies (empty if none)
   - `source`: `"get-backlog-specs"`
   - `created_by`: `"sr-product-manager"`

6. **Report** sync results:

```
Product discovery complete:
- Created: {N} new feature ideas as local tickets
- Skipped: {N} duplicates (already exist)
```

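Step 5's lock-read-modify-write cycle can be sketched as follows. This is a minimal illustration assuming the file layout described above; the `.lock` sidecar name and the polling retry are assumptions, not a documented protocol:

```python
import json
import os
import time
from datetime import datetime, timezone


def create_ticket(path, fields):
    """Append one ticket to the local-tickets file under an advisory lock."""
    lock = path + ".lock"                      # assumed sidecar lock file
    while True:                                # acquire: exclusive creation
        try:
            os.close(os.open(lock, os.O_CREAT | os.O_EXCL))
            break
        except FileExistsError:
            time.sleep(0.05)                   # another writer holds the lock
    try:
        with open(path) as f:                  # read
            db = json.load(f)
        tid = db["next_id"]                    # id = next_id; increment next_id
        db["next_id"] = tid + 1
        now = datetime.now(timezone.utc).isoformat()
        db["tickets"][str(tid)] = {"id": tid, "created_at": now,
                                   "updated_at": now, **fields}
        db["revision"] += 1                    # bump revision
        db["last_updated"] = now
        with open(path, "w") as f:             # write
            json.dump(db, f, indent=2)
        return tid
    finally:
        os.remove(lock)                        # release lock
```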
### If provider=github and BACKLOG_WRITE=true — Sync to GitHub Issues

3. Fetch existing product-driven backlog items:

```bash
gh issue list --label "product-driven-backlog" --state open --limit 100 --json number,title
```

4. Initialize labels:

```bash
gh label create "product-driven-backlog" --color "6E40C9" --force
```

5. For each proposed feature, create a GitHub Issue (skip duplicates):

```bash
gh issue create --title "{feature name}" --label "product-driven-backlog,area:{area}" --body "..."
```

6. Report sync results.

### If provider=jira and BACKLOG_WRITE=true — Sync to JIRA

Read `.claude/backlog-config.json` for JIRA config, authenticate, and create Story tickets using the JIRA REST API.

### VPC Body Format

The description/body for each ticket:

```markdown
> **This is a product feature idea.** Generated through VPC-based product discovery.

## Overview

| Field | Value |
|-------|-------|
| **Area** | {Area} |
| **Persona Fit** | Alex: X/5, Sam: X/5, Morgan: X/5, Kai: X/5 |
| **Effort** | {High/Medium/Low} — {justification} |
| **Inspiration** | {source or "Original idea"} |
| **Prerequisites** | {list or "None"} |

## User Story

As a **{user type}**, I want to **{action}** so that **{benefit}**.

## Feature Description

{2-3 sentence description}

## Value Proposition Canvas

### "Alex" — The Multi-Project Developer (X/5)
- **Jobs addressed**: {list}
- **Pains relieved**: {list with severity}
- **Gains created**: {list with impact}

### "Sam" — The Solo Dev (X/5)
- **Jobs addressed**: {list}
- **Pains relieved**: {list with severity}
- **Gains created**: {list with impact}

### "Morgan" — The Tech Lead (X/5)
- **Jobs addressed**: {list}
- **Pains relieved**: {list with severity}
- **Gains created**: {list with impact}

### "Kai" — The Maintainer (X/5)
- **Jobs addressed**: {list}
- **Pains relieved**: {list with severity}
- **Gains created**: {list with impact}

## Implementation Notes

{Brief notes on existing infrastructure and what needs to be built}

---
_Auto-generated by `/specrails:auto-propose-backlog-specs` on {DATE}_
```
@@ -0,0 +1,282 @@
# Batch Implementation Orchestrator

Macro-orchestrator above `/specrails:implement`. Accepts a set of feature references, computes a dependency-aware wave execution plan, invokes `/specrails:implement` per wave, and produces a batch-level progress dashboard and final report. All per-feature pipeline work (sr-architect, sr-developer, sr-reviewer, git, CI) is fully delegated to `/specrails:implement`.

**MANDATORY: Always follow this pipeline exactly as written. NEVER skip, shortcut, or "optimize away" any phase — even if the batch seems small enough to handle directly. The orchestrator MUST compute waves, confirm with the user, and invoke `/specrails:implement` per wave as specified. Do NOT implement any feature yourself in the main conversation. No exceptions.**

**Input:** $ARGUMENTS — one or more feature references with optional flags:

- **Feature refs**: `#85 #71 #63` (GitHub issue numbers) — required, at least two
- **`--deps "<spec>"`**: inline dependency spec, e.g. `"#71 -> #85, #63 -> #85"` (meaning #71 and #63 must complete before #85)
- **`--concurrency N`**: max features running in parallel across waves (default: 3)
- **`--wave-size N`**: max features per wave regardless of concurrency (default: unlimited)
- **`--dry-run` / `--preview`**: passed through to each `/specrails:implement` invocation; no git or backlog operations will run

**IMPORTANT:** Before running, ensure Read/Write/Bash/Glob/Grep permissions are set to "allow" — background agents cannot request permissions interactively.

---

## Phase 0: Parse Input

### Step 1: Extract feature refs

Scan `$ARGUMENTS` for issue/ticket references (e.g. `#85`, `#71`). Collect into `FEATURE_REFS` list. If fewer than 2 refs are found, stop and print:

```
[batch-implement] Error: at least 2 feature refs are required. For a single feature, use /specrails:implement directly.
```

### Step 2: Extract flags

Scan `$ARGUMENTS` for control flags:

- If `--dry-run` or `--preview` is present: set `DRY_RUN=true`. This flag is forwarded to every `/specrails:implement` call.
- If `--deps "<spec>"` is present: capture the quoted string as `DEPS_SPEC`. Strip from arguments.
- If `--concurrency N` is present: set `CONCURRENCY=N` (integer ≥ 1). Default: 3.
- If `--wave-size N` is present: set `WAVE_SIZE=N` (integer ≥ 1). Default: unlimited (no per-wave cap).

**If `DRY_RUN=true`**, print:

```
[dry-run] Preview mode active — /specrails:implement will be called with --dry-run for each wave.
```

### Step 3: Fetch issue titles

For each ref in `FEATURE_REFS`, fetch the issue title to use in progress output:

```
Read `.specrails/local-tickets.json`. Parse JSON and return the full ticket object at `tickets["{id}"]`, or an error if not found.
```

Store as `FEATURE_TITLES` map: `{ref: title}`.

---

## Phase 1: Wave Planning

### Step 1: Parse dependency graph

Build a directed graph `DEP_GRAPH` where an edge `A -> B` means "A must complete before B starts".

Parse `DEPS_SPEC` (if provided) by splitting on `,` and parsing each token as `<ref> -> <ref>`.

```
for each token in DEPS_SPEC.split(","):
    left, right = token.split("->")
    DEP_GRAPH.add_edge(left.strip(), right.strip())
```

All refs in `FEATURE_REFS` that appear in no edge are treated as independent (no dependencies).

### Step 2: Detect circular dependencies

Run cycle detection on `DEP_GRAPH`:

```
visited = {}
rec_stack = {}

function has_cycle(node):
    visited[node] = true
    rec_stack[node] = true
    for neighbor in DEP_GRAPH.neighbors(node):
        if not visited[neighbor] and has_cycle(neighbor):
            return true
        elif rec_stack[neighbor]:
            return true
    rec_stack[node] = false
    return false

CYCLES = [node for node in FEATURE_REFS if not visited[node] and has_cycle(node)]
```

If `CYCLES` is non-empty: stop and print:

```
[batch-implement] Error: circular dependency detected.
Cycle involves: <ref-list>
Fix the --deps spec and re-run.
```

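The detection pseudocode above can be made runnable. A minimal sketch, with the adjacency built from the same `A -> B` edge pairs (function and variable names are illustrative):

```python
def find_cycle_nodes(nodes, edges):
    """Return the set of nodes that sit on (or lead into) a dependency cycle.

    `edges` is a list of (a, b) pairs meaning "a must complete before b".
    """
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
    visited, rec_stack, flagged = set(), set(), set()

    def dfs(node):
        visited.add(node)
        rec_stack.add(node)
        for nb in adj[node]:
            # Unvisited neighbor that reaches a cycle, or a back-edge
            # into the recursion stack: either way, a cycle.
            if (nb not in visited and dfs(nb)) or nb in rec_stack:
                flagged.add(node)
                return True
        rec_stack.discard(node)
        return False

    for n in nodes:
        if n not in visited:
            dfs(n)
    return flagged
```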
### Step 3: Compute waves via Kahn's algorithm

```
in_degree = {ref: 0 for ref in FEATURE_REFS}
for each edge (A -> B) in DEP_GRAPH:
    in_degree[B] += 1

WAVES = []
ready = [ref for ref in FEATURE_REFS if in_degree[ref] == 0]
sort ready alphabetically (stable ordering)

while ready is non-empty:
    wave = ready[:WAVE_SIZE]   # cap at WAVE_SIZE if set; else take all
    remaining = ready[WAVE_SIZE:] if WAVE_SIZE else []
    WAVES.append(wave)
    for ref in wave:
        for neighbor in DEP_GRAPH.neighbors(ref):
            in_degree[neighbor] -= 1
            if in_degree[neighbor] == 0:
                remaining.append(neighbor)
    sort remaining alphabetically
    ready = remaining
```

Set `TOTAL_WAVES = len(WAVES)`.

### Step 4: Print execution plan and ask for confirmation

Print the wave execution plan:

```
## Batch Execution Plan

Total features : <N>
Total waves    : <TOTAL_WAVES>
Max concurrency: <CONCURRENCY>
Dry-run        : <yes / no>

| Wave | Features | Depends On |
|------|----------|------------|
| 1    | #85, #71 | —          |
| 2    | #63      | #85, #71   |

Dependency graph:
#71 -> #63
#85 -> #63

Proceed? (yes / no / edit-deps)
```

Wait for user confirmation.

- **`yes`**: proceed to Phase 2.
- **`no`**: stop. Print `[batch-implement] Aborted by user.`
- **`edit-deps`**: ask the user to provide a corrected `--deps` spec, re-run Phase 1 from Step 1.

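The wave computation in Step 3 can be written as a runnable sketch (names are illustrative; `DEP_GRAPH` is represented as a list of edge pairs):

```python
def compute_waves(refs, edges, wave_size=None):
    """Group refs into dependency-ordered waves via Kahn's algorithm.

    `edges` is a list of (a, b) pairs: a must finish before b starts.
    """
    adj = {r: [] for r in refs}
    in_degree = {r: 0 for r in refs}
    for a, b in edges:
        adj[a].append(b)
        in_degree[b] += 1
    waves = []
    ready = sorted(r for r in refs if in_degree[r] == 0)  # stable ordering
    while ready:
        wave = ready[:wave_size] if wave_size else ready   # cap if WAVE_SIZE set
        remaining = ready[wave_size:] if wave_size else []
        waves.append(wave)
        for r in wave:
            for nb in adj[r]:
                in_degree[nb] -= 1
                if in_degree[nb] == 0:
                    remaining.append(nb)
        ready = sorted(remaining)
    return waves
```

With the plan's example (`#71 -> #63`, `#85 -> #63`) this yields wave 1 = `#71, #85` and wave 2 = `#63`.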
## Phase 2: Wave Execution Loop

Execute waves sequentially. Within each wave, invoke `/specrails:implement` for all features in parallel (up to `CONCURRENCY` at a time).

### Progress Dashboard

Before starting each wave, print the current dashboard state:

```
## Batch Progress

| # | Feature | Title   | Wave | Status  | Notes          |
|---|---------|---------|------|---------|----------------|
| 1 | #85     | <title> | 1    | done    |                |
| 2 | #71     | <title> | 1    | done    |                |
| 3 | #63     | <title> | 2    | running |                |
| 4 | #42     | <title> | 2    | blocked | depends on #63 |
| 5 | #17     | <title> | 3    | pending |                |
```

Status values:
- `pending` — not yet started
- `running` — `/specrails:implement` invocation is active
- `done` — `/specrails:implement` completed successfully
- `failed` — `/specrails:implement` exited with an error
- `blocked` — a dependency failed; this feature will not run

### Wave invocation

For each wave `W`:

1. Print: `[wave W/TOTAL_WAVES] Starting — features: <ref-list>`
2. For each feature batch of size ≤ `CONCURRENCY` within the wave:
   - Invoke `/specrails:implement` with the feature refs and forwarded flags:
     ```
     /specrails:implement <ref1> <ref2> ... [--dry-run]
     ```
   - Run invocations in the batch in parallel (`run_in_background: true`).
   - Wait for all in the batch to complete before starting the next batch.
3. For each completed invocation, record outcome in `WAVE_RESULTS`:
   - `{ref, wave, status: "done" | "failed", error_summary: "..." | null}`

### Failure isolation

After each wave completes:

```
FAILED_THIS_WAVE = [ref for ref in wave if WAVE_RESULTS[ref].status == "failed"]

for each ref in FAILED_THIS_WAVE:
    BLOCKED = all refs in DEP_GRAPH.descendants(ref)
    for each blocked_ref in BLOCKED:
        WAVE_RESULTS[blocked_ref] = {status: "blocked", reason: "depends on failed " + ref}
        remove blocked_ref from all future waves
```

A failed feature blocks ONLY its transitive dependents. Features in other branches of the dependency graph continue unaffected.

Print the updated dashboard after each wave.

### Wave completion gate

Before starting wave W+1, confirm all features in wave W have status `done`, `failed`, or `blocked`. Never start a downstream wave while upstream features are still running.

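The `DEP_GRAPH.descendants(ref)` step in the failure-isolation rule is a transitive reachability walk over the dependency edges; a minimal sketch (names are illustrative):

```python
def blocked_by_failure(failed_ref, edges):
    """All transitive dependents of a failed ref (the refs to mark `blocked`)."""
    adj = {}
    for a, b in edges:                    # a must finish before b
        adj.setdefault(a, []).append(b)
    blocked, frontier = set(), [failed_ref]
    while frontier:                       # depth-first walk over dependents
        node = frontier.pop()
        for nb in adj.get(node, []):
            if nb not in blocked:
                blocked.add(nb)
                frontier.append(nb)
    return blocked
```

Note that features on unrelated branches never enter the walk, matching the rule that a failure blocks only its own dependents.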
## Phase 3: Batch Report

After all waves complete (or all remaining features are blocked), print the final batch report.

```
## Batch Implementation Report

Run completed: <ISO 8601 timestamp>
Dry-run: <yes / no>

### Summary

| Metric | Count |
|--------|-------|
| Total features | N |
| Succeeded | N |
| Failed | N |
| Blocked (dep failure) | N |

### Per-Feature Results

| # | Feature | Title   | Wave | Status  | Notes                           |
|---|---------|---------|------|---------|---------------------------------|
| 1 | #85     | <title> | 1    | done    |                                 |
| 2 | #71     | <title> | 1    | failed  | see /specrails:implement output |
| 3 | #63     | <title> | 2    | blocked | depends on #71                  |

### Merge Conflicts

[List any merge conflicts reported by /specrails:implement across all waves. If none: "No merge conflicts detected."]

| Feature | File | Conflicting Region |
|---------|------|--------------------|
| #85 | src/utils/parser.ts | function parseQuery |

### Next Steps

[If all features succeeded:]
All features implemented. Review open PRs and monitor CI.

[If any features failed:]
Re-run failed features individually:
/specrails:implement <failed-ref>

[If any features were blocked:]
Once failed features are fixed, re-run blocked features:
/specrails:implement <blocked-ref> [--deps "..."]
```

---

## Error Handling

- If a `/specrails:implement` invocation fails: record the failure, apply failure isolation, and continue with the remaining waves
- If the GitHub CLI is unavailable (detected during issue title fetch): proceed without titles, showing refs only
- If the `--deps` spec contains unknown refs: warn and continue — unknown refs are ignored in graph construction
- Never block the entire batch on a single feature failure. Always produce a final report.
@@ -0,0 +1,132 @@
---
name: "Compatibility Impact Analyzer"
description: "Snapshot the current API surface and detect breaking changes against a prior baseline. Generates a migration guide when breaking changes are found."
category: Workflow
tags: [workflow, compatibility, breaking-changes, migration]
---

Analyze the API surface of **specrails-hub** for backwards compatibility. Extracts the current contract surface (CLI flags, template placeholders, command names, argument flags, agent names, config keys), compares against a stored baseline, classifies each change by severity, and generates a migration guide when breaking changes are found.

**Input:** `$ARGUMENTS` — optional flags:
- `--diff` — compare current surface to most recent snapshot (default when snapshots exist)
- `--snapshot` — capture current surface and save without diffing (default on first run)
- `--since <date>` — diff against snapshot from this date (ISO format: YYYY-MM-DD)
- `--propose <change-dir>` — diff proposed changes in `openspec/changes/<change-dir>/` against current surface
- `--dry-run` — run all phases but skip saving the snapshot

---

## Phase 0: Argument Parsing

Parse `$ARGUMENTS` to set runtime variables.

**Variables to set:**

- `MODE` — string, one of `"snapshot"`, `"diff"`, `"propose"`. Default: `"diff"` if `.claude/compat-snapshots/` contains any `.json` files; `"snapshot"` otherwise.
- `COMPARE_DATE` — string (ISO date) or empty string. Default: `""` (use most recent snapshot).
- `PROPOSE_DIR` — string or empty string. Default: `""`.
- `DRY_RUN` — boolean. Default: `false`.

**Parsing rules:**

1. Scan `$ARGUMENTS` for `--snapshot`. If found, set `MODE=snapshot`.
2. Scan for `--diff`. If found, set `MODE=diff`.
3. Scan for `--since <date>`. If found, set `COMPARE_DATE=<date>` and (if `MODE` not already set to `snapshot`) set `MODE=diff`.
4. Scan for `--propose <change-dir>`. If found, set `PROPOSE_DIR=<change-dir>` and `MODE=propose`.
   - Verify `openspec/changes/<change-dir>/` exists. If not: print `Error: no change found at openspec/changes/<change-dir>/` and stop.
5. Scan for `--dry-run`. If found, set `DRY_RUN=true`.
6. Apply default-mode logic if `MODE` is not yet set: check whether `.claude/compat-snapshots/` exists and contains `.json` files. If yes: `MODE=diff`. If no: `MODE=snapshot`.

**Verify prerequisites:**

- Check whether `templates/` directory exists. If not: print `Error: templates/ not found — is this a specrails repo?` and stop.
- Check whether `install.sh` exists. If not: set `INSTALLER_AVAILABLE=false`. Otherwise set `INSTALLER_AVAILABLE=true`.

**Print active configuration:**

```
Mode: <MODE> | Compare date: <COMPARE_DATE or "latest"> | Dry-run: <true/false>
```

---

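The parsing rules above can be sketched as a small helper. The flag names match this command; the function and its return dict are illustrative:

```python
import os


def parse_args(argv):
    """Apply the six parsing rules to a token list; returns the variables."""
    cfg = {"MODE": None, "COMPARE_DATE": "", "PROPOSE_DIR": "", "DRY_RUN": False}
    it = iter(argv)
    for tok in it:
        if tok == "--snapshot":
            cfg["MODE"] = "snapshot"
        elif tok == "--diff":
            cfg["MODE"] = "diff"
        elif tok == "--since":
            cfg["COMPARE_DATE"] = next(it)
            if cfg["MODE"] != "snapshot":      # --snapshot wins over --since
                cfg["MODE"] = "diff"
        elif tok == "--propose":
            cfg["PROPOSE_DIR"] = next(it)
            cfg["MODE"] = "propose"
        elif tok == "--dry-run":
            cfg["DRY_RUN"] = True
    if cfg["MODE"] is None:                    # default-mode logic (rule 6)
        snap_dir = ".claude/compat-snapshots"
        has_snaps = os.path.isdir(snap_dir) and any(
            f.endswith(".json") for f in os.listdir(snap_dir))
        cfg["MODE"] = "diff" if has_snaps else "snapshot"
    return cfg
```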
## Phase 1: Extract Current Surface

Read the codebase and build the surface snapshot.

**Surface categories:** `installer_flags`, `template_placeholders`, `command_names`, `command_arguments`, `agent_names`, `config_keys`

Build the surface object:

```json
{
  "schema_version": "1",
  "captured_at": "<ISO 8601 datetime>",
  "git_sha": "<git rev-parse HEAD or 'unknown'>",
  "git_branch": "<git rev-parse --abbrev-ref HEAD or 'unknown'>",
  "surfaces": {
    "installer_flags": [...],
    "template_placeholders": [...],
    "command_names": [...],
    "command_arguments": [...],
    "agent_names": [...],
    "config_keys": [...]
  }
}
```

If `MODE=snapshot`: proceed directly to Phase 5.

---

## Phase 2: Load Baseline

For `diff` mode: load most recent snapshot (or by `COMPARE_DATE`) from `.claude/compat-snapshots/`.
For `propose` mode: load most recent snapshot + read `openspec/changes/<PROPOSE_DIR>/design.md`.

---

## Phase 3: Diff and Classify

For each surface category, compute removed/added/changed elements and classify:
- Category 1: Removal (BREAKING — MAJOR)
- Category 2: Rename (BREAKING — MAJOR)
- Category 3: Signature Change (BREAKING or MINOR)
- Category 4: Behavioral Change (ADVISORY)

---

## Phase 4: Generate Report

```
## Compatibility Impact Report — specrails-hub
Date: <ISO date> | Commit: <git_short_sha or "unknown">

### Surface Snapshot
| Category | Elements Found |
|----------|----------------|
| Installer flags | N |
| Template placeholders | N |
| Command names | N |
| Command argument flags | N |
| Agent names | N |
| Config keys | N |

### Breaking Changes (N found)
[list or "None detected."]

### Advisory Changes (N found)
[list or "None detected."]
```

Include Migration Guide blocks for each breaking change.

---

## Phase 5: Save Snapshot

If `DRY_RUN=true`: print `Snapshot not saved — dry-run mode`.

Otherwise: save to `.claude/compat-snapshots/<YYYY-MM-DD>-<git_short_sha>.json`.

Check `.gitignore` — suggest adding `.claude/compat-snapshots/` if missing.
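For removals (Category 1) and plain additions, the Phase 3 diff reduces to per-category set differences; a minimal sketch (rename and signature-change detection need heuristics beyond what this shows):

```python
def diff_surface(baseline, current):
    """Per-category removed/added elements; removals are Category 1 (BREAKING)."""
    report = {}
    for cat in baseline.keys() | current.keys():
        old, new = set(baseline.get(cat, [])), set(current.get(cat, []))
        report[cat] = {
            "removed": sorted(old - new),   # BREAKING — MAJOR
            "added": sorted(new - old),     # additive, non-breaking
        }
    return report
```

Here `baseline` and `current` are the `surfaces` maps from two snapshot files.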
@@ -0,0 +1,62 @@
# Doctor: specrails Health Check

Run the specrails health check to validate that all prerequisites are correctly configured for this repository.

---

## What it checks

| Check | Pass condition |
|-------|----------------|
| Claude Code CLI | `claude` binary found in PATH |
| Claude API key | `claude config list` shows a key OR `ANTHROPIC_API_KEY` env var set |
| Agent files | `agents/` directory exists with at least 1 `AGENTS.md` file |
| CLAUDE.md | `CLAUDE.md` present in the repo root |
| Git initialized | `.git/` directory present |
| npm | `npm` binary found in PATH |

## How to run

This command delegates to the standalone health check script installed at `.specrails/bin/doctor.sh`. Run it directly:

```
Bash tool: bash .specrails/bin/doctor.sh
```

Or via the npm CLI wrapper:

```
npx specrails-core@latest doctor
```

## Output

Each check is displayed as ✅ (pass) or ❌ (fail with fix instruction).

On all checks passed:

```
All 6 checks passed. Run /specrails:get-backlog-specs to get started.
```

On failure:

```
❌ API key: not configured
   Fix: Run: claude config set api_key <your-key> | Get a key: https://console.anthropic.com/

1 check(s) failed.
```

## Exit codes

- `0` — all checks passed
- `1` — one or more checks failed

## Log file

Each run appends a timestamped summary to `~/.specrails/doctor.log`:

```
2026-03-20T10:00:00Z checks=6 passed=6 failed=0
```

The `~/.specrails/` directory is created automatically if it does not exist.
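The documented behavior (six checks, pass/fail icons, exit code, log line) can be mirrored in a short sketch. The real implementation is `.specrails/bin/doctor.sh`; this Python version is illustrative only, simplifies the API-key check to the env var, and reduces the agent-files check to a directory test:

```python
import os
import shutil
from datetime import datetime, timezone


def run_doctor(log_dir=os.path.expanduser("~/.specrails")):
    """Mirror the documented doctor checks; returns the documented exit code."""
    checks = {
        "Claude Code CLI": shutil.which("claude") is not None,
        "Claude API key": bool(os.environ.get("ANTHROPIC_API_KEY")),  # env var only
        "Agent files": os.path.isdir("agents"),      # simplified: no AGENTS.md count
        "CLAUDE.md": os.path.isfile("CLAUDE.md"),
        "Git initialized": os.path.isdir(".git"),
        "npm": shutil.which("npm") is not None,
    }
    for name, ok in checks.items():
        print(("✅" if ok else "❌"), name)
    failed = sum(1 for ok in checks.values() if not ok)
    os.makedirs(log_dir, exist_ok=True)              # created automatically
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    with open(os.path.join(log_dir, "doctor.log"), "a") as f:
        f.write(f"{stamp} checks={len(checks)} "
                f"passed={len(checks) - failed} failed={failed}\n")
    return 1 if failed else 0
```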