specrails 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/skills/openspec-apply-change/SKILL.md +156 -0
- package/.claude/skills/openspec-archive-change/SKILL.md +114 -0
- package/.claude/skills/openspec-bulk-archive-change/SKILL.md +246 -0
- package/.claude/skills/openspec-continue-change/SKILL.md +118 -0
- package/.claude/skills/openspec-explore/SKILL.md +290 -0
- package/.claude/skills/openspec-ff-change/SKILL.md +101 -0
- package/.claude/skills/openspec-new-change/SKILL.md +74 -0
- package/.claude/skills/openspec-onboard/SKILL.md +529 -0
- package/.claude/skills/openspec-sync-specs/SKILL.md +138 -0
- package/.claude/skills/openspec-verify-change/SKILL.md +168 -0
- package/README.md +226 -0
- package/VERSION +1 -0
- package/bin/specrails.js +41 -0
- package/commands/setup.md +851 -0
- package/install.sh +488 -0
- package/package.json +34 -0
- package/prompts/analyze-codebase.md +87 -0
- package/prompts/generate-personas.md +61 -0
- package/prompts/infer-conventions.md +72 -0
- package/templates/agents/sr-architect.md +194 -0
- package/templates/agents/sr-backend-developer.md +54 -0
- package/templates/agents/sr-backend-reviewer.md +139 -0
- package/templates/agents/sr-developer.md +146 -0
- package/templates/agents/sr-doc-sync.md +167 -0
- package/templates/agents/sr-frontend-developer.md +48 -0
- package/templates/agents/sr-frontend-reviewer.md +132 -0
- package/templates/agents/sr-product-analyst.md +36 -0
- package/templates/agents/sr-product-manager.md +148 -0
- package/templates/agents/sr-reviewer.md +265 -0
- package/templates/agents/sr-security-reviewer.md +178 -0
- package/templates/agents/sr-test-writer.md +163 -0
- package/templates/claude-md/root.md +50 -0
- package/templates/commands/sr/batch-implement.md +282 -0
- package/templates/commands/sr/compat-check.md +271 -0
- package/templates/commands/sr/health-check.md +396 -0
- package/templates/commands/sr/implement.md +972 -0
- package/templates/commands/sr/product-backlog.md +195 -0
- package/templates/commands/sr/refactor-recommender.md +169 -0
- package/templates/commands/sr/update-product-driven-backlog.md +272 -0
- package/templates/commands/sr/why.md +96 -0
- package/templates/personas/persona.md +43 -0
- package/templates/personas/the-maintainer.md +78 -0
- package/templates/rules/layer.md +8 -0
- package/templates/security/security-exemptions.yaml +20 -0
- package/templates/settings/confidence-config.json +17 -0
- package/templates/settings/settings.json +15 -0
- package/templates/web-manager/README.md +107 -0
- package/templates/web-manager/client/index.html +12 -0
- package/templates/web-manager/client/package-lock.json +1727 -0
- package/templates/web-manager/client/package.json +20 -0
- package/templates/web-manager/client/src/App.tsx +83 -0
- package/templates/web-manager/client/src/components/AgentActivity.tsx +19 -0
- package/templates/web-manager/client/src/components/CommandInput.tsx +81 -0
- package/templates/web-manager/client/src/components/LogStream.tsx +57 -0
- package/templates/web-manager/client/src/components/PipelineSidebar.tsx +65 -0
- package/templates/web-manager/client/src/components/SearchBox.tsx +34 -0
- package/templates/web-manager/client/src/hooks/usePipeline.ts +62 -0
- package/templates/web-manager/client/src/hooks/useWebSocket.ts +59 -0
- package/templates/web-manager/client/src/main.tsx +9 -0
- package/templates/web-manager/client/tsconfig.json +21 -0
- package/templates/web-manager/client/tsconfig.node.json +11 -0
- package/templates/web-manager/client/vite.config.ts +13 -0
- package/templates/web-manager/package-lock.json +3327 -0
- package/templates/web-manager/package.json +30 -0
- package/templates/web-manager/server/hooks.test.ts +196 -0
- package/templates/web-manager/server/hooks.ts +71 -0
- package/templates/web-manager/server/index.test.ts +186 -0
- package/templates/web-manager/server/index.ts +99 -0
- package/templates/web-manager/server/spawner.test.ts +319 -0
- package/templates/web-manager/server/spawner.ts +89 -0
- package/templates/web-manager/server/types.ts +46 -0
- package/templates/web-manager/tsconfig.json +14 -0
- package/templates/web-manager/vitest.config.ts +8 -0
- package/update.sh +877 -0
package/templates/commands/sr/product-backlog.md

@@ -0,0 +1,195 @@

---
name: "Product Backlog"
description: "View product-driven backlog from GitHub Issues and propose top 3 for implementation"
category: Workflow
tags: [workflow, backlog, viewer, product-driven]
---

Display the product-driven backlog by reading issues/tickets from the configured backlog provider ({{BACKLOG_PROVIDER_NAME}}). These are feature ideas generated through VPC-based product discovery — evaluated against user personas. Use `/sr:update-product-driven-backlog` to generate new ideas.

**Input:** $ARGUMENTS (optional: comma-separated areas to filter. If empty, show all.)

---

## Phase 0: Environment Pre-flight

Verify the backlog provider is accessible:

```bash
{{BACKLOG_PREFLIGHT}}
```

If the backlog provider is unavailable, stop and inform the user.

---

## Execution

Launch a **single** sr-product-analyst agent (`subagent_type: sr-product-analyst`) to read and prioritize the backlog.

The product-analyst receives this prompt:

> You are reading the product-driven backlog from {{BACKLOG_PROVIDER_NAME}} and producing a prioritized view.

1. **Fetch all open product-driven backlog items:**
   ```bash
   {{BACKLOG_FETCH_CMD}}
   ```

2. **Parse each issue/ticket** to extract metadata from the body:
   - **Area**: from `area:*` label
   - **Persona Fit**: from the body's Overview table — extract per-persona scores and total
   - **Effort**: from the body's Overview table (High/Medium/Low)
   - **Description**: from the body's "Feature Description" section
   - **User Story**: from the body's "User Story" section

3. **Parse prerequisites for each issue:**
   - Locate the row whose first cell matches `**Prerequisites**` in the issue body's Overview table.
   - If the cell value is `None`, `-`, or empty: set `prereqs = []` for this issue.
   - Otherwise: extract all tokens matching `#\d+` from the cell and set `prereqs = [<numbers>]`.
   - If a prerequisite number does not appear in the fetched issue list, treat it as already satisfied (externally closed). Do not include it in the DAG.
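The step-3 extraction can be sketched as follows. This is a minimal illustration, assuming the Prerequisites cell has already been pulled out of the Overview table; the helper name and signature are not part of the template:

```python
import re

def parse_prereqs(cell: str, open_issue_numbers: set[int]) -> list[int]:
    """Extract prerequisite issue numbers from an Overview-table cell."""
    value = cell.strip()
    if value in ("", "None", "-"):
        return []
    refs = [int(m) for m in re.findall(r"#(\d+)", value)]
    # Prerequisites closed outside the current backlog count as satisfied,
    # so they are dropped rather than added to the DAG.
    return [n for n in refs if n in open_issue_numbers]
```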
4. **Build dependency graph and detect cycles:**
   - Construct a directed graph where edge `(A → B)` means "issue A must complete before issue B".
   - For each issue with a non-empty `prereqs` list, add an edge from each prerequisite to the issue.
   - Run depth-first cycle detection:
     - Maintain `visited` and `rec_stack` sets.
     - For each unvisited node, run DFS. If a node in `rec_stack` is encountered, a cycle exists.
   - Collect all cycle members into `CYCLE_MEMBERS`.
   - If `CYCLE_MEMBERS` is non-empty, prepare a warning block to render before the backlog table:
     ```
     > **Warning: Circular dependency detected in backlog.**
     > The following issues form a cycle and cannot be safely ordered:
     > #A -> #B -> #A
     > Review these issues and correct the Prerequisites fields.
     ```
   - Compute `in_degree[issue]` for all issues (count of prerequisite edges pointing to each issue from other open backlog issues).

5. **Compute safe implementation order (Kahn's topological sort):**
   - Exclude `CYCLE_MEMBERS` from this computation.
   - Initialize `ready` = all non-cycle issues where `in_degree == 0`.
   - Sort `ready` by Total Persona Score descending.
   - Build `WAVES = []`:
     ```
     while ready is non-empty:
         WAVES.append(copy of ready)
         next_ready = []
         for each issue in ready:
             for each dependent D of issue (edges issue → D):
                 in_degree[D] -= 1
                 if in_degree[D] == 0: next_ready.append(D)
         sort next_ready by Total Persona Score descending
         ready = next_ready
     ```
   - Store `WAVE_1 = WAVES[0]` (the set of immediately startable features).
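Steps 4 and 5 can be sketched together. This is an illustrative implementation, assuming `edges` maps each prerequisite to the issues that depend on it and `score` returns an issue's Total Persona Score; function names are not part of the template:

```python
def detect_cycles(edges: dict[int, list[int]], nodes: set[int]) -> set[int]:
    """DFS with a recursion stack; returns all nodes found on a back-edge path."""
    visited: set[int] = set()
    rec_stack: list[int] = []
    cycle_members: set[int] = set()

    def dfs(node: int) -> None:
        visited.add(node)
        rec_stack.append(node)
        for dep in edges.get(node, []):
            if dep not in visited:
                dfs(dep)
            elif dep in rec_stack:
                # Everything from the revisited node onward is part of the cycle.
                cycle_members.update(rec_stack[rec_stack.index(dep):])
        rec_stack.pop()

    for n in nodes:
        if n not in visited:
            dfs(n)
    return cycle_members

def build_waves(edges, nodes, cycle_members, score):
    """Kahn's algorithm grouped into waves, highest persona score first."""
    live = nodes - cycle_members
    in_degree = {n: 0 for n in live}
    for src, deps in edges.items():
        for d in deps:
            if src in live and d in live:
                in_degree[d] += 1
    ready = sorted((n for n in live if in_degree[n] == 0), key=score, reverse=True)
    waves = []
    while ready:
        waves.append(list(ready))
        nxt = []
        for issue in ready:
            for d in edges.get(issue, []):
                if d in live:
                    in_degree[d] -= 1
                    if in_degree[d] == 0:
                        nxt.append(d)
        ready = sorted(nxt, key=score, reverse=True)
    return waves
```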
6. **Group by area**.

7. **Sort within each area by Total Persona Score (descending)**, then by Effort (Low > Medium > High) as tiebreaker.

8. **Display** as a formatted table per area, then **propose the top 3 items from `WAVE_1`** (features with all prerequisites satisfied) for implementation. If fewer than 3 are in `WAVE_1`, show as many as available and add: "Note: Only {N} feature(s) are available to start immediately — remaining features have unmet prerequisites."

   [If `CYCLE_MEMBERS` is non-empty, render the cycle warning block immediately before the first area table.]

   Render each area table with the following format:
   - Append `[blocked]` to the issue title cell if `in_degree[issue] > 0` and the issue is not in `CYCLE_MEMBERS`.
   - Append `[cycle]` to the issue title cell if the issue is in `CYCLE_MEMBERS`.
   - `Prereqs` cell: list prerequisite issue numbers as `#N, #M`, or `—` if none.

   ```
   ## Product-Driven Backlog

   {N} open issues | Source: VPC-based product discovery
   Personas: {{PERSONA_NAMES_WITH_ROLES}}

   ### {Area Name}

   | # | Issue | {{PERSONA_SCORE_HEADERS}} | Total | Effort | Prereqs |
   |---|-------|{{PERSONA_SCORE_SEPARATORS}}|-------|--------|---------|
   | 1 | #42 Feature name [blocked] | ... | X/{{MAX_SCORE}} | Low | #12, #17 |
   | 2 | #43 Other feature | ... | X/{{MAX_SCORE}} | High | — |

   ---

   ## Recommended Next Sprint (Top 3)

   Ranked by VPC persona score / effort ratio:

   | Priority | Issue | Area | {{PERSONA_SCORE_HEADERS}} | Total | Effort | Rationale |
   |----------|-------|------|{{PERSONA_SCORE_SEPARATORS}}|-------|--------|-----------|

   ### Selection criteria
   - Cross-persona features (both 4+/5) prioritized over single-persona
   - Low effort preferred over high effort at same score
   - Critical pain relief weighted higher than gain creation

   Run `/sr:implement` to start implementing these items.
   ```

9. **Render Safe Implementation Order section** after the Recommended Next Sprint table:

   ```
   ---

   ## Safe Implementation Order

   Features grouped by wave. All features in a wave can start in parallel.
   Features in wave N must complete before wave N+1 begins.

   | Wave | Issue | Title | Prereqs | Score | Effort |
   |------|-------|-------|---------|-------|--------|
   | 1 | #N | ... | — | X/{{MAX_SCORE}} | Low |
   | 2 | #M | ... | #N | X/{{MAX_SCORE}} | Medium |

   To implement in this order:
   /sr:batch-implement <issue-refs in wave order> --deps "<A> -> <B>, <C> -> <D>, ..."

   [If no edges exist in the DAG, omit the --deps clause:]
   /sr:batch-implement <issue-refs>

   [If CYCLE_MEMBERS is non-empty, append:]
   Cycle members excluded from ordering: #A, #B
   Fix the Prerequisites fields in these issues to include them.
   ```

   Issue refs in the `/sr:batch-implement` command are listed in wave order (wave 1 first, then wave 2, etc.), sorted by persona score within each wave. The `--deps` string is constructed from all edges in the DAG: `"A -> B"` for each edge, comma-separated. If the backlog has no dependencies at all (DAG has no edges), the section still renders showing all features in wave 1 and the `--deps` clause is omitted.
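The command-string assembly described in step 9 can be sketched as follows. The `/sr:batch-implement` flag syntax comes from the template; the helper itself and its `#N` issue-ref format are illustrative assumptions:

```python
def batch_implement_command(waves: list[list[int]], edges: dict[int, list[int]]) -> str:
    """Assemble the /sr:batch-implement invocation from waves and DAG edges."""
    # Wave order first, persona-score order within each wave (as sorted upstream).
    refs = " ".join(f"#{n}" for wave in waves for n in wave)
    dep_pairs = [f"#{a} -> #{b}" for a, deps in sorted(edges.items()) for b in deps]
    if not dep_pairs:
        # No edges in the DAG: omit the --deps clause entirely.
        return f"/sr:batch-implement {refs}"
    return f'/sr:batch-implement {refs} --deps "{", ".join(dep_pairs)}"'
```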
10. If no issues exist:
    ```
    No product-driven backlog issues found. Run `/sr:update-product-driven-backlog` to generate feature ideas.
    ```

11. **[Orchestrator]** After the product-analyst completes, write issue snapshots to `.claude/backlog-cache.json`.

    **Guard:** If `GH_AVAILABLE=false` (from Phase 0 pre-flight), print `[backlog-cache] Skipped — GH unavailable.` and return. Do not attempt the write.

    **Fetch all open backlog issues in one call:**

    ```bash
    gh issue list --label "product-driven-backlog" --state open --json number,title,state,assignees,labels,body,updatedAt
    ```

    For each issue in the result, build a snapshot object:
    - `number`: integer issue number
    - `title`: issue title string
    - `state`: `"open"` or `"closed"`
    - `assignees`: array of assignee login names, sorted alphabetically
    - `labels`: array of label names, sorted alphabetically
    - `body_sha`: SHA-256 of the raw body string — compute with:
      ```bash
      printf '%s' "{body}" | sha256sum | cut -d' ' -f1
      ```
      If `sha256sum` is not available, fall back to `openssl dgst -sha256 -r` or `shasum -a 256`.
    - `updated_at`: the `updatedAt` value from the GitHub API response
    - `captured_at`: current local time in ISO 8601 format

    **Merge strategy:** If `.claude/backlog-cache.json` already exists and is valid JSON, read it and merge: new snapshot entries overwrite existing entries by issue number key; entries for issue numbers not in the current fetch are preserved (they may be needed by an in-progress `/sr:implement` run). If the file does not exist or is malformed, create it fresh.

    Write the merged result back to `.claude/backlog-cache.json` with:
    - `schema_version`: `"1"`
    - `provider`: `"github"`
    - `last_updated`: current ISO 8601 timestamp
    - `written_by`: `"product-backlog"`
    - `issues`: the merged map keyed by string issue number

    If the write fails (e.g., `.claude/` directory does not exist): print `[backlog-cache] Warning: could not write cache. Continuing.` Do not abort.
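The merge-and-write behavior for the backlog cache can be sketched like this. An illustrative helper only; the top-level field names follow the list above, the snapshot shape is whatever step 11 produced:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_backlog_cache(snapshots: dict[str, dict],
                        path: str = ".claude/backlog-cache.json") -> None:
    """Merge fresh issue snapshots into the cache file, preserving stale entries."""
    cache_path = Path(path)
    issues: dict[str, dict] = {}
    try:
        existing = json.loads(cache_path.read_text())
        issues = existing.get("issues", {})
    except (FileNotFoundError, json.JSONDecodeError):
        pass  # Missing or malformed cache: start fresh.
    issues.update(snapshots)  # New snapshots overwrite by issue-number key.
    payload = {
        "schema_version": "1",
        "provider": "github",
        "last_updated": datetime.now(timezone.utc).isoformat(),
        "written_by": "product-backlog",
        "issues": issues,
    }
    try:
        cache_path.write_text(json.dumps(payload, indent=2))
    except OSError:
        print("[backlog-cache] Warning: could not write cache. Continuing.")
```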
package/templates/commands/sr/refactor-recommender.md

@@ -0,0 +1,169 @@

---
name: "Refactor Recommender"
description: "Scan the codebase for refactoring opportunities ranked by impact/effort ratio. Analyzes code for duplicates, long functions, large files, dead code, outdated patterns, and complex logic. Optionally creates GitHub Issues for tracking."
category: Workflow
tags: [workflow, refactoring, code-quality, tech-debt]
---

Scan the codebase for refactoring opportunities, score each by impact/effort ratio, and optionally create GitHub Issues for the top findings in {{BACKLOG_PROVIDER_NAME}}.

**Input:** `$ARGUMENTS` — optional: comma-separated paths to scope the analysis. Flags: `--dry-run` (print findings without creating issues).

---

## Phase 0: Pre-flight

Check whether the GitHub CLI is available:

```bash
{{BACKLOG_PREFLIGHT}}
```

Set `GH_AVAILABLE=true` if the command succeeds, `GH_AVAILABLE=false` otherwise. Do not stop — analysis proceeds regardless. Parse `--dry-run` from `$ARGUMENTS` and set `DRY_RUN=true` if present.

---

## Phase 1: Scope

Parse paths from `$ARGUMENTS` after stripping any flags. If no paths are provided, scan the entire repository.

Always exclude the following from all analysis:

- `node_modules/`
- `.git/`
- `.claude/`
- `vendor/`
- `dist/`
- `build/`
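The flag and path handling described in Phases 0 and 1 might look like this. A sketch only; the tokenization details (commas or spaces between paths) are assumptions:

```python
def parse_scope(arguments: str) -> tuple[list[str], bool]:
    """Split $ARGUMENTS into scope paths and a --dry-run flag."""
    tokens = [t.strip() for t in arguments.replace(",", " ").split() if t.strip()]
    dry_run = "--dry-run" in tokens
    # Anything that is not a flag is treated as a path to scan.
    paths = [t for t in tokens if not t.startswith("--")]
    return paths, dry_run
```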
---

## Phase 2: Analysis

Analyze the scoped files across six categories. For each finding, record:

- **file** — relative path
- **line_range** — start and end line numbers
- **current_snippet** — the problematic code as-is
- **proposed_snippet** — concrete refactored version
- **rationale** — one sentence explaining the improvement

### Duplicate Code

Find code blocks larger than 10 lines that are substantially similar across two or more files. Consolidation into a shared function or module is the expected refactoring.

### Long Functions

Find functions or methods exceeding 50 lines. Extraction into smaller, single-purpose functions is the expected refactoring.

### Large Files

Find files exceeding 300 lines. Splitting into cohesive modules is the expected refactoring.

### Dead Code

Find unused exports, unreferenced functions, and long-dormant commented-out blocks. Deletion or archival is the expected refactoring.

### Outdated Patterns

Find deprecated APIs and old language syntax: `var` instead of `let`/`const`, callbacks instead of `async`/`await`, legacy framework APIs with documented replacements, etc. Modernization to current idioms is the expected refactoring.

### Complex Logic

Find deeply nested conditionals (more than 3 levels) and functions with high cyclomatic complexity. Extraction, early-return guards, or strategy patterns are the expected refactorings.

---

## Phase 3: Score and Rank

Score every finding on two dimensions (1–5 each):

- **Impact** — how much the refactoring improves code quality, readability, or maintainability
- **Effort** — how hard the refactoring is to implement (1 = trivial, 5 = major)

Compute a **composite score**: `impact * 2 + (6 - effort)`. Higher is better.

Sort all findings by composite score descending. If the same code block was flagged by multiple categories, keep only the highest-scored entry and discard the duplicates.
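The Phase 3 scoring and deduplication rules can be sketched as follows, assuming each finding is a dict carrying the fields recorded in Phase 2 plus `impact` and `effort`:

```python
def rank_findings(findings: list[dict]) -> list[dict]:
    """Score findings, deduplicate overlapping entries, and sort best-first."""
    for f in findings:
        f["composite"] = f["impact"] * 2 + (6 - f["effort"])
    best: dict[tuple, dict] = {}
    for f in findings:
        # The same block flagged by several categories keeps its best score only.
        key = (f["file"], f["line_range"])
        if key not in best or f["composite"] > best[key]["composite"]:
            best[key] = f
    return sorted(best.values(), key=lambda f: f["composite"], reverse=True)
```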
---

## Phase 4: Create GitHub Issues

Skip this phase if `GH_AVAILABLE=false` or `DRY_RUN=true`.

First ensure the tracking label exists:

```bash
gh label create "refactor-opportunity" --color "B60205" --force
```

Fetch existing open issues that already carry the label to avoid duplicates:

```bash
gh issue list --label "refactor-opportunity" --state open --limit 100 --json number,title
```
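One possible duplicate filter over the fetched titles, assuming "matching" means a case-insensitive title match (the template does not pin this down):

```python
def new_findings(top_findings: list[dict], existing_titles: list[str]) -> list[dict]:
    """Keep only findings whose generated issue title is not already open."""
    seen = {t.strip().lower() for t in existing_titles}
    return [f for f in top_findings if f["title"].strip().lower() not in seen]
```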
For each of the **top 5** findings (by composite score) that does not already have a matching open issue, create a GitHub Issue with the following body:

```
## Refactoring Opportunity: {description}

**Category**: {category}
**File**: {file}:{line_range}
**Impact**: {impact}/5 | **Effort**: {effort}/5 | **Score**: {composite}

### Current Code
```{lang}
{current_snippet}
```

### Proposed Refactoring
```{lang}
{proposed_snippet}
```

### Rationale
{rationale}

---
_Generated by `/sr:refactor-recommender` in {{PROJECT_NAME}}_
```

---

## Phase 5: Output Summary

Print the following report:

```
## Refactoring Opportunities — {{PROJECT_NAME}}

{N} opportunities found | Sorted by composite score

| # | Category | File | Impact | Effort | Score | Description |
|---|----------|------|--------|--------|-------|-------------|
| 1 | {category} | {file}:{line_range} | {impact}/5 | {effort}/5 | {composite} | {description} |
...

### Top 3 Detailed Recommendations

#### 1. {description}
**File**: {file}:{line_range}
**Category**: {category} | **Score**: {composite}

**Current:**
```{lang}
{current_snippet}
```

**Proposed:**
```{lang}
{proposed_snippet}
```

**Rationale:** {rationale}

(repeat for #2 and #3)

Issues created: {N} (or "dry-run: no issues created")
```
package/templates/commands/sr/update-product-driven-backlog.md

@@ -0,0 +1,272 @@

---
name: "Update Product-Driven Backlog"
description: "Generate new feature ideas through product discovery, create GitHub Issues"
category: Workflow
tags: [workflow, explore, priorities, backlog, product-discovery]
model: opus
---

Analyze the project from a **product perspective** to generate new feature ideas, then sync the results to GitHub Issues labeled `product-driven-backlog`. Use `/sr:product-backlog` to view current ideas.

**Input:** $ARGUMENTS (optional: comma-separated areas to focus on. If empty, analyze all areas.)

**IMPORTANT: This command only creates GitHub Issues.** You may read files and search code to understand current capabilities, but you must NEVER write application code.

---

## Areas

{{AREA_TABLE}}

---

## Execution

Launch a **single** explorer subagent (`subagent_type: Explore`, `run_in_background: true`) for product discovery.

The Explore agent receives this prompt:

> You are a product strategist analyzing the {{PROJECT_NAME}} project to generate new feature ideas using the **Value Proposition Canvas** framework.
>
> **Your goal:** For each area, propose 2-4 new features that would significantly improve the user experience. Every feature MUST be evaluated against the project's personas.
>
> **Areas to analyze:** {all areas or filtered by user input}
>
> ### Step 0: Read Personas
>
> **Before anything else**, read all persona files:
> {{PERSONA_FILE_READ_LIST}}
>
> These contain full Value Proposition Canvas profiles (jobs, pains, gains).
>
> ### Research steps
>
> 1. **Understand current capabilities** — Read codebase structure
> 2. **Check existing backlog** — Avoid duplicating existing issues
> 3. **Think through each persona's day** — For each area:
>    - What does each persona need here?
>    - What would a competitive tool offer?
>    - What data is available but not surfaced?
>
> 4. **For each idea, produce a VPC evaluation:**
>    - **Feature name** (short, descriptive)
>    - **User story** ("As a [user type], I want to [action] so that [benefit]")
>    - **Feature description** (2-3 sentences)
>    - **VPC Fit** per persona: Jobs, Pains relieved, Gains created, Score (0-5)
>    - **Total Persona Score**: sum of all persona scores / max possible
>    - **Effort** (High/Medium/Low)
>    - **Inspiration** (competitor or product pattern)
>    - **Prerequisites**
>    - **Area**
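The evaluation record the prompt asks for could be modeled like this. An illustrative schema only; the field names are assumptions derived from the list above, not part of the template:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureIdea:
    """One VPC-evaluated feature idea produced by product discovery."""
    name: str
    user_story: str                  # "As a ..., I want to ... so that ..."
    description: str                 # 2-3 sentences
    persona_scores: dict[str, int]   # persona name -> 0-5 VPC fit score
    effort: str                      # "High" | "Medium" | "Low"
    inspiration: str                 # competitor or product pattern
    prerequisites: list[str] = field(default_factory=list)
    area: str = ""

    @property
    def total_score(self) -> tuple[int, int]:
        """(sum of persona scores, max possible)."""
        return sum(self.persona_scores.values()), 5 * len(self.persona_scores)
```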
---

## Assembly — Backlog Sync

After the Explore agent completes:

1. **Display** results to the user.

2. Read `.claude/backlog-config.json` and extract:
   - `BACKLOG_PROVIDER` (`github`, `jira`, or `none`)
   - `BACKLOG_WRITE` (from `write_access`)

### If `BACKLOG_WRITE=false` — Display only (no sync)

3. **Display all proposed features** in a structured format so the user can manually create tickets:

   ```
   ## Product Discovery Results (not synced)

   Backlog access is set to **read-only**. The following features were discovered
   but NOT created in {{BACKLOG_PROVIDER_NAME}}. Create them manually if desired.

   ### Feature 1: {name}
   - **Area:** {area}
   - **Persona Fit:** {{PERSONA_FIT_FORMAT}}
   - **Effort:** {level}
   - **User Story:** As a {user}, I want to {action} so that {benefit}
   - **Description:** {2-3 sentences}

   (repeat for each feature)

   ### Summary
   | # | Feature | {{PERSONA_SCORE_HEADERS}} | Total | Effort |
   |---|---------|{{PERSONA_SCORE_SEPARATORS}}|-------|--------|
   | 1 | ... | ... | ... | ... |
   ```

4. **Do NOT** create, modify, or comment on any issues/tickets.
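Reading the two config values in step 2 might look like this. The `write_access` key is named in the text above; the `provider` key and the fallback behavior are assumptions about the `.claude/backlog-config.json` schema:

```python
import json

def read_backlog_config(path: str = ".claude/backlog-config.json") -> tuple[str, bool]:
    """Return (provider, write_access); ('none', False) when config is unusable."""
    try:
        with open(path) as f:
            cfg = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return "none", False
    return cfg.get("provider", "none"), bool(cfg.get("write_access", False))
```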
### If provider=github and BACKLOG_WRITE=true — Sync to GitHub Issues

3. **Fetch existing product-driven backlog items** to avoid duplicates:
   ```bash
   {{BACKLOG_FETCH_ALL_CMD}}
   ```

4. **Initialize backlog labels/tags** (idempotent):
   ```bash
   {{BACKLOG_INIT_LABELS_CMD}}
   ```

5. **For each proposed feature, create a backlog item** (skip duplicates):
   ```bash
   {{BACKLOG_CREATE_CMD}}
   > **This is a product feature idea.** Generated through VPC-based product discovery.

   ## Overview

   | Field | Value |
   |-------|-------|
   | **Area** | {Area} |
   | **Persona Fit** | {{PERSONA_FIT_FORMAT}} |
   | **Effort** | {High/Medium/Low} — {justification} |
   | **Inspiration** | {source or "Original idea"} |
   | **Prerequisites** | {list or "None"} |

   ## User Story

   As a **{user type}**, I want to **{action}** so that **{benefit}**.

   ## Feature Description

   {2-3 sentence description}

   ## Value Proposition Canvas

   {{PERSONA_VPC_SECTIONS}}

   ## Implementation Notes

   {Brief notes on existing infrastructure and what needs to be built}

   ---
   _Auto-generated by `/sr:update-product-driven-backlog` on {DATE}_
   EOF
   )"
   ```

6. **Report** sync results:
   ```
   Product discovery complete:
   - Created: {N} new feature ideas in GitHub Issues
   - Skipped: {N} duplicates (already exist)
   ```
### If provider=jira and BACKLOG_WRITE=true — Sync to JIRA

Read from `.claude/backlog-config.json`:
- `JIRA_BASE_URL`, `JIRA_PROJECT_KEY`, `AUTH_METHOD`
- `PROJECT_LABEL` (may be empty string)
- `EPIC_MAPPING` (object mapping area name → JIRA epic key)
- `EPIC_LINK_FIELD` (default: `"parent"`)
- `CLI_INSTALLED`

#### Step A: Authenticate

If `AUTH_METHOD=api_token`: require env vars `JIRA_USER_EMAIL` and `JIRA_API_TOKEN`.
If either is missing:
```
Error: JIRA_USER_EMAIL and JIRA_API_TOKEN must be set in your environment.
See: https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/
```
Stop and do not proceed with sync.

#### Step B: Fetch existing JIRA stories (duplicate check)

```bash
curl -s \
  -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
  -H "Content-Type: application/json" \
  "${JIRA_BASE_URL}/rest/api/3/search?jql=project%3D${JIRA_PROJECT_KEY}+AND+labels%3Dproduct-backlog+AND+issuetype%3DStory&fields=summary&maxResults=200"
```

(The `tr -d '\n'` guards against `base64` implementations that wrap long output, which would corrupt the header.)

Store all `summary` values. Skip any feature whose title matches an existing summary.
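The Basic-auth header and JQL query built by the curl command above can be sketched in Python. Request construction only; the helper name is illustrative:

```python
import base64
import urllib.parse

def jira_search_request(base_url: str, project_key: str,
                        email: str, token: str) -> tuple[str, dict]:
    """Build the URL and headers for the duplicate-check JQL search."""
    auth = base64.b64encode(f"{email}:{token}".encode()).decode()
    jql = f"project={project_key} AND labels=product-backlog AND issuetype=Story"
    query = urllib.parse.urlencode({"jql": jql, "fields": "summary", "maxResults": 200})
    headers = {"Authorization": f"Basic {auth}", "Content-Type": "application/json"}
    return f"{base_url}/rest/api/3/search?{query}", headers
```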
#### Step C: Group features by area

From the Explore agent output, group features into `area -> [features]`.
Area names: strip the `area:` prefix (e.g., `area:core` → `core`).

#### Step D: Ensure epics exist per area

For each unique area:

1. **Cache hit:** If `EPIC_MAPPING[area]` is set: use that key. Proceed to Step E.

2. **JIRA search:** Search for existing epic:
   ```bash
   curl -s \
     -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
     -H "Content-Type: application/json" \
     "${JIRA_BASE_URL}/rest/api/3/search?jql=project%3D${JIRA_PROJECT_KEY}+AND+issuetype%3DEpic+AND+summary+%7E+%22${AREA_NAME}%22&fields=summary,key"
   ```
   If found: set `EPIC_MAPPING[area] = <key>`. Proceed to Step E.

3. **Create epic:**
   ```bash
   curl -s -X POST \
     -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
     -H "Content-Type: application/json" \
     "${JIRA_BASE_URL}/rest/api/3/issue" \
     --data '{
       "fields": {
         "project": {"key": "'"${JIRA_PROJECT_KEY}"'"},
         "issuetype": {"name": "Epic"},
         "summary": "'"${AREA_DISPLAY_NAME}"'",
         "labels": ["product-backlog"]
       }
     }'
   ```
   If `PROJECT_LABEL` is non-empty, add it to the `labels` array.
   Set `EPIC_MAPPING[area] = <returned key>`.

After all areas are processed: write the updated `EPIC_MAPPING` back to `.claude/backlog-config.json`.
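The cache, then search, then create fallthrough of Step D can be sketched as follows. An illustrative helper that takes the search and create operations as callables rather than issuing HTTP requests itself:

```python
def resolve_epic_key(area: str, epic_mapping: dict[str, str],
                     search_epic, create_epic) -> str:
    """Resolve an area's epic key, memoizing the result in EPIC_MAPPING."""
    if epic_mapping.get(area):
        return epic_mapping[area]                  # Step D.1: cache hit
    key = search_epic(area) or create_epic(area)   # Steps D.2 / D.3
    epic_mapping[area] = key                       # written back to config later
    return key
```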
#### Step E: Create Story tickets

For each feature not in the duplicate list:

```bash
curl -s -X POST \
  -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
  -H "Content-Type: application/json" \
  "${JIRA_BASE_URL}/rest/api/3/issue" \
  --data '{
    "fields": {
      "project": {"key": "'"${JIRA_PROJECT_KEY}"'"},
      "issuetype": {"name": "Story"},
      "summary": "'"${FEATURE_NAME}"'",
      "description": {
        "type": "doc",
        "version": 1,
        "content": [{
          "type": "codeBlock",
          "content": [{"type": "text", "text": "'"${VPC_BODY_ESCAPED}"'"}]
        }]
      },
      "labels": ["product-backlog"],
      "'"${EPIC_LINK_FIELD}"'": {"key": "'"${EPIC_KEY}"'"}
    }
  }'
```

If `PROJECT_LABEL` is non-empty: add it to the `labels` array.
`VPC_BODY_ESCAPED`: the full VPC markdown body escaped for JSON string context (`\` → `\\`, `"` → `\"`, newlines → `\n`).

**Error handling:**
- If the API returns an error about the epic key (dead key): log a warning, create the story without epic linkage, continue.
- Any other API error: log the error message and story name, continue to next story.

#### Step F: Report results

```
JIRA sync complete:
- Epics created: {N} (area names)
- Epics reused: {N} (area names)
- Stories created: {N}
- Stories skipped (duplicates): {N}
- Stories without epic (errors): {N}
- Project label applied: {PROJECT_LABEL} / (none — label was empty)
```