specrails-core 1.7.2 → 2.0.0

@@ -0,0 +1,199 @@
1
+ ---
2
+ name: sr-product-backlog
3
+ description: "sr:product-backlog — View product-driven backlog from GitHub Issues and propose top 3 for implementation."
4
+ license: MIT
5
+ compatibility: "Requires GitHub CLI (gh)."
6
+ metadata:
7
+ author: specrails
8
+ version: "1.0"
9
+ ---
10
+
11
+
12
+ Display the product-driven backlog by reading issues/tickets from the configured backlog provider ({{BACKLOG_PROVIDER_NAME}}). These are feature ideas generated through Value Proposition Canvas (VPC) based product discovery, each evaluated against the project's user personas. Use `/sr:update-product-driven-backlog` to generate new ideas.
13
+
14
+ **Input:** $ARGUMENTS (optional: comma-separated areas to filter. If empty, show all.)
15
+
16
+ ---
17
+
18
+ ## Phase 0: Environment Pre-flight
19
+
20
+ Verify the backlog provider is accessible:
21
+
22
+ ```bash
23
+ {{BACKLOG_PREFLIGHT}}
24
+ ```
25
+
26
+ If the backlog provider is unavailable, stop and inform the user.
27
+
28
+ ---
29
+
30
+ ## Execution
31
+
32
+ Launch a **single** sr-product-analyst agent (`subagent_type: sr-product-analyst`) to read and prioritize the backlog.
33
+
34
+ The product-analyst receives this prompt:
35
+
36
+ > You are reading the product-driven backlog from {{BACKLOG_PROVIDER_NAME}} and producing a prioritized view.
37
+
38
+ 1. **Fetch all open product-driven backlog items:**
39
+ ```bash
40
+ {{BACKLOG_FETCH_CMD}}
41
+ ```
42
+
43
+ 2. **Parse each issue/ticket** to extract metadata from the body:
44
+ - **Area**: from `area:*` label
45
+ - **Persona Fit**: from the body's Overview table — extract per-persona scores and total
46
+ - **Effort**: from the body's Overview table (High/Medium/Low)
47
+ - **Description**: from the body's "Feature Description" section
48
+ - **User Story**: from the body's "User Story" section
49
+
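The field extraction in step 2 can be sketched with two small regex helpers, assuming Overview rows use the `| **Field** | value |` form that `/sr:update-product-driven-backlog` produces; `parse_overview` and `parse_section` are illustrative names, not part of the command:

```python
import re

def parse_overview(body):
    """Collect rows of the form `| **Field** | value |` from the Overview table."""
    return {m.group(1): m.group(2)
            for m in re.finditer(r'^\|\s*\*\*(.+?)\*\*\s*\|\s*(.*?)\s*\|',
                                 body, re.MULTILINE)}

def parse_section(body, heading):
    """Return the text under `## <heading>`, up to the next `##` heading."""
    m = re.search(rf'^## {re.escape(heading)}\n(.*?)(?=^## |\Z)',
                  body, re.MULTILINE | re.DOTALL)
    return m.group(1).strip() if m else ""

body = """## Overview

| Field | Value |
|-------|-------|
| **Area** | core |
| **Effort** | Medium |

## Feature Description

Adds inline preview.
"""
print(parse_overview(body)["Effort"])              # Medium
print(parse_section(body, "Feature Description"))  # Adds inline preview.
```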
50
+ 3. **Parse prerequisites for each issue:**
51
+ - Locate the row whose first cell matches `**Prerequisites**` in the issue body's Overview table.
52
+ - If the cell value is `None`, `-`, or empty: set `prereqs = []` for this issue.
53
+ - Otherwise: extract all tokens matching `#\d+` from the cell and set `prereqs = [<numbers>]`.
54
+ - If a prerequisite number does not appear in the fetched issue list, treat it as already satisfied (externally closed). Do not include it in the DAG.
55
+
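A minimal sketch of the prerequisite-parsing rules above (`parse_prereqs` is an illustrative helper name):

```python
import re

def parse_prereqs(cell, open_numbers):
    """Apply step 3's rules to a Prerequisites table cell."""
    if cell.strip() in ("None", "-", ""):
        return []
    refs = [int(n) for n in re.findall(r'#(\d+)', cell)]
    # Numbers absent from the fetched open issues are treated as
    # externally closed, i.e. already satisfied.
    return [n for n in refs if n in open_numbers]

print(parse_prereqs("#12, #17", {12, 17, 42}))  # [12, 17]
print(parse_prereqs("None", {12}))              # []
print(parse_prereqs("#99", {12}))               # []
```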
56
+ 4. **Build dependency graph and detect cycles:**
57
+ - Construct a directed graph where edge `(A → B)` means "issue A must complete before issue B".
58
+ - For each issue with a non-empty `prereqs` list, add an edge from each prerequisite to the issue.
59
+ - Run depth-first cycle detection:
60
+ - Maintain `visited` and `rec_stack` sets.
61
+ - For each unvisited node, run DFS. If a node in `rec_stack` is encountered, a cycle exists.
62
+ - Collect all cycle members into `CYCLE_MEMBERS`.
63
+ - If `CYCLE_MEMBERS` is non-empty, prepare a warning block to render before the backlog table:
64
+ ```
65
+ > **Warning: Circular dependency detected in backlog.**
66
+ > The following issues form a cycle and cannot be safely ordered:
67
+ > #A -> #B -> #A
68
+ > Review these issues and correct the Prerequisites fields.
69
+ ```
70
+ - Compute `in_degree[issue]` for all issues (count of prerequisite edges pointing to each issue from other open backlog issues).
71
+
72
+ 5. **Compute safe implementation order (Kahn's topological sort):**
73
+ - Exclude `CYCLE_MEMBERS` from this computation.
74
+ - Initialize `ready` = all non-cycle issues where `in_degree == 0`.
75
+ - Sort `ready` by Total Persona Score descending.
76
+ - Build `WAVES = []`:
77
+ ```
78
+ while ready is non-empty:
79
+ WAVES.append(copy of ready)
80
+ next_ready = []
81
+ for each issue in ready:
82
+ for each dependent D of issue (edges issue → D):
83
+ in_degree[D] -= 1
84
+ if in_degree[D] == 0: next_ready.append(D)
85
+ sort next_ready by Total Persona Score descending
86
+ ready = next_ready
87
+ ```
88
+ - Store `WAVE_1 = WAVES[0]` (the set of immediately startable features).
89
+
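Steps 4 and 5 together can be sketched as follows; `plan_waves` is an illustrative helper, and the issue numbers and scores in the example are made up:

```python
from collections import defaultdict

def plan_waves(prereqs, score):
    """Build the DAG (step 4), find cycle members, compute Kahn waves (step 5).

    prereqs: issue -> list of prerequisite issue numbers
    score:   issue -> Total Persona Score, used to order issues inside a wave
    """
    edges = defaultdict(list)            # prerequisite -> dependents
    in_degree = {n: 0 for n in prereqs}
    for issue, reqs in prereqs.items():
        for r in reqs:
            edges[r].append(issue)       # edge r -> issue
            in_degree[issue] += 1

    # Step 4: DFS cycle detection with visited / rec_stack.
    visited, rec_stack, path, cycle_members = set(), set(), [], set()

    def dfs(n):
        visited.add(n); rec_stack.add(n); path.append(n)
        for d in edges[n]:
            if d in rec_stack:                           # back edge: cycle
                cycle_members.update(path[path.index(d):])
            elif d not in visited:
                dfs(d)
        rec_stack.discard(n); path.pop()

    for n in prereqs:
        if n not in visited:
            dfs(n)

    # Step 5: Kahn's algorithm in waves, excluding cycle members.
    by_score = lambda n: -score[n]
    ready = sorted((n for n in in_degree
                    if in_degree[n] == 0 and n not in cycle_members),
                   key=by_score)
    waves = []
    while ready:
        waves.append(ready)
        nxt = []
        for n in ready:
            for d in edges[n]:
                in_degree[d] -= 1
                if in_degree[d] == 0 and d not in cycle_members:
                    nxt.append(d)
        ready = sorted(nxt, key=by_score)
    return waves, cycle_members

waves, cycle = plan_waves(
    {1: [], 2: [1], 3: [1], 4: [2, 3], 5: [6], 6: [5]},
    {1: 8, 2: 5, 3: 7, 4: 9, 5: 3, 6: 4})
print(waves)   # [[1], [3, 2], [4]]
print(cycle)   # {5, 6}
```

Note that issues downstream of a cycle never reach `in_degree == 0`, so they stay out of every wave, which matches the `[blocked]` marker in the table rendering.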
90
+ 6. **Group by area**.
91
+
92
+ 7. **Sort within each area by Total Persona Score (descending)**, then by Effort (Low > Medium > High) as a tiebreaker.
93
+
94
+ 8. **Display** as a formatted table per area, then **propose the top 3 items from `WAVE_1`** (features with all prerequisites satisfied) for implementation. If fewer than 3 are in `WAVE_1`, show as many as available and add: "Note: Only {N} feature(s) are available to start immediately — remaining features have unmet prerequisites."
95
+
96
+ [If `CYCLE_MEMBERS` is non-empty, render the cycle warning block immediately before the first area table.]
97
+
98
+ Render each area table with the following format:
99
+ - Append `[blocked]` to the issue title cell if `in_degree[issue] > 0` and the issue is not in `CYCLE_MEMBERS`.
100
+ - Append `[cycle]` to the issue title cell if the issue is in `CYCLE_MEMBERS`.
101
+ - `Prereqs` cell: list prerequisite issue numbers as `#N, #M`, or `—` if none.
102
+
103
+ ```
104
+ ## Product-Driven Backlog
105
+
106
+ {N} open issues | Source: VPC-based product discovery
107
+ Personas: {{PERSONA_NAMES_WITH_ROLES}}
108
+
109
+ ### {Area Name}
110
+
111
+ | # | Issue | {{PERSONA_SCORE_HEADERS}} | Total | Effort | Prereqs |
112
+ |---|-------|{{PERSONA_SCORE_SEPARATORS}}|-------|--------|---------|
113
+ | 1 | #42 Feature name [blocked] | ... | X/{{MAX_SCORE}} | Low | #12, #17 |
114
+ | 2 | #43 Other feature | ... | X/{{MAX_SCORE}} | High | — |
115
+
116
+ ---
117
+
118
+ ## Recommended Next Sprint (Top 3)
119
+
120
+ Ranked by VPC persona score / effort ratio:
121
+
122
+ | Priority | Issue | Area | {{PERSONA_SCORE_HEADERS}} | Total | Effort | Rationale |
123
+ |----------|-------|------|{{PERSONA_SCORE_SEPARATORS}}|-------|--------|-----------|
124
+
125
+ ### Selection criteria
126
+ - Cross-persona features (both 4+/5) prioritized over single-persona
127
+ - Low effort preferred over high effort at same score
128
+ - Critical pain relief weighted higher than gain creation
129
+
130
+ Run `/sr:implement` to start implementing these items.
131
+ ```
132
+
133
+ 9. **Render Safe Implementation Order section** after the Recommended Next Sprint table:
134
+
135
+ ```
136
+ ---
137
+
138
+ ## Safe Implementation Order
139
+
140
+ Features grouped by wave. All features in a wave can start in parallel.
141
+ Features in wave N must complete before wave N+1 begins.
142
+
143
+ | Wave | Issue | Title | Prereqs | Score | Effort |
144
+ |------|-------|-------|---------|-------|--------|
145
+ | 1 | #N | ... | — | X/{{MAX_SCORE}} | Low |
146
+ | 2 | #M | ... | #N | X/{{MAX_SCORE}} | Medium |
147
+
148
+ To implement in this order:
149
+ /sr:batch-implement <issue-refs in wave order> --deps "<A> -> <B>, <C> -> <D>, ..."
150
+
151
+ [If no edges exist in the DAG, omit the --deps clause:]
152
+ /sr:batch-implement <issue-refs>
153
+
154
+ [If CYCLE_MEMBERS is non-empty, append:]
155
+ Cycle members excluded from ordering: #A, #B
156
+ Fix the Prerequisites fields in these issues to include them in the ordering.
157
+ ```
158
+
159
+ Issue refs in the `/sr:batch-implement` command are listed in wave order (wave 1 first, then wave 2, etc.), sorted by persona score within each wave. The `--deps` string is constructed from all edges in the DAG: `"A -> B"` for each edge, comma-separated. If the backlog has no dependencies at all (DAG has no edges), the section still renders showing all features in wave 1 and the `--deps` clause is omitted.
160
+
161
+ 10. If no issues exist:
162
+ ```
163
+ No product-driven backlog issues found. Run `/sr:update-product-driven-backlog` to generate feature ideas.
164
+ ```
165
+
166
+ 11. **[Orchestrator]** After the product-analyst completes, write issue snapshots to `.claude/backlog-cache.json`.
167
+
168
+ **Guard:** If `GH_AVAILABLE=false` (from Phase 0 pre-flight), print `[backlog-cache] Skipped — GH unavailable.` and return. Do not attempt the write.
169
+
170
+ **Fetch all open backlog issues in one call:**
171
+
172
+ ```bash
173
+ gh issue list --label "product-driven-backlog" --state open --json number,title,state,assignees,labels,body,updatedAt
174
+ ```
175
+
176
+ For each issue in the result, build a snapshot object:
177
+ - `number`: integer issue number
178
+ - `title`: issue title string
179
+ - `state`: `"open"` or `"closed"`
180
+ - `assignees`: array of assignee login names, sorted alphabetically
181
+ - `labels`: array of label names, sorted alphabetically
182
+ - `body_sha`: SHA-256 of the raw body string — compute with:
183
+ ```bash
184
+ printf '%s' "{body}" | sha256sum | cut -d' ' -f1
185
+ ```
186
+ If `sha256sum` is not available, fall back to `openssl dgst -sha256 -r` or `shasum -a 256`.
187
+ - `updated_at`: the `updatedAt` value from the GitHub API response
188
+ - `captured_at`: current local time in ISO 8601 format
189
+
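For reference, the same digest computed in-process; `hashlib` agrees with `sha256sum` whenever both see identical bytes (watch out for shells appending a trailing newline):

```python
import hashlib

def body_sha(body):
    """SHA-256 hex digest of the raw body string (UTF-8 bytes)."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

# Known SHA-256 test vector for "abc":
print(body_sha("abc"))  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```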
190
+ **Merge strategy:** If `.claude/backlog-cache.json` already exists and is valid JSON, read it and merge: new snapshot entries overwrite existing entries by issue number key; entries for issue numbers not in the current fetch are preserved (they may be needed by an in-progress `/sr:implement` run). If the file does not exist or is malformed, create it fresh.
191
+
192
+ Write the merged result back to `.claude/backlog-cache.json` with:
193
+ - `schema_version`: `"1"`
194
+ - `provider`: `"github"`
195
+ - `last_updated`: current ISO 8601 timestamp
196
+ - `written_by`: `"product-backlog"`
197
+ - `issues`: the merged map keyed by string issue number
198
+
199
+ If the write fails (e.g., `.claude/` directory does not exist): print `[backlog-cache] Warning: could not write cache. Continuing.` Do not abort.
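The merge-and-write behavior can be sketched as follows; the helper names are illustrative, and producing the timestamp with the local UTC offset is one reasonable reading of "current ISO 8601 timestamp":

```python
import json
from datetime import datetime
from pathlib import Path

CACHE = Path(".claude/backlog-cache.json")

def merge_cache(new_snapshots):
    """New entries win by issue number; entries absent from the current
    fetch are preserved for in-progress /sr:implement runs."""
    try:
        issues = json.loads(CACHE.read_text()).get("issues", {})
    except (OSError, ValueError):
        issues = {}  # missing or malformed file: start fresh
    issues.update({str(n): snap for n, snap in new_snapshots.items()})
    return {
        "schema_version": "1",
        "provider": "github",
        "last_updated": datetime.now().astimezone().isoformat(timespec="seconds"),
        "written_by": "product-backlog",
        "issues": issues,
    }

def write_cache(payload):
    try:
        CACHE.write_text(json.dumps(payload, indent=2))
    except OSError:
        print("[backlog-cache] Warning: could not write cache. Continuing.")
```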
@@ -0,0 +1,216 @@
1
+ ---
2
+ name: sr-refactor-recommender
3
+ description: "sr:refactor-recommender — Scan the codebase for refactoring opportunities ranked by impact/effort ratio. Optionally creates GitHub Issues for tracking."
4
+ license: MIT
5
+ compatibility: "Requires git."
6
+ metadata:
7
+ author: specrails
8
+ version: "1.0"
9
+ ---
10
+
11
+
12
+ Scan the codebase for refactoring opportunities, score each by impact/effort ratio and VPC persona value, and optionally create GitHub Issues for the top findings in {{BACKLOG_PROVIDER_NAME}}.
13
+
14
+ **Input:** `$ARGUMENTS` — optional: comma-separated paths to scope the analysis. Flags: `--dry-run` (print findings without creating issues).
15
+
16
+ ---
17
+
18
+ ## Phase 0: Pre-flight
19
+
20
+ Check whether the GitHub CLI is available:
21
+
22
+ ```bash
23
+ {{BACKLOG_PREFLIGHT}}
24
+ ```
25
+
26
+ Set `GH_AVAILABLE=true` if the command succeeds, `GH_AVAILABLE=false` otherwise. Do not stop — analysis proceeds regardless. Parse `--dry-run` from `$ARGUMENTS` and set `DRY_RUN=true` if present.
27
+
28
+ ---
29
+
30
+ ## Phase 1: Scope
31
+
32
+ Parse paths from `$ARGUMENTS` after stripping any flags. If no paths are provided, scan the entire repository.
33
+
34
+ Always exclude the following from all analysis:
35
+
36
+ - `node_modules/`
37
+ - `.git/`
38
+ - `.claude/`
39
+ - `vendor/`
40
+ - `dist/`
41
+ - `build/`
42
+
43
+ ---
44
+
45
+ ## Phase 1.5: VPC Context
46
+
47
+ Check whether persona files exist at `.claude/agents/personas/`. This path is present in any repo that has run `/setup`.
48
+
49
+ ```bash
50
+ ls .claude/agents/personas/ 2>/dev/null
51
+ ```
52
+
53
+ If the directory exists and contains persona files, set `VPC_AVAILABLE=true`. Otherwise set `VPC_AVAILABLE=false` and skip all VPC steps (they are optional enrichment, not blockers).
54
+
55
+ When `VPC_AVAILABLE=true`, read each persona file and extract a compact VPC summary. For each persona record:
56
+
57
+ - **name** — persona display name (e.g. "Alex — The Lead Dev")
58
+ - **top_jobs** — up to 3 functional jobs relevant to code quality and maintainability
59
+ - **critical_pains** — up to 3 pains marked Critical or High related to code reliability, complexity, or developer experience
60
+ - **high_gains** — up to 3 gains marked High related to code clarity, speed, or confidence
61
+
62
+ Store these as an in-memory `VPC_PROFILES` list. You will use it in Phase 3 to score persona fit.
63
+
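The resulting structure might look like this; the entry below is hypothetical, echoing the "Alex — The Lead Dev" example above, and real profiles come from the persona files:

```python
# Hypothetical example entry; real profiles are read from .claude/agents/personas/.
VPC_PROFILES = [
    {
        "name": "Alex — The Lead Dev",
        "top_jobs": ["Ship features without regressions"],
        "critical_pains": ["Complex code makes AI-assisted changes risky"],
        "high_gains": ["Confidence that refactors are safe"],
    },
]
print(VPC_PROFILES[0]["name"])  # Alex — The Lead Dev
```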
64
+ ---
65
+
66
+ ## Phase 2: Analysis
67
+
68
+ Analyze the scoped files across six categories. For each finding record:
69
+
70
+ - **file** — relative path
71
+ - **line_range** — start and end line numbers
72
+ - **current_snippet** — the problematic code as-is
73
+ - **proposed_snippet** — concrete refactored version
74
+ - **rationale** — one sentence explaining the improvement
75
+
76
+ ### Duplicate Code
77
+
78
+ Find code blocks larger than 10 lines that are substantially similar across two or more files. Consolidation into a shared function or module is the expected refactoring.
79
+
80
+ ### Long Functions
81
+
82
+ Find functions or methods exceeding 50 lines. Extraction into smaller, single-purpose functions is the expected refactoring.
83
+
84
+ ### Large Files
85
+
86
+ Find files exceeding 300 lines. Splitting into cohesive modules is the expected refactoring.
87
+
88
+ ### Dead Code
89
+
90
+ Find unused exports, unreferenced functions, and commented-out blocks that have not been active for the lifetime of the file. Deletion or archival is the expected refactoring.
91
+
92
+ ### Outdated Patterns
93
+
94
+ Find deprecated APIs and old language syntax: `var` instead of `let`/`const`, callbacks instead of `async`/`await`, legacy framework APIs with documented replacements, etc. Modernisation to current idioms is the expected refactoring.
95
+
96
+ ### Complex Logic
97
+
98
+ Find deeply nested conditionals (more than 3 levels) and functions with high cyclomatic complexity. Extraction, early-return guards, or strategy patterns are the expected refactoring.
99
+
100
+ ---
101
+
102
+ ## Phase 3: Score and Rank
103
+
104
+ Score every finding on three dimensions (1–5 each):
105
+
106
+ - **Impact** — how much the refactoring improves code quality, readability, or maintainability
107
+ - **Effort** — how hard the refactoring is to implement (1 = trivial, 5 = major)
108
+ - **VPC Value** — how directly this refactoring addresses persona jobs, pains, or gains (1 = no relevance, 5 = resolves a critical persona pain or delivers a high-value gain). Set to 3 when `VPC_AVAILABLE=false`.
109
+
110
+ **Scoring VPC Value** (only when `VPC_AVAILABLE=true`):
111
+
112
+ For each finding, reason over `VPC_PROFILES`:
113
+
114
+ - Does fixing this reduce a **Critical/High pain** for any persona? (e.g. complex logic → harder to trust AI output → Alex's "agents go off the rails" pain) → score 4–5
115
+ - Does fixing this deliver a **High gain** for any persona? (e.g. extracting a function → cleaner API surface → easier onboarding → Sara's gain) → score 3–4
116
+ - Is there indirect persona value? (e.g. dead code removal → smaller codebase → easier contributor review → Kai) → score 2–3
117
+ - No clear persona relevance → score 1–2
118
+
119
+ Assign one `vpc_value` integer per finding, and note the **primary persona** and **rationale** (one sentence).
120
+
121
+ **Composite score**: `impact * 2 + (6 - effort) + vpc_value`. Higher is better.
122
+
123
+ Sort all findings by composite score descending. If the same code block was flagged by multiple categories, keep only the highest-scored entry and discard the duplicates.
124
+
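The scoring, dedup, and sort rules can be sketched as below; keying duplicates on `(file, line_range)` is an assumption, since the text only says "the same code block":

```python
def composite(f):
    """Composite score: impact * 2 + (6 - effort) + vpc_value."""
    return f["impact"] * 2 + (6 - f["effort"]) + f["vpc_value"]

def rank(findings):
    """Sort by composite descending, keeping one entry per code block."""
    best = {}
    for f in findings:
        key = (f["file"], f["line_range"])   # same block flagged by several categories
        if key not in best or composite(f) > composite(best[key]):
            best[key] = f
    return sorted(best.values(), key=composite, reverse=True)

findings = [
    {"file": "a.py", "line_range": (1, 60), "impact": 4, "effort": 2, "vpc_value": 4},
    {"file": "a.py", "line_range": (1, 60), "impact": 3, "effort": 2, "vpc_value": 3},
    {"file": "b.py", "line_range": (5, 9),  "impact": 5, "effort": 1, "vpc_value": 5},
]
print([composite(f) for f in rank(findings)])  # [20, 16]
```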
125
+ ---
126
+
127
+ ## Phase 4: Create GitHub Issues
128
+
129
+ Skip this phase if `GH_AVAILABLE=false` or `DRY_RUN=true`.
130
+
131
+ First ensure the tracking labels exist:
132
+
133
+ ```bash
134
+ gh label create "refactor-opportunity" --color "B60205" --force
135
+ ```
136
+
137
+ Fetch existing open issues that already carry the label to avoid duplicates:
138
+
139
+ ```bash
140
+ gh issue list --label "refactor-opportunity" --state open --limit 100 --json number,title
141
+ ```
142
+
143
+ For each of the **top 5** findings (by composite score) that does not already have a matching open issue, create a GitHub Issue with the following body:
144
+
145
+ ````
146
+ ## Refactoring Opportunity: {description}
147
+ 
148
+ **Category**: {category}
149
+ **File**: {file}:{line_range}
150
+ **Impact**: {impact}/5 | **Effort**: {effort}/5 | **Score**: {composite}
151
+ {vpc_line}
152
+ 
153
+ ### Current Code
154
+ ```{lang}
155
+ {current_snippet}
156
+ ```
157
+ 
158
+ ### Proposed Refactoring
159
+ ```{lang}
160
+ {proposed_snippet}
161
+ ```
162
+ 
163
+ ### Rationale
164
+ {rationale}
165
+ 
166
+ ---
167
+ _Generated by `/sr:refactor-recommender` in {{PROJECT_NAME}}_
168
+ ````
169
+
170
+ Where `{vpc_line}` is included only when `VPC_AVAILABLE=true`:
171
+ `**VPC Value**: {vpc_value}/5 — {vpc_persona}: {vpc_rationale}`
172
+
173
+ ---
174
+
175
+ ## Phase 5: Output Summary
176
+
177
+ Print the following report:
178
+
179
+ ````
180
+ ## Refactoring Opportunities — {{PROJECT_NAME}}
181
+ 
182
+ {N} opportunities found | Sorted by composite score
183
+ {vpc_header}
184
+ 
185
+ | # | Category | File | Impact | Effort | VPC | Score | Description |
186
+ |---|----------|------|--------|--------|-----|-------|-------------|
187
+ | 1 | {category} | {file}:{line_range} | {impact}/5 | {effort}/5 | {vpc_value}/5 | {composite} | {description} |
188
+ ...
189
+ 
190
+ ### Top 3 Detailed Recommendations
191
+ 
192
+ #### 1. {description}
193
+ **File**: {file}:{line_range}
194
+ **Category**: {category} | **Score**: {composite}
195
+ {vpc_detail}
196
+ 
197
+ **Current:**
198
+ ```{lang}
199
+ {current_snippet}
200
+ ```
201
+ 
202
+ **Proposed:**
203
+ ```{lang}
204
+ {proposed_snippet}
205
+ ```
206
+ 
207
+ **Rationale:** {rationale}
208
+ 
209
+ (repeat for #2 and #3)
210
+ 
211
+ Issues created: {N} (or "dry-run: no issues created")
212
+ ````
213
+
214
+ Where:
215
+ - `{vpc_header}` is `VPC personas loaded: {persona names}` when `VPC_AVAILABLE=true`, or `VPC personas: not found (run /setup to enable)` otherwise.
216
+ - `{vpc_detail}` is `**VPC Value**: {vpc_value}/5 — {vpc_persona}: {vpc_rationale}` when `VPC_AVAILABLE=true`, omitted otherwise.
@@ -0,0 +1,275 @@
1
+ ---
2
+ name: sr-update-backlog
3
+ description: "sr:update-product-driven-backlog — Generate new feature ideas through product discovery, create GitHub Issues."
4
+ license: MIT
5
+ compatibility: "Requires GitHub CLI (gh)."
6
+ metadata:
7
+ author: specrails
8
+ version: "1.0"
9
+ ---
10
+
11
+
12
+ Analyze the project from a **product perspective** to generate new feature ideas. Results are synced to the configured backlog provider (for GitHub, as Issues labeled `product-driven-backlog`). Use `/sr:product-backlog` to view current ideas.
13
+
14
+ **Input:** $ARGUMENTS (optional: comma-separated areas to focus on. If empty, analyze all areas.)
15
+
16
+ **IMPORTANT: This command only creates GitHub Issues.** You may read files and search code to understand current capabilities, but you must NEVER write application code.
17
+
18
+ ---
19
+
20
+ ## Areas
21
+
22
+ {{AREA_TABLE}}
23
+
24
+ ---
25
+
26
+ ## Execution
27
+
28
+ Launch a **single** explorer subagent (`subagent_type: Explore`, `run_in_background: true`) for product discovery.
29
+
30
+ The Explore agent receives this prompt:
31
+
32
+ > You are a product strategist analyzing the {{PROJECT_NAME}} project to generate new feature ideas using the **Value Proposition Canvas** framework.
33
+ >
34
+ > **Your goal:** For each area, propose 2-4 new features that would significantly improve the user experience. Every feature MUST be evaluated against the project's personas.
35
+ >
36
+ > **Areas to analyze:** {all areas or filtered by user input}
37
+ >
38
+ > ### Step 0: Read Personas
39
+ >
40
+ > **Before anything else**, read all persona files:
41
+ > {{PERSONA_FILE_READ_LIST}}
42
+ >
43
+ > These contain full Value Proposition Canvas profiles (jobs, pains, gains).
44
+ >
45
+ > ### Research steps
46
+ >
47
+ > 1. **Understand current capabilities** — Read codebase structure
48
+ > 2. **Check existing backlog** — Avoid duplicating existing issues
49
+ > 3. **Think through each persona's day** — For each area:
50
+ > - What does each persona need here?
51
+ > - What would a competitive tool offer?
52
+ > - What data is available but not surfaced?
53
+ >
54
+ > 4. **For each idea, produce a VPC evaluation:**
55
+ > - **Feature name** (short, descriptive)
56
+ > - **User story** ("As a [user type], I want to [action] so that [benefit]")
57
+ > - **Feature description** (2-3 sentences)
58
+ > - **VPC Fit** per persona: Jobs, Pains relieved, Gains created, Score (0-5)
59
+ > - **Total Persona Score**: sum of all persona scores / max possible
60
+ > - **Effort** (High/Medium/Low)
61
+ > - **Inspiration** (competitor or product pattern)
62
+ > - **Prerequisites**
63
+ > - **Area**
64
+
65
+ ---
66
+
67
+ ## Assembly — Backlog Sync
68
+
69
+ After the Explore agent completes:
70
+
71
+ 1. **Display** results to the user.
72
+
73
+ 2. Read `.claude/backlog-config.json` and extract:
74
+ - `BACKLOG_PROVIDER` (`github`, `jira`, or `none`)
75
+ - `BACKLOG_WRITE` (from `write_access`)
76
+
77
+ ### If `BACKLOG_WRITE=false` — Display only (no sync)
78
+
79
+ 3. **Display all proposed features** in a structured format so the user can manually create tickets:
80
+
81
+ ```
82
+ ## Product Discovery Results (not synced)
83
+
84
+ Backlog access is set to **read-only**. The following features were discovered
85
+ but NOT created in {{BACKLOG_PROVIDER_NAME}}. Create them manually if desired.
86
+
87
+ ### Feature 1: {name}
88
+ - **Area:** {area}
89
+ - **Persona Fit:** {{PERSONA_FIT_FORMAT}}
90
+ - **Effort:** {level}
91
+ - **User Story:** As a {user}, I want to {action} so that {benefit}
92
+ - **Description:** {2-3 sentences}
93
+
94
+ (repeat for each feature)
95
+
96
+ ### Summary
97
+ | # | Feature | {{PERSONA_SCORE_HEADERS}} | Total | Effort |
98
+ |---|---------|{{PERSONA_SCORE_SEPARATORS}}|-------|--------|
99
+ | 1 | ... | ... | ... | ... |
100
+ ```
101
+
102
+ 4. **Do NOT** create, modify, or comment on any issues/tickets.
103
+
104
+ ### If provider=github and BACKLOG_WRITE=true — Sync to GitHub Issues
105
+
106
+ 3. **Fetch existing product-driven backlog items** to avoid duplicates:
107
+ ```bash
108
+ {{BACKLOG_FETCH_ALL_CMD}}
109
+ ```
110
+
111
+ 4. **Initialize backlog labels/tags** (idempotent):
112
+ ```bash
113
+ {{BACKLOG_INIT_LABELS_CMD}}
114
+ ```
115
+
116
+ 5. **For each proposed feature, create a backlog item** (skip duplicates):
117
+ ```bash
118
+ {{BACKLOG_CREATE_CMD}}
119
+ > **This is a product feature idea.** Generated through VPC-based product discovery.
120
+
121
+ ## Overview
122
+
123
+ | Field | Value |
124
+ |-------|-------|
125
+ | **Area** | {Area} |
126
+ | **Persona Fit** | {{PERSONA_FIT_FORMAT}} |
127
+ | **Effort** | {High/Medium/Low} — {justification} |
128
+ | **Inspiration** | {source or "Original idea"} |
129
+ | **Prerequisites** | {list or "None"} |
130
+
131
+ ## User Story
132
+
133
+ As a **{user type}**, I want to **{action}** so that **{benefit}**.
134
+
135
+ ## Feature Description
136
+
137
+ {2-3 sentence description}
138
+
139
+ ## Value Proposition Canvas
140
+
141
+ {{PERSONA_VPC_SECTIONS}}
142
+
143
+ ## Implementation Notes
144
+
145
+ {Brief notes on existing infrastructure and what needs to be built}
146
+
147
+ ---
148
+ _Auto-generated by `/sr:update-product-driven-backlog` on {DATE}_
149
+ EOF
150
+ )"
151
+ ```
152
+
153
+ 6. **Report** sync results:
154
+ ```
155
+ Product discovery complete:
156
+ - Created: {N} new feature ideas in GitHub Issues
157
+ - Skipped: {N} duplicates (already exist)
158
+ ```
159
+
160
+ ### If provider=jira and BACKLOG_WRITE=true — Sync to JIRA
161
+
162
+ Read from `.claude/backlog-config.json`:
163
+ - `JIRA_BASE_URL`, `JIRA_PROJECT_KEY`, `AUTH_METHOD`
164
+ - `PROJECT_LABEL` (may be empty string)
165
+ - `EPIC_MAPPING` (object mapping area name → JIRA epic key)
166
+ - `EPIC_LINK_FIELD` (default: `"parent"`)
167
+ - `CLI_INSTALLED`
168
+
169
+ #### Step A: Authenticate
170
+
171
+ If `AUTH_METHOD=api_token`: require env vars `JIRA_USER_EMAIL` and `JIRA_API_TOKEN`.
172
+ If either is missing:
173
+ ```
174
+ Error: JIRA_USER_EMAIL and JIRA_API_TOKEN must be set in your environment.
175
+ See: https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/
176
+ ```
177
+ Stop and do not proceed with sync.
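If this step is scripted rather than run through curl, the header can be built like this; unlike the `base64` CLI, which may wrap long input onto multiple lines, `b64encode` emits no newlines:

```python
import base64
import os

def basic_auth_header():
    """Basic auth header for the JIRA REST calls below."""
    email = os.environ["JIRA_USER_EMAIL"]   # required, per Step A
    token = os.environ["JIRA_API_TOKEN"]    # required, per Step A
    cred = base64.b64encode(f"{email}:{token}".encode()).decode()
    return {"Authorization": f"Basic {cred}"}
```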
178
+
179
+ #### Step B: Fetch existing JIRA stories (duplicate check)
180
+
181
+ ```bash
182
+ curl -s \
183
+ -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
184
+ -H "Content-Type: application/json" \
185
+ "${JIRA_BASE_URL}/rest/api/3/search?jql=project%3D${JIRA_PROJECT_KEY}+AND+labels%3Dproduct-backlog+AND+issuetype%3DStory&fields=summary&maxResults=200"
186
+ ```
187
+
188
+ Store all `summary` values. Skip any feature whose title matches an existing summary.
189
+
190
+ #### Step C: Group features by area
191
+
192
+ From the Explore agent output, group features into `area -> [features]`.
193
+ Area names: strip the `area:` prefix (e.g., `area:core` → `core`).
194
+
195
+ #### Step D: Ensure epics exist per area
196
+
197
+ For each unique area:
198
+
199
+ 1. **Cache hit:** If `EPIC_MAPPING[area]` is set: use that key. Proceed to Step E.
200
+
201
+ 2. **JIRA search:** Search for existing epic:
202
+ ```bash
203
+ curl -s \
204
+ -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
205
+ -H "Content-Type: application/json" \
206
+ "${JIRA_BASE_URL}/rest/api/3/search?jql=project%3D${JIRA_PROJECT_KEY}+AND+issuetype%3DEpic+AND+summary+%7E+%22${AREA_NAME}%22&fields=summary,key"
207
+ ```
208
+ If found: set `EPIC_MAPPING[area] = <key>`. Proceed to Step E.
209
+
210
+ 3. **Create epic:**
211
+ ```bash
212
+ curl -s -X POST \
213
+ -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
214
+ -H "Content-Type: application/json" \
215
+ "${JIRA_BASE_URL}/rest/api/3/issue" \
216
+ --data '{
217
+ "fields": {
218
+ "project": {"key": "'"${JIRA_PROJECT_KEY}"'"},
219
+ "issuetype": {"name": "Epic"},
220
+ "summary": "'"${AREA_DISPLAY_NAME}"'",
221
+ "labels": ["product-backlog"]
222
+ }
223
+ }'
224
+ ```
225
+ If `PROJECT_LABEL` is non-empty, add it to the `labels` array.
226
+ Set `EPIC_MAPPING[area] = <returned key>`.
227
+
228
+ After all areas are processed: write the updated `EPIC_MAPPING` back to `.claude/backlog-config.json`.
229
+
230
+ #### Step E: Create Story tickets
231
+
232
+ For each feature not in the duplicate list:
233
+
234
+ ```bash
235
+ curl -s -X POST \
236
+ -H "Authorization: Basic $(printf '%s' "$JIRA_USER_EMAIL:$JIRA_API_TOKEN" | base64 | tr -d '\n')" \
237
+ -H "Content-Type: application/json" \
238
+ "${JIRA_BASE_URL}/rest/api/3/issue" \
239
+ --data '{
240
+ "fields": {
241
+ "project": {"key": "'"${JIRA_PROJECT_KEY}"'"},
242
+ "issuetype": {"name": "Story"},
243
+ "summary": "'"${FEATURE_NAME}"'",
244
+ "description": {
245
+ "type": "doc",
246
+ "version": 1,
247
+ "content": [{
248
+ "type": "codeBlock",
249
+ "content": [{"type": "text", "text": "'"${VPC_BODY_ESCAPED}"'"}]
250
+ }]
251
+ },
252
+ "labels": ["product-backlog"],
253
+ "'"${EPIC_LINK_FIELD}"'": {"key": "'"${EPIC_KEY}"'"}
254
+ }
255
+ }'
256
+ ```
257
+
258
+ If `PROJECT_LABEL` is non-empty: add it to the `labels` array.
259
+ `VPC_BODY_ESCAPED`: the full VPC markdown body escaped for embedding in a JSON string (`\` → `\\`, `"` → `\"`, and literal newlines → `\n`).
260
+
261
+ **Error handling:**
262
+ - If the API returns an error about the epic key (dead key): log a warning, create the story without epic linkage, continue.
263
+ - Any other API error: log the error message and story name, continue to next story.
264
+
265
+ #### Step F: Report results
266
+
267
+ ```
268
+ JIRA sync complete:
269
+ - Epics created: {N} (area names)
270
+ - Epics reused: {N} (area names)
271
+ - Stories created: {N}
272
+ - Stories skipped (duplicates): {N}
273
+ - Stories without epic (errors): {N}
274
+ - Project label applied: {PROJECT_LABEL} / (none — label was empty)
275
+ ```