@sandrinio/vbounce 1.9.0 → 2.1.0

@@ -1,19 +1,47 @@
  ---
  name: improve
- description: "Use when the V-Bounce Engine framework needs to evolve based on accumulated agent feedback. Activates after sprint retros, when recurring friction patterns emerge, or when the user explicitly asks to improve the framework. Reads Process Feedback from sprint reports, identifies patterns, proposes specific changes to templates, skills, brain files, scripts, and agent configs, and applies approved changes. This is the system's self-improvement loop."
+ description: "Use when the V-Bounce Engine framework needs to evolve based on accumulated agent feedback. Activates after sprint retros, when recurring friction patterns emerge, or when the user explicitly asks to improve the framework. Reads Process Feedback from sprint reports, analyzes LESSONS.md for automation candidates, identifies patterns, proposes specific changes to templates, skills, brain files, scripts, and agent configs with impact levels, and applies approved changes. This is the system's self-improvement loop."
  ---

  # Framework Self-Improvement

  ## Purpose

- V-Bounce Engine is not static. Every sprint generates friction signals from agents who work within the framework daily. This skill closes the feedback loop: it reads what agents struggled with, identifies patterns, and proposes targeted improvements to the framework itself.
+ V-Bounce Engine is not static. Every sprint generates friction signals from agents who work within the framework daily. This skill closes the feedback loop: it reads what agents struggled with, analyzes which lessons can be automated, identifies patterns, and proposes targeted improvements to the framework itself.

  **Core principle:** No framework change happens without human approval. The system suggests — the human decides.

+ ## Impact Levels
+
+ Every improvement proposal is classified by impact to help the human prioritize:
+
+ | Level | Label | Meaning | Timeline |
+ |-------|-------|---------|----------|
+ | **P0** | Critical | Blocks agent work or causes incorrect output | Fix before next sprint |
+ | **P1** | High | Causes rework — bounces, wasted tokens, repeated manual steps | Fix this improvement cycle |
+ | **P2** | Medium | Friction that slows agents but does not block | Fix within 2 sprints |
+ | **P3** | Low | Polish — nice-to-have, batch with other improvements | Batch when convenient |
+
+ ### How Impact Is Determined
+
+ | Signal | Impact |
+ |--------|--------|
+ | Blocker finding + recurring across 2+ sprints | **P0** |
+ | Blocker finding (single sprint) | **P1** |
+ | Friction finding recurring across 2+ sprints | **P1** |
+ | Lesson with mechanical rule (can be a gate check or script) | **P1** |
+ | Previous improvement that didn't resolve its finding | **P1** |
+ | Friction finding (single sprint) | **P2** |
+ | Lesson graduation candidate (3+ sprints old) | **P2** |
+ | Low first-pass rate or high correction tax | **P1** |
+ | High bounce rate | **P2** |
+ | Framework health checks | **P3** |
+
  ## When to Use

- - After every 2-3 sprints (recommended cadence)
+ - **Automatically** — `vbounce sprint close S-XX` runs the improvement pipeline and regenerates `.bounce/improvement-suggestions.md` (overwrites previous — always reflects latest data)
+ - **On demand** — `vbounce improve S-XX` runs the full pipeline (trends + analyzer + suggestions)
+ - **Applying changes:** After every 1-3 sprints, the human reviews suggestions and runs `/improve` to apply approved ones. The analysis runs every sprint; applying changes is the human's call.
  - When the same Process Feedback appears across multiple sprint reports
  - When the user explicitly asks to improve templates, skills, or process
  - When a sprint's Framework Self-Assessment reveals Blocker-severity findings
@@ -21,70 +49,102 @@ V-Bounce Engine is not static. Every sprint generates friction signals from agen

  ## Trigger

- `/improve` OR when the Team Lead identifies recurring framework friction during Sprint Consolidation.
+ `/improve` OR `vbounce improve S-XX` OR when the Team Lead identifies recurring framework friction during Sprint Consolidation.

  ## Announcement

  When using this skill, state: "Using improve skill to evaluate and propose framework changes."

+ ## The Automated Pipeline
+
+ The self-improvement pipeline runs automatically on `vbounce sprint close` and can be triggered manually via `vbounce improve S-XX`:
+
+ ```
+ vbounce sprint close S-XX
+ │
+ ├── scripts/sprint_trends.mjs → .bounce/trends.md
+ │
+ ├── scripts/post_sprint_improve.mjs → .bounce/improvement-manifest.json
+ │   ├── Parse Sprint Report §5 Framework Self-Assessment tables
+ │   ├── Parse LESSONS.md for automation candidates
+ │   ├── Cross-reference archived sprint reports for recurring patterns
+ │   └── Check if previous improvements resolved their findings
+ │
+ └── scripts/suggest_improvements.mjs → .bounce/improvement-suggestions.md
+     ├── Consume improvement-manifest.json
+     ├── Add metric-driven suggestions (bounce rate, correction tax, first-pass rate)
+     ├── Add lesson graduation candidates
+     └── Format with impact levels for human review
+ ```
+
+ ### Output Files
+
+ | File | Purpose |
+ |------|---------|
+ | `.bounce/improvement-manifest.json` | Machine-readable proposals with metadata (consumed by this skill) |
+ | `.bounce/improvement-suggestions.md` | Human-readable improvement suggestions with impact levels |
+ | `.bounce/trends.md` | Cross-sprint trend data |
+
  ## Input Sources

  The improve skill reads from multiple signals, in priority order:

- ### 1. Sprint Report §5 — Framework Self-Assessment (Primary)
- The structured retro tables are the richest source. Each row has:
+ ### 1. Improvement Manifest (Primary — Machine-Generated)
+ Read `.bounce/improvement-manifest.json` first. It contains pre-analyzed proposals with impact levels, automation classifications, recurrence data, and effectiveness checks. This is the richest, most structured input.
+
+ ### 2. Sprint Report §5 — Framework Self-Assessment
+ The structured retro tables are the richest human-authored source. Each row has:
  - Finding (what went wrong)
  - Source Agent (who experienced it)
  - Severity (Friction vs Blocker)
  - Suggested Fix (agent's proposal)

- ### 2. LESSONS.md — Recurring Patterns
- Lessons that point to *process* problems rather than *code* problems:
- - "Always check X before Y" → the template should enforce this ordering
- - "Agent kept missing Z" → the handoff report is missing a field
- - Lessons that keep getting re-flagged sprint after sprint
+ ### 3. LESSONS.md — Automation Candidates
+ Lessons are classified by automation potential:
+
+ | Automation Type | What to Look For | Target |
+ |----------------|-----------------|--------|
+ | **gate_check** | Rules with "Always check...", "Never use...", "Must have..." | `.bounce/gate-checks.json` or `pre_gate_runner.sh` |
+ | **script** | Rules with "Run X before Y", "Use X instead of Y" | `scripts/` |
+ | **template_field** | Rules with "Include X in...", "Add X to the story/epic/template" | `templates/*.md` |
+ | **agent_config** | General behavioral rules proven over 3+ sprints | `brains/claude-agents/*.md` |
+
+ **Key insight:** Lessons tell you WHAT to enforce. Sprint retro tells you WHERE the framework is weak. Together they drive targeted improvements.
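The classification in the table above is mechanical enough to sketch as a rule matcher. This is a sketch only: the phrase patterns come from the table, but the function name, signature, and the 3-sprint threshold handling are assumptions, not the actual `post_sprint_improve.mjs` implementation.

```javascript
// Sketch: map a lesson's rule text to an automation target.
// The phrase patterns mirror the table above; everything else is assumed.
function classifyLesson(rule, sprintsActive = 0) {
  if (/always check|never use|must have/i.test(rule)) return "gate_check";
  if (/run .+ before|use .+ instead of/i.test(rule)) return "script";
  if (/include .+ in|add .+ to the/i.test(rule)) return "template_field";
  if (sprintsActive >= 3) return "agent_config"; // proven over 3+ sprints
  return "none"; // no automation target yet — stays a plain lesson
}
```

A `gate_check` result feeds the gate-check proposal flow; `none` means the lesson stays advisory until a pattern emerges.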

- ### 3. Sprint Execution Metrics
+ ### 4. Sprint Execution Metrics
  Quantitative signals from Sprint Report §3:
  - High bounce ratios → story templates may need better acceptance criteria guidance
  - High correction tax → handoffs may be losing critical context
  - Escalation patterns → complexity labels may need recalibration

- ### 4. Agent Process Feedback (Raw)
+ ### 5. Improvement Effectiveness
+ The pipeline checks whether previously applied improvements resolved their target findings. Unresolved improvements are re-escalated at P1 priority.
+
+ ### 6. Agent Process Feedback (Raw)
  If sprint reports aren't available, read individual agent reports from `.bounce/archive/` and extract `## Process Feedback` sections directly.
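Extracting those sections can be sketched as a simple heading scan. The section name and the `.bounce/archive/` location come from the text above; the helper itself is illustrative, not part of the framework's tooling.

```javascript
// Sketch: pull the `## Process Feedback` section out of one agent report
// (reports live under .bounce/archive/ per the skill text).
function extractProcessFeedback(markdown) {
  const lines = markdown.split("\n");
  const start = lines.findIndex((l) => l.trim() === "## Process Feedback");
  if (start === -1) return null; // report has no feedback section
  const rest = lines.slice(start + 1);
  const end = rest.findIndex((l) => l.startsWith("## ")); // stop at next H2
  return rest.slice(0, end === -1 ? rest.length : end).join("\n").trim();
}
```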

  ## The Improvement Process

- ### Step 1: Gather Signals
+ ### Step 1: Read the Manifest
  ```
- 1. Read the last 2-3 Sprint Reports (§5 Framework Self-Assessment)
- 2. Read LESSONS.md — filter for process-related entries
- 3. Read Sprint Execution Metrics — flag anomalies
- 4. If no sprint reports exist yet, read raw agent reports from .bounce/archive/
+ 1. Read .bounce/improvement-manifest.json (if it exists)
+ 2. Read .bounce/improvement-suggestions.md for human-readable context
+ 3. If no manifest exists, run: vbounce improve S-XX to generate one
  ```

- ### Step 2: Pattern Detection
- Group findings by framework area:
-
- | Area | What to Look For | Files Affected |
- |------|-----------------|----------------|
- | **Templates** | Missing fields, unused sections, ambiguous instructions | `templates/*.md` |
- | **Agent Handoffs** | Missing report fields, redundant data, unclear formats | `brains/claude-agents/*.md` |
- | **Context Prep** | Missing context, stale prep packs, truncation issues | `scripts/prep_sprint_context.mjs`, `scripts/prep_qa_context.mjs`, `scripts/prep_arch_context.mjs` |
- | **Skills** | Unclear instructions, missing steps, outdated references | `skills/*/SKILL.md`, `skills/*/references/*` |
- | **Process Flow** | Unnecessary steps, wrong ordering, missing gates | `skills/agent-team/SKILL.md`, `skills/doc-manager/SKILL.md` |
- | **Tooling** | Script failures, validation gaps, missing automation | `scripts/*`, `bin/*` |
- | **Brain Files** | Stale rules, missing rules, inconsistencies across brains | `brains/CLAUDE.md`, `brains/GEMINI.md`, `brains/AGENTS.md`, `brains/cursor-rules/*.mdc` |
+ ### Step 2: Supplement with Manual Analysis
+ The manifest handles mechanical detection. The `/improve` skill adds judgment:
+ - Are there patterns the scripts can't detect? (e.g., misaligned mental models between agents)
+ - Do the metric anomalies have root causes not captured in §5?
+ - Are there skill instructions that agents consistently misinterpret?

- Deduplicate: if 3 agents report the same issue, that's 1 finding with 3 votes — not 3 findings.
+ ### Step 3: Prioritize Using Impact Levels
+ Rank all proposals (manifest + manual) by impact:

- ### Step 3: Prioritize
- Rank findings by impact:
-
- 1. **Blockers reported by 2+ agents** — fix immediately
- 2. **Friction reported by 2+ agents** — fix in this improvement pass
- 3. **Blockers reported once** — fix if the root cause is clear
- 4. **Friction reported once** — note for next improvement pass (may be a one-off)
+ 1. **P0 Critical** — Fix before next sprint. Non-negotiable.
+ 2. **P1 High** — Fix in this improvement pass.
+ 3. **P2 Medium** — Fix if bandwidth allows, otherwise defer.
+ 4. **P3 Low** — Batch with other improvements when convenient.
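The ordering above can be sketched as a comparator (a sketch; the proposal object shape is an assumption, not the manifest's actual schema):

```javascript
// Sketch: sort proposals so P0 comes first and P3 last, for presentation.
const IMPACT_ORDER = { P0: 0, P1: 1, P2: 2, P3: 3 };

function rankProposals(proposals) {
  return [...proposals].sort(
    (a, b) => IMPACT_ORDER[a.impact] - IMPACT_ORDER[b.impact]
  );
}
```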

  ### Step 4: Propose Changes
  For each finding, write a concrete proposal:
@@ -92,7 +152,8 @@ For each finding, write a concrete proposal:
  ```markdown
  ### Proposal {N}: {Short title}

- **Finding:** {What went wrong from the retro}
+ **Impact:** {P0/P1/P2/P3} — {reason}
+ **Finding:** {What went wrong — from the retro or lesson}
  **Pattern:** {How many times / sprints this appeared}
  **Root Cause:** {Why the framework allowed this to happen}
  **Affected Files:**
@@ -107,15 +168,16 @@ For script changes, describe the new behavior.}
  **Reversibility:** {Easy — revert the edit / Medium — downstream docs may need updating}
  ```

- #### Special Case: Gate Check Proposals
+ #### Special Case: Lesson → Gate Check Proposals

- When agent feedback reveals a mechanical check that was repeated manually across multiple stories (e.g., "QA checked for inline styles 4 times"), propose adding it as a pre-gate check instead of a skill/template change:
+ When a lesson contains a mechanical rule (classified as `gate_check` in the manifest):

  ```markdown
  ### Proposal {N}: Add pre-gate check — {check name}

- **Finding:** {Agent} manually performed {check description} in {N} stories this sprint.
- **Tokens saved:** ~{estimate} per story (based on agent token usage for this check type)
+ **Impact:** P1 — mechanical check currently performed manually by agents
+ **Lesson:** "{lesson title}" (active since {date})
+ **Rule:** {the lesson's rule}
  **Gate:** qa / arch
  **Check config to add to `.bounce/gate-checks.json`:**
  ```json
@@ -131,10 +193,35 @@ When agent feedback reveals a mechanical check that was repeated manually across
  ```
  ```

- This is the primary mechanism for the gate system to grow organically — the `improve` skill reads what agents repeatedly checked by hand and proposes automating those checks via `gate-checks.json`.
+ #### Special Case: Lesson → Script Proposals
+
+ When a lesson describes a procedural check:
+
+ ```markdown
+ ### Proposal {N}: Automate — {check name}
+
+ **Impact:** P1 — repeated manual procedure
+ **Lesson:** "{lesson title}" (active since {date})
+ **Rule:** {the lesson's rule}
+ **Proposed script/enhancement:** {describe the new script or addition to existing script}
+ ```
+
+ #### Special Case: Lesson Graduation
+
+ When a lesson has been active 3+ sprints and is classified as `agent_config`:
+
+ ```markdown
+ ### Proposal {N}: Graduate lesson — "{title}"
+
+ **Impact:** P2 — proven rule ready for permanent enforcement
+ **Active since:** {date} ({N} sprints)
+ **Rule:** {the lesson's rule}
+ **Target agent config:** `brains/claude-agents/{agent}.md`
+ **Action:** Add rule to agent's Critical Rules section. Archive lesson from LESSONS.md.
+ ```

  ### Step 5: Present to Human
- Present ALL proposals as a numbered list. The human can:
+ Present ALL proposals as a numbered list, grouped by impact level. The human can:
  - **Approve** — apply the change
  - **Reject** — skip it (optionally explain why)
  - **Modify** — adjust the proposal before applying
@@ -148,26 +235,27 @@ For each approved proposal:
  2. If brain files are affected, ensure ALL brain surfaces stay in sync (CLAUDE.md, GEMINI.md, AGENTS.md, cursor-rules/)
  3. Log the change in `brains/CHANGELOG.md`
  4. If skills were modified, update skill descriptions in all brain files that reference them
+ 5. Record in `.bounce/improvement-log.md` under "Applied" with the impact level

  ### Step 7: Validate
  After all changes are applied:
- 1. Run `./scripts/pre_bounce_sync.sh` to update RAG embeddings with the new framework content
+ 1. Run `vbounce doctor` to verify framework integrity
  2. Verify no cross-references are broken (template paths, skill names, report field names)
- 3. Confirm brain file consistency — all 4 surfaces should describe the same process
+ 3. Confirm brain file consistency — all surfaces should describe the same process

  ## Improvement Scope

  ### What CAN Be Improved

- | Target | Examples |
- |--------|---------|
- | **Templates** | Add/remove/rename sections, improve instructions, add examples, fix ambiguity |
- | **Agent Report Formats** | Add/remove YAML fields, add report sections, improve handoff clarity |
- | **Skills** | Update instructions, add/remove steps, improve reference docs, add new skills |
- | **Brain Files** | Update rules, add missing rules, improve consistency, update skill references |
- | **Scripts** | Fix bugs, add validation checks, improve error messages, add new automation |
- | **Process Flow** | Reorder steps, add/remove gates, adjust thresholds (bounce limits, complexity labels) |
- | **RAG Pipeline** | Adjust indexing scope, improve chunking, add new document types to index |
+ | Target | Examples | Typical Impact |
+ |--------|---------|----------------|
+ | **Gate Checks** | New grep/lint rules from lessons | P1 |
+ | **Scripts** | New validation, automate manual steps | P1-P2 |
+ | **Templates** | Add/remove/rename sections, improve instructions | P2 |
+ | **Agent Report Formats** | Add/remove YAML fields, improve handoff clarity | P1-P2 |
+ | **Skills** | Update instructions, add/remove steps, add new skills | P1-P2 |
+ | **Brain Files** | Graduate lessons to permanent rules, update skill refs | P2 |
+ | **Process Flow** | Reorder steps, add/remove gates, adjust thresholds | P1 |

  ### What CANNOT Be Changed Without Escalation
  - **Adding a new agent role** — requires human design decision + new brain config
@@ -177,14 +265,15 @@ After all changes are applied:

  ## Output

- The improve skill does not produce a standalone report file. Its output is:
+ The improve skill produces:
  1. The list of proposals presented to the human (inline during the conversation)
  2. The applied changes to framework files
  3. The `brains/CHANGELOG.md` entries documenting what changed and why
+ 4. Updates to `.bounce/improvement-log.md` tracking approved/rejected/deferred items

  ## Tracking Improvement Velocity

- Over time, the Sprint Report §5 Framework Self-Assessment tables should shrink. If the same findings keep appearing after improvement passes, the fix didn't work — re-examine the root cause.
+ Over time, the Sprint Report §5 Framework Self-Assessment tables should shrink. If the same findings keep appearing after improvement passes, the fix didn't work — the pipeline will automatically detect this and re-escalate at P1 priority.

  The Team Lead should note in the Sprint Report whether the previous improvement pass resolved the issues it targeted:
  - "Improvement pass from S-03 resolved the Dev→QA handoff gap (0 handoff complaints this sprint)"
@@ -195,11 +284,13 @@ The Team Lead should note in the Sprint Report whether the previous improvement
  - **Never change the framework without human approval.** Propose, don't impose.
  - **Keep all brain surfaces in sync.** A change to CLAUDE.md must be reflected in GEMINI.md, AGENTS.md, and cursor-rules/.
  - **Log everything.** Every change goes in `brains/CHANGELOG.md` with the finding that motivated it.
- - **Run pre_bounce_sync.sh after changes.** Updated skills and rules must be re-indexed for RAG.
+ - **Run `vbounce doctor` after changes.** Verify framework integrity after applying improvements.
  - **Don't over-engineer.** Fix the actual problem reported by agents. Don't add speculative improvements.
  - **Respect the hierarchy.** Template changes are low-risk. Process flow changes are high-risk. Scope accordingly.
  - **Skills are living documents.** If a skill's instructions consistently confuse agents, rewrite the confusing section — don't add workarounds elsewhere.
+ - **Impact levels drive priority.** P0 and P1 items are addressed first. P3 items are batched.
+ - **Lessons are fuel.** Every lesson is a potential automation — classify and act on them.

  ## Keywords

- improve, self-improvement, framework evolution, retro, retrospective, process feedback, friction, template improvement, skill improvement, brain sync, meta-process, self-aware
+ improve, self-improvement, framework evolution, retro, retrospective, process feedback, friction, template improvement, skill improvement, brain sync, meta-process, self-aware, impact levels, lesson graduation, gate check, automation
@@ -31,6 +31,20 @@ This is NOT just a command — it is a standing directive:
  3. **When offering**, say: *"This looks like a lesson worth recording — want me to capture it?"*
  4. **Never record without the user's approval.** Always ask first.

+ ## Timing: Record Immediately, Not at Sprint Close
+
+ **Lessons MUST be recorded as soon as the story that produced them is merged** — not deferred to sprint close. Context decays fast.
+
+ **Flow:**
+ 1. During execution, agents flag lessons in their reports (`lessons_flagged` field)
+ 2. After DevOps merges a story (Phase 3, Step 9), the Team Lead immediately:
+    - Reads `lessons_flagged` from Dev and QA reports
+    - Presents each lesson to the human for approval
+    - Records approved lessons to LESSONS.md right away
+ 3. At sprint close (Sprint Report §4), the lesson table serves as a **review of what was already recorded** — not a first-time approval step. This is a confirmation, not a gate.
+
+ **Why this matters:** A lesson recorded 5 minutes after the problem is specific and actionable. A lesson recorded 3 days later at sprint close is vague and often forgotten.
+
  ## Recording: The `/lesson` Command

  ### Step 1: Gather Context
@@ -0,0 +1,102 @@
+ ---
+ name: product-graph
+ description: "Use when you need to understand document relationships, check what's affected by a change, find blocked documents, or assess the state of planning documents. Provides structured awareness of the full product document graph without reading every file. Auto-loaded during planning sessions."
+ ---
+
+ # Product Graph — Document Relationship Intelligence
+
+ ## Purpose
+
+ This skill gives you instant awareness of all product planning documents and their relationships. Instead of globbing and reading every file in `product_plans/`, you read a single JSON graph that maps every document, its status, and how it connects to other documents.
+
+ ## Three-Tier Loading Protocol
+
+ When you need to understand the product document landscape, load information in tiers — stop at the tier that answers your question:
+
+ ### Tier 1: Graph JSON (~400-1000 tokens)
+ Read `.bounce/product-graph.json` for a bird's-eye view.
+ - All document IDs, types, statuses, and paths
+ - All edges (dependencies, parent relationships, feeds)
+ - **Use when:** answering "what exists?", "what's blocked?", "what depends on X?"
+
+ ### Tier 2: Specific Frontmatter (~200-500 tokens per doc)
+ Read the YAML frontmatter of specific documents identified in Tier 1.
+ - Ambiguity scores, priorities, tags, owners, dates
+ - **Use when:** you need details about specific documents (not the full set)
+
+ ### Tier 3: Full Documents (~500-3000 tokens per doc)
+ Read the complete document body.
+ - Full specs, scope boundaries, acceptance criteria, open questions
+ - **Use when:** creating or modifying documents, decomposing epics, or resolving ambiguity
+
+ ## Edge Type Semantics
+
+ | Edge Type | Meaning | Direction |
+ |-----------|---------|-----------|
+ | `parent` | Document is a child of another (Story → Epic) | parent → child |
+ | `depends-on` | Document cannot proceed until dependency is done | dependency → dependent |
+ | `unlocks` | Completing this document enables another | source → unlocked |
+ | `context-source` | Document draws context from another | source → consumer |
+ | `feeds` | Document contributes to a delivery/release | document → delivery |
+
+ ## When to Regenerate the Graph
+
+ Run `vbounce graph` (or `node scripts/product_graph.mjs`) after:
+ - **Any document edit** that changes status, dependencies, or relationships
+ - **Sprint lifecycle events** (sprint init, story complete, sprint close)
+ - **Planning session start** — ensure graph reflects current state
+ - **Document creation or archival** — new nodes or removed nodes
+
+ The graph is a cache — it's cheap to regenerate and stale data is worse than no data.
+
+ ## Blocked Document Detection
+
+ A document is **blocked** when:
+ 1. It has incoming `depends-on` edges from documents with status != "Done"/"Implemented"/"Completed"
+ 2. It has `ambiguity: 🔴 High` and linked spikes are not Validated/Closed
+ 3. Its parent document has status "Parking Lot" or "Escalated"
+
+ To find blocked documents:
+ 1. Read the graph (Tier 1)
+ 2. For each node, check its incoming `depends-on` edges
+ 3. Look up the source node's status
+ 4. If any source is not in a terminal state → document is blocked
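Rule 1 can be sketched directly against the graph JSON (rules 2 and 3 need frontmatter and parent lookups, omitted here; the helper name and return shape are assumptions):

```javascript
// Sketch: rule 1 only — a document is blocked when an incoming depends-on
// edge comes from a source that is not in a terminal status.
const TERMINAL = new Set(["Done", "Implemented", "Completed"]);

function findBlocked(graph) {
  const blocked = new Set();
  for (const edge of graph.edges) {
    if (edge.type !== "depends-on") continue;
    const source = graph.nodes[edge.from]; // the dependency
    if (source && !TERMINAL.has(source.status)) blocked.add(edge.to);
  }
  return [...blocked];
}
```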
+
+ ## Impact Analysis
+
+ To understand what changes when you modify a document:
+ ```bash
+ vbounce graph impact <DOC-ID>        # human-readable
+ vbounce graph impact <DOC-ID> --json # machine-readable
+ ```
+
+ This runs a BFS traversal and returns:
+ - **Direct dependents** — documents immediately affected
+ - **Transitive dependents** — documents affected through cascading dependencies
+ - **Upstream feeders** — documents that feed into the changed document
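The traversal can be sketched as a plain BFS over `depends-on` edges. A sketch only: the real `vbounce graph impact` output shape may differ, and upstream feeders are omitted.

```javascript
// Sketch: BFS from a document to its direct and transitive dependents.
function impactOf(graph, docId) {
  const dependents = new Map(); // dependency id -> [dependent ids]
  for (const e of graph.edges) {
    if (e.type !== "depends-on") continue;
    if (!dependents.has(e.from)) dependents.set(e.from, []);
    dependents.get(e.from).push(e.to);
  }
  const direct = dependents.get(docId) ?? [];
  const seen = new Set(direct);
  const queue = [...direct];
  while (queue.length) {
    for (const next of dependents.get(queue.shift()) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return { direct, transitive: [...seen].filter((id) => !direct.includes(id)) };
}
```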
+
+ ## Graph JSON Schema
+
+ ```json
+ {
+   "generated_at": "ISO-8601 timestamp",
+   "node_count": 5,
+   "edge_count": 12,
+   "nodes": {
+     "EPIC-002": {
+       "type": "epic|story|spike|charter|roadmap|delivery-plan|sprint-plan|risk-registry|hotfix",
+       "status": "Draft|Refinement|Ready to Bounce|Bouncing|Done|Implemented|...",
+       "ambiguity": "🔴 High|🟡 Medium|🟢 Low|null",
+       "path": "product_plans/backlog/EPIC-002_.../EPIC-002_....md",
+       "title": "Human-readable title from first heading"
+     }
+   },
+   "edges": [
+     { "from": "EPIC-002", "to": "D-02", "type": "feeds" }
+   ]
+ }
+ ```
+
+ ## Keywords
+
+ product graph, document graph, dependency, impact analysis, what's affected, what's blocked, document relationships, planning state
@@ -0,0 +1,90 @@
+ <instructions>
+ FOLLOW THIS EXACT STRUCTURE. This documents a defect found during or after sprint execution.
+
+ 1. **YAML Frontmatter**: Bug ID, Status, Severity, Found During, Affected Story, Reporter
+ 2. **§1 The Bug**: What's broken, reproduction steps, expected vs actual
+ 3. **§2 Impact**: What's affected, is it blocking?
+ 4. **§3 Fix Approach**: Proposed fix, affected files, estimated complexity
+ 5. **§4 Verification**: How to verify the fix
+
+ When to use this template:
+ - User reports something is broken mid-sprint
+ - QA discovers a defect not covered by acceptance criteria
+ - Post-sprint manual review finds an issue
+ - A previously working feature regresses
+
+ Triage rules (from mid-sprint-triage.md):
+ - If the bug is L1 (1-2 files, trivial fix) → use templates/hotfix.md instead
+ - If the bug is larger → use THIS template, add to current sprint as a fix task
+ - Bug fixes do NOT increment QA/Architect bounce counts
+
+ Output location: `product_plans/sprints/sprint-{XX}/BUG-{Date}-{Name}.md`
+ If no sprint is active: `product_plans/backlog/BUG-{Date}-{Name}.md`
+
+ Do NOT output these instructions.
+ </instructions>
+
+ ---
+ bug_id: "BUG-{YYYY-MM-DD}-{name}"
+ status: "Open / In Progress / Fixed / Wont Fix"
+ severity: "Critical / High / Medium / Low"
+ found_during: "Sprint S-{XX} / Post-Sprint Review / User Report"
+ affected_story: "STORY-{ID} / N/A (pre-existing)"
+ reporter: "{human / QA / user}"
+ ---
+
+ # BUG: {Short Description}
+
+ ## 1. The Bug
+
+ **Current Behavior:**
+ {What happens — be specific}
+
+ **Expected Behavior:**
+ {What should happen instead}
+
+ **Reproduction Steps:**
+ 1. {Step 1}
+ 2. {Step 2}
+ 3. {Observe: ...}
+
+ **Environment:**
+ - {Browser/OS/Node version if relevant}
+ - {Branch: sprint/S-XX or main}
+
+ ---
+
+ ## 2. Impact
+
+ - **Blocking?** {Yes — blocks STORY-{ID} / No — cosmetic / degraded}
+ - **Affected Areas:** {Which features, pages, or flows}
+ - **Users Affected:** {All users / specific persona / edge case only}
+ - **Data Impact:** {None / corrupted data / lost data}
+
+ ---
+
+ ## 3. Fix Approach
+
+ - **Root Cause:** {Why it's broken — if known}
+ - **Proposed Fix:** {What to change}
+ - **Files to Modify:** `{filepath1}`, `{filepath2}`
+ - **Complexity:** {L1 Trivial / L2 Standard / L3 Complex}
+
+ > If complexity is L1 → consider using `templates/hotfix.md` instead for faster resolution.
+
+ ---
+
+ ## 4. Verification
+
+ - [ ] {Reproduction steps no longer reproduce the bug}
+ - [ ] {Existing tests still pass}
+ - [ ] {New test covers the bug scenario — if applicable}
+ - [ ] Run `./scripts/hotfix_manager.sh ledger "BUG: {Name}" "{Brief description}"`
+
+ ---
+
+ ## Change Log
+
+ | Date | Author | Change |
+ |------|--------|--------|
+ | {YYYY-MM-DD} | {name} | Created |
@@ -0,0 +1,105 @@
+ <instructions>
+ FOLLOW THIS EXACT STRUCTURE. This documents a scope change or new requirement discovered mid-sprint.
+
+ 1. **YAML Frontmatter**: CR ID, Status, Category, Urgency, Affected Stories, Requestor
+ 2. **§1 The Change**: What's being requested and why
+ 3. **§2 Impact Assessment**: What it affects, what breaks, what gets delayed
+ 4. **§3 Decision**: Approved action with rationale
+ 5. **§4 Execution Plan**: How the change will be handled
+
+ When to use this template:
+ - User requests a new feature or scope expansion mid-sprint
+ - User wants to change the technical approach of an active story
+ - External dependency change forces a pivot
+ - Requirements discovered during implementation that weren't in the original spec
+
+ Categories (from mid-sprint-triage.md):
+ - **Scope Change**: Adding/removing/modifying requirements → use THIS template
+ - **Approach Change**: Different technical path → use THIS template
+ - **Spec Clarification**: Just clarifying ambiguity → do NOT use this template (update story spec inline)
+ - **Bug**: Something is broken → use templates/bug.md instead
+
+ Triage rules:
+ - Scope changes PAUSE the active bounce until the human approves
+ - Approach changes reset the Dev pass
+ - All CRs are logged in Sprint Plan §4 Execution Log with event type "CR"
+ - CRs that can't fit in the current sprint go to backlog for next sprint planning
+
+ Output location: `product_plans/sprints/sprint-{XX}/CR-{Date}-{Name}.md`
+ If no sprint is active: `product_plans/backlog/CR-{Date}-{Name}.md`
+
+ Do NOT output these instructions.
+ </instructions>
+
+ ---
+ cr_id: "CR-{YYYY-MM-DD}-{name}"
+ status: "Open / Approved / Rejected / Deferred"
+ category: "Scope Change / Approach Change"
+ urgency: "Blocking / This Sprint / Next Sprint"
+ affected_stories: ["STORY-{ID}"]
+ requestor: "{human / AI / external}"
+ ---
+
+ # CR: {Short Description}
+
+ ## 1. The Change
+
+ **What is being requested:**
+ {Describe the change clearly}
+
+ **Why:**
+ {Business reason, user feedback, technical discovery, external dependency change}
+
+ **Original vs Proposed:**
+ | Aspect | Original | Proposed |
+ |--------|----------|----------|
+ | {Scope/Approach/Tech} | {What was planned} | {What's now requested} |
+
+ ---
+
+ ## 2. Impact Assessment
+
+ **Affected Stories:**
+ | Story | Current State | Impact |
+ |-------|--------------|--------|
+ | STORY-{ID} | {Bouncing / QA Passed / ...} | {Must restart Dev / Spec update only / Blocked} |
+
+ **Sprint Impact:**
+ - {Does this delay the sprint? By how much?}
+ - {Does this invalidate completed work?}
+ - {Does this require new stories?}
+
+ **Risk:**
+ - {What could go wrong if we make this change?}
+ - {What could go wrong if we DON'T make this change?}
+
+ ---
+
+ ## 3. Decision
+
+ > Filled by human after reviewing the impact assessment.
+
+ **Decision:** {Approved / Rejected / Deferred to S-{XX}}
+
+ **Rationale:** {Why this decision}
+
+ **Conditions:** {Any constraints on the approved change}
+
+ ---
+
+ ## 4. Execution Plan
+
+ > Filled after decision is approved.
+
+ - **Stories affected:** {Which stories need spec updates}
+ - **New stories needed:** {If any — add to backlog or current sprint}
+ - **Bounce impact:** {Which passes reset — Dev only / Dev + QA / full restart}
+ - **Timeline:** {Can it fit in current sprint or deferred?}
+
+ ---
+
+ ## Change Log
+
+ | Date | Author | Change |
+ |------|--------|--------|
+ | {YYYY-MM-DD} | {name} | Created |