codingbuddy-rules 5.0.0 → 5.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (53)
  1. package/.ai-rules/adapters/aider.md +374 -0
  2. package/.ai-rules/adapters/windsurf.md +395 -0
  3. package/.ai-rules/agents/accessibility-specialist.json +6 -0
  4. package/.ai-rules/agents/act-mode.json +6 -0
  5. package/.ai-rules/agents/agent-architect.json +6 -0
  6. package/.ai-rules/agents/ai-ml-engineer.json +6 -0
  7. package/.ai-rules/agents/architecture-specialist.json +6 -0
  8. package/.ai-rules/agents/auto-mode.json +6 -0
  9. package/.ai-rules/agents/backend-developer.json +6 -0
  10. package/.ai-rules/agents/code-quality-specialist.json +6 -0
  11. package/.ai-rules/agents/code-reviewer.json +25 -4
  12. package/.ai-rules/agents/data-engineer.json +6 -0
  13. package/.ai-rules/agents/data-scientist.json +6 -0
  14. package/.ai-rules/agents/devops-engineer.json +6 -0
  15. package/.ai-rules/agents/documentation-specialist.json +6 -0
  16. package/.ai-rules/agents/eval-mode.json +11 -1
  17. package/.ai-rules/agents/event-architecture-specialist.json +6 -0
  18. package/.ai-rules/agents/frontend-developer.json +6 -0
  19. package/.ai-rules/agents/i18n-specialist.json +6 -0
  20. package/.ai-rules/agents/integration-specialist.json +6 -0
  21. package/.ai-rules/agents/migration-specialist.json +6 -0
  22. package/.ai-rules/agents/mobile-developer.json +7 -10
  23. package/.ai-rules/agents/observability-specialist.json +6 -0
  24. package/.ai-rules/agents/parallel-orchestrator.json +6 -0
  25. package/.ai-rules/agents/performance-specialist.json +6 -0
  26. package/.ai-rules/agents/plan-mode.json +6 -0
  27. package/.ai-rules/agents/plan-reviewer.json +7 -4
  28. package/.ai-rules/agents/platform-engineer.json +6 -0
  29. package/.ai-rules/agents/security-engineer.json +6 -0
  30. package/.ai-rules/agents/security-specialist.json +6 -0
  31. package/.ai-rules/agents/seo-specialist.json +6 -0
  32. package/.ai-rules/agents/software-engineer.json +6 -0
  33. package/.ai-rules/agents/solution-architect.json +6 -0
  34. package/.ai-rules/agents/systems-developer.json +6 -0
  35. package/.ai-rules/agents/technical-planner.json +6 -0
  36. package/.ai-rules/agents/test-engineer.json +6 -0
  37. package/.ai-rules/agents/test-strategy-specialist.json +6 -0
  38. package/.ai-rules/agents/tooling-engineer.json +6 -0
  39. package/.ai-rules/agents/ui-ux-designer.json +6 -0
  40. package/.ai-rules/schemas/agent.schema.json +38 -0
  41. package/.ai-rules/skills/README.md +6 -0
  42. package/.ai-rules/skills/agent-design/examples/agent-template.json +1 -4
  43. package/.ai-rules/skills/mcp-builder/examples/tool-example.ts +8 -13
  44. package/.ai-rules/skills/onboard/SKILL.md +150 -0
  45. package/.ai-rules/skills/plan-to-issues/SKILL.md +318 -0
  46. package/.ai-rules/skills/retrospective/SKILL.md +192 -0
  47. package/.ai-rules/skills/ship/SKILL.md +242 -0
  48. package/.ai-rules/skills/skill-creator/assets/eval_review.html +539 -260
  49. package/bin/cli.js +11 -19
  50. package/lib/init/detect-stack.js +18 -4
  51. package/lib/init/prompt.js +2 -2
  52. package/lib/init/suggest-agent.js +13 -2
  53. package/package.json +1 -1
package/.ai-rules/skills/plan-to-issues/SKILL.md
@@ -0,0 +1,318 @@
---
name: plan-to-issues
description: Decompose specs or implementation plans into independent GitHub issues with dependency graphs, wave grouping, and file overlap analysis for safe parallel execution.
---

# Plan to Issues

## Overview

Turn specs and implementation plans into a set of independent, well-structured GitHub issues ready for parallel execution. Each issue gets acceptance criteria, file dependency mapping, wave assignment, and priority labels.

**Core principle:** Issues that touch the same files MUST NOT be in the same wave. File overlap = sequential execution.

**Announce at start:** "I'm using the plan-to-issues skill to decompose this plan into GitHub issues."

**Iron Law:**

```
ISSUES MODIFYING THE SAME FILE MUST NEVER RUN IN THE SAME WAVE
```

## When to Use

- Implementation plan is ready and needs to be broken into trackable units of work
- Spec document needs decomposition into parallel-safe issues
- Planning a multi-issue feature that will use `/taskmaestro` for parallel execution
- Roadmap or design doc needs to become actionable GitHub issues

**Use this ESPECIALLY when:**
- The plan has 3+ independent deliverables
- Multiple developers or agents will work in parallel
- You need wave grouping for `/taskmaestro` execution

**Don't use when:**
- Single-file change or trivial fix (just create the issue directly)
- Requirements are still unclear (use the brainstorming skill first)
- The plan hasn't been written yet (use the writing-plans skill first)

## Input

The skill accepts one of:
- **Spec document path**: `docs/plans/YYYY-MM-DD-<feature>.md`
- **Plan document path**: any Markdown file with implementation steps
- **Inline description**: a direct feature description in the prompt

## The Five Phases

### Phase 1: Analyze — Identify Independent Deliverables

**Read the plan and extract discrete units of work:**

1. **Parse the plan document**
   - Read the spec/plan file
   - Identify each distinct feature, component, or change
   - Note dependencies between items

2. **Define each deliverable**
   - Summary: What does this issue deliver?
   - Scope: Which files will be created or modified?
   - Dependencies: What must be done before this?
   - Acceptance criteria: How do we know it's done?

3. **Estimate file touchpoints**
   - List every file each deliverable will create or modify
   - Be specific — `src/utils/parse.ts`, not `src/utils/`
   - Include test files: `src/utils/parse.test.ts`

**Completion criteria:**
- [ ] All deliverables identified with clear scope
- [ ] File touchpoints listed for each deliverable
- [ ] Dependencies between deliverables noted

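A Phase 1 deliverable can be captured as a small record. A minimal sketch — the shape and field names below are illustrative, not a codingbuddy schema:

```javascript
// One deliverable extracted from a plan (illustrative shape, not an API).
const deliverable = {
  summary: "Add JWT token service",
  files: [
    "src/auth/token-service.ts",      // specific files, never directories
    "src/auth/token-service.test.ts", // tests count as touchpoints too
  ],
  dependsOn: [],                      // deliverables that must land first
  acceptance: ["Tokens round-trip through sign/verify"],
};

// A deliverable is ready for Phase 2 only when every touchpoint is a
// concrete file path (has an extension), not a directory.
const isSpecific = (d) => d.files.every((f) => /\.[a-z]+$/i.test(f));
console.log(isSpecific(deliverable)); // true
console.log(isSpecific({ files: ["src/utils/"] })); // false
```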
73
### Phase 2: Overlap Detection — Map File Conflicts

**Use `validate_parallel_issues` to verify safety:**

1. **Build the file overlap matrix**
   - Compare file lists between all deliverables
   - Identify any shared files

2. **Validate with the MCP tool**

   ```
   Call: codingbuddy validate_parallel_issues
     issues: [issue_numbers]
     issueContents: { "1": "body1", "2": "body2", ... }
   ```

3. **Apply the Iron Law**
   - Same file in two deliverables = different waves
   - No exceptions — even "small" changes to shared files cause merge conflicts

**Decision matrix:**

| Overlap | Action |
|---------|--------|
| No shared files | Same wave OK |
| Shared test helpers only | Same wave OK (read-only imports) |
| Shared source files | Different waves MANDATORY |
| Shared config files | Different waves MANDATORY |

**Completion criteria:**
- [ ] File overlap matrix built
- [ ] `validate_parallel_issues` called (if issues already exist)
- [ ] Conflicts resolved via wave separation

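The overlap check in this phase reduces to a pairwise set intersection over the file lists. A minimal sketch — the helper below is ours for illustration, not the `validate_parallel_issues` MCP tool itself:

```javascript
// Pairwise file-overlap detection between deliverables (illustrative
// helper, not the validate_parallel_issues MCP tool).
function findOverlaps(deliverables) {
  const overlaps = [];
  for (let i = 0; i < deliverables.length; i++) {
    for (let j = i + 1; j < deliverables.length; j++) {
      const shared = deliverables[i].files.filter((f) =>
        deliverables[j].files.includes(f)
      );
      if (shared.length > 0) {
        overlaps.push({ a: deliverables[i].id, b: deliverables[j].id, shared });
      }
    }
  }
  return overlaps; // non-empty: split those pairs into different waves
}

const issues = [
  { id: 1, files: ["src/auth/token.ts"] },
  { id: 2, files: ["src/db/users.sql"] },
  { id: 3, files: ["src/auth/token.ts", "src/routes/index.ts"] },
];
console.log(findOverlaps(issues));
// [{ a: 1, b: 3, shared: ["src/auth/token.ts"] }]
```

Any pair reported here must be separated into different waves in Phase 3.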
107
### Phase 3: Prioritize — Assign Priority and Wave Groups

**Priority levels:**

| Priority | Meaning | Criteria |
|----------|---------|----------|
| `P0` | Critical | Blocks other issues; foundational setup |
| `P1` | High | Core feature; needed for MVP |
| `P2` | Medium | Enhancement; improves quality |
| `P3` | Low | Nice-to-have; polish |

**Wave assignment rules:**

1. **Wave 1**: All P0 issues + any P1 issues with zero file overlaps with P0
2. **Wave 2**: Issues that depend on Wave 1 completion + P1/P2 with no overlap with each other
3. **Wave N**: Remaining issues grouped by zero-overlap constraint

```
Wave 1: [#1, #2, #3]  ← No file overlaps between these
Wave 2: [#4, #5]      ← Depend on Wave 1; no overlaps between these
Wave 3: [#6]          ← Depends on Wave 2
```

**Completion criteria:**
- [ ] All deliverables have priority labels
- [ ] Wave groups assigned with zero intra-wave file overlap
- [ ] Dependency chain is clear

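The wave rules amount to greedy grouping: an issue joins the earliest wave that satisfies its dependencies and shares none of its files with issues already placed there. A sketch under those rules — this is illustrative, not the `/taskmaestro` implementation, and it assumes issues are listed dependency-first:

```javascript
// Greedy wave assignment: earliest wave with no dependency or file
// conflicts. Illustrative sketch; assumes dependency-first input order.
function assignWaves(issues) {
  const waves = [];   // each wave: { ids: [...], files: Set }
  const waveOf = {};  // issue id -> wave index
  for (const issue of issues) {
    // Must come after the latest wave containing a dependency.
    const minWave = Math.max(0, ...issue.dependsOn.map((d) => waveOf[d] + 1));
    let w = minWave;
    // Iron Law: skip any wave that already touches one of our files.
    while (waves[w] && issue.files.some((f) => waves[w].files.has(f))) {
      w++;
    }
    if (!waves[w]) waves[w] = { ids: [], files: new Set() };
    waves[w].ids.push(issue.id);
    issue.files.forEach((f) => waves[w].files.add(f));
    waveOf[issue.id] = w;
  }
  return waves.map((wave, i) => ({ wave: i + 1, issues: wave.ids }));
}

const plan = [
  { id: 1, files: ["a.ts"], dependsOn: [] },
  { id: 2, files: ["b.ts"], dependsOn: [] },
  { id: 3, files: ["a.ts"], dependsOn: [] },  // collides with #1
  { id: 4, files: ["c.ts"], dependsOn: [1] }, // must follow #1
];
console.log(assignWaves(plan));
// [{ wave: 1, issues: [1, 2] }, { wave: 2, issues: [3, 4] }]
```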
135
### Phase 4: Create — Generate GitHub Issues

**Create each issue with a structured body:**

```bash
gh issue create \
  --title "<type>(<scope>): <description>" \
  --label "<priority>,<wave>,<feature-area>" \
  --body "$(cat <<'EOF'
## Summary

<1-2 sentence description of what this issue delivers>

## Details

<Implementation details, approach, key decisions>

## Files to Create/Modify

- [ ] `path/to/file.ts` — <what changes>
- [ ] `path/to/file.test.ts` — <what tests>

## Acceptance Criteria

- [ ] <Criterion 1>
- [ ] <Criterion 2>
- [ ] <Criterion 3>

## Dependencies

- Depends on: #<number> (if any)
- Blocks: #<number> (if any)

## Wave Assignment

**Wave <N>** — <reason for this wave>
Can run in parallel with: #<numbers>
Must wait for: #<numbers> (if any)
EOF
)"
```

**Label conventions:**

| Label | Format | Example |
|-------|--------|---------|
| Priority | `P0` through `P3` | `P1` |
| Wave | `wave-<N>` | `wave-1` |
| Feature area | Project-specific | `auth`, `api`, `ui` |
| Type | Conventional commits | `feat`, `fix`, `refactor` |

**Auto-create missing labels:**

```bash
gh label create "wave-1" --color "4ECDC4" --description "Wave 1: parallel group" 2>/dev/null || true
gh label create "wave-2" --color "45B7D1" --description "Wave 2: parallel group" 2>/dev/null || true
gh label create "P0" --color "FF0000" --description "Critical priority" 2>/dev/null || true
gh label create "P1" --color "FF6B6B" --description "High priority" 2>/dev/null || true
gh label create "P2" --color "FFD93D" --description "Medium priority" 2>/dev/null || true
gh label create "P3" --color "95E1D3" --description "Low priority" 2>/dev/null || true
```

**Completion criteria:**
- [ ] All issues created with structured bodies
- [ ] Labels applied (created if missing)
- [ ] Dependencies linked between issues

202
### Phase 5: Summarize — Output Dependency Graph and Execution Plan

**Present the final decomposition:**

```markdown
## Decomposition Summary

### Dependency Graph

#1 ─┬─► #3
    └─► #4 ──► #6
#2 ────► #5

### Wave Execution Plan

| Wave | Issues | Parallel Safe | Depends On |
|------|--------|---------------|------------|
| 1 | #1, #2 | Yes (0 file overlaps) | — |
| 2 | #3, #4, #5 | Yes (0 file overlaps) | Wave 1 |
| 3 | #6 | N/A (single) | Wave 2 |

### File Overlap Matrix

|    | #1 | #2 | #3 | #4 | #5 | #6 |
|----|----|----|----|----|----|----|
| #1 | —  | 0  | 1  | 0  | 0  | 0  |
| #2 | 0  | —  | 0  | 0  | 1  | 0  |
...

Total issues: 6
Waves: 3
Estimated parallel speedup: ~3x
```

**Completion criteria:**
- [ ] Dependency graph visualized
- [ ] Wave execution plan table complete
- [ ] File overlap matrix shown
- [ ] Summary shared with user

242
## Integration with taskmaestro

The wave grouping output is designed for direct use with `/taskmaestro`:

```
/taskmaestro --issues "#1,#2,#3" --wave 1
/taskmaestro --issues "#4,#5" --wave 2
```

Each wave runs in parallel tmux panes with git worktree isolation.

## Quick Reference

| Phase | Action | Tool |
|-------|--------|------|
| **1. Analyze** | Extract deliverables from plan | Read tool |
| **2. Overlap** | Detect file conflicts | `validate_parallel_issues` |
| **3. Prioritize** | Assign P0-P3 and waves | Manual analysis |
| **4. Create** | Generate GitHub issues | `gh issue create` |
| **5. Summarize** | Dependency graph + execution plan | Output |

## Safety Checklist

Before creating issues:

- [ ] Plan document read completely
- [ ] Every deliverable has specific file touchpoints (not directories)
- [ ] File overlap matrix checked — no intra-wave conflicts
- [ ] Priority levels assigned based on dependency chain
- [ ] Issue bodies include acceptance criteria
- [ ] Wave assignments respect the Iron Law
- [ ] Labels exist or will be auto-created
- [ ] `validate_parallel_issues` called for final verification

## Red Flags — STOP

| Thought | Reality |
|---------|---------|
| "These issues probably don't overlap" | CHECK. Run overlap detection. Assumptions cause merge conflicts. |
| "I'll put them all in Wave 1 for speed" | STOP. Overlapping files in one wave = guaranteed conflicts. |
| "File overlap is minor, just a config file" | STOP. Config conflicts are the hardest to resolve. Different waves. |
| "I don't need acceptance criteria" | You do. Issues without AC get implemented wrong. |
| "I'll figure out waves later" | Waves are determined by file overlap. Do it now. |
| "This deliverable is too small for its own issue" | Small, focused issues are easier to parallelize and review. |
| "I'll skip validate_parallel_issues" | The MCP tool catches overlaps you missed. Always validate. |

## Examples

### Spec to Issues

```
Input:  docs/plans/2025-03-15-auth-redesign.md
Output: 5 GitHub issues across 2 waves

Wave 1 (parallel):
  #101 feat(auth): add JWT token service       [P0, wave-1]
  #102 feat(auth): add password hashing utils  [P0, wave-1]
  #103 feat(db): add users migration           [P0, wave-1]

Wave 2 (parallel, after Wave 1):
  #104 feat(auth): add login/register endpoints [P1, wave-2]
  #105 feat(auth): add auth middleware          [P1, wave-2]
```

### Plan with Overlaps

```
Input: docs/plans/2025-04-01-api-v2.md

Overlap detected:
  Issue A: modifies src/routes/index.ts
  Issue B: modifies src/routes/index.ts
  → A and B MUST be in different waves

Wave 1: [A, C, D]  ← A here
Wave 2: [B, E]     ← B here (after A merges)
```
package/.ai-rules/skills/retrospective/SKILL.md
@@ -0,0 +1,192 @@
---
name: retrospective
description: "Analyze recent session context archives to identify coding patterns, agent usage, TDD cycle stats, and common EVAL issues. Generates a summary report with data-driven improvement suggestions."
---

# Session Retrospective

## Overview

Analyze accumulated PLAN/ACT/EVAL session data from context archives to surface coding habits, recurring patterns, and actionable improvement suggestions. This turns passive session history into data-driven growth insights.

**Core principle:** Decisions improve when informed by patterns, not just memory. Review what actually happened, not what you think happened.

## When to Use

- After completing a sprint or milestone
- During periodic team/personal retrospectives
- When noticing repeated issues across sessions
- Before planning process improvements
- When onboarding to understand team patterns

## When NOT to Use

- Mid-session (wait until a natural checkpoint)
- With fewer than 3 archived sessions (insufficient data)
- For real-time debugging (use `systematic-debugging` instead)

## Prerequisites

- Context archive system enabled (#999)
- At least 3 archived sessions in `docs/codingbuddy/archive/`
- MCP tools available: `get_context_history`, `search_context_archives`

## Process

### Phase 1: Data Collection

Gather session archives and extract structured data.

1. **Retrieve recent archives**
   - Call `get_context_history` with an appropriate limit (default: 20)
   - If no archives exist, inform the user and suggest running a few PLAN/ACT/EVAL sessions first
   - Note the date range covered

2. **Read archive contents**
   - Read each archived context document
   - Extract from each session:
     - Mode transitions (PLAN, ACT, EVAL, AUTO)
     - Primary agent used
     - Task description and title
     - Decisions made
     - Notes recorded
     - Progress items (ACT mode)
     - Findings and recommendations (EVAL mode)
     - Status (completed, in_progress, blocked)

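The extraction step can be sketched as a small parser. The `Key: value` line format and field names below are assumptions for illustration; the actual layout is defined by the context archive system:

```javascript
// Pull structured fields out of an archived context document.
// The "Key: value" line format here is an illustrative assumption;
// the real archive layout is defined by the context archive system.
function extractSession(archiveText) {
  const field = (name) => {
    const m = archiveText.match(new RegExp(`^${name}:\\s*(.+)$`, "m"));
    return m ? m[1].trim() : null; // null when the field is absent
  };
  return {
    mode: field("Mode"),
    agent: field("Agent"),
    status: field("Status"),
    title: field("Title"),
  };
}

const sample = [
  "Title: Add login endpoint",
  "Mode: ACT",
  "Agent: backend-developer",
  "Status: completed",
].join("\n");
console.log(extractSession(sample).agent); // "backend-developer"
```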
57
### Phase 2: Pattern Analysis

Analyze collected data across five dimensions.

#### 2a. Mode Usage Patterns

- Count sessions per mode (PLAN, ACT, EVAL, AUTO)
- Calculate EVAL adoption rate: `EVAL sessions / total sessions`
- Identify sessions that skipped EVAL (potential quality gaps)
- Flag AUTO mode usage frequency

#### 2b. Agent Utilization

- Rank agents by frequency of use
- Identify underutilized specialists (available but rarely used)
- Detect agent concentration (over-reliance on one agent)
- Note any specialist gaps for the project's tech stack

#### 2c. TDD Cycle Indicators

Search archives for TDD-related patterns:
- `search_context_archives` with keywords: "TDD", "test", "RED", "GREEN", "REFACTOR"
- Count sessions mentioning test-first vs. test-after
- Identify sessions where tests were skipped or deferred
- Calculate an approximate TDD adherence rate

#### 2d. Recurring Issues

Search for repeated problems:
- `search_context_archives` with keywords: "blocked", "failed", "error", "issue", "bug"
- Group similar issues by category (build, test, integration, deployment)
- Identify issues that appear in 2+ sessions (systemic problems)
- Note resolution patterns (same fix applied repeatedly?)

#### 2e. Decision Patterns

Analyze decisions across sessions:
- Extract all recorded decisions
- Identify decisions that were later reversed or modified
- Spot recurring decision themes (architecture, tooling, process)
- Flag decisions made without EVAL validation

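The mode and agent tallies in 2a and 2b reduce to simple counting over the extracted sessions. A minimal sketch — the session shape and the 60% concentration threshold mirror this skill's tables, but the helper itself is illustrative:

```javascript
// Tally mode usage (2a) and agent concentration (2b) across sessions.
// The session field names are illustrative, not a fixed archive schema.
function analyzeSessions(sessions) {
  const modes = {};
  const agents = {};
  for (const s of sessions) {
    modes[s.mode] = (modes[s.mode] || 0) + 1;
    agents[s.agent] = (agents[s.agent] || 0) + 1;
  }
  const total = sessions.length;
  const topAgentShare = Math.max(...Object.values(agents)) / total;
  return {
    modes,
    evalAdoptionRate: (modes.EVAL || 0) / total, // 2a
    topAgentShare,                               // 2b
    concentrated: topAgentShare > 0.6,           // over-reliance flag
  };
}

const sessions = [
  { mode: "PLAN", agent: "technical-planner" },
  { mode: "ACT", agent: "backend-developer" },
  { mode: "ACT", agent: "backend-developer" },
  { mode: "EVAL", agent: "code-reviewer" },
];
const stats = analyzeSessions(sessions);
console.log(stats.evalAdoptionRate); // 0.25
console.log(stats.concentrated);     // false (top agent at 50%)
```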
99
### Phase 3: Report Generation

Generate a structured markdown report.

```markdown
# Session Retrospective Report

> Period: {start_date} - {end_date}
> Sessions analyzed: {count}
> Generated: {current_date}

## Summary Statistics

| Metric | Value |
|--------|-------|
| Total sessions | {n} |
| PLAN sessions | {n} |
| ACT sessions | {n} |
| EVAL sessions | {n} |
| AUTO sessions | {n} |
| EVAL adoption rate | {n}% |
| Completion rate | {n}% |

## Top Agents

| Rank | Agent | Sessions | % |
|------|-------|----------|---|
| 1 | {agent} | {n} | {n}% |
| ... | ... | ... | ... |

## TDD Health

- Adherence rate: {n}%
- Test-first sessions: {n}
- Test-after sessions: {n}
- No-test sessions: {n}

## Recurring Issues

### {Issue Category}
- **Frequency**: {n} sessions
- **Pattern**: {description}
- **Impact**: {assessment}

## Key Decisions Timeline

| Date | Decision | Mode | Validated? |
|------|----------|------|------------|
| {date} | {decision} | {mode} | {yes/no} |

## Improvement Suggestions

### High Priority
1. {suggestion with rationale}

### Medium Priority
1. {suggestion with rationale}

### Process Observations
1. {observation}
```

### Phase 4: Improvement Suggestions

Generate actionable suggestions based on findings.

**Auto-generate suggestions for:**

| Pattern Detected | Suggestion |
|------------------|------------|
| EVAL rate < 30% | Increase EVAL usage for quality gates |
| Single agent > 60% | Diversify specialist agents for broader coverage |
| TDD adherence < 50% | Reinforce test-first discipline with the TDD skill |
| Same issue 3+ times | Create a checklist or rule to prevent recurrence |
| Blocked sessions > 20% | Investigate common blockers and add preventive steps |
| No decisions recorded | Improve decision documentation in context |
| Decisions reversed > 2x | Add EVAL validation before major decisions |

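This table is effectively a rule list evaluated over the Phase 2 statistics. A minimal sketch of applying the thresholds — the stats field names are illustrative, and only a subset of the rules is shown:

```javascript
// Map Phase 2 statistics to improvement suggestions. The stats shape is
// illustrative; the thresholds mirror the pattern table above.
const RULES = [
  { when: (s) => s.evalRate < 0.3, suggest: "Increase EVAL usage for quality gates" },
  { when: (s) => s.topAgentShare > 0.6, suggest: "Diversify specialist agents" },
  { when: (s) => s.tddAdherence < 0.5, suggest: "Reinforce test-first discipline" },
  { when: (s) => s.maxIssueRepeats >= 3, suggest: "Create a checklist to prevent recurrence" },
  { when: (s) => s.blockedRate > 0.2, suggest: "Investigate common blockers" },
];

const suggestionsFor = (stats) =>
  RULES.filter((r) => r.when(stats)).map((r) => r.suggest);

const stats = {
  evalRate: 0.25,       // below the 30% threshold -> fires
  topAgentShare: 0.5,   // at 50%, under the 60% threshold
  tddAdherence: 0.8,
  maxIssueRepeats: 3,   // 3+ repeats -> fires
  blockedRate: 0.1,
};
console.log(suggestionsFor(stats));
// ["Increase EVAL usage for quality gates",
//  "Create a checklist to prevent recurrence"]
```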
177
### Phase 5: Output

1. **Display report** in conversation
2. **Save report** to `docs/codingbuddy/retrospective-{YYYY-MM-DD}.md`
3. **Offer next steps:**
   - "Create GitHub issues for high-priority improvements?"
   - "Update project rules based on findings?"
   - "Schedule recurring retrospectives?"

## Key Principles

- **Data over opinion** - Base all observations on archive evidence
- **Patterns over incidents** - Focus on recurring themes, not one-off events
- **Actionable suggestions** - Every finding should have a clear next step
- **No blame** - Focus on process improvement, not individual mistakes
- **Incremental** - Suggest 2-3 improvements per retrospective, not a complete overhaul