@hopla/claude-setup 1.7.0 → 1.8.0

@@ -0,0 +1,75 @@
+ # Guide: Creating a Project-Specific Review Checklist
+
+ Use this guide to create a `.agents/guides/review-checklist.md` file in your project with code review checks specific to your tech stack, domain, and known anti-patterns. The `/hopla-code-review` command loads this file automatically when it exists, applying your custom checks alongside the standard review categories.
+
+ ---
+
+ ## When to Create This
+
+ Create a review checklist when:
+ - The same bug pattern appears in **2+ code reviews** (e.g., stale closures in grid callbacks)
+ - Your project uses a framework with non-obvious gotchas (AG Grid, Hono, Prisma, D3, etc.)
+ - A `/hopla-system-review` flags a recurring issue that should be caught during code review
+ - A `/hopla-execution-report` documents a new technical pattern in its "Technical Patterns Discovered" section
+
+ ---
+
+ ## File Location
+
+ ```
+ your-project/
+ └── .agents/
+     └── guides/
+         └── review-checklist.md   ← Create this file
+ ```
+
+ ---
+
+ ## Template
+
+ Use this structure for your project's review checklist:
+
+ ```markdown
+ # Project Review Checklist
+
+ ## Framework-Specific Checks
+
+ ### [Framework Name, e.g., AG Grid]
+ - [ ] Cell editors and callbacks use `useRef` for state, not inline closures (prevents stale closure bugs)
+ - [ ] New grid instances include `data-ag-theme-mode` attribute
+ - [ ] Custom renderers set `cellDataType: false` to prevent auto-detection overrides
+ - [ ] `refreshCells` is called for computed columns when dependencies change
+
+ ### [Framework Name, e.g., Hono]
+ - [ ] Static routes (`/users/all`) are defined BEFORE parameterized routes (`/users/:id`)
+ - [ ] All endpoints have input validation (required fields, format, limits)
+ - [ ] DELETE routes are ordered correctly (specific before generic)
+
+ ## Domain-Specific Checks
+
+ - [ ] [Check description — patterns unique to your business domain]
+ - [ ] [Check description]
+
+ ## Known Anti-Patterns
+
+ Patterns that have caused bugs in this project before:
+
+ | Anti-Pattern | What to Look For | Correct Pattern |
+ |---|---|---|
+ | Stale closure in grid callback | Inline function passed to `onCellValueChanged` | Use `useRef` + stable callback via `useCallback` |
+ | Route shadowing | Static route after `/:id` route | Define static routes first |
+ | Missing IMEI validation | IMEI field accepts any string | Validate exactly 15 digits |
+ | Side effect in JSX render | `array.push()` inside `.map()` | Compute before return, use `useMemo` |
+ ```
+
+ ---
+
+ ## Maintenance
+
+ Update this file when:
+ - `/hopla-system-review` flags a recurring bug pattern (3+ occurrences across reviews)
+ - `/hopla-execution-report` discovers a new technical pattern in "Technical Patterns Discovered"
+ - A code review finds a bug that should have been caught by a checklist item
+ - A framework is upgraded and known gotchas change
+
+ Keep the checklist focused — remove items once they become second nature to the team or are enforced by linting rules.
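The stale-closure row in the anti-pattern table above is the most transferable of these checks. A minimal, framework-free sketch of the bug (illustrative only; in React the fix is the `useRef` plus stable `useCallback` pairing the checklist names):

```typescript
// A callback that captures a variable at registration time keeps seeing that
// old value (stale closure); reading through a mutable ref sees the latest.
let selectedRow = "row-1";

// Bad: the handler closes over the value as it was when the grid was set up.
const capturedAtSetup = selectedRow;
const staleHandler = (): string => capturedAtSetup;

// Good: the handler reads through a ref object (same idea as React's useRef).
const selectedRef = { current: selectedRow };
const freshHandler = (): string => selectedRef.current;

// The user selects a different row after setup:
selectedRow = "row-2";
selectedRef.current = "row-2";

console.log(staleHandler()); // "row-1" (the stale closure bug)
console.log(freshHandler()); // "row-2" (correct)
```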
@@ -13,6 +13,8 @@ Read `CLAUDE.md` or `AGENTS.md` to understand project standards and patterns.
 
  If `.agents/guides/` exists, read any guides relevant to the files being reviewed (e.g. `@.agents/guides/api-guide.md` when reviewing API changes). These guides define the expected patterns for specific task types.
 
+ If `.agents/guides/review-checklist.md` exists, read it and apply the project-specific checks it defines in addition to the standard checks below. Project-specific checklists cover framework gotchas and domain anti-patterns unique to the project (e.g., AG Grid stale closures, Hono route ordering).
+
  ## Step 2: Identify Changed Files
 
  ```bash
@@ -31,11 +33,15 @@ For each changed file, look for:
  - Off-by-one errors, incorrect conditionals
  - Missing error handling, unhandled edge cases
  - Race conditions or async issues
+ - Stale closures — callbacks passed to imperative APIs (grids, charts, maps) that capture stale state instead of using refs or stable references
+ - Unhandled promise rejections — `.then()` without `.catch()`, async calls without `try/catch` in non-void contexts
+ - Side effects inside JSX render — mutations of arrays/objects inside `.map()` in JSX (breaks React strict mode, causes double-execution bugs)
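The side-effect check can be illustrated without React: a `.map()` that mutates an outer array misbehaves the moment the render runs twice, which is exactly what strict mode does. A sketch with hypothetical data:

```typescript
// Bad: mutating an accumulator inside .map() as a side effect. Under React
// strict mode the render runs twice, so `seen` collects duplicates.
const rows = ["a", "b", "a"];
const seen: string[] = [];
const renderOnce = () =>
  rows.map((r) => {
    seen.push(r); // side effect inside what should be a pure mapping
    return r.toUpperCase();
  });
renderOnce();
renderOnce(); // simulated strict-mode double render
console.log(seen.length); // 6, not 3: the double-execution bug

// Good: compute the derived array before returning JSX (in React, useMemo).
const upper = rows.map((r) => r.toUpperCase());
console.log(upper); // ["A", "B", "A"]
```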
 
  **2. Security Issues**
  - Exposed secrets or API keys
  - SQL/command injection vulnerabilities
- - Insecure data handling or missing input validation
+ - Missing input validation on API endpoints — required fields, format constraints (regex, length), payload size limits
+ - Insecure data handling — raw user input in queries, responses exposing internal data or stack traces
  - XSS vulnerabilities (frontend)
 
  **3. Performance Problems**
@@ -44,7 +50,7 @@ For each changed file, look for:
  - Memory leaks
 
  **4. Code Quality**
- - DRY violations
+ - DRY violations — before flagging, search for similar functions/constants elsewhere in the codebase; suggest extraction to a shared module if the same logic exists in multiple places
  - Poor naming or overly complex functions
  - Missing TypeScript types or `any` usage
 
@@ -52,6 +58,10 @@ For each changed file, look for:
  - Follows project conventions from CLAUDE.md
  - Consistent with existing codebase style
 
+ **6. Route & Middleware Ordering**
+ - Static routes defined AFTER parameterized routes (e.g., `/users/all` after `/users/:id`) causing shadowing — the parameterized route captures requests meant for the static one
+ - Middleware applied in incorrect order (e.g., auth after route handler, CORS after response sent)
+
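Route shadowing comes from first-match routing. This framework-free sketch simulates the matching order routers like Hono and Express use (illustrative helper, not the library's API):

```typescript
// First-match routing: the first registered pattern that matches wins, so a
// /users/:id route registered before /users/all swallows /users/all requests.
type Route = { pattern: string; name: string };

function match(routes: Route[], path: string): string | undefined {
  for (const r of routes) {
    const pp = r.pattern.split("/");
    const sp = path.split("/");
    if (pp.length !== sp.length) continue;
    // A ":param" segment matches anything; literal segments must be equal.
    if (pp.every((seg, i) => seg.startsWith(":") || seg === sp[i])) return r.name;
  }
  return undefined;
}

// Bad ordering: parameterized route first, so it shadows the static route.
const shadowed: Route[] = [
  { pattern: "/users/:id", name: "getUser" },
  { pattern: "/users/all", name: "listUsers" },
];
console.log(match(shadowed, "/users/all")); // "getUser" (wrong handler)

// Correct ordering: static routes first.
const correct: Route[] = [
  { pattern: "/users/all", name: "listUsers" },
  { pattern: "/users/:id", name: "getUser" },
];
console.log(match(correct, "/users/all")); // "listUsers"
```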
  ## Step 4: Verify Issues Are Real
 
  Before reporting, confirm each issue is legitimate:
@@ -87,17 +87,21 @@ Work through each task in the plan sequentially. For each task:
 
  1. **Announce** the task you are starting (e.g., "Starting Task 2: Create the filter component")
  2. **Follow the pattern** referenced in the plan — do not invent new patterns
- 3. **Implement** only what the task specifies nothing more
- 4. **Validate** the task using the method specified in the plan's validate field
- 5. **Report completion** with a brief status: what was done, what was skipped, any decision made
- 6. **Do not proceed** to the next task if the current one fails validation
+ 3. **Check for existing implementations:** before creating new functions, constants, or utility modules, search the codebase for code that serves the same purpose. Reuse or extend rather than duplicate. DRY violations were the #1 code quality issue across 28 implementations.
+ 4. **Implement** only what the task specifies, nothing more
+ 5. **Validate** the task using the method specified in the plan's validate field
+ 6. **Report completion** with a brief status: what was done, what was skipped, any decisions made
+ 7. **Do not proceed** to the next task if the current one fails validation
 
- **Git strategy:** Do not commit after individual tasks. Keep all changes staged but uncommitted until the full plan passes validation (Step 5). This allows a clean revert if later tasks fail.
+ **Git strategy:**
+ - **Plans with 1–7 tasks:** Do not commit after individual tasks. Keep all changes staged but uncommitted until the full plan passes validation (Step 5). This allows a clean revert if later tasks fail.
+ - **Plans with 8+ tasks (or plans with `## Phase Boundaries`):** Commit at each phase boundary defined in the plan. Run Level 1–2 validation (lint + types) before each intermediate commit. Use the commit message format `feat(<scope>): <feature> — phase N of M`. This prevents losing work on large implementations if later phases fail.
 
  **Pause and report if, during implementation:**
  - A task is ambiguous or has multiple valid implementations
  - Something unexpected is discovered that could affect subsequent tasks
  - The plan's structure or ordering doesn't match conventions used elsewhere in the project
+ - A new API route might shadow or be shadowed by an existing parameterized route (check route ordering — e.g., `/users/all` must be defined before `/users/:id`)
 
  Do not improvise silently. When in doubt, stop and ask.
 
@@ -107,7 +111,7 @@ If the user requests changes that are NOT in the plan during execution:
 
  1. **Acknowledge** the request
  2. **Assess** whether it's a small adjustment (< 5 minutes, same files) or a new feature
- 3. **If small adjustment:** implement it and flag it as a deviation in the completion report
+ 3. **If small adjustment:** implement it and flag it as a deviation in the completion report. Even for small adjustments, validate with the same rigor as planned tasks — check for DRY violations, verify pattern adherence, and run the relevant validation level before moving on.
  4. **If new feature or significant addition:**
     - Suggest committing the current planned work first
     - Then create a new branch or add it to the backlog
@@ -14,6 +14,14 @@ git diff HEAD
  git ls-files --others --exclude-standard
  ```
 
+ Also check for recent code reviews:
+
+ ```bash
+ ls -t .agents/code-reviews/ 2>/dev/null | head -5
+ ```
+
+ If a code review exists for this feature, note its path for the Code Review Findings section below.
+
  ## Step 2: Generate Report
 
  Save to: `.agents/execution-reports/[feature-name].md`
@@ -36,6 +44,13 @@ Use the following structure:
  - Unit Tests: ✓/✗ [X passed, Y failed]
  - Integration Tests: ✓/✗ [X passed, Y failed]
 
+ ### Code Review Findings
+
+ - **Code review file:** [path to `.agents/code-reviews/[name].md`, or "Not run"]
+ - **Issues found:** [count by severity: X critical, Y high, Z medium, W low]
+ - **Issues fixed before this report:** [count]
+ - **Key findings:** [1-2 sentence summary of the most significant issues found]
+
  ### What Went Well
 
  List specific things that worked smoothly:
@@ -46,6 +61,16 @@ List specific things that worked smoothly:
  List specific difficulties encountered:
  - [what was difficult and why]
 
+ ### Bugs Encountered
+
+ Categorize each bug found during implementation:
+
+ | Bug | Category | Found By | Severity |
+ |-----|----------|----------|----------|
+ | [description] | stale closure / validation / race condition / styling / scope mismatch / type error / route ordering / other | lint / types / tests / code review / manual testing | critical / high / medium / low |
+
+ If no bugs were encountered, write "No bugs encountered during implementation."
+
  ### Divergences from Plan
 
  For each divergence from the original plan:
@@ -56,11 +81,28 @@ For each divergence from the original plan:
  - **Reason:** [why this divergence occurred]
  - **Type:** Better approach found | Plan assumption wrong | Security concern | Performance issue | Other
 
+ ### Scope Assessment
+
+ - **Planned tasks:** [number of tasks in the original plan]
+ - **Executed tasks:** [number of tasks actually completed]
+ - **Unplanned additions:** [count and brief description of work not in the original plan]
+ - **Scope accuracy:** On target | Under-scoped (took more work than planned) | Over-scoped (simpler than expected)
+
  ### Skipped Items
 
  List anything from the plan that was not implemented:
  - [what was skipped] — Reason: [why]
 
+ ### Technical Patterns Discovered
+
+ New gotchas, patterns, or conventions learned during this implementation that should be documented:
+
+ - **Pattern/Gotcha:** [description]
+ - **Where it applies:** [what type of feature or context triggers this]
+ - **Suggested documentation:** [which file should capture this: CLAUDE.md, `.agents/guides/review-checklist.md`, or a new guide]
+
+ If nothing new was discovered, write "No new patterns discovered."
+
  ### Recommendations
 
  Based on this implementation, what should change for next time?
@@ -104,7 +104,7 @@ After showing the PR URL, suggest:
 
  After the user confirms the PR was approved and merged on GitHub, run the cleanup workflow based on the branch type:
 
- ### For all branch types:
+ ### For feature and fix branches (merged to `dev`/`develop`):
 
  ```bash
  git checkout [base-branch]
@@ -113,7 +113,9 @@ git branch -d [merged-branch]
  git push origin --delete [merged-branch] 2>/dev/null  # skip if GitHub already deleted it
  ```
 
- ### Additional steps for `hotfix/*` and `release/*`:
+ **Important:** When `dev` is merged to `main` via PR, do NOT pull `main` back into `dev` — `dev` already has all the commits. Only sync `main` → `dev` for hotfix/release branches (see below).
+
+ ### Additional steps for `hotfix/*` and `release/*` ONLY:
 
  These branches were merged to `main` but `develop` also needs the changes:
 
@@ -44,6 +44,7 @@ Investigate the areas of the codebase relevant to this feature:
  - Locate similar features already implemented to use as reference
  - Find the entry points that will need to be modified or extended
  - Identify potential conflicts or dependencies
+ - **DRY check:** Before specifying new utility functions, constants, or helpers in the plan, search for existing implementations that can be reused or extended. DRY violations were the #1 code review finding across 28 implementations.
 
  Use the Grep tool to find relevant files (pattern: relevant keyword, case-insensitive).
 
@@ -66,6 +67,8 @@ Based on research, define:
  - **Derived/computed values:** If any value is calculated from other fields, specify the exact formula including how stored values are interpreted (sign, units, semantics), AND how derived values propagate when inputs change (event system, reactivity, polling, etc.)
  - **Interaction states & edge cases:** For features involving interactive UI (forms, grids, keyboard navigation, wizards, CLI interactions), define a matrix of user interactions and their expected behavior. Cover: all keyboard shortcuts (both directions — e.g., Tab AND Shift+Tab), state transitions (empty → editing → saved → error), and boundary conditions (first item, last item, empty list, maximum items). This prevents iterative fix rounds that consumed up to 40% of session time in past implementations.
  - **API input validation:** For every API endpoint being created or modified, specify: required fields, field format constraints (e.g., "IMEI must be exactly 15 digits"), payload size limits, and what the user sees on validation failure. This was the #2 most common gap in past plans — validation was only added after code review in 4 of 7 implementations.
+ - **Bidirectional data interactions:** If feature A updates data that feature B displays, does B need to react? If adding an item triggers validation, does editing trigger re-validation? Map all data mutation → side effect chains, not just keyboard navigation. Missed bidirectional interactions were a recurring planning blind spot.
+ - **AI/LLM prompt tasks:** If the plan involves creating or modifying AI prompts (system prompts, prompt templates, LLM-based features), add an explicit task for testing against real data with 2–3 iteration cycles budgeted. AI prompt engineering rarely works on the first attempt.
  - **User preferences check:** Before specifying UI architecture (modal vs. inline, page vs. panel, dialog vs. drawer), verify against MEMORY.md and conversation history for established preferences. In past implementations, plans that specified modals were rejected because the user preferred inline panels — this caused rework. When no preference exists, note it as a decision point for the user to confirm.
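The IMEI example above is the kind of format constraint a plan should pin down exactly. A sketch of such a validator (hypothetical helper, not code from this package; real IMEIs also carry a Luhn check digit that stricter validation could verify):

```typescript
// Hypothetical validator for the "exactly 15 digits" constraint named above.
function isValidImei(input: string): boolean {
  return /^\d{15}$/.test(input); // exactly 15 ASCII digits, nothing else
}

console.log(isValidImei("490154203237518")); // true  (15 digits)
console.log(isValidImei("49015420323751"));  // false (14 digits)
console.log(isValidImei("49015420323751x")); // false (non-digit character)
```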
 
  ## Phase 5: Generate the Plan
@@ -88,6 +91,11 @@ Use this structure:
  ## Out of Scope
  - [Anything explicitly excluded]
 
+ ## Likely Follow-ups
+ [Features or changes naturally adjacent to this work that the user may request during or after execution. Historical data: 71% of sessions had scope expansion. Listing these upfront helps the executing agent handle them via the Scope Guard rather than improvising.]
+ - [Follow-up 1]
+ - [Follow-up 2]
+
  ## Git Strategy
  - **Base branch:** `[develop | dev | main — specify which branch to create the feature branch from]`
  - **Feature branch:** `feature/[kebab-case-name]`
@@ -166,10 +174,22 @@ Scoring guide:
 
  ## Notes for Executing Agent
  [Any important context, warnings, or decisions made during planning that the executing agent needs to know]
+
+ > **UI Styling Note:** UI styling specifications (colors, sizes, variants, labels, spacing) are `[provisional]` proposals. Historical data shows these change in 50%+ of implementations based on user feedback. Implement as specified but do not over-invest in pixel-perfect adherence — expect iteration.
  ```
 
  ---
 
+ ### Plan Size Check
+
+ After generating the plan, count the implementation tasks (excluding test tasks):
+
+ - **3–7 tasks:** Optimal size. Proceed as-is.
+ - **8–11 tasks:** Consider grouping tasks into logical phases with intermediate commit points. Add a `## Phase Boundaries` section to the plan listing where commits should happen.
+ - **12+ tasks:** The plan should be split into multiple plans or phased with mandatory intermediate commits. Historical data: plans with 12+ tasks scored 6/10 alignment vs 10/10 for 3–7 task plans. Add phase boundaries and consider whether independent task groups can be separate plans.
+
+ ---
+
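The size thresholds above amount to a small lookup rule, sketched here with an illustrative helper (the below-3 branch is an assumption the command leaves unspecified):

```typescript
// Maps an implementation-task count to the Plan Size Check recommendation.
function planSizeAdvice(taskCount: number): string {
  if (taskCount >= 12) return "split into multiple plans or add mandatory phase boundaries";
  if (taskCount >= 8) return "group into phases with intermediate commit points";
  if (taskCount >= 3) return "optimal size, proceed as-is";
  return "below typical plan size, proceed as-is"; // assumed behavior for tiny plans
}

console.log(planSizeAdvice(5));  // "optimal size, proceed as-is"
console.log(planSizeAdvice(9));  // "group into phases with intermediate commit points"
console.log(planSizeAdvice(14)); // "split into multiple plans or add mandatory phase boundaries"
```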
  ## Phase 6: Verify the Plan
 
  Before saving the draft, review the plan against these criteria:
@@ -188,6 +208,8 @@ Before saving the draft, review the plan against these criteria:
  - [ ] **Git strategy specified:** Base branch and feature branch are defined in `## Git Strategy`
  - [ ] **Interaction matrix included:** If the feature involves interactive UI, the `## Interaction Matrix` section is filled out — or explicitly marked as N/A with justification
  - [ ] **Time-box on risky tasks:** Any task involving unfamiliar libraries, heuristic parsing, or known-complex behavior (auto-sizing, animation, real-time sync) has a Time-box with a fallback strategy
+ - [ ] **Plan size checked:** If 8+ tasks, phase boundaries are defined with intermediate commit points. If 12+ tasks, split justification is provided or phases are created.
+ - [ ] **Likely follow-ups listed:** If the Out of Scope section has items, the Likely Follow-ups section is populated with naturally adjacent work the user may request
 
  ## Phase 7: Save Draft and Enter Review Loop
 
@@ -30,8 +30,9 @@ If no matching review exists, continue with Step 1.
 
  ## Step 1: Load All Context
 
- Read these four artifacts in order:
+ Read these artifacts in order:
 
+ 0. `.agents/system-reviews/` — Read ALL previous system review files (if any exist) to identify recurring patterns. Pay special attention to: bug categories that appear across multiple reviews, process improvement suggestions that were made before but not yet applied, and alignment score trends (improving or declining).
  1. `.claude/commands/plan-feature.md` — understand how plans are created
  2. **$1** (the plan) — what the agent was supposed to do
  3. `.claude/commands/execute.md` — understand how execution is guided
@@ -77,6 +78,23 @@ For each problematic divergence, identify the root cause:
  - Was a validation step missing?
  - Was a manual step repeated that should be automated?
 
+ ## Step 5.5: Cross-Review Pattern Detection
+
+ If previous system reviews exist in `.agents/system-reviews/`, compare the current findings against them:
+
+ ### Recurring Bug Patterns
+ For each bug category in this implementation, check if the same category appeared in previous reviews. If a pattern appears 3+ times:
+ - Flag it as a **systemic issue** that needs a structural fix (not just a process note)
+ - Suggest adding it to `.agents/guides/review-checklist.md` as a project-specific check
+ - Example: "Stale closure bugs found in 3 of last 5 implementations → add to review checklist"
+
+ ### Unresolved Improvements
+ Cross-reference improvement suggestions from previous reviews. If a suggestion was made before and NOT yet applied:
+ - Check if the suggested CLAUDE.md update was made
+ - Check if the suggested command update was made
+ - Check if the suggested guide was created
+ - List any improvements that were suggested 2+ reviews ago and are still pending
+
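The 3+ occurrence threshold is a simple tally across reviews. A sketch with hypothetical categories (the helper and data are illustrative, not part of the command):

```typescript
// Counts bug categories across reviews and flags any seen 3+ times as systemic.
function findSystemicIssues(reviews: string[][]): string[] {
  const counts = new Map<string, number>();
  for (const review of reviews) {
    for (const category of new Set(review)) { // count once per review
      counts.set(category, (counts.get(category) ?? 0) + 1);
    }
  }
  return [...counts].filter(([, n]) => n >= 3).map(([c]) => c);
}

// Hypothetical bug categories from the last four system reviews:
const pastReviews = [
  ["stale closure", "validation"],
  ["stale closure", "route ordering"],
  ["stale closure", "validation"],
  ["validation"],
];
console.log(findSystemicIssues(pastReviews)); // ["stale closure", "validation"]
```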
  ## Step 6: Generate Process Improvements
 
  Suggest specific actions based on patterns found:
@@ -142,6 +160,13 @@ root_cause: [unclear plan | missing context | missing validation | other]
  **What needs improvement:** [specific process gaps]
  **For next implementation:** [concrete changes to try]
 
+ ### Cross-Review Trends (if previous reviews exist)
+
+ - **Recurring bug categories:** [list categories that appeared in 2+ reviews, with count — e.g., "stale closures: 3 occurrences"]
+ - **Improvement backlog:** [list suggestions from previous reviews not yet applied]
+ - **Alignment trend:** [list last 3–5 alignment scores with feature names, e.g., "parse-cache: 10, multi-org: 6, colors: 7"]
+ - **Systemic issues:** [patterns appearing 3+ times that need structural fixes, not just process notes]
+
  ---
 
  ## Decision Framework — When to Update What
@@ -63,3 +63,8 @@ Provide a clear summary:
  If everything passes, confirm the project is healthy.
 
  If anything failed and could not be fixed, list the remaining issues and suggest next steps.
+
+ ## Next Step
+
+ After validation passes, suggest:
+ > "All validation levels passed. Consider running `/hopla-code-review` for a deeper analysis — code review catches bugs in 79% of implementations that pass automated validation (stale closures, missing input validation, route shadowing, unhandled promise rejections). Run `/hopla-code-review` to check, or `/hopla-git-commit` to commit directly."
@@ -13,6 +13,8 @@ Read `CLAUDE.md` or `AGENTS.md` to understand project standards and patterns.
 
  If `.agents/guides/` exists, read any guides relevant to the files being reviewed (e.g. `@.agents/guides/api-guide.md` when reviewing API changes). These guides define the expected patterns for specific task types.
 
+ If `.agents/guides/review-checklist.md` exists, read it and apply the project-specific checks it defines in addition to the standard checks below. Project-specific checklists cover framework gotchas and domain anti-patterns unique to the project (e.g., AG Grid stale closures, Hono route ordering).
+
  ## Step 2: Identify Changed Files
 
  ```bash
@@ -31,11 +33,15 @@ For each changed file, look for:
  - Off-by-one errors, incorrect conditionals
  - Missing error handling, unhandled edge cases
  - Race conditions or async issues
+ - Stale closures — callbacks passed to imperative APIs (grids, charts, maps) that capture stale state instead of using refs or stable references
+ - Unhandled promise rejections — `.then()` without `.catch()`, async calls without `try/catch` in non-void contexts
+ - Side effects inside JSX render — mutations of arrays/objects inside `.map()` in JSX (breaks React strict mode, causes double-execution bugs)
 
  **2. Security Issues**
  - Exposed secrets or API keys
  - SQL/command injection vulnerabilities
- - Insecure data handling or missing input validation
+ - Missing input validation on API endpoints — required fields, format constraints (regex, length), payload size limits
+ - Insecure data handling — raw user input in queries, responses exposing internal data or stack traces
  - XSS vulnerabilities (frontend)
 
  **3. Performance Problems**
@@ -44,7 +50,7 @@ For each changed file, look for:
  - Memory leaks
 
  **4. Code Quality**
- - DRY violations
+ - DRY violations — before flagging, search for similar functions/constants elsewhere in the codebase; suggest extraction to a shared module if the same logic exists in multiple places
  - Poor naming or overly complex functions
  - Missing TypeScript types or `any` usage
 
@@ -52,6 +58,10 @@ For each changed file, look for:
  - Follows project conventions from CLAUDE.md
  - Consistent with existing codebase style
 
+ **6. Route & Middleware Ordering**
+ - Static routes defined AFTER parameterized routes (e.g., `/users/all` after `/users/:id`) causing shadowing — the parameterized route captures requests meant for the static one
+ - Middleware applied in incorrect order (e.g., auth after route handler, CORS after response sent)
+
  ## Step 4: Verify Issues Are Real
 
  Before reporting, confirm each issue is legitimate:
@@ -15,6 +15,14 @@ git diff HEAD
  git ls-files --others --exclude-standard
  ```
 
+ Also check for recent code reviews:
+
+ ```bash
+ ls -t .agents/code-reviews/ 2>/dev/null | head -5
+ ```
+
+ If a code review exists for this feature, note its path for the Code Review Findings section below.
+
  ## Step 2: Generate Report
 
  Save to: `.agents/execution-reports/[feature-name].md`
@@ -37,6 +45,13 @@ Use the following structure:
  - Unit Tests: ✓/✗ [X passed, Y failed]
  - Integration Tests: ✓/✗ [X passed, Y failed]
 
+ ### Code Review Findings
+
+ - **Code review file:** [path to `.agents/code-reviews/[name].md`, or "Not run"]
+ - **Issues found:** [count by severity: X critical, Y high, Z medium, W low]
+ - **Issues fixed before this report:** [count]
+ - **Key findings:** [1-2 sentence summary of the most significant issues found]
+
  ### What Went Well
 
  List specific things that worked smoothly:
@@ -47,6 +62,16 @@ List specific things that worked smoothly:
  List specific difficulties encountered:
  - [what was difficult and why]
 
+ ### Bugs Encountered
+
+ Categorize each bug found during implementation:
+
+ | Bug | Category | Found By | Severity |
+ |-----|----------|----------|----------|
+ | [description] | stale closure / validation / race condition / styling / scope mismatch / type error / route ordering / other | lint / types / tests / code review / manual testing | critical / high / medium / low |
+
+ If no bugs were encountered, write "No bugs encountered during implementation."
+
  ### Divergences from Plan
 
  For each divergence from the original plan:
@@ -57,11 +82,28 @@ For each divergence from the original plan:
  - **Reason:** [why this divergence occurred]
  - **Type:** Better approach found | Plan assumption wrong | Security concern | Performance issue | Other
 
+ ### Scope Assessment
+
+ - **Planned tasks:** [number of tasks in the original plan]
+ - **Executed tasks:** [number of tasks actually completed]
+ - **Unplanned additions:** [count and brief description of work not in the original plan]
+ - **Scope accuracy:** On target | Under-scoped (took more work than planned) | Over-scoped (simpler than expected)
+
  ### Skipped Items
 
  List anything from the plan that was not implemented:
  - [what was skipped] — Reason: [why]
 
+ ### Technical Patterns Discovered
+
+ New gotchas, patterns, or conventions learned during this implementation that should be documented:
+
+ - **Pattern/Gotcha:** [description]
+ - **Where it applies:** [what type of feature or context triggers this]
+ - **Suggested documentation:** [which file should capture this: CLAUDE.md, `.agents/guides/review-checklist.md`, or a new guide]
+
+ If nothing new was discovered, write "No new patterns discovered."
+
  ### Recommendations
 
  Based on this implementation, what should change for next time?
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@hopla/claude-setup",
-   "version": "1.7.0",
+   "version": "1.8.0",
    "description": "Hopla team agentic coding system for Claude Code",
    "type": "module",
    "bin": {