@hopla/claude-setup 1.16.0 → 1.17.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +1 -1
- package/.claude-plugin/plugin.json +1 -1
- package/README.md +4 -0
- package/commands/init-project.md +6 -0
- package/commands/plan-feature.md +3 -2
- package/package.json +1 -1
- package/skills/migration/SKILL.md +110 -0
- package/skills/performance/SKILL.md +102 -0
- package/skills/refactoring/SKILL.md +84 -0
@@ -1,7 +1,7 @@
 {
   "name": "hopla",
   "description": "Agentic coding system for Claude Code: PIV loop (Plan → Implement → Validate), TDD, debugging, brainstorming, subagent execution, and team workflows",
-  "version": "1.16.0",
+  "version": "1.17.0",
   "author": {
     "name": "Hopla Tools",
     "email": "julio@hopla.tools"
package/README.md
CHANGED

@@ -233,6 +233,9 @@ After each PIV loop, run the `execution-report` skill + `/hopla:system-review` t
 | `brainstorm` | "let's brainstorm", "explore approaches" |
 | `debug` | "debug this", "find the bug", "why is this failing" |
 | `tdd` | "write tests first", "TDD", "red-green-refactor" |
+| `refactoring` | "refactor", "clean up", "simplify", "extract", "deduplicate" |
+| `performance` | "slow", "optimize", "bottleneck", "lento", "tarda mucho" |
+| `migration` | "migrate", "upgrade", "switch from X to Y", "major version bump" |
 | `subagent-execution` | "use subagents", plans with 5+ tasks |
 | `parallel-dispatch` | "run in parallel", "parallelize this", independent tasks |

@@ -441,6 +444,7 @@ project/
 │   ├── rca/               ← Root cause analysis docs (commit)
 │   ├── execution-reports/ ← Post-implementation reports (commit)
 │   ├── system-reviews/    ← Process improvement reports (commit)
+│   ├── audits/            ← Persistent audit reports (commit — opt-in)
 │   └── code-reviews/      ← Code review reports (don't commit — ephemeral)
 └── .claude/
     └── commands/          ← Project-specific commands (optional)
package/commands/init-project.md
CHANGED

@@ -435,9 +435,15 @@ Create the following directories (with `.gitkeep` where needed):
 ├── rca/               <- /hopla:rca saves root cause analysis docs here (commit)
 ├── execution-reports/ <- the `execution-report` skill saves here (commit — needed for cross-session learning)
 ├── system-reviews/    <- /hopla:system-review saves here (commit — needed for feedback loop)
+├── audits/            <- persistent audit reports worth preserving (commit — opt-in; copy a code review here when you want to keep it)
 └── code-reviews/      <- the `code-review` skill saves here (do NOT commit — ephemeral, consumed by code-review-fix)
 ```

+**Policy — `audits/` vs `code-reviews/`:**
+
+- `code-reviews/` is **ephemeral working state**. Every run overwrites/adds files; `code-review-fix` consumes them and they become stale fast. Never commit.
+- `audits/` is **persistent**. Move or copy a review here when it documents a finding the team should remember (security issue, architectural concern, post-mortem evidence). Commit.
+
 Add to `.gitignore` (create if it doesn't exist):
 ```
 .agents/code-reviews/
package/commands/plan-feature.md
CHANGED

@@ -27,8 +27,9 @@ Read the following to understand the project:
 2. `README.md` — project overview and setup
 3. `package.json` or `pyproject.toml` — stack, dependencies, scripts
 4. `.agents/guides/` — if this directory exists, read any guides relevant to the feature being planned (e.g. `@.agents/guides/api-guide.md` when planning an API endpoint)
-5.
-6.
+5. `.agents/specs/` — if this directory exists, scan for design specs that match the feature name. These come from the `brainstorm` skill and already document the chosen approach, files affected, edge cases, and open questions. If a matching spec exists, it is the authoritative design — the plan turns that design into tasks. If no spec exists and the feature is non-trivial, suggest running the `brainstorm` skill first.
+6. `MEMORY.md` (if it exists at project root or `~/.claude/`) — check for user preferences that affect this feature (UI patterns like modal vs inline, keyboard shortcuts, component conventions)
+7. `.agents/execution-reports/` — if this directory exists, scan recent reports (last 3-5) for technical patterns discovered and gotchas relevant to the feature being planned. These contain real-world learnings from previous implementations that prevent re-discovering known issues.

 Then run:
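The spec lookup in step 5 could be scripted along these lines. A minimal sketch only: `findMatchingSpec` and the filename-slug convention are illustrative assumptions, not part of the plugin.

```typescript
import { existsSync, readdirSync } from "node:fs";

// Look for a brainstorm spec in .agents/specs/ whose filename contains the
// slugified feature name; return the filename, or null if nothing matches.
export function findMatchingSpec(specDir: string, feature: string): string | null {
  if (!existsSync(specDir)) return null;
  const slug = feature.toLowerCase().trim().replace(/\s+/g, "-");
  const match = readdirSync(specDir)
    .filter((f) => f.endsWith(".md"))
    .find((f) => f.toLowerCase().includes(slug));
  return match ?? null;
}
```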
package/skills/migration/SKILL.md
ADDED

@@ -0,0 +1,110 @@
+---
+name: migration
+description: "Phased migration workflow for upgrading dependencies, switching frameworks, or moving between systems. Use when the user says 'migrate', 'upgrade', 'switch from X to Y', 'move to', 'replace library', 'major version bump', 'deprecated', or when changing a framework/runtime/database version. Do NOT use for greenfield features or small refactors — use plan-feature or refactoring instead."
+---
+
+> 🌐 **Language:** All user-facing output must match the user's language. Code, paths, and commands stay in English.
+
+# Migration: Move Systems Without Breaking Them
+
+## Iron Rule
+
+**Every migration needs a rollback plan before the first line changes.** If you cannot describe how to undo the migration in one sentence, you are not ready to start it.
+
+## Step 1: Classify the Migration
+
+Ask the user (one question at a time):
+
+- **Type**: dependency upgrade (major version), framework switch (e.g. Express → Hono), runtime switch (Node → Bun), data store (SQLite → Postgres), API version (v1 → v2)
+- **Scope**: one module, one service, or the whole codebase?
+- **Downtime tolerance**: blue/green, zero-downtime (dual-run), acceptable window?
+- **Deadline driver**: deprecation, security, performance, or opportunistic?
+
+This framing determines whether the work is a single PR or a multi-phase plan.
+
+## Step 2: Audit the Surface
+
+Map **everything** that will be affected:
+
+- Imports / usages of the old API (use `grep -r` or `rg` across the codebase)
+- Public contracts that depend on current behavior (downstream callers, API consumers)
+- Build / deploy steps tied to the current version
+- Test suites that assume old behavior
+- Documentation mentioning the old API
+
+Write the inventory to `.agents/specs/migration-<topic>.md` with counts — "47 import sites across 12 files". Numbers help you size the work honestly.
+
+## Step 3: Read the Upgrade Notes
+
+Before writing code, read the target's official migration guide / changelog end to end. Note:
+
+- **Breaking changes** (renamed APIs, removed APIs, default-behavior flips)
+- **Deprecations** (will break in N+2, not now)
+- **Required minimum versions** for peer dependencies
+- **Data-shape changes** that require a migration script
+
+If the target project has no migration guide, treat it as higher risk and budget more time for exploration.
+
+## Step 4: Choose a Strategy
+
+| Strategy | When to use |
+|---|---|
+| **Big bang** | Small codebase, low downstream coupling, clean cut possible |
+| **Incremental with adapter** | Many call sites — introduce a thin wrapper that presents the old API on top of the new, migrate call sites one by one |
+| **Dual-run (strangler fig)** | High-risk or zero-downtime — run both old and new side by side, shift traffic gradually |
+| **Branch by abstraction** | Internal refactor + external API stays stable — hide the switch behind an interface |
+
+Pick one and document the trade-off in the spec file.
+
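The "incremental with adapter" row can be made concrete with a small sketch. Everything here is hypothetical (a callback-style legacy client being replaced by a promise-based one); it only illustrates the shape of the wrapper.

```typescript
// New API: promise-based (stand-in for the real migrated client).
const newClient = {
  fetchUser: async (id: string) => ({ id, name: "user-" + id }),
};

type User = { id: string; name: string };

// Adapter: presents the OLD callback signature on top of the new client,
// so existing call sites keep working while they migrate one by one.
function fetchUserLegacy(id: string, cb: (err: Error | null, user?: User) => void): void {
  newClient.fetchUser(id).then(
    (user) => cb(null, user),
    (err) => cb(err instanceof Error ? err : new Error(String(err))),
  );
}
```

Once every call site uses the new client directly, the adapter is the shim that Step 7 deletes.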
+## Step 5: Plan the Phases
+
+For anything non-trivial, run `/hopla:plan-feature` with `migration-<topic>` as the feature name. The plan should specify:
+
+- **Phase boundaries** (compatibility shim in, call sites migrated, shim removed)
+- **Rollback plan per phase** (revert commit? feature flag? dual-write?)
+- **Validation at each phase** (test suite green, feature flags covered, canary metrics)
+- **Data migration script** (if the storage layer changes) — idempotent, resumable
+
+Each phase should land as its own PR.
+
+## Step 6: Migrate With Guardrails
+
+Execute phase by phase. After every phase:
+
+- Run the full validation pyramid (`commands/guides/validation-pyramid.md`)
+- Check for mixed-version pitfalls — modules importing both the old and new API in the same request
+- Confirm the rollback path still works (git revert + redeploy, or feature flag off)
+
+Never advance to the next phase if validation failed on the previous one.
+
+## Step 7: Remove the Old Path
+
+Once every call site is migrated and observed green in production (where applicable):
+
+- Delete the compatibility shim
+- Remove the old dependency (`npm uninstall`, etc.)
+- Remove the feature flag
+- Update documentation to reference only the new path
+
+This "cleanup" step is part of the migration. A migration left half-done with a permanent shim is worse than no migration.
+
+## Rules
+
+- Never migrate on a Friday or before a public release
+- Keep the rollback plan alive at every phase — if it stops working, pause
+- Track breaking changes from the target's changelog in the spec, not in memory
+- Data migrations must be idempotent and resumable — migrations fail mid-run
+- If the migration drags past its original estimate by 2x, stop and reassess scope
+
+## Integration
+
+- Use `/hopla:plan-feature` to generate the phased plan from the Step 2 inventory
+- Use the `worktree` skill to keep the migration isolated from other work
+- The `code-review` skill (checklist sections 2 and 5) catches dual-import patterns and pattern drift
+- The `performance` skill verifies the migration did not regress hot paths
+
+## Next Step
+
+Once the migration is planned:
+
+> "Migration classified and inventoried. Saved spec to `.agents/specs/migration-<topic>.md`. Run `/hopla:plan-feature` to generate the phased implementation plan."
package/skills/performance/SKILL.md
ADDED

@@ -0,0 +1,102 @@
+---
+name: performance
+description: "Measured performance optimization workflow. Use when the user says 'slow', 'optimize', 'performance', 'bottleneck', 'too slow', 'high memory', 'high CPU', 'lento', 'tarda mucho', or when asking to make something faster. Do NOT use for correctness bugs or new features — use the debug or plan-feature skills instead."
+---
+
+> 🌐 **Language:** All user-facing output must match the user's language. Code, paths, and commands stay in English.
+
+# Performance: Measure Before You Change
+
+## Iron Rule
+
+**No optimization without a measurement.** Every performance change must start with a number (latency, memory, query count) and end with a comparison. Guessing at hot paths wastes time and often makes things slower.
+
+## Step 1: Clarify the Symptom
+
+Ask the user (one question at a time):
+
+- What operation feels slow? (page load, API request, build, test run, specific query)
+- How slow is it? (exact number if possible — "3 seconds", "30 MB", "10s with 100 items")
+- What is "fast enough"? (target: < 500 ms p95, < 100 MB, etc.)
+- Is it reproducible, or only under load?
+
+Without a concrete target, you cannot declare the optimization done.
+
+## Step 2: Measure the Baseline
+
+Pick the right tool for the symptom:
+
+| Symptom | Measurement |
+|---|---|
+| Slow endpoint | `curl -w "%{time_total}"` or APM dashboard (see `guides/mcp-integration.md` for MCP options) |
+| Slow DB query | `EXPLAIN ANALYZE` (Postgres), `EXPLAIN` (SQLite/MySQL) |
+| Slow frontend render | Chrome DevTools Performance tab, React Profiler |
+| Memory growth | `process.memoryUsage()` snapshots, heap dumps |
+| Slow build/test | Time the command, compare against a clean cache |
+
+Record the baseline with units. "3.2 s to load /dashboard with 1000 items" — not "it feels slow".
+
+## Step 3: Identify the Hot Path
+
+Rank suspects by where the baseline measurement actually spends its time:
+
+- **N+1 queries** — are there loops calling the DB or an API?
+- **Missing indexes** — does `EXPLAIN ANALYZE` show a seq scan on a large table?
+- **Synchronous I/O** — is there a blocking call that could be awaited in parallel (`Promise.all`)?
+- **Rendering** — are components re-rendering with unchanged props? Are lists virtualized?
+- **Algorithm** — is there an O(n²) that could be O(n) with a map?
+- **Caching** — is the same computation repeated without memoization?
+
+Do **not** guess. Use the profiler output or query plan to pick one suspect.
+
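The sequential-I/O suspect above has a standard fix: batch the awaits. A minimal sketch with a fake `fetchPrice` standing in for a real network call.

```typescript
// Stand-in for a real network/DB call (illustrative only).
async function fetchPrice(id: number): Promise<number> {
  return id * 2;
}

// Before: N sequential round trips — latency adds up linearly.
async function totalSequential(ids: number[]): Promise<number> {
  let total = 0;
  for (const id of ids) total += await fetchPrice(id);
  return total;
}

// After: one parallel batch — same result, roughly one round trip of latency.
async function totalParallel(ids: number[]): Promise<number> {
  const prices = await Promise.all(ids.map(fetchPrice));
  return prices.reduce((a, b) => a + b, 0);
}
```

The behavior is identical; only the wall-clock time changes, which is exactly what Step 5 should show.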
+## Step 4: Apply One Change
+
+Change one thing. Not three.
+
+- Add the index
+- Replace the loop with `Promise.all`
+- Memoize the expensive selector
+- Batch the API calls
+- Virtualize the list
+
+Keep the diff minimal so you can attribute the delta to this change alone.
+
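"Memoize the expensive selector" from the list above, as a self-contained sketch; `memoize` and `slowSquare` are illustrative names.

```typescript
// Cache results per argument so repeated calls skip the expensive work.
function memoize<T, R>(fn: (arg: T) => R): (arg: T) => R {
  const cache = new Map<T, R>();
  return (arg: T): R => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}

let calls = 0;
const slowSquare = (n: number): number => { calls += 1; return n * n; };
const fastSquare = memoize(slowSquare);
```

Note the single-argument `Map` cache: for object arguments or multi-argument selectors a real project would need a keying strategy, which is why libraries exist for this.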
+## Step 5: Measure Again
+
+Re-run the exact same measurement from Step 2 under the same conditions. Report:
+
+- Baseline: X
+- After change: Y
+- Delta: (X − Y) / X × 100 %
+- Target: [target from Step 1]
+
+If you did not hit the target, go back to Step 3 and pick the next suspect. If you regressed, revert and rethink.
+
+## Step 6: Regression Guard
+
+Once the target is met, add a guard so future changes do not erode the win:
+
+- A test with a timeout assertion (e.g. `expect(duration).toBeLessThan(500)`)
+- A query count assertion (e.g. `expect(dbQueries).toHaveLength(1)`)
+- A bundle size budget, memory budget, or frame budget if applicable
+
+Without a guard, the win decays.
+
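A duration guard like the `expect(duration).toBeLessThan(500)` example can be sketched without a test framework. `assertUnderBudget` is a hypothetical helper; real suites would use their runner's timers and retries to absorb noise.

```typescript
// Run the measured operation and fail loudly if it exceeds the budget.
export async function assertUnderBudget(
  run: () => Promise<void>,
  budgetMs: number,
): Promise<number> {
  const start = Date.now();
  await run();
  const elapsed = Date.now() - start;
  if (elapsed >= budgetMs) {
    throw new Error(`regression: took ${elapsed} ms, budget is ${budgetMs} ms`);
  }
  return elapsed;
}
```

Wall-clock guards are flaky on shared CI runners, so prefer structural guards (query counts, bundle size) where one exists.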
+## Rules
+
+- One suspect at a time — never stack optimizations before measuring
+- Keep the baseline in the commit message so the win is auditable
+- If the fix adds significant complexity for a small win (< 10 %), consider reverting
+- Do not optimize code that is not actually hot — premature optimization hurts readability
+
+## Integration
+
+- Use the `code-review` skill checklist section 3 (Performance Problems) for patterns to watch for
+- If the optimization requires architectural changes, stop and run `/hopla:plan-feature`
+- After the change lands, the `verify` skill will require the regression guard to run fresh
+
+## Next Step
+
+After the target is met and a regression guard is in place:
+
+> "Target hit: [baseline → result]. Regression guard added. Say 'commit' to trigger the `git` skill with a `perf:` conventional commit."
package/skills/refactoring/SKILL.md
ADDED

@@ -0,0 +1,84 @@
+---
+name: refactoring
+description: "Safe refactoring workflow with behavior preservation. Use when the user says 'refactor', 'clean up', 'simplify', 'extract', 'restructure', 'deduplicate', 'rename', or when asking to improve code structure without changing behavior. Do NOT use for bug fixes, new features, or performance work — use the debug, plan-feature, or performance skills instead."
+---
+
+> 🌐 **Language:** All user-facing output must match the user's language. Code, paths, and commands stay in English.
+
+# Refactoring: Restructure Without Changing Behavior
+
+## Iron Rule
+
+**Behavior must be identical before and after.** If a refactor changes observable behavior — output, side effects, error shape, API surface — it is not a refactor. Stop and reclassify the work as a feature change or a bug fix.
+
+## Step 1: Confirm the Refactor Is Worth Doing
+
+Ask the user (one question at a time):
+
+- What is the current pain? (duplication, unclear naming, deep nesting, coupled modules)
+- What is the desired structure? (extract helper, collapse abstraction, rename, move, inline)
+- Is there a test suite, and does it cover the code being refactored?
+
+If the answers reveal a missing test covering the target, **write the test first** (pin current behavior), then refactor. Untested refactors are rewrites.
+
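"Pin current behavior" is a characterization test: record what the code does today, including edge cases, before touching it. A minimal sketch; `formatPrice` is a hypothetical function about to be refactored.

```typescript
// The function about to be refactored (illustrative).
function formatPrice(cents: number): string {
  return "$" + (cents / 100).toFixed(2);
}

// Characterization data: observed outputs pinned BEFORE restructuring.
// If the refactor changes any of these, it changed behavior.
const pinned: Array<[number, string]> = [
  [0, "$0.00"],
  [199, "$1.99"],
  [100000, "$1000.00"],
];
```

The pinned values are not a judgment that the behavior is correct, only that it must not change silently.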
+## Step 2: Capture the Baseline
+
+Run the project's validation commands from `CLAUDE.md` (or use `/hopla:validate`). Record:
+
+- Lint / format — current state
+- Types — current state
+- Unit tests — pass/fail count
+- Relevant integration tests — pass/fail
+
+Every level must be green before starting. A refactor on top of red tests cannot prove it preserved behavior.
+
+## Step 3: Apply the Smallest Safe Change
+
+Pick one refactor at a time:
+
+- Extract function / module
+- Rename (symbol, file)
+- Inline (remove pointless indirection)
+- Move (relocate to a better home)
+- Deduplicate (merge two near-identical pieces)
+- Replace conditional with polymorphism / table lookup
+- Flatten / collapse nesting
+
+**Do not** mix refactors. If the change wants to become a redesign, stop and suggest `/hopla:plan-feature`.
+
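The first item in the list, extract function, looks like this in miniature. Both versions are hypothetical; the point is that the before/after pair must be observably equivalent, including the error path.

```typescript
// Before: validation inlined in the handler.
function registerBefore(email: string): string {
  if (!email.includes("@") || email.length < 3) throw new Error("invalid email");
  return "registered:" + email;
}

// After: the check extracted into a named helper — behavior identical.
function isValidEmail(email: string): boolean {
  return email.includes("@") && email.length >= 3;
}

function registerAfter(email: string): string {
  if (!isValidEmail(email)) throw new Error("invalid email");
  return "registered:" + email;
}
```

Note the extracted predicate is the logical negation of the inlined guard; getting that negation wrong is the classic way an "extract" leaks behavior.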
+## Step 4: Re-run the Baseline
+
+After each refactor, re-run the same validation set from Step 2. Results must match exactly:
+
+- Same lint result (0 new warnings unless whitelisted)
+- Same type result
+- Same pass/fail count on tests
+- Same integration result
+
+If anything diverges, the refactor leaked behavior — revert or fix before continuing.
+
+## Step 5: Commit at a Clean Boundary
+
+When the baseline is restored and the refactor is coherent, suggest a commit via the `git` skill:
+
+> "Refactor complete — behavior preserved. Say 'commit' to save it with a `refactor:` conventional commit."
+
+## Rules
+
+- One refactor per commit — easier to review, easier to revert
+- Never combine refactor + feature in the same commit
+- Prefer many small refactors over one large one
+- If the test suite is missing, add tests FIRST, then refactor (two commits minimum)
+- Preserve public API unless the user explicitly approves a breaking change
+
+## Integration
+
+- Pair with the `tdd` skill when adding characterization tests before a refactor
+- Use the `code-review` skill after the refactor to confirm no pattern violations were introduced
+- If the refactor touches many files, consider the `worktree` skill for isolation
+
+## Next Step
+
+After the refactor passes validation:
+
+> "Refactor complete and validated. Say 'commit' to trigger the `git` skill with a `refactor:` conventional commit."