@kennethsolomon/shipkit 3.6.0 → 3.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +14 -15
- package/commands/sk/security-check.md +10 -4
- package/commands/sk/update-task.md +9 -0
- package/commands/sk/write-plan.md +5 -0
- package/package.json +1 -1
- package/skills/sk:context/SKILL.md +4 -0
- package/skills/sk:e2e/SKILL.md +19 -2
- package/skills/sk:fast-track/SKILL.md +80 -0
- package/skills/sk:frontend-design/SKILL.md +12 -5
- package/skills/sk:gates/SKILL.md +97 -0
- package/skills/sk:lint/SKILL.md +27 -6
- package/skills/sk:perf/SKILL.md +15 -4
- package/skills/sk:retro/SKILL.md +124 -0
- package/skills/sk:reverse-doc/SKILL.md +116 -0
- package/skills/sk:review/SKILL.md +19 -11
- package/skills/sk:schema-migrate/SKILL.md +22 -0
- package/skills/sk:scope-check/SKILL.md +93 -0
- package/skills/sk:setup-claude/SKILL.md +53 -0
- package/skills/sk:setup-claude/scripts/apply_setup_claude.py +206 -6
- package/skills/sk:setup-claude/templates/.claude/agents/e2e-tester.md +46 -0
- package/skills/sk:setup-claude/templates/.claude/agents/linter.md +53 -0
- package/skills/sk:setup-claude/templates/.claude/agents/perf-auditor.md +43 -0
- package/skills/sk:setup-claude/templates/.claude/agents/security-auditor.md +47 -0
- package/skills/sk:setup-claude/templates/.claude/agents/test-runner.md +42 -0
- package/skills/sk:setup-claude/templates/.claude/rules/api.md.template +14 -0
- package/skills/sk:setup-claude/templates/.claude/rules/frontend.md.template +15 -0
- package/skills/sk:setup-claude/templates/.claude/rules/laravel.md.template +15 -0
- package/skills/sk:setup-claude/templates/.claude/rules/react.md.template +14 -0
- package/skills/sk:setup-claude/templates/.claude/rules/tests.md.template +16 -0
- package/skills/sk:setup-claude/templates/.claude/settings.json.template +76 -0
- package/skills/sk:setup-claude/templates/.claude/statusline.sh +50 -0
- package/skills/sk:setup-claude/templates/CLAUDE.md.template +31 -42
- package/skills/sk:setup-claude/templates/commands/brainstorm.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/execute-plan.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/finish-feature.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/security-check.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/write-plan.md.template +1 -1
- package/skills/sk:setup-claude/templates/hooks/log-agent.sh +24 -0
- package/skills/sk:setup-claude/templates/hooks/pre-compact.sh +44 -0
- package/skills/sk:setup-claude/templates/hooks/session-start.sh +53 -0
- package/skills/sk:setup-claude/templates/hooks/session-stop.sh +33 -0
- package/skills/sk:setup-claude/templates/hooks/validate-commit.sh +81 -0
- package/skills/sk:setup-claude/templates/hooks/validate-push.sh +43 -0
- package/skills/sk:setup-claude/templates/tasks/workflow-status.md.template +10 -16
- package/skills/sk:setup-optimizer/SKILL.md +4 -4
- package/skills/sk:test/SKILL.md +6 -2
package/skills/sk:reverse-doc/SKILL.md
@@ -0,0 +1,116 @@
+---
+name: sk:reverse-doc
+description: Generate architecture and design documentation from existing code by analyzing patterns and asking clarifying questions
+user_invocable: true
+allowed_tools: Read, Glob, Grep, Write, Agent
+---
+
+# Reverse Document
+
+Generate documentation from existing code — work backwards from implementation to create missing design or architecture docs.
+
+## When to Use
+
+- Onboarding to an existing codebase that lacks documentation
+- Formalizing a prototype into a documented design
+- Capturing the "why" behind existing code before refactoring
+- Creating architecture docs for a codebase you inherited
+
+## Arguments
+
+```
+/sk:reverse-doc <type> <path>
+```
+
+| Type | Output | Location |
+|------|--------|----------|
+| `architecture` | Architecture Decision Record | `docs/architecture/` |
+| `design` | Design document (GDD-style) | `docs/design/` |
+| `api` | API specification | `docs/api/` |
+
+If no type specified, infer from the path:
+- `src/core/`, `src/lib/`, `app/Services/` → architecture
+- `src/components/`, `resources/views/` → design
+- `routes/`, `app/Http/Controllers/` → api
+
+## Steps
+
+### Phase 1: Analyze
+
+Launch Explore agents to analyze the target path:
+
+1. **Structure agent**: Map the file tree, identify entry points, trace dependency chains
+2. **Patterns agent**: Identify design patterns, abstractions, conventions used
+3. **Data flow agent**: Trace data through the system — inputs, transformations, outputs
+
+Synthesize findings into:
+- **What it does** (mechanics, behavior)
+- **How it's built** (patterns, architecture, dependencies)
+- **What's unclear** (inconsistencies, undocumented decisions)
+
+### Phase 2: Clarify
+
+Ask the user 3-5 clarifying questions to distinguish intentional design from accidental implementation:
+
+- "Is [pattern X] intentional, or would you change it in a refactor?"
+- "What was the motivation for [architectural decision Y]?"
+- "Are [components A and B] coupled by design, or is that tech debt?"
+
+**Critical principle: Never assume intent. Always ask before documenting "why."**
+
+The distinction between "what the code does" and "what the developer intended" is the entire value of this skill. Do not skip this phase.
+
+### Phase 3: Draft
+
+Based on analysis + user answers, generate the document:
+
+**Architecture docs include:**
+- System overview and purpose
+- Component diagram (text-based)
+- Data flow description
+- Key design decisions with rationale (from user answers)
+- Dependencies and interfaces
+- Trade-offs and known limitations
+
+**Design docs include:**
+- Feature overview and user-facing behavior
+- Component breakdown
+- State management approach
+- Interaction patterns
+- Edge cases and error handling
+
+**API docs include:**
+- Endpoint inventory
+- Request/response schemas
+- Authentication requirements
+- Error codes and formats
+- Rate limits and constraints
+
+### Phase 4: Approve
+
+Present the draft to the user:
+- Show key sections
+- Highlight areas marked as "inferred" (not confirmed by user)
+- Ask for corrections or additions
+
+**Do not write the file until the user approves.**
+
+### Phase 5: Write
+
+Save the approved document to the appropriate location.
+
+Flag follow-up work:
+- Related areas that also need documentation
+- Inconsistencies discovered during analysis
+- Suggested refactoring based on documented architecture
+
+**Do not auto-execute follow-up work.** Present it as a list for the user to decide.
+
+## Model Routing
+
+| Profile | Model |
+|---------|-------|
+| `full-sail` | opus (inherit) |
+| `quality` | opus (inherit) |
+| `balanced` | sonnet |
+| `budget` | sonnet |
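The path-based type inference in the `sk:reverse-doc` hunk above can be sketched as a small helper. This is an illustrative sketch only: `infer_doc_type` is a hypothetical function (the skill performs this mapping in prose, not code), and the architecture fallback for unmatched paths is an assumption.

```python
# Hypothetical sketch of sk:reverse-doc's path -> doc-type inference.
# infer_doc_type is not part of the shipkit package itself.

ARCHITECTURE_PREFIXES = ("src/core/", "src/lib/", "app/Services/")
DESIGN_PREFIXES = ("src/components/", "resources/views/")
API_PREFIXES = ("routes/", "app/Http/Controllers/")

def infer_doc_type(path: str) -> str:
    """Map a source path to a documentation type per the table above."""
    if path.startswith(API_PREFIXES):
        return "api"
    if path.startswith(DESIGN_PREFIXES):
        return "design"
    # Assumption: unmatched paths default to the broadest type, architecture.
    return "architecture"

print(infer_doc_type("app/Http/Controllers/UserController.php"))  # api
```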
package/skills/sk:review/SKILL.md
@@ -33,7 +33,7 @@ Use `git diff main..HEAD --name-only` to identify the changed files, then run si
 
 If simplify makes any changes:
 1. Verify the changes are correct
-2.
+2. Auto-commit them with message `fix(review): simplify pre-pass` before continuing the review. Do not ask the user.
 3. Note in the review report: "Simplify pre-pass: X files updated"
 
 If simplify makes no changes, proceed directly to step 1.
@@ -436,20 +436,28 @@ Format findings with severity levels and review dimensions:
 - Include a brief "What Looks Good" section (2-3 items) — acknowledge strong patterns so they're reinforced. This isn't cheerleading — it's calibrating signal.
 - If you genuinely find nothing wrong after all 7 dimensions, say so — but that's rare
 
-### 11.
+### 11. Fix and Re-run
 
-After presenting the review
+After presenting the review report, fix **all** findings regardless of severity (Critical, Warning, and Nitpick). Do not ask the user whether to fix nitpicks — fix everything.
 
-
-
+**For each finding:**
+- If the issue is in a file **within** the current branch diff (`git diff $BASE..HEAD --name-only`): fix it inline, include in the auto-commit
+- If the issue is in a file **outside** the current branch diff (pre-existing issue found via blast-radius): log it to `tasks/tech-debt.md` — do NOT fix it inline:
+```
+### [YYYY-MM-DD] Found during: sk:review
+File: path/to/file.ext:line
+Issue: description of the problem
+Severity: critical | high | medium | low
+```
 
-
-> "Review complete — no critical issues found, but there are some nitpicks. Would you like to fix them now, or proceed to `/sk:finish-feature`?"
+After all in-scope fixes are applied: auto-commit with `fix(review): address review findings`. Do not ask the user. Re-run `/sk:review` from scratch.
 
-
+Loop until the review is completely clean (0 findings across all severities for in-scope code).
 
-
-> "Review complete —
+When clean:
+> "Review complete — 0 findings. Run `/sk:finish-feature` to finalize the branch and create a PR."
+
+**Note:** Gates own their commits — the fix-commit-rerun loop is fully internal. No manual commit step needed after this gate.
 
 ### Fix & Retest Protocol
 
@@ -460,7 +468,7 @@ When applying a fix from this review, classify it before committing:
 **b. Logic change** (fix incorrect condition, add missing null check, change data flow, refactor algorithm, fix async bug) → trigger protocol:
 1. Update or add failing unit tests for the corrected behavior
 2. Re-run `/sk:test` — must pass at 100% coverage
-3.
+3. Auto-commit tests + fix together with `fix(review): [description]`.
 4. Re-run `/sk:review` from scratch
 
 **Why:** Review catches logic bugs. Fixing a logic bug without updating tests leaves the test suite asserting on the old (wrong) behavior.
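The in-scope versus out-of-scope routing in the review hunk above reduces to a single membership test. A minimal sketch, assuming a hypothetical `route_finding` helper (the skill itself does this as agent instructions, not code):

```python
# Hypothetical sketch of sk:review's finding routing:
# findings in branch-diff files are fixed inline, others go to tasks/tech-debt.md.

def route_finding(file: str, changed_files: set) -> str:
    """Return the action for a review finding based on diff membership."""
    return "fix-inline" if file in changed_files else "log-to-tech-debt"

# e.g. the set produced by `git diff main..HEAD --name-only`
changed = {"src/auth.py", "src/api/users.py"}
print(route_finding("src/auth.py", changed))        # fix-inline
print(route_finding("src/legacy/old.py", changed))  # log-to-tech-debt
```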
package/skills/sk:schema-migrate/SKILL.md
@@ -24,6 +24,28 @@ guidance. Auto-detects the ORM from project files — no configuration needed.
 
 ---
 
+## Phase 0: Auto-Detect Migration Changes
+
+Before doing anything else, check whether the current branch has any migration-related changes:
+
+```bash
+git diff main..HEAD --name-only
+```
+
+Scan the output for migration-related files:
+- Files under `migrations/`, `database/migrations/`, `prisma/migrations/`, `alembic/versions/`, `db/migrate/`
+- Schema definition files: `prisma/schema.prisma`, `drizzle.config.ts`, `drizzle.config.js`, `alembic.ini`
+- Any `*.sql` files in migration-related directories
+
+**If NO migration-related files are found in the diff:**
+> auto-skip: No migration changes detected in this branch — skipping `/sk:schema-migrate`.
+
+Exit cleanly. Do not ask the user. Do not proceed to Phase 1.
+
+**If migration-related files ARE found:** proceed to Phase 1 (ORM Detection) below.
+
+---
+
 ## Phase 1: ORM Detection
 
 ### Step 1 — Read in Parallel
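The Phase 0 scan above can be sketched against the output of `git diff main..HEAD --name-only`. A sketch under stated assumptions: `has_migration_changes` is hypothetical (the skill does this check itself), and the substring matching here is a simplification of "in migration-related directories".

```python
# Hypothetical sketch of sk:schema-migrate's Phase 0 migration scan.
# Substring matching on directory names is a simplification for illustration.

MIGRATION_DIRS = ("migrations/", "database/migrations/", "prisma/migrations/",
                  "alembic/versions/", "db/migrate/")
SCHEMA_FILES = {"prisma/schema.prisma", "drizzle.config.ts",
                "drizzle.config.js", "alembic.ini"}

def has_migration_changes(changed_files: list) -> bool:
    """True if any changed file looks migration-related per the lists above."""
    for f in changed_files:
        if f in SCHEMA_FILES:
            return True
        if any(d in f for d in MIGRATION_DIRS):
            return True
        if f.endswith(".sql") and "migrat" in f:  # *.sql in migration-ish paths
            return True
    return False

print(has_migration_changes(["database/migrations/2024_create_users.php"]))  # True
print(has_migration_changes(["src/app.py", "README.md"]))                    # False
```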
package/skills/sk:scope-check/SKILL.md
@@ -0,0 +1,93 @@
+---
+name: sk:scope-check
+description: Compare current implementation against the plan to detect scope creep
+user_invocable: true
+allowed_tools: Read, Glob, Grep, Bash
+---
+
+# Scope Check
+
+Compare the current implementation against `tasks/todo.md` to detect scope creep and unplanned additions.
+
+## When to Use
+
+Run `/sk:scope-check` mid-implementation (during or after step 10) to verify you're building what was planned — no more, no less. Useful when implementation feels like it's growing beyond the original plan.
+
+## Steps
+
+### 1. Read the Plan
+
+- Read `tasks/todo.md` — extract all planned tasks (checkboxes)
+- Count total planned tasks, completed tasks, and remaining tasks
+- List planned files/areas from task descriptions
+
+### 2. Analyze Actual Changes
+
+- Run `git diff main..HEAD --stat` to get files changed, insertions, deletions
+- Run `git diff main..HEAD --name-only` to list all changed files
+- Count new files created vs. files modified
+- Identify files changed that are NOT mentioned in any todo.md task
+
+### 3. Compare Planned vs. Actual
+
+For each changed file, trace it back to a planned task:
+- **Planned**: File change is directly described in a todo.md checkbox
+- **Supporting**: File change is a reasonable dependency of a planned task (e.g., updating imports after moving a function)
+- **Unplanned**: File change has no clear connection to any planned task — this is scope creep
+
+### 4. Calculate Scope Bloat
+
+```
+Planned tasks: N checkboxes in todo.md
+Actual changes: M files changed
+Unplanned items: U files with no matching task
+Scope bloat: (U / M) * 100 = X%
+```
+
+### 5. Classify
+
+| Classification | Bloat % | Recommendation |
+|---------------|---------|----------------|
+| **On Track** | 0-10% | Proceeding as planned. Minor supporting changes are normal. |
+| **Minor Creep** | 10-25% | Some unplanned additions detected. Review if they're necessary. |
+| **Significant Creep** | 25-50% | Scope has grown substantially. Consider splitting into separate tasks. |
+| **Out of Control** | >50% | More unplanned work than planned. Stop and reassess with `/sk:change`. |
+
+### 6. Output Report
+
+```markdown
+## Scope Check Report — [date]
+
+**Plan**: [N] tasks in tasks/todo.md
+**Completed**: [X] / [N] tasks
+**Files changed**: [M] files (+[insertions] / -[deletions])
+**Unplanned changes**: [U] files
+
+### Classification: [On Track | Minor Creep | Significant Creep | Out of Control] ([X]%)
+
+### Planned Changes
+- [file] — task: [matching checkbox text]
+- ...
+
+### Supporting Changes
+- [file] — supports: [which planned task]
+- ...
+
+### Unplanned Changes
+- [file] — no matching task found
+- ...
+
+### Recommendation
+[Actionable advice based on classification]
+```
+
+## Model Routing
+
+Read `.shipkit/config.json` from the project root if it exists.
+
+| Profile | Model |
+|---------|-------|
+| `full-sail` | opus (inherit) |
+| `quality` | sonnet |
+| `balanced` | haiku |
+| `budget` | haiku |
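The scope-bloat arithmetic from step 4 and the classification bands from step 5 above can be combined into one small sketch. Hedged assumptions: `classify_scope` is a hypothetical helper, and the handling of exact band boundaries (10%, 25%, 50%) is a judgment call since the table's ranges overlap at the edges.

```python
# Hypothetical sketch combining sk:scope-check steps 4 and 5.
# Boundary values are assigned to the lower band (an interpretation of the table).

def classify_scope(unplanned: int, total_changed: int) -> tuple:
    """Return (bloat %, classification) per the thresholds above."""
    bloat = (unplanned / total_changed) * 100 if total_changed else 0.0
    if bloat <= 10:
        label = "On Track"
    elif bloat <= 25:
        label = "Minor Creep"
    elif bloat <= 50:
        label = "Significant Creep"
    else:
        label = "Out of Control"
    return bloat, label

print(classify_scope(3, 20))  # (15.0, 'Minor Creep')
```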
package/skills/sk:setup-claude/SKILL.md
@@ -305,6 +305,59 @@ Additionally report:
 - Tools installed vs already present
 - Config files created vs skipped
 
+### Hooks (in `.claude/hooks/`)
+
+Deployed from `templates/hooks/` to `.claude/hooks/` (made executable):
+
+- `session-start.sh` — runs on SessionStart, loads context
+- `session-stop.sh` — runs on Stop, persists session state
+- `pre-compact.sh` — runs on PreCompact, saves context before compaction
+- `validate-commit.sh` — PreToolUse hook for `git commit*`, validates commit messages
+- `validate-push.sh` — PreToolUse hook for `git push*`, confirms before pushing
+- `log-agent.sh` — SubagentStart hook, logs sub-agent launches
+
+### Agent Definitions (in `.claude/agents/`)
+
+Deployed from `templates/.claude/agents/` (create-if-missing):
+
+- `e2e-tester.md` — E2E testing agent definition
+- `linter.md` — Linting agent definition
+- `perf-auditor.md` — Performance auditing agent
+- `security-auditor.md` — Security auditing agent
+- `test-runner.md` — Test execution agent
+
+### Path-Scoped Rules (in `.claude/rules/`)
+
+Deployed from `templates/.claude/rules/` based on detected stack:
+
+| Rule file | Deployed when |
+|-----------|---------------|
+| `tests.md.template` | Always |
+| `frontend.md.template` | Always |
+| `api.md.template` | Always |
+| `laravel.md.template` | Laravel detected in framework |
+| `react.md.template` | React or Next.js detected in framework |
+
+### Settings Generation (`.claude/settings.json`)
+
+Rendered from `templates/.claude/settings.json.template`. Contains:
+- Statusline configuration (points to `.claude/statusline.sh`)
+- Permission allow/deny lists for safe Bash commands
+- Hook wiring for all 6 hooks above
+
+### Statusline Generation (`.claude/statusline.sh`)
+
+Copied from `templates/.claude/statusline.sh` (made executable). Displays:
+- Context window usage percentage
+- Current model
+- Current workflow step (from `tasks/workflow-status.md`)
+- Git branch
+- Current task name
+
+### Cached Detection
+
+Detection results are cached to `.shipkit/config.json` with a `detected_at` timestamp. On subsequent runs, if the cache is less than 7 days old, cached values are used instead of re-scanning. Pass `--force-detect` to bypass the cache and re-run detection from scratch.
+
 ## Templates (Source of Truth)
 
 All output files are rendered from templates in `templates/`:
package/skills/sk:setup-claude/scripts/apply_setup_claude.py
@@ -7,12 +7,18 @@ import hashlib
 import json
 import os
 import re
+import shutil
+import stat
 import sys
 from dataclasses import asdict, dataclass
+from datetime import datetime, timezone
 from pathlib import Path
 from typing import Dict, Iterable, List, Optional, Tuple
 
 
+CACHE_MAX_AGE_DAYS = 7
+
+
 GENERATED_MARKER = "<!-- Generated by /setup-claude -->"
 TEMPLATE_HASH_MARKER = "<!-- Template Hash: "
 TEMPLATE_HASH_END = " -->"
@@ -59,6 +65,46 @@ def _any_dep_prefix(pkg: dict, prefix: str) -> bool:
     return any(k.startswith(prefix) for k in deps.keys())
 
 
+def _cache_path(repo_root: Path) -> Path:
+    return repo_root / ".shipkit" / "config.json"
+
+
+def _read_cached_detection(repo_root: Path) -> Optional[Detection]:
+    """Return cached Detection if cache exists and is less than CACHE_MAX_AGE_DAYS old."""
+    cache = _cache_path(repo_root)
+    data = _read_json(cache)
+    if data is None:
+        return None
+    detected_at = data.get("detected_at")
+    if not detected_at:
+        return None
+    try:
+        ts = datetime.fromisoformat(detected_at)
+        age = datetime.now(timezone.utc) - ts
+        if age.days >= CACHE_MAX_AGE_DAYS:
+            return None
+    except (ValueError, TypeError):
+        return None
+    det = data.get("detection")
+    if not det:
+        return None
+    try:
+        return Detection(**det)
+    except (TypeError, KeyError):
+        return None
+
+
+def _write_cached_detection(repo_root: Path, detection: Detection) -> None:
+    """Persist detection results to .shipkit/config.json with a detected_at timestamp."""
+    cache = _cache_path(repo_root)
+    cache.parent.mkdir(parents=True, exist_ok=True)
+    payload = {
+        "detected_at": datetime.now(timezone.utc).isoformat(),
+        "detection": asdict(detection),
+    }
+    cache.write_text(json.dumps(payload, indent=2, sort_keys=True), encoding="utf-8")
+
+
 def detect(repo_root: Path) -> Detection:
     package_json = _read_json(repo_root / "package.json") or {}
     scripts = (package_json.get("scripts") or {}) if isinstance(package_json, dict) else {}
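The staleness rule in `_read_cached_detection` above (discard entries whose `detected_at` is `CACHE_MAX_AGE_DAYS` or more days old) can be exercised standalone. A minimal sketch: `cache_is_fresh` is a hypothetical extraction of just the age check, not a function from the package.

```python
# Standalone sketch of the cache-freshness check used by _read_cached_detection.
# cache_is_fresh is hypothetical; the real code inlines this logic.
from datetime import datetime, timedelta, timezone

CACHE_MAX_AGE_DAYS = 7

def cache_is_fresh(detected_at: str) -> bool:
    """True if the ISO-8601 timestamp is less than CACHE_MAX_AGE_DAYS old."""
    try:
        ts = datetime.fromisoformat(detected_at)
    except (ValueError, TypeError):
        return False
    return (datetime.now(timezone.utc) - ts).days < CACHE_MAX_AGE_DAYS

recent = (datetime.now(timezone.utc) - timedelta(days=2)).isoformat()
stale = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()
print(cache_is_fresh(recent), cache_is_fresh(stale))  # True False
```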
@@ -272,6 +318,103 @@ def _plan_file_if_generated(dest: Path, content: str) -> str:
     return "skipped" if existing == content else "updated"
 
 
+def _collect_results(
+    results,
+    repo_root: Path,
+    created: List[str],
+    updated: List[str],
+    skipped: List[str],
+) -> None:
+    """Categorize deployment results into created/updated/skipped lists."""
+    for action, p in results:
+        rel = str(p.relative_to(repo_root))
+        if action == "created":
+            created.append(rel)
+        elif action == "updated":
+            updated.append(rel)
+        else:
+            skipped.append(rel)
+
+
+def _make_executable(path: Path) -> None:
+    """Add owner-execute permission to a file."""
+    st = path.stat()
+    path.chmod(st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
+
+
+def _deploy_directory(
+    src_dir: Path,
+    dest_dir: Path,
+    *,
+    dry_run: bool,
+    executable: bool = False,
+    filter_fn=None,
+) -> List[tuple]:
+    """Copy files from src_dir to dest_dir. Returns list of (action, relative_path)."""
+    results: List[tuple] = []
+    if not src_dir.exists():
+        return results
+    for src_file in sorted(src_dir.iterdir()):
+        if not src_file.is_file():
+            continue
+        if filter_fn and not filter_fn(src_file.name):
+            continue
+        dest_file = dest_dir / src_file.name
+        if dest_file.exists():
+            results.append(("skipped", dest_file))
+        elif dry_run:
+            results.append(("created", dest_file))
+        else:
+            dest_dir.mkdir(parents=True, exist_ok=True)
+            shutil.copy2(src_file, dest_file)
+            if executable:
+                _make_executable(dest_file)
+            results.append(("created", dest_file))
+    return results
+
+
+def _deploy_rendered_file(
+    template_path: Path,
+    dest: Path,
+    detection: Detection,
+    *,
+    dry_run: bool,
+    executable: bool = False,
+) -> tuple:
+    """Render a template and write to dest. Returns (action, path)."""
+    if not template_path.exists():
+        return ("skipped", dest)
+    template_text = template_path.read_text(encoding="utf-8")
+    rendered = render_template(template_text, detection)
+    if dest.exists():
+        return ("skipped", dest)
+    if dry_run:
+        return ("created", dest)
+    dest.parent.mkdir(parents=True, exist_ok=True)
+    dest.write_text(rendered, encoding="utf-8")
+    if executable:
+        _make_executable(dest)
+    return ("created", dest)
+
+
+def _rules_filter(detection: Detection):
+    """Return a filter function that selects rules relevant to the detected stack."""
+    always = {"tests.md.template", "frontend.md.template", "api.md.template"}
+
+    def _filter(filename: str) -> bool:
+        if filename in always:
+            return True
+        if filename == "laravel.md.template" and "Laravel" in detection.framework:
+            return True
+        if filename == "react.md.template" and (
+            "React" in detection.framework or detection.framework == "Next.js (App Router)"
+        ):
+            return True
+        return False
+
+    return _filter
+
+
 def apply(
     repo_root: Path,
     skill_root: Path,
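The stack-based filtering implemented by `_rules_filter` above can be demonstrated without the package's `Detection` dataclass by reducing it to just the `framework` string. A sketch for illustration: `select_rule` is a hypothetical flattening of the real closure, under the assumption that only `framework` matters to the filter (which matches the code shown).

```python
# Standalone sketch of _rules_filter's selection logic, with Detection
# reduced to a bare framework string for illustration.

ALWAYS = {"tests.md.template", "frontend.md.template", "api.md.template"}

def select_rule(filename: str, framework: str) -> bool:
    """True if the rule template should be deployed for the detected framework."""
    if filename in ALWAYS:
        return True
    if filename == "laravel.md.template" and "Laravel" in framework:
        return True
    if filename == "react.md.template" and (
        "React" in framework or framework == "Next.js (App Router)"
    ):
        return True
    return False

print(select_rule("laravel.md.template", "Laravel 11"))         # True
print(select_rule("react.md.template", "Next.js (App Router)")) # True
print(select_rule("laravel.md.template", "React"))              # False
```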
@@ -370,12 +513,54 @@ def apply(
         else:
             action, p = ("skipped", dest_path)
 
-
-
-
-
+        _collect_results([(action, p)], repo_root, created, updated, skipped)
+
+    # --- Deploy hooks ---
+    hooks_src = skill_root / "templates" / "hooks"
+    hooks_dest = repo_root / ".claude" / "hooks"
+    _collect_results(
+        _deploy_directory(hooks_src, hooks_dest, dry_run=dry_run, executable=True),
+        repo_root, created, updated, skipped,
+    )
+
+    # --- Deploy agents ---
+    agents_src = skill_root / "templates" / ".claude" / "agents"
+    agents_dest = repo_root / ".claude" / "agents"
+    _collect_results(
+        _deploy_directory(agents_src, agents_dest, dry_run=dry_run),
+        repo_root, created, updated, skipped,
+    )
+
+    # --- Deploy rules (stack-filtered) ---
+    rules_src = skill_root / "templates" / ".claude" / "rules"
+    rules_dest = repo_root / ".claude" / "rules"
+    _collect_results(
+        _deploy_directory(rules_src, rules_dest, dry_run=dry_run, filter_fn=_rules_filter(detection)),
+        repo_root, created, updated, skipped,
+    )
+
+    # --- Deploy settings.json ---
+    settings_template = skill_root / "templates" / ".claude" / "settings.json.template"
+    settings_dest = repo_root / ".claude" / "settings.json"
+    _collect_results(
+        [_deploy_rendered_file(settings_template, settings_dest, detection, dry_run=dry_run)],
+        repo_root, created, updated, skipped,
+    )
+
+    # --- Deploy statusline.sh ---
+    statusline_src = skill_root / "templates" / ".claude" / "statusline.sh"
+    statusline_dest = repo_root / ".claude" / "statusline.sh"
+    if statusline_src.exists():
+        if statusline_dest.exists():
+            action_sl = "skipped"
+        elif dry_run:
+            action_sl = "created"
         else:
-
+            statusline_dest.parent.mkdir(parents=True, exist_ok=True)
+            shutil.copy2(statusline_src, statusline_dest)
+            _make_executable(statusline_dest)
+            action_sl = "created"
+        _collect_results([(action_sl, statusline_dest)], repo_root, created, updated, skipped)
 
     if dry_run:
         print("setup-claude dry-run complete (no files written)")
@@ -415,6 +600,11 @@ def main(argv: List[str]) -> int:
         action="store_true",
         help="Print detected values as JSON and exit unless combined with --dry-run.",
     )
+    parser.add_argument(
+        "--force-detect",
+        action="store_true",
+        help="Bypass cached detection and re-run stack detection from scratch.",
+    )
     args = parser.parse_args(argv[1:])
 
     repo_root = Path(args.repo_root).resolve()
@@ -424,7 +614,17 @@ def main(argv: List[str]) -> int:
         print(f"Repo root not found: {repo_root}", file=sys.stderr)
         return 2
 
-    detection
+    # Use cached detection unless --force-detect is set
+    detection = None
+    if not args.force_detect:
+        detection = _read_cached_detection(repo_root)
+        if detection:
+            print("Using cached detection (< 7 days old). Pass --force-detect to re-run.")
+    if detection is None:
+        detection = detect(repo_root)
+        if not args.dry_run:
+            _write_cached_detection(repo_root, detection)
+
     if args.print_detection:
         print(json.dumps(asdict(detection), indent=2, sort_keys=True))
         if not args.dry_run:
package/skills/sk:setup-claude/templates/.claude/agents/e2e-tester.md
@@ -0,0 +1,46 @@
+---
+name: e2e-tester
+model: sonnet
+description: Run E2E behavioral verification using Playwright CLI or agent-browser. Fix failures and auto-commit.
+allowed_tools: Bash, Read, Edit, Write, Glob, Grep
+---
+
+# E2E Tester Agent
+
+You are a specialized E2E testing agent. Your job is to verify the complete implementation works end-to-end from a user's perspective.
+
+## Behavior
+
+1. **Detect E2E framework**:
+   - If `playwright.config.ts` exists -> use Playwright CLI
+   - If `cypress.config.ts` exists -> use Cypress
+   - If `tests/verify-workflow.sh` exists -> use bash test suite
+   - Otherwise -> report no E2E framework detected
+
+2. **Run E2E tests**:
+   - Playwright: `npx playwright test --reporter=list`
+   - Cypress: `npx cypress run`
+   - Bash: `bash tests/verify-workflow.sh`
+
+3. **If tests fail**:
+   - Analyze failure output and screenshots (if Playwright)
+   - Determine if failure is in test or implementation
+   - Fix the root cause
+   - Stage: `git add <files>`
+   - auto-commit: `fix(e2e): resolve failing E2E scenarios`
+   - Re-run from scratch
+   - Loop until all pass
+
+4. **Pre-existing failures** (tests that were already failing before this branch):
+   - Log to `tasks/tech-debt.md`:
+   ```
+   ### [YYYY-MM-DD] Found during: sk:e2e
+   File: path/to/test.ext
+   Issue: Pre-existing E2E failure — [description]
+   Severity: medium
+   ```
+
+5. **Report** when passing:
+   ```
+   E2E: [N] scenarios passed, 0 failed (attempt [M])
+   ```
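The first-match-wins framework detection in step 1 of the e2e-tester agent above can be sketched as a small function. This is an illustrative sketch only: `detect_e2e_framework` is hypothetical (the agent performs these checks via its tools, not this code), and the return labels are assumptions.

```python
# Hypothetical sketch of the e2e-tester agent's framework detection order.
from pathlib import Path
import tempfile

def detect_e2e_framework(root: Path) -> str:
    """First matching config file wins, mirroring step 1 above."""
    if (root / "playwright.config.ts").exists():
        return "playwright"
    if (root / "cypress.config.ts").exists():
        return "cypress"
    if (root / "tests" / "verify-workflow.sh").exists():
        return "bash"
    return "none"

# Demo against a throwaway directory
root = Path(tempfile.mkdtemp())
print(detect_e2e_framework(root))  # none
(root / "cypress.config.ts").write_text("")
print(detect_e2e_framework(root))  # cypress
```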