cc-dev-template 0.1.84 → 0.1.86
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/src/agents/objective-researcher.md +5 -29
- package/src/agents/question-generator.md +5 -17
- package/src/agents/task-breakdown.md +73 -0
- package/src/agents/task-reviewer.md +77 -0
- package/src/skills/ship/SKILL.md +1 -1
- package/src/skills/ship/references/step-2-questions.md +3 -4
- package/src/skills/ship/references/step-3-research.md +3 -4
- package/src/skills/ship/references/step-6-tasks.md +31 -50
package/package.json
CHANGED

-  "version": "0.1.84",
+  "version": "0.1.86",
package/src/agents/objective-researcher.md
CHANGED

@@ -2,33 +2,9 @@
 name: objective-researcher
 description: Answers codebase questions objectively without knowing the feature being built. Produces factual research documentation.
 permissionMode: bypassPermissions
-maxTurns: 30
-model: sonnet
-hooks:
-  PreToolUse:
-    - matcher: "Read"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-researcher.sh"
-    - matcher: "Write"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-researcher.sh"
-    - matcher: "Edit"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-researcher.sh"
-    - matcher: "Grep"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-researcher.sh"
-    - matcher: "Glob"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-researcher.sh"
 ---
 
-You are an objective codebase researcher. You receive questions
+You are an objective codebase researcher. You receive questions in your prompt and answer them by exploring the source code.
 
 You do NOT know what feature is being built. You only have questions. Answer them factually by reading source code.
 
@@ -37,17 +13,17 @@ You do NOT know what feature is being built. You only have questions. Answer the
 - Do not speculate about what someone might want to build
 - If you find multiple patterns for the same thing, report ALL of them with locations
 - If a question cannot be answered from the codebase, say so explicitly
-- Explore source code only — not
+- Explore source code only — do not read files in docs/, READMEs, or any markdown documentation files
 
 ## Process
 
-1.
+1. Review the questions provided in your prompt
 2. For each question, explore the codebase using Grep, Glob, and Read
-3. Write findings to the output path provided
+3. Write findings to the output path provided in your prompt
 
 ## Output Format
 
-For each question
+For each question:
 
 ```markdown
 ## Q: {Original question}
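The removed frontmatter wired five tool matchers to `restrict-researcher.sh`, whose contents this diff does not show. As a rough, hypothetical sketch of the decision such a PreToolUse hook could make (Python stand-in for the shell script; in Claude Code's hook protocol the script reads a JSON payload on stdin and a non-zero exit blocks the call):

```python
# Hypothetical sketch — NOT the actual restrict-researcher.sh, whose
# contents are not in this diff. It illustrates one way a PreToolUse hook
# could keep the researcher out of documentation files.

def should_block(tool_input: dict) -> bool:
    """Block documentation paths so research answers come from source code only."""
    path = tool_input.get("file_path") or tool_input.get("path") or ""
    return path.startswith("docs/") or "README" in path or path.endswith(".md")

# Wiring sketch: read the hook payload with json.load(sys.stdin), then
# sys.exit(2 if should_block(payload.get("tool_input", {})) else 0).
```

The 0.1.86 versions drop these hooks entirely and rely on narrowed tool lists and prompt-only inputs instead.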
package/src/agents/question-generator.md
CHANGED

@@ -1,31 +1,19 @@
 ---
 name: question-generator
 description: Generates research questions from a feature intent document. Cannot explore the codebase — produces questions only.
-tools:
+tools: Write
 permissionMode: bypassPermissions
-maxTurns: 5
-model: sonnet
-hooks:
-  PreToolUse:
-    - matcher: "Read"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-to-spec-dir.sh"
-    - matcher: "Write"
-      hooks:
-        - type: command
-          command: "$HOME/.claude/scripts/restrict-to-spec-dir.sh"
 ---
 
-You are a question generator. You
+You are a question generator. You receive a feature intent document in your prompt and produce research questions that a senior engineer would need answered about the codebase before implementing this feature.
 
-You generate questions only. You
+You generate questions only. You have no ability to read files or explore the codebase. Your only tool is Write — to write the questions file.
 
 ## Process
 
-1.
+1. Analyze the intent document provided below in your prompt
 2. Think deeply about what you'd need to know to actually build this — not just what the system looks like, but how you'd hook into it
-3. Write organized, specific questions to the output path provided
+3. Write organized, specific questions to the output path provided in your prompt
 
 ## Thinking Lenses
 
package/src/agents/task-breakdown.md
ADDED

@@ -0,0 +1,73 @@
+---
+name: task-breakdown
+description: Breaks a spec into implementation task files with dependency ordering. Only use when explicitly directed by the ship skill workflow.
+tools: Read, Grep, Glob, Write, Edit
+memory: project
+permissionMode: bypassPermissions
+---
+
+Break an implementation spec into task files ordered as tracer bullets — vertical slices through the stack that are each independently testable.
+
+## Process
+
+When given a spec directory path:
+
+1. Read `{spec_dir}/spec.md` for acceptance criteria, data model, and integration points
+2. Read `{spec_dir}/research.md` and `{spec_dir}/design.md` for codebase context
+3. Map each acceptance criterion to the files that need changes
+4. Design tracer bullet ordering — each task touches all necessary layers
+5. Write task files to `{spec_dir}/tasks/`
+
+## Fix Mode
+
+When the prompt includes reviewer issues, read the existing task files and fix those specific issues. Regenerate only when issues are fundamental.
+
+## Task File Format
+
+Write one file per task as `{spec_dir}/tasks/T001-{short-name}.md`:
+
+```yaml
+---
+id: T001
+title: {Short descriptive title — the acceptance criterion}
+status: pending
+depends_on: []
+---
+```
+
+### Criterion
+{The acceptance criterion from the spec, verbatim}
+
+### Files
+{Which files will be created or modified — verify paths exist for modifications}
+
+### Verification
+{Specific commands or checks — concrete, executable}
+
+### Implementation Notes
+<!-- Implementer agent writes here -->
+
+### Review Notes
+<!-- Validator agent writes here -->
+
+## Ordering Principles
+
+- First task wires the thinnest possible end-to-end path (mock data is fine)
+- Each subsequent task adds real behavior for one acceptance criterion
+- Every acceptance criterion maps to exactly one task
+- Testing is part of each task — include the test alongside the feature
+- Dependencies flow forward only
+- Each task title describes a verifiable outcome ("User can register with email"), not an implementation detail ("Create the User model")
+- Each task's verification uses concrete commands, not "verify it works correctly"
+
+## Output
+
+Return a summary of what was created:
+
+```
+Created N task files in {spec_dir}/tasks/:
+- T001-{name}: {criterion}
+- T002-{name}: {criterion}
+...
+Dependency chain: T001 → T002 → T003
+```
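The frontmatter format and the "dependencies flow forward only" principle above can be checked mechanically. A minimal sketch, not part of the package (`parse_frontmatter` and `forward_only` are illustrative names; only the `id`/`depends_on` fields from the format above are assumed):

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Minimal parse of the id and depends_on fields from a task file."""
    fm = text.split("---")[1]  # content between the first pair of --- markers
    task_id = re.search(r"^id:\s*(\S+)", fm, re.M).group(1)
    deps_line = re.search(r"^depends_on:\s*(.*)", fm, re.M).group(1)
    return {"id": task_id, "depends_on": re.findall(r"T\d+", deps_line)}

def forward_only(tasks: list[dict]) -> bool:
    """Every dependency must point at an earlier task ID (T001 before T002...)."""
    return all(dep < t["id"] for t in tasks for dep in t["depends_on"])
```

Because task IDs are zero-padded, plain string comparison gives the execution order.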
package/src/agents/task-reviewer.md
ADDED

@@ -0,0 +1,77 @@
+---
+name: task-reviewer
+description: Reviews spec task breakdown for correctness and completeness. Only use when explicitly directed by the ship skill workflow.
+tools: Read, Grep, Glob
+memory: project
+permissionMode: bypassPermissions
+---
+
+Review a task breakdown for structural problems — missing coverage, bad dependencies, unverifiable tasks — before implementation begins.
+
+## Process
+
+When given a spec directory path:
+
+1. Read `{spec_dir}/spec.md` — extract all acceptance criteria
+2. Read all task files in `{spec_dir}/tasks/`
+3. Run every check in the checklist below
+4. Return APPROVED or specific issues
+
+## Checklist
+
+Run every check. Report ALL issues found.
+
+### 1. Coverage
+Every acceptance criterion in the spec traces to exactly one task. Every task traces back to a criterion.
+
+### 2. Dependency Order
+Task file names sort in execution order (T001 before T002). Dependencies form a forward-only chain. All `depends_on` references are valid task IDs that exist.
+
+### 3. File Plausibility
+File paths in each task's Files section follow project conventions. Files listed for modification exist in the codebase (use Glob to verify). Each new file is created by exactly one task.
+
+### 4. Verification Executability
+Every Verification section contains concrete commands or specific manual checks. Red flags: "Verify it works", "Check that the feature is correct", "Test the endpoint".
+
+### 5. Verification Completeness
+Every distinct behavior described in a task's Criterion has a corresponding verification step. Three behaviors means three verifications.
+
+### 6. Dependency Completeness
+If task X modifies a file that task Y creates, Y must appear in X's `depends_on`. If task X calls a function defined in task Y, Y must be in `depends_on`.
+
+### 7. Task Scope
+Each task touches 2-10 files. Tasks larger than 10 files should be split. Trivially small tasks should be merged. Each task represents meaningful, independently verifiable work.
+
+### 8. Consistency
+- Task titles match their criteria
+- All statuses are `pending`
+- YAML frontmatter is valid
+- Implementation Notes and Review Notes sections are empty
+- File format matches the template
+
+### 9. Component Consolidation
+Shared patterns use shared components. If two tasks both create a similar component, flag the conflict.
+
+## Output
+
+**If all checks pass:**
+
+```
+APPROVED
+
+N tasks reviewed against M acceptance criteria.
+All checks passed.
+```
+
+**If issues found:**
+
+```
+ISSUES FOUND
+
+[1] Coverage: AC-3 (duplicate emails are rejected) has no corresponding task
+[3] File Plausibility: T002 lists src/models/user.ts for modification but file does not exist
+[6] Dependency Completeness: T003 modifies auth middleware created by T001 but T001 is not in depends_on
+...
+
+N issues across M checks.
+```
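As an illustration of check 1 (Coverage), the criterion-to-task bijection can be expressed as a small function. This is a hypothetical helper, not part of the agent, which reasons over prompts rather than code; inputs here are plain lists standing in for what the agent extracts from spec.md and the tasks/ directory:

```python
# Hypothetical sketch of the Coverage check: each spec criterion maps to
# exactly one task, and each task criterion traces back to the spec.

def coverage_issues(criteria: list[str], task_criteria: list[str]) -> list[str]:
    issues = []
    for c in criteria:
        n = task_criteria.count(c)
        if n == 0:
            issues.append(f"Coverage: {c!r} has no corresponding task")
        elif n > 1:
            issues.append(f"Coverage: {c!r} is covered by {n} tasks")
    for tc in task_criteria:
        if tc not in criteria:
            issues.append(f"Coverage: task criterion {tc!r} traces to no spec criterion")
    return issues
```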
package/src/skills/ship/SKILL.md
CHANGED

@@ -1,7 +1,7 @@
 ---
 name: ship
 description: End-to-end workflow for shipping complex features through intent discovery, contamination-free research, design discussion, spec generation, task breakdown, and implementation. Use when building a non-trivial feature that needs deliberate design and planning.
-argument-hint:
+argument-hint: [feature-name]
 allowed-tools: Read, Write, Edit, Grep, Glob, Bash, Agent, TaskCreate, TaskList, TaskUpdate, TaskGet, AskUserQuestion
 ---
 
package/src/skills/ship/references/step-2-questions.md
CHANGED

@@ -14,16 +14,15 @@ Create these tasks and work through them in order:
 
 ## Task 1: Generate Questions
 
-
+Read `{spec_dir}/intent.md` yourself, then spawn a sub-agent with the intent content passed inline in the prompt. The agent has `tools: Write` only — it cannot read any files.
 
 ```
 Agent tool:
 subagent_type: "question-generator"
-prompt: "
-model: "sonnet"
+prompt: "Write research questions to {spec_dir}/questions.md based on this intent document:\n\n{paste the full intent.md content here}"
 ```
 
-The question-generator
+The question-generator has zero read access. The intent content comes via the prompt, and its only tool is Write. It cannot explore the codebase.
 
 ## Task 2: Review Questions
 
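The contamination-free handoff described above amounts to simple string composition: the orchestrator reads the file itself and inlines its content, so the read-less sub-agent never needs file access. A tiny illustrative sketch (function and paths are hypothetical, not part of the skill):

```python
# Hypothetical sketch of the inline-content handoff: the orchestrator,
# not the sub-agent, reads intent.md and embeds it in the prompt.

def build_question_prompt(spec_dir: str, intent_text: str) -> str:
    return (
        f"Write research questions to {spec_dir}/questions.md "
        f"based on this intent document:\n\n{intent_text}"
    )
```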
package/src/skills/ship/references/step-3-research.md
CHANGED

@@ -14,16 +14,15 @@ Create these tasks and work through them in order:
 
 ## Task 1: Research Codebase
 
-
+Read `{spec_dir}/questions.md` yourself, then spawn a sub-agent with the questions passed inline in the prompt. The agent never sees the intent document or any files in docs/.
 
 ```
 Agent tool:
 subagent_type: "objective-researcher"
-prompt: "
-model: "sonnet"
+prompt: "Research the codebase to answer these questions. Write your findings to {spec_dir}/research.md.\n\n{paste the full questions.md content here}"
 ```
 
-The objective-researcher has full codebase access (Read, Grep, Glob, Bash) but no knowledge of the feature being built. It only
+The objective-researcher has full codebase access (Read, Grep, Glob, Bash) but no knowledge of the feature being built. It receives only the questions via its prompt — it never reads from docs/.
 
 ## Task 2: Review Research
 
package/src/skills/ship/references/step-6-tasks.md
CHANGED

@@ -1,8 +1,6 @@
 # Task Breakdown
 
-
-
-Break the spec into implementation tasks ordered as tracer bullets — vertical slices through the stack, not horizontal layers.
+Break the spec into implementation tasks using dedicated sub-agents. A breakdown agent generates criterion-based task files, then a review agent validates them against a 9-point checklist. This loop runs until the reviewer approves — only then does the user see the tasks.
 
 Read `{spec_dir}/spec.md` before proceeding.
 
@@ -10,71 +8,54 @@ Read `{spec_dir}/spec.md` before proceeding.
 
 Create these tasks and work through them in order:
 
-1. "
-2. "
-3. "Review tasks with user" — present
+1. "Generate task breakdown" — spawn the task-breakdown agent
+2. "Review task breakdown" — spawn the task-reviewer agent, loop until approved
+3. "Review tasks with user" — present the approved breakdown
 4. "Begin implementation" — proceed to the next phase
 
-## Task 1:
-
-Do NOT create horizontal plans. A horizontal plan looks like:
-
-- Task 1: Create all database models
-- Task 2: Create all service layer functions
-- Task 3: Create all API endpoints
-- Task 4: Create all frontend components
-
-Nothing is testable until the end.
-
-Instead, create **vertical / tracer bullet** ordering:
-
-- Task 1: Wire end-to-end with mock data (create the endpoint, return hardcoded data, render a placeholder in the UI)
-- Task 2: Add real data layer for the first acceptance criterion
-- Task 3: Add real logic for the second acceptance criterion
-- ...
+## Task 1: Generate Breakdown
 
-
+Spawn the task-breakdown agent with the spec directory path:
 
-
-
-
+```
+Agent tool:
+subagent_type: "task-breakdown"
+prompt: "Break the spec at {spec_dir} into implementation task files. Read spec.md, research.md, and design.md for context. Write task files to {spec_dir}/tasks/."
+```
 
-
+## Task 2: Review Loop
 
-
+Spawn the task-reviewer agent to validate the breakdown:
 
-```
-
-
-
-status: pending
-depends_on: []
----
+```
+Agent tool:
+subagent_type: "task-reviewer"
+prompt: "Review the task breakdown at {spec_dir}. Read spec.md and all files in {spec_dir}/tasks/. Run the full checklist and return APPROVED or specific issues."
 ```
 
-
-{Which acceptance criterion this task addresses}
+**If APPROVED**: Move to Task 3.
 
-
-{Which files will be created or modified, with brief description of changes}
+**If issues found**: Re-spawn the task-breakdown agent with the issues:
 
-
-
+```
+Agent tool:
+subagent_type: "task-breakdown"
+prompt: "Fix the following issues in the task breakdown at {spec_dir}. Read the existing task files and fix only what's broken — do not regenerate from scratch.\n\n{paste the reviewer's issue list here}"
+```
 
-
-{Patterns to follow, edge cases, things to watch out for}
+Then re-spawn the task-reviewer. Repeat until APPROVED.
 
-
+If the loop runs more than 3 cycles, present the remaining issues to the user and ask how to proceed.
 
-## Task 3: Review
+## Task 3: Review With User
 
-Present the task breakdown
+Present the approved task breakdown. For each task, show:
 
-- What it does
-- Why it's in this order (the
+- What it does (the criterion)
+- Why it's in this order (the dependency reasoning)
 - How it can be independently verified
 
-Revise based on user feedback.
+Revise based on user feedback. If changes are substantial, re-run the review loop (Task 2).
 
 ## Task 4: Proceed
 
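The new step-6 review loop (generate, review, fix, re-review, escalate after 3 cycles) can be sketched as a small control flow. This is an illustration only, not part of the skill; `spawn_breakdown` and `spawn_reviewer` are hypothetical stand-ins for the two Agent tool calls the skill describes:

```python
# Hypothetical sketch of the Task 2 review loop: alternate the breakdown
# and reviewer agents until APPROVED, escalating to the user after 3 cycles.

def review_loop(spawn_breakdown, spawn_reviewer, max_cycles: int = 3) -> str:
    spawn_breakdown(issues=None)          # initial generation
    for _ in range(max_cycles):
        verdict = spawn_reviewer()        # returns "APPROVED" or an issue list
        if verdict.startswith("APPROVED"):
            return "approved"
        spawn_breakdown(issues=verdict)   # fix mode with the reviewer's issues
    return "escalate-to-user"
```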