@simplysm/sd-claude 13.0.37 → 13.0.39

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,7 @@
  ---
  name: sd-check
  description: Use when verifying code quality via typecheck, lint, and tests - before deployment, PR creation, after code changes, or when type errors, lint violations, or test failures are suspected. Applies to whole project or specific paths.
+ allowed-tools: Bash(node .claude/skills/sd-check/env-check.mjs), Bash(pnpm typecheck), Bash(pnpm lint --fix), Bash(pnpm vitest), Bash(pnpm lint:fix)
  ---

  # sd-check
@@ -16,7 +17,7 @@ Verify code quality through parallel execution of typecheck, lint, and test chec
  When the user asks to verify code, YOU will manually execute **EXACTLY THESE 4 STEPS** (no more, no less):

  **Step 1:** Environment Pre-check (4 checks in parallel)
- **Step 2:** Launch 3 haiku agents in parallel (typecheck, lint, test ONLY)
+ **Step 2:** Launch 3 background Bash commands in parallel (typecheck, lint, test ONLY)
  **Step 3:** Collect results, fix errors in priority order
  **Step 4:** Re-verify (go back to Step 2) until all pass

@@ -36,13 +37,13 @@ When the user asks to verify code, YOU will manually execute **EXACTLY THESE 4 S

  ## Quick Reference

- | Check | Command | Agent Model | Purpose |
- |-------|---------|-------------|---------|
- | Typecheck | `pnpm typecheck [path]` | haiku | Type errors |
- | Lint | `pnpm lint --fix [path]` | haiku | Code quality |
- | Test | `pnpm vitest [path] --run` | haiku | Functionality |
+ | Check | Command | Purpose |
+ |-------|---------|---------|
+ | Typecheck | `pnpm typecheck [path]` | Type errors |
+ | Lint | `pnpm lint --fix [path]` | Code quality |
+ | Test | `pnpm vitest [path] --run` | Functionality |

- **All 3 run in PARALLEL** (separate haiku agents, single message)
+ **All 3 run in PARALLEL** (background Bash commands, single message)

  ## Workflow

@@ -59,42 +60,44 @@ node .claude/skills/sd-check/env-check.mjs

  The script checks: package.json version (v13), pnpm workspace files, typecheck/lint scripts, vitest config.

- ### Step 2: Launch 3 Haiku Agents in Parallel
+ ### Step 2: Launch 3 Background Bash Commands in Parallel

- Launch ALL 3 agents in a **single message** using Task tool.
+ Launch ALL 3 commands in a **single message** using Bash tool with `run_in_background: true`.

  **Replace `[path]` with user's argument, or OMIT if no argument (defaults to full project).**

- **Agent 1 - Typecheck:**
+ **Command 1 - Typecheck:**
  ```
- Task tool:
- subagent_type: Bash
- model: haiku
+ Bash tool:
+ command: "pnpm typecheck [path]"
  description: "Run typecheck"
- prompt: "Run `pnpm typecheck [path]` and return full output. Do NOT analyze or fix - just report raw output."
+ run_in_background: true
+ timeout: 300000
  ```

- **Agent 2 - Lint:**
+ **Command 2 - Lint:**
  ```
- Task tool:
- subagent_type: Bash
- model: haiku
+ Bash tool:
+ command: "pnpm lint --fix [path]"
  description: "Run lint with auto-fix"
- prompt: "Run `pnpm lint --fix [path]` and return full output. Do NOT analyze or fix - just report raw output."
+ run_in_background: true
+ timeout: 300000
  ```

- **Agent 3 - Test:**
+ **Command 3 - Test:**
  ```
- Task tool:
- subagent_type: Bash
- model: haiku
+ Bash tool:
+ command: "pnpm vitest [path] --run"
  description: "Run tests"
- prompt: "Run `pnpm vitest [path] --run` and return full output. Do NOT analyze or fix - just report raw output."
+ run_in_background: true
+ timeout: 300000
  ```

+ Each command returns a `task_id`. Use `TaskOutput` tool to collect results (with `block: true`).
+
  ### Step 3: Collect Results and Fix Errors

- Wait for ALL 3 agents. Collect outputs.
+ Collect ALL 3 outputs using `TaskOutput` tool (with `block: true, timeout: 300000`) in a **single message** (parallel calls).

  **If all checks passed:** Complete (see Completion Criteria).
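+ The launch-all-then-collect shape of Steps 2-3 can be sketched outside the Claude tool API. A hypothetical Node illustration, not the actual Bash/`TaskOutput` tools; `runChecksInParallel` is an invented helper name:

```typescript
// Start every command before awaiting any of them, so all checks run
// concurrently; a failing check does not short-circuit the others,
// mirroring "collect ALL outputs before fixing in priority order".
import { exec } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(exec);

async function runChecksInParallel(commands: string[]): Promise<string[]> {
  const launched = commands.map((cmd) =>
    run(cmd).then(
      () => "pass", // exit code 0
      () => "fail", // non-zero exit or spawn error
    ),
  );
  // Await only after every command has been launched.
  return Promise.all(launched);
}

// e.g. runChecksInParallel(["pnpm typecheck", "pnpm lint --fix", "pnpm vitest --run"])
```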
@@ -120,7 +123,7 @@ Wait for ALL 3 agents. Collect outputs.

  **CRITICAL:** After ANY fix, re-run ALL 3 checks.

- Go back to Step 2 and launch 3 haiku agents again.
+ Go back to Step 2 and launch 3 background Bash commands again.

  **Do NOT assume:** "I only fixed typecheck → skip lint/test". Fixes cascade.

@@ -129,20 +132,20 @@ Repeat Steps 2-4 until all 3 checks pass.
  ## Common Mistakes

  ### ❌ Running checks sequentially
- **Wrong:** Launch agent 1, wait → agent 2, wait → agent 3
- **Right:** Launch ALL 3 in single message (parallel Task calls)
+ **Wrong:** Launch command 1, wait → command 2, wait → command 3
+ **Right:** Launch ALL 3 in single message (parallel background Bash calls)

  ### ❌ Fixing before collecting all results
- **Wrong:** Agent 1 returns error → fix immediately → re-verify
+ **Wrong:** Command 1 returns error → fix immediately → re-verify
  **Right:** Wait for all 3 → collect all errors → fix in priority order → re-verify

  ### ❌ Skipping re-verification after fixes
  **Wrong:** Fix typecheck → assume lint/test still pass
  **Right:** ALWAYS re-run all 3 checks after any fix

- ### ❌ Using wrong model
- **Wrong:** `model: opus` or `model: sonnet` for verification agents
- **Right:** `model: haiku` (cheaper, faster for command execution)
+ ### ❌ Using agents instead of background Bash
+ **Wrong:** Launch haiku/sonnet/opus agents via Task tool to run commands
+ **Right:** Use `Bash` with `run_in_background: true` (no model overhead)

  ### ❌ Including build/dev steps
  **Wrong:** Run `pnpm build` or `pnpm dev` as part of verification
@@ -162,11 +165,12 @@ If you find yourself doing ANY of these, you're violating the skill:

  - Treating sd-check as a command to invoke (`Skill: sd-check Args: ...`)
  - Including build or dev server in verification
- - Running agents sequentially instead of parallel
+ - Running commands sequentially instead of parallel
+ - Using Task/agent instead of background Bash
  - Not re-verifying after every fix
  - Asking user for path when none provided
  - Continuing past 2-3 failed fix attempts without recommending `/sd-debug`
- - Spawning 4+ agents (only 3: typecheck, lint, test)
+ - Spawning 4+ commands (only 3: typecheck, lint, test)

  **All of these violate the skill's core principles. Go back to Step 1 and follow the workflow exactly.**

@@ -187,12 +191,12 @@ If you find yourself doing ANY of these, you're violating the skill:
  |--------|---------|
  | "I'm following the spirit, not the letter" | Violating the letter IS violating the spirit - follow EXACTLY |
  | "I'll create a better workflow with teams/tasks" | Follow the 4 steps EXACTLY - no teams, no task lists |
- | "I'll split tests into multiple agents" | Only 3 agents total: typecheck, lint, test |
- | "Stratified parallel is faster" | Run ALL 3 in parallel via separate agents - truly parallel |
+ | "I'll split tests into multiple commands" | Only 3 commands total: typecheck, lint, test |
+ | "Agents can analyze output better" | Background Bash is sufficient - analysis happens in main context |
  | "I only fixed lint, typecheck still passes" | Always re-verify ALL - fixes can cascade |
  | "Build is part of verification" | Build is deployment, not verification - NEVER include it |
  | "Let me ask which path to check" | Default to full project - explicit behavior |
  | "I'll try one more fix approach" | After 2-3 attempts → recommend /sd-debug |
  | "Tests are independent of types" | Type fixes affect tests - always re-run ALL |
  | "I'll invoke sd-check skill with args" | sd-check is EXACT STEPS, not a command |
- | "4 agents: typecheck, lint, test, build" | Only 3 agents - build is FORBIDDEN |
+ | "4 commands: typecheck, lint, test, build" | Only 3 commands - build is FORBIDDEN |
@@ -0,0 +1,88 @@
+ ---
+ name: sd-discuss
+ description: "Use when evaluating code design decisions against industry standards and project conventions - class vs functional, pattern choices, architecture trade-offs, technology selection"
+ model: opus
+ ---
+
+ # Standards-Based Technical Discussion
+
+ ## Overview
+
+ Facilitate balanced, evidence-based technical discussions by researching industry standards and project conventions BEFORE forming opinions.
+
+ ## The Process
+
+ ### 1. Understand the Question
+
+ Ask ONE question to understand the user's motivation:
+ - What problem triggered this question?
+ - What constraints matter most?
+
+ ### 2. Research Before Opinions
+
+ **MANDATORY before forming any opinion:**
+
+ **Project research** (Read/Grep tools):
+ - Read the actual source code related to the question
+ - Check CLAUDE.md for project conventions
+ - Find similar patterns in the codebase
+
+ **Industry research** (WebSearch tool):
+ - Search for current community consensus and trends
+ - Check relevant specifications (TC39, W3C, RFCs)
+ - Find recent benchmarks, migration case studies, or survey data
+ - Check how popular libraries/frameworks approach this
+
+ ### 3. Present Balanced Arguments
+
+ Present each option as if you were **advocating FOR it**. Equal depth, equal effort.
+
+ For each option:
+ - **Industry support**: Libraries, standards, and projects that use this approach (with sources)
+ - **Technical advantages**: Concrete benefits with evidence
+ - **When this wins**: Specific scenarios where this is clearly better
+
+ ### 4. Project Context Analysis
+
+ - How does the current codebase align with each option?
+ - What would migration cost look like?
+ - What existing patterns favor one approach?
+
+ ### 5. Decision Criteria
+
+ Provide a decision matrix — NOT a single recommendation:
+
+ | Criteria | Option A | Option B |
+ |----------|----------|----------|
+ | Industry alignment | ... | ... |
+ | Project consistency | ... | ... |
+ | Migration cost | ... | ... |
+ | Long-term trend | ... | ... |
+
+ Then ask: "Which criteria matter most to you?"
+
+ ## Key Rules
+
+ 1. **Research FIRST, opinion LAST** — No opinions before WebSearch + project code reading
+ 2. **Equal advocacy** — Each option gets the same depth of analysis
+ 3. **Evidence over intuition** — Cite sources, show data
+ 4. **No "obvious" answers** — If it were obvious, there'd be no discussion
+ 5. **Interactive** — Ask questions, don't monologue
+ 6. **Project-aware** — Ground the discussion in the actual codebase
+
+ ## Red Flags - STOP and Research More
+
+ - Presenting one side with more depth than the other
+ - Claiming "industry standard" without citing sources
+ - Recommending without checking the project's current patterns
+ - Giving a conclusion without asking user's priorities
+
+ ## Common Mistakes
+
+ | Mistake | Fix |
+ |---------|-----|
+ | Jump to recommendation | Research first, present balanced options |
+ | "Industry standard is X" without sources | WebSearch for actual data and citations |
+ | Ignoring project context | Read codebase patterns before discussing |
+ | Monologue instead of discussion | Ask about user's priorities and constraints |
+ | Treating "modern" as "better" | Evaluate on actual trade-offs, not trends |
@@ -4,216 +4,263 @@ description: Use when executing implementation plans with independent tasks in t
  model: opus
  ---

- # Plan Execution
+ # Parallel Plan Execution

- Execute plan tasks with the right-sized process: direct for small plans, parallel agents for large plans.
+ Execute plan tasks via parallel Task agents with dependency-aware scheduling.

- **Core principle:** Right-size the process to the plan. Small plans get direct execution; large plans get parallel agents with formal reviews.
+ **Core principle:** Dependency analysis + parallel Task agents + nested parallel reviews = maximum throughput

  ## When to Use

- Have an implementation plan (from sd-plan or similar)? Use this skill.
- No plan? Use sd-plan or sd-brainstorm first.
+ ```dot
+ digraph when_to_use {
+ "Have implementation plan?" [shape=diamond];
+ "sd-plan-dev" [shape=box];
+ "Manual execution or brainstorm first" [shape=box];
+
+ "Have implementation plan?" -> "sd-plan-dev" [label="yes"];
+ "Have implementation plan?" -> "Manual execution or brainstorm first" [label="no"];
+ }
+ ```
+
+ ## Execution Method

- ## Mode Selection
+ All execution uses `Task(general-purpose)` for parallel execution.

- **Default is Agent Mode.** Only use Direct Mode when ALL "Small" criteria are met.
+ - **task agent**: `Task(general-purpose)` implements one task, launches sub-Tasks for review, fixes issues
+ - **spec reviewer**: `Task(general-purpose, model: "opus")` — sub-Task launched by task agent (read-only)
+ - **quality reviewer**: `Task(general-purpose, model: "opus")` — sub-Task launched by task agent (read-only)
+
+ Independent tasks run as **parallel Task calls in a single message**. Within each task agent, spec and quality reviews also run as **parallel sub-Task calls**.
+
+ ## The Process

  ```dot
- digraph mode {
- "Small?" [shape=diamond];
- "Direct Mode\n(no agents)" [shape=box];
- "Agent Mode\n(parallel agents)" [shape=box, style=bold];
+ digraph process {
+ rankdir=TB;
+
+ "Read plan, extract tasks, create TaskCreate" [shape=box];
+ "Dependency analysis: identify files per task, build graph, group into batches" [shape=box];

- "Small?" -> "Direct Mode\n(no agents)" [label="yes (all 3 criteria)"];
- "Small?" -> "Agent Mode\n(parallel agents)" [label="no (default)"];
+ subgraph cluster_batch {
+ label="Per Batch (independent tasks)";
+
+ subgraph cluster_parallel_tasks {
+ label="Parallel Task calls (single message)";
+ style=dashed;
+
+ subgraph cluster_task_agent {
+ label="Each Task Agent";
+ "Implement the task" [shape=box];
+ "Questions?" [shape=diamond];
+ "Return questions to orchestrator" [shape=box];
+ "Re-launch with answers" [shape=box];
+
+ subgraph cluster_nested_review {
+ label="Parallel sub-Task calls";
+ style=dashed;
+ "sub-Task: spec reviewer" [shape=box];
+ "sub-Task: quality reviewer" [shape=box];
+ }
+
+ "Any issues?" [shape=diamond];
+ "Fix all issues" [shape=box];
+ "Re-review failed aspects (parallel sub-Task)" [shape=box];
+ "Report results" [shape=box];
+ }
+ }
+ }
+
+ "More batches?" [shape=diamond];
+ "Task: final review for entire implementation" [shape=box];
+ "Done" [shape=ellipse];
+
+ "Read plan, extract tasks, create TaskCreate" -> "Dependency analysis: identify files per task, build graph, group into batches";
+ "Dependency analysis: identify files per task, build graph, group into batches" -> "Implement the task";
+ "Implement the task" -> "Questions?";
+ "Questions?" -> "Return questions to orchestrator" [label="yes"];
+ "Return questions to orchestrator" -> "Re-launch with answers";
+ "Re-launch with answers" -> "Implement the task";
+ "Questions?" -> "sub-Task: spec reviewer" [label="no"];
+ "Questions?" -> "sub-Task: quality reviewer" [label="no"];
+ "sub-Task: spec reviewer" -> "Any issues?";
+ "sub-Task: quality reviewer" -> "Any issues?";
+ "Any issues?" -> "Fix all issues" [label="yes"];
+ "Fix all issues" -> "Re-review failed aspects (parallel sub-Task)";
+ "Re-review failed aspects (parallel sub-Task)" -> "Any issues?";
+ "Any issues?" -> "Report results" [label="no"];
+ "Report results" -> "More batches?";
+ "More batches?" -> "Implement the task" [label="yes, next batch"];
+ "More batches?" -> "Task: final review for entire implementation" [label="no"];
+ "Task: final review for entire implementation" -> "Done";
  }
  ```

- **"Small" criteria (ALL must be true):**
- 1. Tasks ≤ 2
- 2. Source files ≤ 3 (test files excluded, count across all tasks)
- 3. Every task is a simple addition or modification — NOT refactoring, structural changes, or cross-cutting concerns
+ ## Dependency Analysis

- **When in doubt, choose Agent Mode.**
+ Before launching tasks, analyze the plan to build a dependency graph:

- ---
+ 1. **For each task**: identify which files/modules it will create or modify
+ 2. **Find overlaps**: tasks touching the same files depend on each other
+ 3. **Respect logical dependencies**: if task B uses what task A creates, B depends on A
+ 4. **Group into batches**: tasks with no dependencies between them form one batch

- ## Direct Mode
+ ```
+ Example: 5 tasks
+ Task 1: creates utils/validator.ts
+ Task 2: creates hooks/useAuth.ts
+ Task 3: creates components/Login.tsx (uses hooks/useAuth.ts)
+ Task 4: modifies utils/validator.ts
+ Task 5: creates api/endpoints.ts
+
+ Batch 1: [Task 1, Task 2, Task 5] — independent, parallel
+ Batch 2: [Task 3, Task 4] — Task 3 depends on Task 2; Task 4 depends on Task 1
+ ```
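+ The grouping rule above (a task is ready once every dependency landed in an earlier batch) can be sketched as a small pure function. Task names mirror the hypothetical 5-task example; `groupIntoBatches` is an invented helper, not part of the skill. Note this greedy scheme puts Task 3 and Task 4 in one batch, since they are independent of each other:

```typescript
// Greedy wave scheduling: each batch takes every remaining task whose
// dependencies have all completed in earlier batches.
type PlanTask = { name: string; deps: string[] };

function groupIntoBatches(tasks: PlanTask[]): string[][] {
  const batches: string[][] = [];
  const done = new Set<string>();
  let remaining = [...tasks];
  while (remaining.length > 0) {
    const ready = remaining.filter((t) => t.deps.every((d) => done.has(d)));
    // No task is ready but some remain: the dependency graph has a cycle.
    if (ready.length === 0) throw new Error("cycle in task dependencies");
    batches.push(ready.map((t) => t.name));
    ready.forEach((t) => done.add(t.name));
    remaining = remaining.filter((t) => !ready.includes(t));
  }
  return batches;
}

const batches = groupIntoBatches([
  { name: "Task 1", deps: [] },
  { name: "Task 2", deps: [] },
  { name: "Task 3", deps: ["Task 2"] },
  { name: "Task 4", deps: ["Task 1"] },
  { name: "Task 5", deps: [] },
]);
// → [["Task 1", "Task 2", "Task 5"], ["Task 3", "Task 4"]]
```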

- No agents, no batching -- implement directly in main context.
+ ## Task Agent Prompt

- 1. Read plan, understand all tasks
- 2. Implement in dependency order. For each task:
- - Implement exactly what the spec says
- - Write tests and run them
- - Ensure new public APIs are exported in the package's `index.ts`
- - Self-review: spec complete? overbuilt? clean?
- - Fix issues found
- 3. After all tasks: `pnpm typecheck` + `pnpm lint` + `pnpm vitest` on affected packages
- 4. Done
+ Each task agent receives a prompt combining implementation + review instructions:

- **Escalation:** Switch to Agent Mode immediately if any of these occur during execution:
- - A task turns out to touch more files than expected
- - You find yourself doing structural changes or refactoring
- - The total change set exceeds ~100 lines of source code
+ ```
+ You are implementing and reviewing Task N: [task name]

- ---
+ ## Task Description

- ## Agent Mode
+ [FULL TEXT of task from plan]

- Dependency-aware batching with parallel Task agents and formal reviews.
+ ## Context

- ### Architecture
+ [Scene-setting: where this fits, dependencies, architectural context]

- | Role | How | Model | Job |
- |------|-----|-------|-----|
- | Orchestrator | (you) | sonnet | Deps, prompts, lifecycle |
- | Implementer | `Task(general-purpose)` | default | Implement one task |
- | Spec reviewer | `Task(general-purpose)` | opus | Verify spec match |
- | Quality reviewer | `Task(general-purpose)` | opus | Verify code quality |
- | Final reviewer | `Task(general-purpose)` | opus | Cross-task integration |
+ ## Your Job

- ### Process
+ 1. Implement exactly what the task specifies
+ 2. Write tests (following TDD if task says to)
+ 3. Verify implementation works
+ 4. Self-review: did I implement everything? Did I over-build?
+ 5. Launch TWO parallel sub-Tasks (spec review + quality review):
+ - Sub-Task 1: spec reviewer — send spec-reviewer-prompt.md based prompt
+ - Sub-Task 2: quality reviewer — send code-quality-reviewer-prompt.md based prompt
+ 6. If either reviewer finds issues → fix them → re-review only failed aspects (parallel sub-Tasks again)
+ 7. Repeat until both reviewers approve
+ 8. Report back with: what you implemented, test results, files changed, review outcomes

- ```dot
- digraph agent_mode {
- rankdir=TB;
+ If you have questions about requirements — return them immediately WITHOUT implementing. Don't guess.

- setup [label="1. Read plan, extract tasks, TaskCreate"];
- deps [label="2. Dependency analysis → batch grouping"];
+ Work from: [directory]
+ ```

- subgraph cluster_batch {
- label="3. Per Batch";
- implement [label="Launch parallel implementer Tasks"];
- questions [label="Questions?" shape=diamond];
- ask [label="Ask user → re-launch with answers"];
- reviews [label="Launch parallel reviews\n(spec + quality per task)"];
- issues [label="Issues?" shape=diamond];
- fix [label="Re-launch implementer with fix list\n→ re-review (max 3 cycles)"];
- batch_done [label="Batch complete"];
- }
+ ## Prompt Templates
+
+ - `./implementer-prompt.md` — base implementer instructions (referenced by task agent)
+ - `./spec-reviewer-prompt.md` — spec compliance review sub-Task prompt
+ - `./code-quality-reviewer-prompt.md` — code quality review sub-Task prompt
+
+ ## Example Workflow

- more [label="More batches?" shape=diamond];
- final [label="4. Final review (opus)"];
- done [label="5. Done" shape=ellipse];
-
- setup -> deps -> implement;
- implement -> questions;
- questions -> ask [label="yes"];
- ask -> implement;
- questions -> reviews [label="no"];
- reviews -> issues;
- issues -> fix [label="yes"];
- fix -> issues;
- issues -> batch_done [label="no"];
- batch_done -> more;
- more -> implement [label="yes"];
- more -> final [label="no"];
- final -> done;
- }
  ```
+ You: Using sd-plan-dev to execute this plan.

- ### Dependency Analysis
+ [Read plan file: docs/plans/feature-plan.md]
+ [Extract all 5 tasks with full text + create TaskCreate]

- 1. Per task: identify files created/modified
- 2. File overlap → dependent (same batch is prohibited)
- 3. Logical dependency (B uses what A creates) → dependent
- 4. No dependencies between each other → same batch (parallel)
+ [Dependency analysis]
+ Task 1 (validator): no deps
+ Task 2 (auth hook): no deps
+ Task 3 (login component): depends on Task 2
+ Task 4 (validator update): depends on Task 1
+ Task 5 (api endpoints): no deps

- ```
- Example: Task 1 creates types.ts, Task 2 uses types.ts, Task 3 independent
- Batch 1: [Task 1, Task 3] ← parallel
- Batch 2: [Task 2] ← depends on Task 1
- ```
+ Batch 1: [Task 1, Task 2, Task 5]
+ Batch 2: [Task 3, Task 4]

- ### Prompt Construction
+ --- Batch 1: parallel ---

- **Implementer:** Use `./implementer-prompt.md` template. Fill in:
- - Task name and full description (paste full text, NOT file reference)
- - Context: where it fits, what depends on it
- - Cross-batch context (for batch 2+): what previous batches produced, files created
- - Working directory
+ [3 parallel Task calls in single message]

- **Reviews:** After **all** implementers in the batch report back (including question resolution), launch in parallel:
- - Spec review: `./spec-reviewer-prompt.md` -- fill in requirements + implementer report
- - Quality review: `./code-quality-reviewer-prompt.md` -- fill in report + changed files
- - Launch both in a single message (2 Task calls per completed task)
+ Task 1 agent:
+ - Implemented validator, tests 5/5 pass
+ - Parallel sub-Tasks: spec ✅, quality ✅
+ → Done

- **Final review:** `./final-review-prompt.md` -- fill in the design doc that corresponds to the current plan (match by topic/date in `docs/plans/`, if exists) + full plan + all task results
+ Task 2 agent:
+ - "Should auth use JWT or session?" (question returned)

- ### Fix-Review Cycle
+ Task 5 agent:
+ - Implemented endpoints, tests 3/3 pass
+ - Parallel sub-Tasks: spec ✅, quality: Issues (magic number)
+ - Fixed magic number
+ - Parallel re-review: quality ✅
+ → Done

- When reviewers find issues:
- 1. Compile all issues from both reviewers
- 2. Re-launch the implementer Task with original prompt + fix list
- 3. Re-review only failed aspects (re-launch only the reviewer that found issues)
- 4. **Max 3 cycles.** After that, report remaining issues to user for decision.
+ [Answer Task 2 question: "JWT"]
+ [Re-launch Task 2 agent with answer]

- ### Question Handling
+ Task 2 agent:
+ - Implemented auth hook with JWT, tests 4/4 pass
+ - Parallel sub-Tasks: spec ✅, quality ✅
+ → Done

- If an implementer returns questions (output contains `## Questions`):
- 1. Present questions to the user
- 2. Re-launch that implementer Task with original prompt + answers appended
- 3. Other completed tasks in the batch are unaffected
+ [Batch 1 complete]

- ### Commit Strategy
+ --- Batch 2: parallel ---

- - Agents work on filesystem directly -- no commits during implementation
- - After final review passes, commit (use sd-commit or manual)
- - Do NOT commit between batches
+ [2 parallel Task calls in single message]

- ## Red Flags
+ Task 3 agent:
+ - Implemented login component using Task 2's auth hook
+ - Parallel sub-Tasks: spec ❌ (missing error state), quality ✅
+ - Fixed error state
+ - spec re-review ✅
+ → Done

- - Skip reviews in Agent Mode
- - Put file-overlapping tasks in same parallel batch
- - Skip dependency analysis
- - Send plan file path to agents instead of full text
- - Accept "close enough" on spec compliance
- - Exceed 3 fix-review cycles without user input
- - Proceed to next batch before current batch fully passes
+ Task 4 agent:
+ - Updated validator with new rules
+ - Parallel sub-Tasks: spec ✅, quality ✅
+ → Done

- ## Error Handling
+ [Batch 2 complete]

- - Implementer fails completely → retry once, then report to user
- - Reviewer crashes → re-launch once
- - Fix-review loop at max → report to user for decision
- - File conflicts between parallel agents → stop batch, report to user
+ --- Final ---

- ## Example (Agent Mode)
+ [Task final review for entire implementation]
+ Final reviewer: All requirements met, ready to merge

+ Done!
  ```
- [Read plan: 5 tasks]
- [Dep analysis → Batch 1: [T1, T2], Batch 2: [T3, T4, T5]]
-
- --- Batch 1 ---
- [2 parallel implementer Tasks]
- T1: implemented, tests pass → report
- T2: "Should this use JWT?" → questions returned
-
- [2 parallel reviews for T1]
- Spec ✅, Quality ⚠️ magic number → CHANGES_NEEDED
+ ## Red Flags

- [Ask user T2's question → "JWT"]
- [Re-launch T2 with answer] → implemented → report
- [Fix T1 magic number → re-review quality] → ✅
- [2 parallel reviews for T2] → Spec ✅, Quality ✅
+ **Never:**
+ - Start implementation on main/master without explicit user consent
+ - Skip reviews (spec compliance OR code quality)
+ - Proceed with unfixed issues
+ - Put tasks with file overlap in the same parallel batch
+ - Skip dependency analysis
+ - Make task agent read plan file directly (provide full text instead)
+ - Skip scene-setting context
+ - Accept "close enough" on spec compliance
+ - Skip review loops (issue found → fix → re-review)

- --- Batch 2 (with batch 1 context) ---
- [3 parallel implementer Tasks] → all pass
- [6 parallel reviews] → all pass
+ **If task agent returns questions:**
+ - Answer clearly and completely
+ - Re-launch that agent with answers included
+ - Other parallel agents continue unaffected

- --- Final review (opus) ---
- Integration check → Done!
- ```
+ **If reviewers find issues:**
+ - Task agent fixes all issues from both reviewers at once
+ - Re-review only the failed aspects (parallel sub-Tasks)
+ - Repeat until both approved

  ## After Completion

- When all tasks and final review pass:
- - If in a worktree (`.worktrees/`) → guide user to `/sd-worktree merge`
+ When all tasks and final review are done, if the current working directory is inside a worktree (`.worktrees/`), guide the user to:
+ - `/sd-worktree merge` — Merge the worktree branch back into main

  ## Integration

- - **sd-plan** -- creates the plan this skill executes
- - **sd-tdd** -- implementers follow TDD when plan specifies it
- - **sd-worktree** -- branch isolation for worktree-based workflows
+ **Related skills:**
+ - **sd-plan** — creates the plan this skill executes
+ - **sd-tdd** — task agents follow TDD
+ - **sd-worktree** — branch isolation for worktree-based workflows
@@ -31,6 +31,7 @@ Analyze user request from ARGUMENTS, select the best matching sd-* skill or agen
  | `sd-check` | Verify code — typecheck, lint, tests |
  | `sd-commit` | Create a git commit |
  | `sd-readme` | Update a package README.md |
+ | `sd-discuss` | Evaluate code design decisions against industry standards and project conventions |
  | `sd-api-name-review` | Review public API naming consistency |
  | `sd-worktree` | Start new work in branch isolation |
  | `sd-skill` | Create or edit skills |
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@simplysm/sd-claude",
-   "version": "13.0.37",
+   "version": "13.0.39",
    "description": "Simplysm Claude Code CLI — asset installer and cross-platform npx wrapper",
    "author": "김석래",
    "license": "Apache-2.0",
@@ -12,9 +12,6 @@
    "type": "module",
    "main": "./dist/index.js",
    "types": "./dist/index.d.ts",
-   "bin": {
-     "sd-claude": "./dist/sd-claude.js"
-   },
    "files": [
      "dist",
      "src",
@@ -30,5 +27,8 @@
    },
    "scripts": {
      "postinstall": "node scripts/postinstall.mjs"
+   },
+   "bin": {
+     "sd-claude": "./dist/sd-claude.js"
    }
  }