@simplysm/sd-claude 13.0.44 → 13.0.47

@@ -16,3 +16,8 @@
  - Before modifying functions/classes: Read the file to understand existing code style
  - When unsure about API/method usage: Check signatures in source code
  - **If confidence is low, ask the user instead of writing code**
+
+ ## Memory Policy
+
+ - **Do NOT use auto memory** (`~/.claude/projects/.../memory/`). It is environment-specific and does not persist across machines.
+ - All persistent knowledge belongs in `.claude/rules/` or project docs (committed to git).
@@ -39,48 +39,13 @@ Start by understanding the current project context, then ask questions one at a
  - Write the validated design to `docs/plans/YYYY-MM-DD-<topic>-design.md`
  - Commit the design document to git

- **Next Steps Guide:**
+ **Next Step:**

- Present the following two workflow paths so the user can see the full process and choose.
- Display the guide in the **user's configured language** (follow the language settings from CLAUDE.md or system instructions).
+ Display in the **system's configured language**:

- Before presenting, check git status for uncommitted changes. If there are any uncommitted changes (staged, unstaged, or untracked files), append the warning line (shown below) at the end of the guide block.
+ **"Design complete and saved to `docs/plans/<filename>.md`. Next step: `/sd-plan` to create the implementation plan."**

- ```
- Design complete! Here's how to proceed:
-
- --- Path A: With branch isolation (recommended for features/large changes) ---
-
- 1. /sd-worktree add <name> — Create a worktree branch
- 2. /sd-plan — Break into detailed tasks
- 3. /sd-plan-dev — Execute tasks in parallel (includes TDD + review)
- 4. /sd-check — Verify All
- 5. /sd-commit — Commit
- 6. /sd-worktree merge — Merge back to main
- 7. /sd-worktree clean — Remove worktree
-
- --- Path B: Direct on current branch (quick fixes/small changes) ---
-
- 1. /sd-plan — Break into detailed tasks
- 2. /sd-plan-dev — Execute tasks in parallel (includes TDD + review)
- 3. /sd-check — Verify All
- 4. /sd-commit — Commit
-
- You can start from any step or skip steps as needed.
-
- 💡 "Path A: yolo" or "Path B: yolo" to auto-run all steps
-
- ⚠️ You have uncommitted changes. To use Path A, run `/sd-commit all` first.
- ```
-
- - The last `⚠️` line is only shown when uncommitted changes exist. Omit it when working tree is clean.
-
- - After presenting both paths, **recommend one** based on the design's scope:
- - Path A recommended: new features, multi-file changes, architectural changes, anything that benefits from isolation
- - Path B recommended: small bug fixes, single-file changes, config tweaks, minor adjustments
- - Briefly explain why (1 sentence)
- - Do NOT auto-proceed to any step. Present the overview with recommendation and wait for the user's choice.
- - **Yolo mode**: If the user responds with "Path A: yolo" or "Path B: yolo" (or similar intent like "A yolo", "B 자동"), execute all steps of the chosen path sequentially without stopping between steps.
+ Do NOT auto-proceed. Wait for the user's explicit instruction.

  ## Key Principles

@@ -1,7 +1,7 @@
  ---
  name: sd-check
  description: Use when verifying code quality via typecheck, lint, and tests - before deployment, PR creation, after code changes, or when type errors, lint violations, or test failures are suspected. Applies to whole project or specific paths.
- allowed-tools: Bash(node .claude/skills/sd-check/env-check.mjs), Bash(pnpm typecheck), Bash(pnpm lint --fix), Bash(pnpm vitest), Bash(pnpm lint:fix)
+ allowed-tools: Bash(node .claude/skills/sd-check/run-checks.mjs), Bash(pnpm typecheck), Bash(pnpm lint --fix), Bash(pnpm vitest)
  ---

  # sd-check
@@ -14,19 +14,18 @@ Verify code quality through parallel execution of typecheck, lint, and test chec

  **Foundational Principle:** Violating the letter of these steps is violating the spirit of verification.

- When the user asks to verify code, YOU will manually execute **EXACTLY THESE 4 STEPS** (no more, no less):
+ When the user asks to verify code, YOU will manually execute **EXACTLY THESE 3 STEPS** (no more, no less):

- **Step 1:** Environment Pre-check (4 checks in parallel)
- **Step 2:** Launch 3 background Bash commands in parallel (typecheck, lint, test ONLY)
- **Step 3:** Collect results, fix errors in priority order
- **Step 4:** Re-verify (go back to Step 2) until all pass
+ **Step 1:** Run the unified check script (single background command)
+ **Step 2:** Collect results, fix errors in priority order
+ **Step 3:** Re-verify (go back to Step 1) until all pass

  **Core principle:** Always re-run ALL checks after any fix - changes can cascade.

  **CRITICAL:**
  - This skill verifies ONLY typecheck, lint, and test
  - **NO BUILD. NO DEV SERVER. NO TEAMS. NO TASK LISTS.**
- - Do NOT create your own "better" workflow - follow these 4 steps EXACTLY
+ - Do NOT create your own "better" workflow - follow these 3 steps EXACTLY

  ## Usage

@@ -37,71 +36,48 @@ When the user asks to verify code, YOU will manually execute **EXACTLY THESE 4 S

  ## Quick Reference

- | Check | Command | Purpose |
- |-------|---------|---------|
- | Typecheck | `pnpm typecheck [path]` | Type errors |
- | Lint | `pnpm lint --fix [path]` | Code quality |
- | Test | `pnpm vitest [path] --run` | Functionality |
+ | Check | Purpose |
+ |-------|---------|
+ | Typecheck | Type errors |
+ | Lint | Code quality (with auto-fix) |
+ | Test | Functionality |

- **All 3 run in PARALLEL** (background Bash commands, single message)
+ **All 3 run in PARALLEL** inside the unified script (single command).

  ## Workflow

- ### Step 1: Environment Pre-check
+ ### Step 1: Run Unified Check Script

- Before ANY verification, run the environment check script:
+ Launch the single check command in background:

- ```bash
- node .claude/skills/sd-check/env-check.mjs
- ```
-
- - **Exit 0 + "Environment OK"**: Proceed to Step 2
- - **Exit 1 + "FAIL"**: STOP, report the listed errors to user
-
- The script checks: package.json version (v13), pnpm workspace files, typecheck/lint scripts, vitest config.
-
- ### Step 2: Launch 3 Background Bash Commands in Parallel
-
- Launch ALL 3 commands in a **single message** using Bash tool with `run_in_background: true`.
-
- **Replace `[path]` with user's argument, or OMIT if no argument (defaults to full project).**
-
- **Command 1 - Typecheck:**
  ```
  Bash tool:
- command: "pnpm typecheck [path]"
- description: "Run typecheck"
- run_in_background: true
- timeout: 300000
+ command: "node .claude/skills/sd-check/run-checks.mjs [path]"
+ description: "Run typecheck + lint + test"
+ timeout: 600000
  ```

- **Command 2 - Lint:**
- ```
- Bash tool:
- command: "pnpm lint --fix [path]"
- description: "Run lint with auto-fix"
- run_in_background: true
- timeout: 300000
- ```
+ Replace `[path]` with user's argument, or OMIT if no argument (defaults to full project).

- **Command 3 - Test:**
- ```
- Bash tool:
- command: "pnpm vitest [path] --run"
- description: "Run tests"
- run_in_background: true
- timeout: 300000
+ **Output format:**
  ```
+ ====== TYPECHECK: PASS ======
+ [last few lines]

- Each command returns a `task_id`. Use `TaskOutput` tool to collect results (with `block: true`).
+ ====== LINT: FAIL ======
+ [full error output]

- ### Step 3: Collect Results and Fix Errors
+ ====== TEST: PASS (47 tests) ======
+ [last few lines]

- Collect ALL 3 outputs using `TaskOutput` tool (with `block: true, timeout: 300000`) in a **single message** (parallel calls).
+ ====== SUMMARY: 1/3 FAILED (lint) ======
+ ```

- **If all checks passed:** Complete (see Completion Criteria).
+ - **ENV-CHECK FAIL**: Environment prerequisites missing. Report to user and STOP.
+ - **SUMMARY: ALL PASSED**: Complete (see Completion Criteria).
+ - **SUMMARY: N/3 FAILED**: Proceed to Step 2.
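To illustrate how a caller could branch on the script's summary line, here is a small sketch (illustration only — the real script is Node, the `SUMMARY` format is taken from the example output above, and `classify` is a hypothetical helper, not part of the package):

```python
import re

def classify(summary_line: str) -> str:
    """Map a run-checks SUMMARY line to the next workflow action."""
    if "ENV-CHECK FAIL" in summary_line:
        return "stop"          # prerequisites missing: report to user and stop
    if "ALL PASSED" in summary_line:
        return "complete"      # done: cite the evidence and finish
    m = re.search(r"(\d+)/3 FAILED \(([^)]*)\)", summary_line)
    if m:
        return f"fix:{m.group(2)}"  # proceed to Step 2 with the failed checks
    return "unknown"

print(classify("====== SUMMARY: ALL PASSED ======"))         # complete
print(classify("====== SUMMARY: 1/3 FAILED (lint) ======"))  # fix:lint
```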

- **If any errors found:**
+ ### Step 2: Fix Errors

  1. **Analyze by priority:** Typecheck → Lint → Test
  - Typecheck errors may cause lint/test errors (cascade)
@@ -117,81 +93,87 @@ Collect ALL 3 outputs using `TaskOutput` tool (with `block: true, timeout: 30000
  - If source bug: Fix source
  - **If root cause unclear OR 2-3 fix attempts failed:** Recommend `/sd-debug`

- 4. **Proceed to Step 4**
+ 4. **Proceed to Step 3**

- ### Step 4: Re-verify (Loop Until All Pass)
+ ### Step 3: Re-verify (Loop Until All Pass)

  **CRITICAL:** After ANY fix, re-run ALL 3 checks.

- Go back to Step 2 and launch 3 background Bash commands again.
+ Go back to Step 1 and run the unified script again.

  **Do NOT assume:** "I only fixed typecheck → skip lint/test". Fixes cascade.

- Repeat Steps 2-4 until all 3 checks pass.
+ Repeat Steps 1-3 until all 3 checks pass.

  ## Common Mistakes

- ### Running checks sequentially
- **Wrong:** Launch command 1, wait command 2, wait command 3
- **Right:** Launch ALL 3 in single message (parallel background Bash calls)
-
- ### ❌ Fixing before collecting all results
- **Wrong:** Command 1 returns error → fix immediately → re-verify
- **Right:** Wait for all 3 → collect all errors → fix in priority order → re-verify
+ ### Fixing before reading full output
+ **Wrong:** See first error in output → fix immediately
+ **Right:** Read full SUMMARY → collect all errors → fix in priority order → re-verify

- ### Skipping re-verification after fixes
+ ### Skipping re-verification after fixes
  **Wrong:** Fix typecheck → assume lint/test still pass
- **Right:** ALWAYS re-run all 3 checks after any fix
+ **Right:** ALWAYS re-run the script after any fix

- ### Using agents instead of background Bash
- **Wrong:** Launch haiku/sonnet/opus agents via Task tool to run commands
+ ### Using agents instead of background Bash
+ **Wrong:** Launch haiku/sonnet/opus agents via Task tool to run the script
  **Right:** Use `Bash` with `run_in_background: true` (no model overhead)

- ### Including build/dev steps
+ ### Including build/dev steps
  **Wrong:** Run `pnpm build` or `pnpm dev` as part of verification
  **Right:** sd-check is ONLY typecheck, lint, test (no build, no dev)

- ### Asking user for path
+ ### Asking user for path
  **Wrong:** No path provided → ask "which package?"
- **Right:** No path → verify entire project (omit path in commands)
+ **Right:** No path → verify entire project (omit path in command)

- ### Infinite fix loop
+ ### Infinite fix loop
  **Wrong:** Keep trying same fix when tests fail repeatedly
  **Right:** After 2-3 failed attempts → recommend `/sd-debug`

+ ### Claiming success without fresh evidence
+ **Wrong:** "All checks should pass now" or "Great, that fixes it!"
+ **Right:** Run script → read output → cite results (e.g., "0 errors, 47 tests passed") → THEN claim
+
  ## Red Flags - STOP and Follow Workflow

  If you find yourself doing ANY of these, you're violating the skill:

  - Treating sd-check as a command to invoke (`Skill: sd-check Args: ...`)
  - Including build or dev server in verification
- - Running commands sequentially instead of parallel
  - Using Task/agent instead of background Bash
  - Not re-verifying after every fix
  - Asking user for path when none provided
  - Continuing past 2-3 failed fix attempts without recommending `/sd-debug`
- - Spawning 4+ commands (only 3: typecheck, lint, test)
+ - Expressing satisfaction ("Great!", "Perfect!", "Done!") before all 3 checks pass
+ - Using vague language: "should work", "probably passes", "seems fine"
+ - Claiming completion based on a previous run, not the current one

  **All of these violate the skill's core principles. Go back to Step 1 and follow the workflow exactly.**

  ## Completion Criteria

  **Complete when:**
- - All 3 checks (typecheck, lint, test) pass without errors
- - Report: "All checks passed - code verified"
+ - All 3 checks (typecheck, lint, test) pass without errors **in the most recent run**
+ - Report with evidence: "All checks passed" + cite actual output (e.g., "0 errors", "47 tests passed")
+
+ **Fresh evidence required:**
+ - "Passes" = you ran it THIS iteration and saw it pass in the output
+ - Previous run results are NOT evidence for current state
+ - Confidence is NOT evidence — run the check

  **Do NOT complete if:**
  - Any check has errors
  - Haven't re-verified after a fix
  - Environment pre-checks failed
+ - Using "should", "probably", or "seems to" instead of actual output

  ## Rationalization Table

  | Excuse | Reality |
  |--------|---------|
  | "I'm following the spirit, not the letter" | Violating the letter IS violating the spirit - follow EXACTLY |
- | "I'll create a better workflow with teams/tasks" | Follow the 4 steps EXACTLY - no teams, no task lists |
- | "I'll split tests into multiple commands" | Only 3 commands total: typecheck, lint, test |
+ | "I'll create a better workflow with teams/tasks" | Follow the 3 steps EXACTLY - no teams, no task lists |
  | "Agents can analyze output better" | Background Bash is sufficient - analysis happens in main context |
  | "I only fixed lint, typecheck still passes" | Always re-verify ALL - fixes can cascade |
  | "Build is part of verification" | Build is deployment, not verification - NEVER include it |
@@ -199,4 +181,6 @@ If you find yourself doing ANY of these, you're violating the skill:
  | "I'll try one more fix approach" | After 2-3 attempts → recommend /sd-debug |
  | "Tests are independent of types" | Type fixes affect tests - always re-run ALL |
  | "I'll invoke sd-check skill with args" | sd-check is EXACT STEPS, not a command |
- | "4 commands: typecheck, lint, test, build" | Only 3 commands - build is FORBIDDEN |
+ | "I'm confident it passes" | Confidence is not evidence — run the check |
+ | "It should work now" | "Should" = no evidence — run the check |
+ | "I already verified earlier" | Earlier is not now — re-run after every change |
@@ -0,0 +1,128 @@
+ import { readFileSync, existsSync } from "fs";
+ import { resolve } from "path";
+ import { spawn } from "child_process";
+
+ const root = resolve(import.meta.dirname, "../../..");
+ const pathArg = process.argv[2];
+
+ // ══════════════════════════════════════════
+ // Phase 1: Environment Pre-check
+ // ══════════════════════════════════════════
+
+ const errors = [];
+
+ const pkgPath = resolve(root, "package.json");
+ if (!existsSync(pkgPath)) {
+   errors.push("package.json not found");
+ } else {
+   const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
+   const major = parseInt(pkg.version?.split(".")[0], 10);
+   if (Number.isNaN(major) || major < 13) {
+     errors.push(`This skill requires simplysm v13+. Current: ${pkg.version}`);
+   }
+   if (!pkg.scripts?.typecheck) {
+     errors.push("'typecheck' script not defined in package.json");
+   }
+   if (!pkg.scripts?.lint) {
+     errors.push("'lint' script not defined in package.json");
+   }
+ }
+
+ for (const f of ["pnpm-workspace.yaml", "pnpm-lock.yaml"]) {
+   if (!existsSync(resolve(root, f))) {
+     errors.push(`${f} not found`);
+   }
+ }
+
+ if (!existsSync(resolve(root, "vitest.config.ts"))) {
+   errors.push("vitest.config.ts not found");
+ }
+
+ if (errors.length > 0) {
+   console.log("ENV-CHECK FAIL");
+   for (const e of errors) console.log(`- ${e}`);
+   process.exit(1);
+ }
+
+ // ══════════════════════════════════════════
+ // Phase 2: Run 3 checks in parallel
+ // ══════════════════════════════════════════
+
+ const TAIL_LINES = 15;
+
+ function runCommand(name, cmd, args) {
+   return new Promise((res) => {
+     const child = spawn(cmd, args, { cwd: root, shell: true, stdio: "pipe" });
+     const chunks = [];
+
+     child.stdout.on("data", (d) => chunks.push(d));
+     child.stderr.on("data", (d) => chunks.push(d));
+
+     child.on("close", (code) => {
+       const output = Buffer.concat(chunks).toString("utf8");
+       res({ name, code, output });
+     });
+
+     child.on("error", (err) => {
+       res({ name, code: 1, output: err.message });
+     });
+   });
+ }
+
+ function tail(text, n) {
+   const lines = text.trimEnd().split("\n");
+   if (lines.length <= n) return text.trimEnd();
+   return "...\n" + lines.slice(-n).join("\n");
+ }
+
+ function extractTestCount(output) {
+   const match =
+     output.match(/(\d+)\s+tests?\s+passed/i) ??
+     output.match(/Tests\s+(\d+)\s+passed/i) ??
+     output.match(/(\d+)\s+pass/i);
+   return match ? match[1] : null;
+ }
+
+ const checks = [
+   { name: "TYPECHECK", cmd: "pnpm", args: pathArg ? ["typecheck", pathArg] : ["typecheck"] },
+   { name: "LINT", cmd: "pnpm", args: pathArg ? ["lint", "--fix", pathArg] : ["lint", "--fix"] },
+   { name: "TEST", cmd: "pnpm", args: pathArg ? ["vitest", pathArg, "--run"] : ["vitest", "--run"] },
+ ];
+
+ const results = await Promise.all(checks.map((c) => runCommand(c.name, c.cmd, c.args)));
+
+ // ══════════════════════════════════════════
+ // Output results
+ // ══════════════════════════════════════════
+
+ const failed = [];
+
+ for (const r of results) {
+   const passed = r.code === 0;
+   let label = passed ? "PASS" : "FAIL";
+
+   if (passed && r.name === "TEST") {
+     const count = extractTestCount(r.output);
+     if (count) label = `PASS (${count} tests)`;
+   }
+
+   console.log(`\n${"=".repeat(6)} ${r.name}: ${label} ${"=".repeat(6)}`);
+
+   if (passed) {
+     console.log(tail(r.output, TAIL_LINES));
+   } else {
+     console.log(r.output.trimEnd());
+     failed.push(r.name.toLowerCase());
+   }
+ }
+
+ console.log();
+ if (failed.length === 0) {
+   console.log(`${"=".repeat(6)} SUMMARY: ALL PASSED ${"=".repeat(6)}`);
+ } else {
+   console.log(
+     `${"=".repeat(6)} SUMMARY: ${failed.length}/3 FAILED (${failed.join(", ")}) ${"=".repeat(6)}`,
+   );
+ }
+
+ process.exit(failed.length > 0 ? 1 : 0);
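The script's `tail` helper keeps passing checks short while failing checks print in full. An equivalent sketch of the same truncation logic, in Python purely for illustration (the shipped script is Node):

```python
TAIL_LINES = 15

def tail(text: str, n: int = TAIL_LINES) -> str:
    """Keep only the last n lines, marking truncation with '...'."""
    lines = text.rstrip().split("\n")
    if len(lines) <= n:
        return text.rstrip()
    return "...\n" + "\n".join(lines[-n:])

sample = "\n".join(f"line {i}" for i in range(1, 21))  # 20 lines of fake output
print(tail(sample, 3))  # "...", then line 18, line 19, line 20
```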
@@ -0,0 +1,47 @@
+ ---
+ name: sd-eml-analyze
+ description: Use when user asks to analyze, read, or summarize .eml email files, or when encountering .eml attachments that need content extraction including embedded PDF, XLSX, PPTX files
+ ---
+
+ # EML Email Analyzer
+
+ ## Overview
+
+ Python script that parses .eml files and extracts content from all attachments (PDF, XLSX, PPTX) into a single structured markdown report. Handles Korean encodings (EUC-KR, CP949, ks_c_5601-1987) automatically.
+
+ ## When to Use
+
+ - User provides a `.eml` file to analyze or summarize
+ - Need to extract text from email attachments (PDF, XLSX, PPTX)
+ - Korean email content needs proper decoding
+
+ ## Usage
+
+ ```bash
+ python .claude/skills/sd-eml-analyze/eml-analyzer.py <eml_file_path>
+ ```
+
+ First run auto-installs: `pdfminer.six`, `python-pptx`, `openpyxl`.
+
+ ## Output Format
+
+ Markdown report with sections:
+ 1. **Mail info table**: Subject, From, To, Cc, Date, attachment count
+ 2. **Body text**: Plain text (HTML stripped if no plain text)
+ 3. **Attachment analysis**: Summary table + extracted text per file
+
+ ## Supported Attachments
+
+ | Format | Method |
+ |--------|--------|
+ | PDF | pdfminer.six text extraction |
+ | XLSX/XLS | openpyxl cell data as markdown table |
+ | PPTX | python-pptx slide text + tables + notes |
+ | Text files (.txt, .csv, .json, .xml, .html, .md) | UTF-8/CP949 decode |
+ | Images | Filename and size only |
+
+ ## Common Mistakes
+
+ - **Wrong Python**: Ensure `python` points to Python 3.8+
+ - **Firewall blocking pip**: First run needs internet for package install
+ - **Legacy .ppt/.xls**: Only modern Office formats (.pptx/.xlsx) supported
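For a quick local test of the analyzer, a minimal `.eml` can be generated with the standard library alone (`sample.eml` is an arbitrary example path) and then passed to the usage command above:

```python
from email.message import EmailMessage

# Build a tiny Korean-subject mail with one text attachment
msg = EmailMessage()
msg["Subject"] = "테스트 메일"
msg["From"] = "a@example.com"
msg["To"] = "b@example.com"
msg.set_content("본문입니다.")
msg.add_attachment("col1,col2\n1,2\n", filename="data.csv")

with open("sample.eml", "wb") as f:
    f.write(bytes(msg))
```

Then run `python .claude/skills/sd-eml-analyze/eml-analyzer.py sample.eml` and inspect the markdown report.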
@@ -0,0 +1,335 @@
+ #!/usr/bin/env python3
+ """EML Email Analyzer - Parses EML files and attachments into structured markdown."""
+
+ import sys
+ import os
+ import io
+ import subprocess
+ import email
+ import html
+ import re
+ import tempfile
+ from email.policy import default as default_policy
+ from email.header import decode_header
+ from pathlib import Path
+
+ # Force stdout to UTF-8 (Windows compatibility)
+ sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8", errors="replace")
+ sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8", errors="replace")
+
+
+ def ensure_packages():
+     """Auto-install required packages."""
+     packages = {
+         "pdfminer.six": "pdfminer",
+         "python-pptx": "pptx",
+         "openpyxl": "openpyxl",
+     }
+     missing = []
+     for pip_name, import_name in packages.items():
+         try:
+             __import__(import_name)
+         except ImportError:
+             missing.append(pip_name)
+     if missing:
+         print(f"패키지 설치 중: {', '.join(missing)}...", file=sys.stderr)
+         subprocess.check_call(
+             [sys.executable, "-m", "pip", "install", "-q", *missing],
+             stdout=subprocess.DEVNULL,
+             stderr=subprocess.DEVNULL,
+         )
+
+
+ ensure_packages()
+
+ from pdfminer.high_level import extract_text as pdf_extract_text  # noqa: E402
+ from pptx import Presentation  # noqa: E402
+ from openpyxl import load_workbook  # noqa: E402
+
+
+ # ── Korean charset helpers ──────────────────────────────────────────
+
+ KOREAN_CHARSET_MAP = {
+     "ks_c_5601-1987": "cp949",
+     "ks_c_5601": "cp949",
+     "euc_kr": "cp949",
+     "euc-kr": "cp949",
+ }
+
+
+ def fix_charset(charset: str) -> str:
+     if charset is None:
+         return "utf-8"
+     return KOREAN_CHARSET_MAP.get(charset.lower(), charset)
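The alias table can be exercised standalone (mapping and function copied from above for illustration; the sample string is arbitrary):

```python
KOREAN_CHARSET_MAP = {
    "ks_c_5601-1987": "cp949",
    "ks_c_5601": "cp949",
    "euc_kr": "cp949",
    "euc-kr": "cp949",
}

def fix_charset(charset):
    # None means the MIME part declared no charset; fall back to UTF-8
    if charset is None:
        return "utf-8"
    return KOREAN_CHARSET_MAP.get(charset.lower(), charset)

# Bytes as a Korean mail client might send them, labeled ks_c_5601-1987
raw = "안녕하세요".encode("cp949")
print(raw.decode(fix_charset("ks_c_5601-1987")))  # 안녕하세요
print(fix_charset(None))                           # utf-8
print(fix_charset("ISO-8859-1"))                   # passed through unchanged
```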
64
+
65
+
66
+ # ── EML parsing ─────────────────────────────────────────────────────
67
+
68
+
69
+ def parse_eml(filepath: str):
70
+ with open(filepath, "rb") as f:
71
+ msg = email.message_from_binary_file(f, policy=default_policy)
72
+
73
+ # Headers
74
+ headers = {
75
+ "subject": str(msg["Subject"] or ""),
76
+ "from": str(msg["From"] or ""),
77
+ "to": str(msg["To"] or ""),
78
+ "cc": str(msg["Cc"] or ""),
79
+ "date": str(msg["Date"] or ""),
80
+ }
81
+
82
+ # Body
83
+ body_plain = ""
84
+ body_html = ""
85
+
86
+ if msg.is_multipart():
87
+ for part in msg.walk():
88
+ ctype = part.get_content_type()
89
+ cdisp = part.get_content_disposition()
90
+ if cdisp == "attachment":
91
+ continue
92
+ if ctype == "text/plain" and not body_plain:
93
+ body_plain = _get_text(part)
94
+ elif ctype == "text/html" and not body_html:
95
+ body_html = _get_text(part)
96
+ else:
97
+ body_plain = _get_text(msg)
98
+
99
+ # Attachments
100
+ attachments = []
101
+ for part in msg.walk():
102
+ filename = part.get_filename()
103
+ if not filename:
104
+ continue
105
+ cdisp = part.get_content_disposition()
106
+ if cdisp not in ("attachment", "inline", None):
107
+ continue
108
+ payload = part.get_payload(decode=True)
109
+ if payload is None:
110
+ continue
111
+ attachments.append(
112
+ {
113
+ "filename": filename,
114
+ "content_type": part.get_content_type(),
115
+ "size": len(payload),
116
+ "data": payload,
117
+ }
118
+ )
119
+
120
+ return headers, body_plain, body_html, attachments
121
+
122
+
123
+ def _get_text(part) -> str:
124
+ try:
125
+ return part.get_content()
126
+ except Exception:
127
+ payload = part.get_payload(decode=True)
128
+ if not payload:
129
+ return ""
130
+ charset = fix_charset(part.get_content_charset())
131
+ return payload.decode(charset, errors="replace")
132
+
133
+
134
+ # ── Attachment extractors ───────────────────────────────────────────
135
+
136
+
137
+ def extract_pdf(data: bytes) -> str:
138
+ with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as f:
139
+ f.write(data)
140
+ tmp = f.name
141
+ try:
142
+ text = pdf_extract_text(tmp)
143
+ return text.strip() if text else "(텍스트 추출 실패)"
144
+ except Exception as e:
145
+ return f"(PDF 파싱 오류: {e})"
146
+ finally:
147
+ os.unlink(tmp)
148
+
149
+
150
+ def extract_pptx(data: bytes) -> str:
151
+ with tempfile.NamedTemporaryFile(suffix=".pptx", delete=False) as f:
152
+ f.write(data)
153
+ tmp = f.name
154
+ try:
155
+ prs = Presentation(tmp)
156
+ slides = []
157
+ for i, slide in enumerate(prs.slides, 1):
158
+ lines = [f"#### 슬라이드 {i}"]
159
+ for shape in slide.shapes:
160
+ if shape.has_text_frame:
161
+ for para in shape.text_frame.paragraphs:
162
+ line = "".join(run.text for run in para.runs)
163
+ if line.strip():
164
+ lines.append(line)
165
+ if shape.has_table:
166
+ header = " | ".join(
167
+ cell.text for cell in shape.table.rows[0].cells
168
+ )
169
+ sep = " | ".join(
170
+ "---" for _ in shape.table.rows[0].cells
171
+ )
172
+ lines.append(f"| {header} |")
173
+ lines.append(f"| {sep} |")
174
+ for row in list(shape.table.rows)[1:]:
175
+ row_text = " | ".join(cell.text for cell in row.cells)
176
+ lines.append(f"| {row_text} |")
177
+ if slide.has_notes_slide:
178
+ notes = slide.notes_slide.notes_text_frame.text
179
+ if notes.strip():
180
+ lines.append(f"> 노트: {notes}")
181
+ slides.append("\n".join(lines))
182
+ return "\n\n".join(slides) if slides else "(텍스트 없음)"
183
+ except Exception as e:
184
+ return f"(PPTX 파싱 오류: {e})"
185
+ finally:
186
+ os.unlink(tmp)
187
+
188
+
189
+ def extract_xlsx(data: bytes) -> str:
190
+ with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as f:
191
+ f.write(data)
192
+ tmp = f.name
193
+ try:
194
+ wb = load_workbook(tmp, data_only=True)
195
+ sheets = []
196
+ for name in wb.sheetnames:
197
+ ws = wb[name]
198
+ lines = [f"#### 시트: {name}"]
199
+ rows = list(ws.iter_rows(values_only=True))
200
+ if not rows:
201
+ lines.append("(데이터 없음)")
202
+ sheets.append("\n".join(lines))
203
+ continue
204
+ # 마크다운 테이블
205
+ first_row = rows[0]
206
+ col_count = len(first_row)
207
+ header = " | ".join(str(c) if c is not None else "" for c in first_row)
208
+ sep = " | ".join("---" for _ in range(col_count))
209
+ lines.append(f"| {header} |")
210
+ lines.append(f"| {sep} |")
211
+ for row in rows[1:]:
212
+ vals = " | ".join(str(c) if c is not None else "" for c in row)
213
+ if any(c is not None for c in row):
214
+ lines.append(f"| {vals} |")
215
+ sheets.append("\n".join(lines))
216
+ return "\n\n".join(sheets) if sheets else "(데이터 없음)"
217
+ except Exception as e:
218
+ return f"(XLSX 파싱 오류: {e})"
219
+ finally:
220
+ os.unlink(tmp)
221
+
222
+
223
+ def extract_text_file(data: bytes) -> str:
224
+ for enc in ("utf-8", "cp949", "euc-kr", "latin-1"):
225
+ try:
226
+ return data.decode(enc)
227
+ except (UnicodeDecodeError, LookupError):
228
+ continue
229
+ return data.decode("utf-8", errors="replace")
230
+
231
+
232
+ # ── HTML stripping ──────────────────────────────────────────────────
233
+
234
+
235
+ def strip_html(text: str) -> str:
236
+ text = re.sub(r"<style[^>]*>.*?</style>", "", text, flags=re.DOTALL | re.I)
237
+ text = re.sub(r"<script[^>]*>.*?</script>", "", text, flags=re.DOTALL | re.I)
238
+ text = re.sub(r"<br\s*/?>", "\n", text, flags=re.I)
239
+ text = re.sub(r"</(?:p|div|tr|li)>", "\n", text, flags=re.I)
240
+ text = re.sub(r"<[^>]+>", "", text)
241
+ text = html.unescape(text)
242
+ text = re.sub(r"\n{3,}", "\n\n", text)
243
+ return text.strip()
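The regex-based stripping can be exercised standalone (function copied from the script for illustration):

```python
import html
import re

def strip_html(text: str) -> str:
    # Same transformation as the script: drop style/script blocks, turn <br> and
    # block-element closers into newlines, remove remaining tags, unescape
    # entities, then collapse runs of blank lines.
    text = re.sub(r"<style[^>]*>.*?</style>", "", text, flags=re.DOTALL | re.I)
    text = re.sub(r"<script[^>]*>.*?</script>", "", text, flags=re.DOTALL | re.I)
    text = re.sub(r"<br\s*/?>", "\n", text, flags=re.I)
    text = re.sub(r"</(?:p|div|tr|li)>", "\n", text, flags=re.I)
    text = re.sub(r"<[^>]+>", "", text)
    text = html.unescape(text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()

print(strip_html("<p>Hi</p><p>Bye</p>"))  # Hi\nBye on two lines
```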
+
+
+ # ── Size formatting ─────────────────────────────────────────────────
+
+
+ def fmt_size(n: int) -> str:
+     if n < 1024:
+         return f"{n} B"
+     if n < 1024 * 1024:
+         return f"{n / 1024:.1f} KB"
+     return f"{n / (1024 * 1024):.1f} MB"
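Worked examples of the threshold arithmetic (function copied from above; note the units are binary, 1 KB = 1024 B here):

```python
def fmt_size(n: int) -> str:
    # <1024 bytes shown raw, <1024*1024 in KB, otherwise MB
    if n < 1024:
        return f"{n} B"
    if n < 1024 * 1024:
        return f"{n / 1024:.1f} KB"
    return f"{n / (1024 * 1024):.1f} MB"

print(fmt_size(512))              # 512 B
print(fmt_size(2048))             # 2.0 KB
print(fmt_size(5 * 1024 * 1024))  # 5.0 MB
```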
+
+
+ # ── Markdown report ─────────────────────────────────────────────────
+
+ PARSEABLE_EXTS = {".pdf", ".xlsx", ".xls", ".pptx", ".txt", ".csv", ".log", ".json", ".xml", ".html", ".htm", ".md"}
+ IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp", ".svg"}
+
+
+ def build_report(filepath: str) -> str:
+     headers, body_plain, body_html, attachments = parse_eml(filepath)
+
+     out = []
+     out.append("# 이메일 분석서\n")
+     out.append(f"**원본 파일**: `{os.path.basename(filepath)}`\n")
+
+     # ── Mail info
+     out.append("## 메일 정보\n")
+     out.append("| 항목 | 내용 |")
+     out.append("|------|------|")
+     out.append(f"| **제목** | {headers['subject']} |")
+     out.append(f"| **보낸 사람** | {headers['from']} |")
+     out.append(f"| **받는 사람** | {headers['to']} |")
+     if headers["cc"]:
+         out.append(f"| **참조** | {headers['cc']} |")
+     out.append(f"| **날짜** | {headers['date']} |")
+     out.append(f"| **첨부파일** | {len(attachments)}개 |")
+     out.append("")
+
+     # ── Body
+     out.append("## 본문 내용\n")
+     body = body_plain
+     if not body and body_html:
+         body = strip_html(body_html)
+     out.append(body.strip() if body else "_(본문 없음)_")
+     out.append("")
+
+     # ── Attachments
+     if attachments:
+         out.append("## 첨부파일 분석\n")
+         out.append("| # | 파일명 | 형식 | 크기 |")
+         out.append("|---|--------|------|------|")
+         for i, a in enumerate(attachments, 1):
+             out.append(f"| {i} | {a['filename']} | {a['content_type']} | {fmt_size(a['size'])} |")
+         out.append("")
+
+         for i, a in enumerate(attachments, 1):
+             ext = Path(a["filename"]).suffix.lower()
+             out.append(f"### 첨부 {i}: {a['filename']}\n")
+
+             if ext == ".pdf":
+                 out.append(extract_pdf(a["data"]))
+             elif ext in (".xlsx", ".xls"):
+                 out.append(extract_xlsx(a["data"]))
+             elif ext == ".pptx":
+                 out.append(extract_pptx(a["data"]))
+             elif ext == ".ppt":
+                 out.append("_(.ppt 레거시 형식 미지원, .pptx만 지원)_")
+             elif ext in (".txt", ".csv", ".log", ".json", ".xml", ".html", ".htm", ".md"):
+                 out.append(f"```\n{extract_text_file(a['data'])}\n```")
+             elif ext in IMAGE_EXTS:
+                 out.append(f"_(이미지 파일 - {fmt_size(a['size'])})_")
+             else:
+                 out.append(f"_(지원하지 않는 형식: {ext}, {fmt_size(a['size'])})_")
+             out.append("")
+
+     return "\n".join(out)
+
+
+ # ── Main ────────────────────────────────────────────────────────────
+
+ if __name__ == "__main__":
+     if len(sys.argv) < 2:
+         print("Usage: python eml-analyzer.py <eml_file_path>", file=sys.stderr)
+         sys.exit(1)
+
+     path = sys.argv[1]
+     if not os.path.isfile(path):
+         print(f"파일을 찾을 수 없습니다: {path}", file=sys.stderr)
+         sys.exit(1)
+
+     print(build_report(path))
@@ -16,6 +16,34 @@ Assume they are a skilled developer, but know almost nothing about our toolset o
 
  **Save plans to:** `docs/plans/YYYY-MM-DD-<feature-name>.md`
 
+ ## Clarification Phase
+
+ **Before writing the plan**, read the design doc and explore the codebase. Then identify implementation decisions the design didn't specify.
+
+ **Ask the user about:**
+ - Core behavior choices (state management approach, error handling strategy, validation UX)
+ - User-facing behavior not specified (what the user sees on error, how navigation works)
+ - Architectural choices with multiple valid options (which pattern, which abstraction level)
+
+ **Don't ask — just decide:**
+ - Internal details covered by project conventions (file naming, export patterns)
+ - Pure YAGNI decisions (features not mentioned = don't add)
+ - Implementation details with only one reasonable option
+
+ **Format:** Present all discovered gaps as a single numbered list with your recommended option for each. Wait for user confirmation before writing the plan.
+
+ ```
+ Before writing the plan, I found these implementation decisions not covered in the design:
+
+ 1. **[Topic]**: [The gap]. I'd recommend [option A] because [reason]. Alternatively, [option B].
+ 2. **[Topic]**: [The gap]. I'd recommend [option] because [reason].
+ ...
+
+ Should I proceed with these recommendations, or would you like to change any?
+ ```
+
+ If no gaps are found, skip this phase and proceed directly to writing the plan.
+
  ## Bite-Sized Task Granularity
 
  **Each step is one action (2-5 minutes):**
@@ -96,11 +124,49 @@ git commit -m "feat: add specific feature"
 
  ## Execution Handoff
 
- After saving the plan:
+ After saving the plan, present the following workflow paths in the **system's configured language**.
+
+ Before presenting, check git status for uncommitted changes. If there are any, append the `⚠️` warning line.
+
+ ```
+ Plan complete! Here's how to proceed:
+
+ --- Path A: With branch isolation (recommended for features/large changes) ---
+
+ 1. /sd-worktree add <name> — Create a worktree branch
+ 2. /sd-plan-dev — Execute tasks in parallel (includes TDD + review)
+ 3. /sd-check — Verify (modified + dependents)
+ 4. /sd-commit — Commit
+ 5. /sd-worktree merge — Merge back to main
+ 6. /sd-worktree clean — Remove worktree
+
+ --- Path B: Direct on current branch (quick fixes/small changes) ---
+
+ 1. /sd-plan-dev — Execute tasks in parallel (includes TDD + review)
+ 2. /sd-check — Verify (modified + dependents)
+ 3. /sd-commit — Commit
+
+ You can start from any step or skip steps as needed.
+
+ 💡 "Path A: yolo" or "Path B: yolo" to auto-run all steps
+
+ ⚠️ You have uncommitted changes. To use Path A, run `/sd-commit all` first.
+ ```
+
+ - The `⚠️` line is only shown when uncommitted changes exist. Omit it when the working tree is clean.
+ - **Recommend one** based on the plan's scope:
+   - Path A: new features, multi-file changes, architectural changes
+   - Path B: small bug fixes, single-file changes, minor adjustments
+ - Briefly explain why (1 sentence)
+ - Do NOT auto-proceed. Wait for the user's choice.
+
+ **Yolo mode:** If the user responds with "Path A: yolo" or "Path B: yolo" (or similar intent like "A yolo", "B 자동"), execute all steps of the chosen path sequentially without stopping between steps.
 
- - If in **yolo mode** (user chose "yolo" from sd-brainstorm): Immediately proceed to sd-plan-dev without asking. No confirmation needed.
- - Otherwise: Display this message **in the system's configured language** (detect from the language setting and translate accordingly):
-   **"Plan complete and saved to `docs/plans/<filename>.md`. Ready to execute with sd-plan-dev?"**
+ **Yolo `/sd-check` scope:** NEVER check only the modified packages. Also check every package that depends on them:
+ 1. Identify modified packages from `git diff --name-only`
+ 2. Trace reverse dependencies (packages that import from modified packages) using `package.json` or the project dependency graph
+ 3. Include integration/e2e tests that cover the modified packages
+ 4. Run `/sd-check` with all affected paths, or `/sd-check` without a path (whole project) when changes are widespread
 
  - **REQUIRED SUB-SKILL:** Use sd-plan-dev
  - Fresh fork per task + two-stage review (spec compliance → code quality)
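The reverse-dependency tracing described in the `/sd-check` scope steps can be sketched in a few lines. This is an illustrative sketch only: the `packages/*/package.json` workspace layout and the `find_dependents` name are assumptions, not part of the shipped package:

```python
import json
from pathlib import Path


def find_dependents(root: Path, modified: set[str]) -> set[str]:
    # Hypothetical sketch: scan each workspace package.json and collect
    # packages that declare any modified package as a dependency.
    dependents: set[str] = set()
    for pkg_json in root.glob("packages/*/package.json"):
        data = json.loads(pkg_json.read_text(encoding="utf-8"))
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        if modified & deps.keys():
            dependents.add(data["name"])
    return dependents
```

In practice a pnpm workspace can also report this directly (e.g. via its dependency graph tooling); the sketch just shows the shape of the reverse lookup.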
@@ -90,7 +90,8 @@ digraph process {
  "Re-review failed aspects (parallel sub-Task)" -> "Any issues?";
  "Any issues?" -> "Report results" [label="no"];
  "Report results" -> "More batches?";
- "More batches?" -> "Implement the task" [label="yes, next batch"];
+ "More batches?" -> "Batch integration check (typecheck + lint)" [label="yes"];
+ "Batch integration check (typecheck + lint)" -> "Implement the task" [label="next batch"];
  "More batches?" -> "Task: final review for entire implementation" [label="no"];
  "Task: final review for entire implementation" -> "Done";
  }
@@ -139,14 +140,16 @@ You are implementing and reviewing Task N: [task name]
  2. Write tests (following TDD if task says to)
  3. Verify implementation works
  4. Self-review: did I implement everything? Did I over-build?
- 5. Launch TWO parallel sub-Tasks (spec review + quality review):
+ 5. Commit your work (record the BASE_SHA before and HEAD_SHA after)
+ 6. Launch TWO parallel sub-Tasks (spec review + quality review):
  - Sub-Task 1: spec reviewer — send a prompt based on spec-reviewer-prompt.md
- - Sub-Task 2: quality reviewer — send a prompt based on code-quality-reviewer-prompt.md
- 6. If either reviewer finds issues → fix them → re-review only failed aspects (parallel sub-Tasks again)
- 7. Repeat until both reviewers approve
- 8. Report back with: what you implemented, test results, files changed, review outcomes
+ - Sub-Task 2: quality reviewer — send a prompt based on code-quality-reviewer-prompt.md, include BASE_SHA and HEAD_SHA
+ 7. If either reviewer finds issues → fix them → re-review only failed aspects (parallel sub-Tasks again)
+ 8. Repeat until both reviewers approve
+ 9. Report back with: what you implemented, test results, files changed, commit SHA, review outcomes
 
  If you have questions about requirements — return them immediately WITHOUT implementing. Don't guess.
+ If you encounter unexpected issues mid-implementation — ask rather than guess.
 
  Work from: [directory]
  ```
@@ -230,6 +233,17 @@ Final reviewer: All requirements met, ready to merge
  Done!
  ```
 
+ ## Batch Integration Check
+
+ Between batches, run targeted verification on the affected packages before starting the next batch:
+
+ ```bash
+ pnpm typecheck [affected packages]
+ pnpm lint [affected packages]
+ ```
+
+ This catches cross-task integration issues early — especially when the next batch depends on the current batch's output. Do NOT skip this even if individual task reviews passed.
+
  ## Red Flags
 
  **Never:**
@@ -242,6 +256,7 @@ Done!
  - Skip scene-setting context
  - Accept "close enough" on spec compliance
  - Skip review loops (issue found → fix → re-review)
+ - Skip batch integration checks between batches
 
  **If task agent returns questions:**
  - Answer clearly and completely
@@ -253,6 +268,13 @@ Done!
  - Re-review only the failed aspects (parallel sub-Tasks)
  - Repeat until both approved
 
+ **If task agent fails or times out:**
+ - Do NOT silently proceed — the affected files may be in an indeterminate state
+ - Check if other tasks in the same batch depend on the failed task's output
+ - Independent tasks' results still stand
+ - Escalate to the user with specific error details before proceeding
+ - Do NOT re-launch on potentially partially-modified files without inspection
+
  ## After Completion
 
  When all tasks and final review are done, if the current working directory is inside a worktree (`.worktrees/`), guide the user to:
@@ -10,9 +10,16 @@ You are reviewing code quality for a completed implementation.
 
  [Paste the implementer's report: files changed, what they built]
 
- ## Changed Files
+ ## Review Scope
 
- [List all files to review]
+ Use git diff to review only what changed:
+ ```
+ git diff [BASE_SHA]..[HEAD_SHA]
+ ```
+ BASE_SHA: [commit before task started]
+ HEAD_SHA: [implementer's commit SHA from report]
+
+ Focus your review on the diff output. Read surrounding code for context only when needed.
 
  ## Your Job
@@ -20,24 +20,30 @@ You are implementing Task [N]: [task name]
 
  If anything is unclear about requirements or approach, return your questions under a `## Questions` heading and STOP. Do not guess — do not implement.
 
+ ## While You Work
+
+ If you encounter something unexpected mid-implementation (missing APIs, unexpected patterns, ambiguous behavior), **ask questions rather than guess**. Return your questions under `## Questions` and STOP. It's always OK to pause and clarify.
+
  ## Your Job
 
  1. Implement exactly what the task specifies — nothing more, nothing less
  2. Write tests (follow TDD if the plan says to)
  3. Verify: tests pass, no type errors
  4. Self-review:
- - Every requirement implemented?
- - Nothing overbuilt (YAGNI)?
- - Names clear, code clean?
- - Following project conventions?
+ - **Completeness**: Every requirement implemented? Edge cases handled?
+ - **Quality**: Names clear? Code clean and maintainable?
+ - **Discipline**: Nothing overbuilt (YAGNI)? Only what was requested?
+ - **Testing**: Tests verify behavior (not implementation)? Comprehensive?
  5. Fix anything found in self-review
- 6. Report back
+ 6. Commit your work with a descriptive message (this is required for review)
+ 7. Report back
 
  Work from: [directory path]
 
  ## Report
 
  When done, provide:
+ - Commit SHA (from step 6)
  - Files created/modified (with brief description of changes)
  - Test results
  - Self-review findings (if any were fixed)
@@ -20,15 +20,26 @@ Read the ACTUAL CODE. Do NOT trust the report — verify everything independentl
 
  ### Checklist
 
- 1. **Every requirement implemented?** Compare spec line by line against code.
- 2. **Nothing extra?** Did the implementer build things not in the spec?
- 3. **Correct interpretation?** Did they solve the right problem?
- 4. **Tests exist?** Do tests verify the specified behavior?
- 5. **Exports correct?** New public APIs exported in the package's index.ts?
+ Categorize every finding as:
+
+ **MISSING** — requirement in spec but absent from code:
+ 1. Compare spec line by line against code. Every requirement present?
+ 2. Tests exist for each specified behavior?
+ 3. New public APIs exported in the package's index.ts?
+
+ **EXTRA** — code present but not in spec:
+ 4. Did the implementer build things not requested? (Public methods, new exports, "nice to have" features)
+ 5. Private helpers are OK; public API additions without spec approval are not.
+
+ **WRONG** — present but incorrectly implemented:
+ 6. Did they solve the right problem? Correct interpretation of requirements?
+ 7. Do test assertions match spec expectations (not just implementation behavior)?
 
  ### Report
 
- - ✅ APPROVED — all requirements verified in code
+ - ✅ APPROVED — all requirements verified in code, nothing extra
  - ❌ CHANGES_NEEDED:
- - [What's missing/wrong/extra with file:line references]
+ - MISSING: [requirement not implemented] (file:line)
+ - EXTRA: [built but not requested] (file:line)
+ - WRONG: [incorrect interpretation] (file:line)
  ```
@@ -1,6 +1,6 @@
  ---
  name: sd-use
- description: Use when user doesn't know which sd-* skill or agent to use and wants automatic selection based on their request
+ description: You MUST invoke this for any user request that does not explicitly use a /sd-* slash command. Do not bypass by selecting other sd-* skills directly, even when the match seems obvious.
  model: haiku
  ---
 
@@ -46,7 +46,7 @@ switch (cmd) {
      const branch = getMainBranch();
      console.log(`Creating worktree: .worktrees/${name} (from ${branch})`);
      run(`git worktree add "${worktreePath}" -b "${name}"`);
-     console.log("Installing dependencies...");
+     console.log("Installing dependencies...");
      run("pnpm install", { cwd: worktreePath });
      console.log(`\nReady: ${worktreePath}`);
      break;
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@simplysm/sd-claude",
-   "version": "13.0.44",
+   "version": "13.0.47",
    "description": "Simplysm Claude Code CLI — asset installer and cross-platform npx wrapper",
    "author": "김석래",
    "license": "Apache-2.0",
@@ -1,45 +0,0 @@
- import { readFileSync, existsSync } from "fs";
- import { resolve } from "path";
-
- const root = resolve(import.meta.dirname, "../../..");
- const errors = [];
-
- // 1. Root package.json version check
- const pkgPath = resolve(root, "package.json");
- if (!existsSync(pkgPath)) {
-   errors.push("package.json not found");
- } else {
-   const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
-   const major = pkg.version?.split(".")[0];
-   if (major !== "13") {
-     errors.push(`This skill requires simplysm v13. Current: ${pkg.version}`);
-   }
-
-   // 3. Scripts check
-   if (!pkg.scripts?.typecheck) {
-     errors.push("'typecheck' script not defined in package.json");
-   }
-   if (!pkg.scripts?.lint) {
-     errors.push("'lint' script not defined in package.json");
-   }
- }
-
- // 2. pnpm workspace files
- for (const f of ["pnpm-workspace.yaml", "pnpm-lock.yaml"]) {
-   if (!existsSync(resolve(root, f))) {
-     errors.push(`${f} not found`);
-   }
- }
-
- // 4. Vitest config
- if (!existsSync(resolve(root, "vitest.config.ts"))) {
-   errors.push("vitest.config.ts not found");
- }
-
- if (errors.length > 0) {
-   console.log("FAIL");
-   for (const e of errors) console.log(`- ${e}`);
-   process.exit(1);
- } else {
-   console.log("Environment OK");
- }