@simplysm/sd-claude 13.0.44 → 13.0.45

@@ -54,7 +54,7 @@ Design complete! Here's how to proceed:
  1. /sd-worktree add <name> — Create a worktree branch
  2. /sd-plan — Break into detailed tasks
  3. /sd-plan-dev — Execute tasks in parallel (includes TDD + review)
- 4. /sd-check — Verify All
+ 4. /sd-check — Verify (modified + dependents)
  5. /sd-commit — Commit
  6. /sd-worktree merge — Merge back to main
  7. /sd-worktree clean — Remove worktree
@@ -63,7 +63,7 @@ Design complete! Here's how to proceed:
 
  1. /sd-plan — Break into detailed tasks
  2. /sd-plan-dev — Execute tasks in parallel (includes TDD + review)
- 3. /sd-check — Verify All
+ 3. /sd-check — Verify (modified + dependents)
  4. /sd-commit — Commit
 
  You can start from any step or skip steps as needed.
@@ -81,6 +81,11 @@ You can start from any step or skip steps as needed.
  - Briefly explain why (1 sentence)
  - Do NOT auto-proceed to any step. Present the overview with recommendation and wait for the user's choice.
  - **Yolo mode**: If the user responds with "Path A: yolo" or "Path B: yolo" (or similar intent like "A yolo", "B 자동"), execute all steps of the chosen path sequentially without stopping between steps.
+ - **Yolo sd-check — include dependents**: NEVER check only modified packages. Also check all packages that depend on them:
+   1. Identify modified packages from `git diff --name-only`
+   2. Trace reverse dependencies (packages that import from modified packages) using `package.json` or the project dependency graph
+   3. Include integration/e2e tests that cover the modified packages
+   4. Run `/sd-check` with all affected paths, or `/sd-check` without a path (whole project) when changes are widespread
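The reverse-dependency tracing in step 2 above can be sketched in a few lines. This is an illustrative standalone helper, not part of the sd-claude package; the `packages/*/package.json` layout is an assumed pnpm-style monorepo convention, and `reverse_dependents` is a hypothetical name:

```python
# Illustrative sketch of step 2 (reverse-dependency tracing); not part of sd-claude.
# Assumes a pnpm-style monorepo layout of packages/*/package.json.
import json
from pathlib import Path

def reverse_dependents(root, modified):
    """Return names of workspace packages that declare any modified package as a dependency."""
    dependents = set()
    for pkg_json in Path(root).glob("packages/*/package.json"):
        pkg = json.loads(pkg_json.read_text(encoding="utf-8"))
        # Merge runtime and dev dependencies; either kind makes a package a dependent.
        deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
        if set(modified) & deps.keys():
            dependents.add(pkg["name"])
    return dependents
```

The returned set is what would be passed alongside the modified packages to `/sd-check`.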
 
  ## Key Principles
 
@@ -56,9 +56,10 @@ node .claude/skills/sd-check/env-check.mjs
  ```
 
  - **Exit 0 + "Environment OK"**: Proceed to Step 2
- - **Exit 1 + "FAIL"**: STOP, report the listed errors to user
+ - **Exit 1 + version error** (e.g., "simplysm v13+"): Tell the user this project is below v13 so automated checks are unavailable, and they should run typecheck, lint, and test manually. Then STOP — do not proceed to Step 2.
+ - **Exit 1 + other errors**: STOP, report the listed errors to user
 
- The script checks: package.json version (v13), pnpm workspace files, typecheck/lint scripts, vitest config.
+ The script checks: package.json version (v13+), pnpm workspace files, typecheck/lint scripts, vitest config.
 
  ### Step 2: Launch 3 Background Bash Commands in Parallel
 
@@ -159,6 +160,10 @@ Repeat Steps 2-4 until all 3 checks pass.
  **Wrong:** Keep trying same fix when tests fail repeatedly
  **Right:** After 2-3 failed attempts → recommend `/sd-debug`
 
+ ### ❌ Claiming success without fresh evidence
+ **Wrong:** "All checks should pass now" or "Great, that fixes it!"
+ **Right:** Run all 3 → read output → cite results (e.g., "0 errors, 47 tests passed") → THEN claim
+
  ## Red Flags - STOP and Follow Workflow
 
  If you find yourself doing ANY of these, you're violating the skill:
@@ -171,19 +176,28 @@ If you find yourself doing ANY of these, you're violating the skill:
  - Asking user for path when none provided
  - Continuing past 2-3 failed fix attempts without recommending `/sd-debug`
  - Spawning 4+ commands (only 3: typecheck, lint, test)
+ - Expressing satisfaction ("Great!", "Perfect!", "Done!") before all 3 checks pass
+ - Using vague language: "should work", "probably passes", "seems fine"
+ - Claiming completion based on a previous run, not the current one
 
  **All of these violate the skill's core principles. Go back to Step 1 and follow the workflow exactly.**
 
  ## Completion Criteria
 
  **Complete when:**
- - All 3 checks (typecheck, lint, test) pass without errors
- - Report: "All checks passed - code verified"
+ - All 3 checks (typecheck, lint, test) pass without errors **in the most recent run**
+ - Report with evidence: "All checks passed" + cite actual output (e.g., "0 errors", "47 tests passed")
+
+ **Fresh evidence required:**
+ - "Passes" = you ran it THIS iteration and saw it pass in the output
+ - Previous run results are NOT evidence for current state
+ - Confidence is NOT evidence — run the check
 
  **Do NOT complete if:**
  - Any check has errors
  - Haven't re-verified after a fix
  - Environment pre-checks failed
+ - Using "should", "probably", or "seems to" instead of actual output
 
  ## Rationalization Table
 
@@ -200,3 +214,6 @@ If you find yourself doing ANY of these, you're violating the skill:
  | "Tests are independent of types" | Type fixes affect tests - always re-run ALL |
  | "I'll invoke sd-check skill with args" | sd-check is EXACT STEPS, not a command |
  | "4 commands: typecheck, lint, test, build" | Only 3 commands - build is FORBIDDEN |
+ | "I'm confident it passes" | Confidence ≠ evidence — run the check |
+ | "It should work now" | "Should" = no evidence — run the check |
+ | "I already verified earlier" | Earlier ≠ now — re-run after every change |
@@ -10,9 +10,9 @@ if (!existsSync(pkgPath)) {
    errors.push("package.json not found");
  } else {
    const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
-   const major = pkg.version?.split(".")[0];
-   if (major !== "13") {
-     errors.push(`This skill requires simplysm v13. Current: ${pkg.version}`);
+   const major = parseInt(pkg.version?.split(".")[0], 10);
+   if (Number.isNaN(major) || major < 13) {
+     errors.push(`This skill requires simplysm v13+. Current: ${pkg.version}`);
    }
 
    // 3. Scripts check
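The change above replaces a strict string equality (`major !== "13"`, which would wrongly reject v14+) with a numeric major-version gate. The same logic, sketched in Python purely for illustration — the real check lives in `env-check.mjs`, and `version_ok` is a hypothetical name:

```python
# Numeric major-version gate mirroring the env-check.mjs change above.
# Illustrative only; not part of the package.
def version_ok(version, minimum=13):
    """Return True when the version string's major component is >= minimum."""
    try:
        major = int(str(version or "").split(".")[0])
    except ValueError:
        # Missing or non-numeric major component fails the gate.
        return False
    return major >= minimum
```

Unlike the old equality check, this accepts any future major version at or above the minimum.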
@@ -0,0 +1,47 @@
+ ---
+ name: sd-eml-analyze
+ description: Use when user asks to analyze, read, or summarize .eml email files, or when encountering .eml attachments that need content extraction including embedded PDF, XLSX, PPTX files
+ ---
+
+ # EML Email Analyzer
+
+ ## Overview
+
+ Python script that parses .eml files and extracts content from all attachments (PDF, XLSX, PPTX) into a single structured markdown report. Handles Korean encodings (EUC-KR, CP949, ks_c_5601-1987) automatically.
+
+ ## When to Use
+
+ - User provides a `.eml` file to analyze or summarize
+ - Need to extract text from email attachments (PDF, XLSX, PPTX)
+ - Korean email content needs proper decoding
+
+ ## Usage
+
+ ```bash
+ python .claude/skills/sd-eml-analyze/eml-analyzer.py <eml_file_path>
+ ```
+
+ First run auto-installs: `pdfminer.six`, `python-pptx`, `openpyxl`.
+
+ ## Output Format
+
+ Markdown report with sections:
+ 1. **Mail info table**: Subject, From, To, Cc, Date, attachment count
+ 2. **Body text**: Plain text (HTML stripped if no plain text)
+ 3. **Attachment analysis**: Summary table + extracted text per file
+
+ ## Supported Attachments
+
+ | Format | Method |
+ |--------|--------|
+ | PDF | pdfminer.six text extraction |
+ | XLSX/XLS | openpyxl cell data as markdown table |
+ | PPTX | python-pptx slide text + tables + notes |
+ | Text files (.txt, .csv, .json, .xml, .html, .md) | UTF-8/CP949 decode |
+ | Images | Filename and size only |
+
+ ## Common Mistakes
+
+ - **Wrong Python**: Ensure `python` points to Python 3.8+
+ - **Firewall blocking pip**: First run needs internet for package install
+ - **Legacy .ppt/.xls**: Only modern Office formats (.pptx/.xlsx) supported
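For context on the analyzer's first stage (header and body extraction, implemented in `parse_eml` below), the same idea can be reproduced with the Python standard library alone. A minimal sketch — `peek_eml` is an illustrative name, not an API of this package:

```python
# Minimal sketch of the analyzer's first stage using only the stdlib email module.
# peek_eml is an illustrative name, not part of the sd-eml-analyze skill.
import email
from email.policy import default as default_policy

def peek_eml(raw):
    """Return subject/from/date headers and the first text/plain body part."""
    msg = email.message_from_bytes(raw, policy=default_policy)
    body = ""
    for part in msg.walk():
        # Skip attachments; take the first inline plain-text part.
        if part.get_content_type() == "text/plain" and part.get_content_disposition() != "attachment":
            body = part.get_content()
            break
    return {
        "subject": str(msg["Subject"] or ""),
        "from": str(msg["From"] or ""),
        "date": str(msg["Date"] or ""),
        "body": body,
    }
```

The full script below extends this with charset normalization for Korean mail and per-attachment extractors.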
@@ -0,0 +1,335 @@
+ #!/usr/bin/env python3
+ """EML Email Analyzer - Parses EML files and attachments into structured markdown."""
+
+ import sys
+ import os
+ import io
+ import subprocess
+ import email
+ import html
+ import re
+ import tempfile
+ from email.policy import default as default_policy
+ from email.header import decode_header
+ from pathlib import Path
+
+ # Force stdout to UTF-8 (Windows compatibility)
+ sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8", errors="replace")
+ sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8", errors="replace")
+
+
+ def ensure_packages():
+     """Auto-install required packages."""
+     packages = {
+         "pdfminer.six": "pdfminer",
+         "python-pptx": "pptx",
+         "openpyxl": "openpyxl",
+     }
+     missing = []
+     for pip_name, import_name in packages.items():
+         try:
+             __import__(import_name)
+         except ImportError:
+             missing.append(pip_name)
+     if missing:
+         print(f"패키지 설치 중: {', '.join(missing)}...", file=sys.stderr)
+         subprocess.check_call(
+             [sys.executable, "-m", "pip", "install", "-q", *missing],
+             stdout=subprocess.DEVNULL,
+             stderr=subprocess.DEVNULL,
+         )
+
+
+ ensure_packages()
+
+ from pdfminer.high_level import extract_text as pdf_extract_text  # noqa: E402
+ from pptx import Presentation  # noqa: E402
+ from openpyxl import load_workbook  # noqa: E402
+
+
+ # ── Korean charset helpers ──────────────────────────────────────────
+
+ KOREAN_CHARSET_MAP = {
+     "ks_c_5601-1987": "cp949",
+     "ks_c_5601": "cp949",
+     "euc_kr": "cp949",
+     "euc-kr": "cp949",
+ }
+
+
+ def fix_charset(charset: str) -> str:
+     if charset is None:
+         return "utf-8"
+     return KOREAN_CHARSET_MAP.get(charset.lower(), charset)
+
+
+ # ── EML parsing ─────────────────────────────────────────────────────
+
+
+ def parse_eml(filepath: str):
+     with open(filepath, "rb") as f:
+         msg = email.message_from_binary_file(f, policy=default_policy)
+
+     # Headers
+     headers = {
+         "subject": str(msg["Subject"] or ""),
+         "from": str(msg["From"] or ""),
+         "to": str(msg["To"] or ""),
+         "cc": str(msg["Cc"] or ""),
+         "date": str(msg["Date"] or ""),
+     }
+
+     # Body
+     body_plain = ""
+     body_html = ""
+
+     if msg.is_multipart():
+         for part in msg.walk():
+             ctype = part.get_content_type()
+             cdisp = part.get_content_disposition()
+             if cdisp == "attachment":
+                 continue
+             if ctype == "text/plain" and not body_plain:
+                 body_plain = _get_text(part)
+             elif ctype == "text/html" and not body_html:
+                 body_html = _get_text(part)
+     else:
+         body_plain = _get_text(msg)
+
+     # Attachments
+     attachments = []
+     for part in msg.walk():
+         filename = part.get_filename()
+         if not filename:
+             continue
+         cdisp = part.get_content_disposition()
+         if cdisp not in ("attachment", "inline", None):
+             continue
+         payload = part.get_payload(decode=True)
+         if payload is None:
+             continue
+         attachments.append(
+             {
+                 "filename": filename,
+                 "content_type": part.get_content_type(),
+                 "size": len(payload),
+                 "data": payload,
+             }
+         )
+
+     return headers, body_plain, body_html, attachments
+
+
+ def _get_text(part) -> str:
+     try:
+         return part.get_content()
+     except Exception:
+         payload = part.get_payload(decode=True)
+         if not payload:
+             return ""
+         charset = fix_charset(part.get_content_charset())
+         return payload.decode(charset, errors="replace")
+
+
+ # ── Attachment extractors ───────────────────────────────────────────
+
+
+ def extract_pdf(data: bytes) -> str:
+     with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as f:
+         f.write(data)
+         tmp = f.name
+     try:
+         text = pdf_extract_text(tmp)
+         return text.strip() if text else "(텍스트 추출 실패)"
+     except Exception as e:
+         return f"(PDF 파싱 오류: {e})"
+     finally:
+         os.unlink(tmp)
+
+
+ def extract_pptx(data: bytes) -> str:
+     with tempfile.NamedTemporaryFile(suffix=".pptx", delete=False) as f:
+         f.write(data)
+         tmp = f.name
+     try:
+         prs = Presentation(tmp)
+         slides = []
+         for i, slide in enumerate(prs.slides, 1):
+             lines = [f"#### 슬라이드 {i}"]
+             for shape in slide.shapes:
+                 if shape.has_text_frame:
+                     for para in shape.text_frame.paragraphs:
+                         line = "".join(run.text for run in para.runs)
+                         if line.strip():
+                             lines.append(line)
+                 if shape.has_table:
+                     header = " | ".join(
+                         cell.text for cell in shape.table.rows[0].cells
+                     )
+                     sep = " | ".join(
+                         "---" for _ in shape.table.rows[0].cells
+                     )
+                     lines.append(f"| {header} |")
+                     lines.append(f"| {sep} |")
+                     for row in list(shape.table.rows)[1:]:
+                         row_text = " | ".join(cell.text for cell in row.cells)
+                         lines.append(f"| {row_text} |")
+             if slide.has_notes_slide:
+                 notes = slide.notes_slide.notes_text_frame.text
+                 if notes.strip():
+                     lines.append(f"> 노트: {notes}")
+             slides.append("\n".join(lines))
+         return "\n\n".join(slides) if slides else "(텍스트 없음)"
+     except Exception as e:
+         return f"(PPTX 파싱 오류: {e})"
+     finally:
+         os.unlink(tmp)
+
+
+ def extract_xlsx(data: bytes) -> str:
+     with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as f:
+         f.write(data)
+         tmp = f.name
+     try:
+         wb = load_workbook(tmp, data_only=True)
+         sheets = []
+         for name in wb.sheetnames:
+             ws = wb[name]
+             lines = [f"#### 시트: {name}"]
+             rows = list(ws.iter_rows(values_only=True))
+             if not rows:
+                 lines.append("(데이터 없음)")
+                 sheets.append("\n".join(lines))
+                 continue
+             # Markdown table
+             first_row = rows[0]
+             col_count = len(first_row)
+             header = " | ".join(str(c) if c is not None else "" for c in first_row)
+             sep = " | ".join("---" for _ in range(col_count))
+             lines.append(f"| {header} |")
+             lines.append(f"| {sep} |")
+             for row in rows[1:]:
+                 vals = " | ".join(str(c) if c is not None else "" for c in row)
+                 if any(c is not None for c in row):
+                     lines.append(f"| {vals} |")
+             sheets.append("\n".join(lines))
+         return "\n\n".join(sheets) if sheets else "(데이터 없음)"
+     except Exception as e:
+         return f"(XLSX 파싱 오류: {e})"
+     finally:
+         os.unlink(tmp)
+
+
+ def extract_text_file(data: bytes) -> str:
+     for enc in ("utf-8", "cp949", "euc-kr", "latin-1"):
+         try:
+             return data.decode(enc)
+         except (UnicodeDecodeError, LookupError):
+             continue
+     return data.decode("utf-8", errors="replace")
+
+
+ # ── HTML stripping ──────────────────────────────────────────────────
+
+
+ def strip_html(text: str) -> str:
+     text = re.sub(r"<style[^>]*>.*?</style>", "", text, flags=re.DOTALL | re.I)
+     text = re.sub(r"<script[^>]*>.*?</script>", "", text, flags=re.DOTALL | re.I)
+     text = re.sub(r"<br\s*/?>", "\n", text, flags=re.I)
+     text = re.sub(r"</(?:p|div|tr|li)>", "\n", text, flags=re.I)
+     text = re.sub(r"<[^>]+>", "", text)
+     text = html.unescape(text)
+     text = re.sub(r"\n{3,}", "\n\n", text)
+     return text.strip()
+
+
+ # ── Size formatting ─────────────────────────────────────────────────
+
+
+ def fmt_size(n: int) -> str:
+     if n < 1024:
+         return f"{n} B"
+     if n < 1024 * 1024:
+         return f"{n / 1024:.1f} KB"
+     return f"{n / (1024 * 1024):.1f} MB"
+
+
+ # ── Markdown report ─────────────────────────────────────────────────
+
+ PARSEABLE_EXTS = {".pdf", ".xlsx", ".xls", ".pptx", ".txt", ".csv", ".log", ".json", ".xml", ".html", ".htm", ".md"}
+ IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp", ".svg"}
+
+
+ def build_report(filepath: str) -> str:
+     headers, body_plain, body_html, attachments = parse_eml(filepath)
+
+     out = []
+     out.append("# 이메일 분석서\n")
+     out.append(f"**원본 파일**: `{os.path.basename(filepath)}`\n")
+
+     # ── Mail info
+     out.append("## 메일 정보\n")
+     out.append("| 항목 | 내용 |")
+     out.append("|------|------|")
+     out.append(f"| **제목** | {headers['subject']} |")
+     out.append(f"| **보낸 사람** | {headers['from']} |")
+     out.append(f"| **받는 사람** | {headers['to']} |")
+     if headers["cc"]:
+         out.append(f"| **참조** | {headers['cc']} |")
+     out.append(f"| **날짜** | {headers['date']} |")
+     out.append(f"| **첨부파일** | {len(attachments)}개 |")
+     out.append("")
+
+     # ── Body
+     out.append("## 본문 내용\n")
+     body = body_plain
+     if not body and body_html:
+         body = strip_html(body_html)
+     out.append(body.strip() if body else "_(본문 없음)_")
+     out.append("")
+
+     # ── Attachments
+     if attachments:
+         out.append("## 첨부파일 분석\n")
+         out.append("| # | 파일명 | 형식 | 크기 |")
+         out.append("|---|--------|------|------|")
+         for i, a in enumerate(attachments, 1):
+             out.append(f"| {i} | {a['filename']} | {a['content_type']} | {fmt_size(a['size'])} |")
+         out.append("")
+
+         for i, a in enumerate(attachments, 1):
+             ext = Path(a["filename"]).suffix.lower()
+             out.append(f"### 첨부 {i}: {a['filename']}\n")
+
+             if ext == ".pdf":
+                 out.append(extract_pdf(a["data"]))
+             elif ext in (".xlsx", ".xls"):
+                 out.append(extract_xlsx(a["data"]))
+             elif ext == ".pptx":
+                 out.append(extract_pptx(a["data"]))
+             elif ext == ".ppt":
+                 out.append("_(.ppt 레거시 형식 미지원, .pptx만 지원)_")
+             elif ext in (".txt", ".csv", ".log", ".json", ".xml", ".html", ".htm", ".md"):
+                 out.append(f"```\n{extract_text_file(a['data'])}\n```")
+             elif ext in IMAGE_EXTS:
+                 out.append(f"_(이미지 파일 - {fmt_size(a['size'])})_")
+             else:
+                 out.append(f"_(지원하지 않는 형식: {ext}, {fmt_size(a['size'])})_")
+             out.append("")
+
+     return "\n".join(out)
+
+
+ # ── Main ────────────────────────────────────────────────────────────
+
+ if __name__ == "__main__":
+     if len(sys.argv) < 2:
+         print("Usage: python eml-analyzer.py <eml_file_path>", file=sys.stderr)
+         sys.exit(1)
+
+     path = sys.argv[1]
+     if not os.path.isfile(path):
+         print(f"파일을 찾을 수 없습니다: {path}", file=sys.stderr)
+         sys.exit(1)
+
+     print(build_report(path))
@@ -90,7 +90,8 @@ digraph process {
  "Re-review failed aspects (parallel sub-Task)" -> "Any issues?";
  "Any issues?" -> "Report results" [label="no"];
  "Report results" -> "More batches?";
- "More batches?" -> "Implement the task" [label="yes, next batch"];
+ "More batches?" -> "Batch integration check (typecheck + lint)" [label="yes"];
+ "Batch integration check (typecheck + lint)" -> "Implement the task" [label="next batch"];
  "More batches?" -> "Task: final review for entire implementation" [label="no"];
  "Task: final review for entire implementation" -> "Done";
  }
@@ -139,14 +140,16 @@ You are implementing and reviewing Task N: [task name]
  2. Write tests (following TDD if task says to)
  3. Verify implementation works
  4. Self-review: did I implement everything? Did I over-build?
- 5. Launch TWO parallel sub-Tasks (spec review + quality review):
+ 5. Commit your work (record the BASE_SHA before and HEAD_SHA after)
+ 6. Launch TWO parallel sub-Tasks (spec review + quality review):
    - Sub-Task 1: spec reviewer — send spec-reviewer-prompt.md based prompt
-   - Sub-Task 2: quality reviewer — send code-quality-reviewer-prompt.md based prompt
- 6. If either reviewer finds issues → fix them → re-review only failed aspects (parallel sub-Tasks again)
- 7. Repeat until both reviewers approve
- 8. Report back with: what you implemented, test results, files changed, review outcomes
+   - Sub-Task 2: quality reviewer — send code-quality-reviewer-prompt.md based prompt, include BASE_SHA and HEAD_SHA
+ 7. If either reviewer finds issues → fix them → re-review only failed aspects (parallel sub-Tasks again)
+ 8. Repeat until both reviewers approve
+ 9. Report back with: what you implemented, test results, files changed, commit SHA, review outcomes
 
  If you have questions about requirements — return them immediately WITHOUT implementing. Don't guess.
+ If you encounter unexpected issues mid-implementation — ask rather than guess.
 
  Work from: [directory]
  ```
@@ -230,6 +233,17 @@ Final reviewer: All requirements met, ready to merge
  Done!
  ```
 
+ ## Batch Integration Check
+
+ Between batches, run targeted verification on affected packages before starting the next batch:
+
+ ```bash
+ pnpm typecheck [affected packages]
+ pnpm lint [affected packages]
+ ```
+
+ This catches cross-task integration issues early — especially when the next batch depends on the current batch's output. Do NOT skip this even if individual task reviews passed.
+
  ## Red Flags
 
  **Never:**
@@ -242,6 +256,7 @@ Done!
  - Skip scene-setting context
  - Accept "close enough" on spec compliance
  - Skip review loops (issue found → fix → re-review)
+ - Skip batch integration checks between batches
 
  **If task agent returns questions:**
  - Answer clearly and completely
@@ -253,6 +268,13 @@ Done!
  - Re-review only the failed aspects (parallel sub-Tasks)
  - Repeat until both approved
 
+ **If task agent fails or times out:**
+ - Do NOT silently proceed — the affected files may be in an indeterminate state
+ - Check if other tasks in the same batch depend on the failed task's output
+ - Independent tasks' results still stand
+ - Escalate to user with specific error details before proceeding
+ - Do NOT re-launch on potentially partially-modified files without inspection
+
  ## After Completion
 
  When all tasks and final review are done, if the current working directory is inside a worktree (`.worktrees/`), guide the user to:
@@ -10,9 +10,16 @@ You are reviewing code quality for a completed implementation.
  [Paste the implementer's report: files changed, what they built]
 
- ## Changed Files
+ ## Review Scope
 
- [List all files to review]
+ Use git diff to review only what changed:
+ ```
+ git diff [BASE_SHA]..[HEAD_SHA]
+ ```
+ BASE_SHA: [commit before task started]
+ HEAD_SHA: [implementer's commit SHA from report]
+
+ Focus your review on the diff output. Read surrounding code for context only when needed.
 
  ## Your Job
 
@@ -20,24 +20,30 @@ You are implementing Task [N]: [task name]
  If anything is unclear about requirements or approach, return your questions under a `## Questions` heading and STOP. Do not guess — do not implement.
 
+ ## While You Work
+
+ If you encounter something unexpected mid-implementation (missing APIs, unexpected patterns, ambiguous behavior), **ask questions rather than guess**. Return your questions under `## Questions` and STOP. It's always OK to pause and clarify.
+
  ## Your Job
 
  1. Implement exactly what the task specifies — nothing more, nothing less
  2. Write tests (follow TDD if the plan says to)
  3. Verify: tests pass, no type errors
  4. Self-review:
-    - Every requirement implemented?
-    - Nothing overbuilt (YAGNI)?
-    - Names clear, code clean?
-    - Following project conventions?
+    - **Completeness**: Every requirement implemented? Edge cases handled?
+    - **Quality**: Names clear? Code clean and maintainable?
+    - **Discipline**: Nothing overbuilt (YAGNI)? Only what was requested?
+    - **Testing**: Tests verify behavior (not implementation)? Comprehensive?
  5. Fix anything found in self-review
- 6. Report back
+ 6. Commit your work with a descriptive message (this is required for review)
+ 7. Report back
 
  Work from: [directory path]
 
  ## Report
 
  When done, provide:
+ - Commit SHA (from step 6)
  - Files created/modified (with brief description of changes)
  - Test results
  - Self-review findings (if any were fixed)
@@ -20,15 +20,26 @@ Read the ACTUAL CODE. Do NOT trust the report — verify everything independentl
  ### Checklist
 
- 1. **Every requirement implemented?** Compare spec line by line against code.
- 2. **Nothing extra?** Did the implementer build things not in the spec?
- 3. **Correct interpretation?** Did they solve the right problem?
- 4. **Tests exist?** Do tests verify the specified behavior?
- 5. **Exports correct?** New public APIs exported in the package's index.ts?
+ Categorize every finding as:
+
+ **MISSING** — requirement in spec but absent from code:
+ 1. Compare spec line by line against code. Every requirement present?
+ 2. Tests exist for each specified behavior?
+ 3. New public APIs exported in the package's index.ts?
+
+ **EXTRA** — code present but not in spec:
+ 4. Did the implementer build things not requested? (Public methods, new exports, "nice to have" features)
+ 5. Private helpers are OK; public API additions without spec approval are not.
+
+ **WRONG** — present but incorrectly implemented:
+ 6. Did they solve the right problem? Correct interpretation of requirements?
+ 7. Do test assertions match spec expectations (not just implementation behavior)?
 
  ### Report
 
- - ✅ APPROVED — all requirements verified in code
+ - ✅ APPROVED — all requirements verified in code, nothing extra
  - ❌ CHANGES_NEEDED:
-   - [What's missing/wrong/extra with file:line references]
+   - MISSING: [requirement not implemented] (file:line)
+   - EXTRA: [built but not requested] (file:line)
+   - WRONG: [incorrect interpretation] (file:line)
  ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@simplysm/sd-claude",
-   "version": "13.0.44",
+   "version": "13.0.45",
    "description": "Simplysm Claude Code CLI — asset installer and cross-platform npx wrapper",
    "author": "김석래",
    "license": "Apache-2.0",