@simplysm/sd-claude 13.0.76 → 13.0.78
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/claude/refs/sd-code-conventions.md +11 -3
- package/claude/refs/sd-solid.md +11 -2
- package/claude/rules/sd-claude-rules.md +6 -10
- package/claude/sd-statusline.js +7 -7
- package/claude/skills/sd-api-name-review/SKILL.md +103 -17
- package/claude/skills/sd-brainstorm/SKILL.md +32 -47
- package/claude/skills/sd-check/SKILL.md +14 -16
- package/claude/skills/sd-commit/SKILL.md +1 -3
- package/claude/skills/sd-debug/SKILL.md +5 -11
- package/claude/skills/sd-debug/condition-based-waiting.md +5 -11
- package/claude/skills/sd-debug/root-cause-tracing.md +18 -33
- package/claude/skills/sd-explore/SKILL.md +86 -44
- package/claude/skills/sd-plan/SKILL.md +0 -1
- package/claude/skills/sd-plan-dev/SKILL.md +48 -82
- package/claude/skills/sd-review/SKILL.md +107 -80
- package/claude/skills/sd-review/api-reviewer-prompt.md +23 -43
- package/claude/skills/sd-review/code-reviewer-prompt.md +26 -35
- package/claude/skills/sd-review/convention-checker-prompt.md +23 -26
- package/claude/skills/sd-review/refactoring-analyzer-prompt.md +92 -0
- package/claude/skills/sd-skill/SKILL.md +10 -16
- package/claude/skills/sd-skill/writing-guide.md +7 -11
- package/claude/skills/sd-tdd/SKILL.md +15 -20
- package/claude/skills/sd-use/SKILL.md +3 -4
- package/claude/skills/sd-worktree/SKILL.md +58 -113
- package/package.json +1 -1
- package/claude/skills/sd-review/code-simplifier-prompt.md +0 -95
- package/claude/skills/sd-review/structure-analyzer-prompt.md +0 -97
- package/claude/skills/sd-worktree/sd-worktree.mjs +0 -152
@@ -19,6 +19,7 @@ Before using extension methods: Verify actual existence in `@simplysm/core-common
 
 - Do not use `Async` suffix on function names — Async is the default
 - When both sync and async versions exist, use `Sync` suffix on the sync function
+- **Exception — extensions**: When adding an async version to an existing prototype (e.g., `Array`), follow the original naming convention. If the sync method already exists without a `Sync` suffix, use `Async` suffix for the async version.
 
 ```typescript
 // Good
@@ -27,8 +28,12 @@ function readFileSync() { ... } // Sync version
 
 // Bad
 async function readFileAsync() { ... } // Async suffix prohibited
+
+// Exception — Array extension already has mapMany()
+Array.prototype.mapManyAsync = async function () { ... } // OK
 ```
 
+
 ## File Naming
 
 - Auxiliary files (`types.ts`, `utils.ts`, etc.) must be prefixed with the main file name (e.g., `CrudSheet.types.ts`)
@@ -46,10 +51,14 @@ async function readFileAsync() { ... } // Async suffix prohibited
 
 ## index.ts Export Pattern
 
-
-- Small packages (≤10 exports): `//` comments only
+- Use `//` comments to group exports
 - Always `export *` (wildcard), never explicit `export type { ... } from "..."`
 
+## `#region` / `#endregion`
+
+- When splitting a large ts/tsx file has a bigger tradeoff than keeping it as-is, use `#region`/`#endregion` to organize sections within the file
+- Do not use in simple export files like index.ts
+
 ## `any` vs `unknown` vs Generics
 
 Choose based on **what you do with the value**:
@@ -87,7 +96,6 @@ function wrapValue<T>(value: T): { value: T } {
 
 - API changes must be detectable via **typecheck alone** — all affected usage sites must show compile errors
 - Public component props must support **IDE intellisense** (autocomplete, type hints)
-- **No `any` in public-facing types** — use generics or specific union types instead
 - **No `Record<string, any>` for structured props** — define explicit interfaces so consumers get autocomplete
 
 ```typescript
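The convention in the hunk above can be sketched in runnable TypeScript; `readFile`, `readFileSync`, and the `mapManyAsync` extension are illustrative names, not APIs of this package:

```typescript
// Async is the default: the async variant gets the bare name.
async function readFile(path: string): Promise<string> {
  return `contents of ${path}`; // stand-in for real I/O
}

// When a sync variant also exists, it carries the `Sync` suffix.
function readFileSync(path: string): string {
  return `contents of ${path}`;
}

// Exception: the prototype already follows a no-suffix sync convention,
// so the async addition takes the `Async` suffix instead.
declare global {
  interface Array<T> {
    mapManyAsync<U>(fn: (item: T) => Promise<U[]>): Promise<U[]>;
  }
}

Array.prototype.mapManyAsync = async function (fn: (item: any) => Promise<any[]>) {
  const nested = await Promise.all(this.map(fn));
  return nested.flat();
};

export {};
```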
package/claude/refs/sd-solid.md
CHANGED

@@ -34,8 +34,17 @@
 
 All sub-components via dot notation only (`Parent.Child`).
 
-
-
+- Export using `Object.assign` pattern:
+  ```ts
+  export const Select = Object.assign(SelectInnerComponent, {
+    Item: SelectItem,
+    Header: SelectHeader,
+    Action: SelectAction,
+    ItemTemplate: SelectItemTemplate,
+  });
+  ```
+- Do NOT declare a separate type or interface for the compound component (e.g., `SelectComponent`, `TabsComponent`)
+- Do NOT use type assertions on the export (e.g., `as SelectComponent`)
 - Don't export sub-components separately (export parent only)
 - UI elements → compound sub-components, non-rendering config (state, behavior, callbacks) → props
 
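The `Object.assign` export pattern shown in the hunk above can be sketched without SolidJS; the component bodies here are hypothetical string-returning stand-ins, and only the export shape follows the diff:

```typescript
// Hypothetical stand-ins for the real SolidJS components.
function SelectInnerComponent(props: { value?: string }): string {
  return `<select value="${props.value ?? ""}">`;
}
function SelectItem(props: { value: string }): string {
  return `<option>${props.value}</option>`;
}

// Attach sub-components with Object.assign. No separate `SelectComponent`
// interface and no `as` assertion: TypeScript infers the compound type
// (callable function plus `Item` property) from Object.assign itself.
export const Select = Object.assign(SelectInnerComponent, {
  Item: SelectItem,
});

// Consumers reach sub-components via dot notation only: Select.Item.
```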
@@ -35,7 +35,7 @@ If a referenced file or document cannot be found, **stop immediately and ask the
 - **Do NOT comment on code outside the requested change.** This includes:
   - Listing issues you noticed but did not fix
   - Describing what you "left alone" or "did not change"
-  - "
+  - "reference", "suggestions", "by the way", "note", "what I left alone"
   - Any unsolicited observations about surrounding code quality
 - Only describe **what you changed** — nothing else
 
@@ -48,17 +48,13 @@ If a referenced file or document cannot be found, **stop immediately and ask the
 
 - When the user provides a specific action (e.g., "rename X to Y", "delete this file"), **execute it directly**. Do not route through skill agents or sub-agent workflows for trivial operations.
 
-## Worktree
+## Worktree Rules
 
-
+All git worktrees MUST be created under the **`.worktrees/`** directory (project root). Never use `.claude/worktrees/` or any other location.
 
-
-
-
-- If you need to run agents, run them **without** the `isolation` parameter.
-- If the user explicitly requests a worktree:
-  - Create it under the **`.worktree/`** directory (project root), NOT `.claude/worktrees/`.
-  - **After work is complete, you MUST delete the worktree and its branch** before finishing. No worktree may be left behind.
+- When using `isolation: "worktree"` on Agent tool calls, the worktree is created in `.worktrees/`.
+- **After work is complete, you MUST delete the worktree and its branch** before finishing. No worktree may be left behind.
+- Prefer using the **`/sd-worktree`** skill for worktree creation/deletion. It includes fixes for Claude Code's built-in worktree bugs.
 
 ## Asking Clarifying Questions
 
package/claude/sd-statusline.js
CHANGED

@@ -8,7 +8,7 @@ import { stdin } from "process";
 
 const STDIN_TIMEOUT_MS = 5000;
 const FETCH_TIMEOUT_MS = 3000;
-const CACHE_TTL_MS = 60_000; // 1
+const CACHE_TTL_MS = 60_000; // 1 minutes
 const CACHE_PATH = path.join(os.homedir(), ".claude", "usage-api-cache.json");
 
 //#endregion
@@ -116,7 +116,7 @@ function writeCache(data) {
 async function fetchUsage(token, version) {
   // Return cached data if still valid
   const cache = readCache();
-  if (cache != null &&
+  if (cache != null && Date.now() - cache.timestamp < CACHE_TTL_MS) {
     return cache.data;
   }
 
@@ -127,8 +127,8 @@ async function fetchUsage(token, version) {
   const response = await fetch("https://api.anthropic.com/api/oauth/usage", {
     headers: {
       "Authorization": `Bearer ${token}`,
-      "
-
+      "Accept": "application/json",
+      'anthropic-beta': 'oauth-2025-04-20'
     },
     signal: controller.signal,
   });
@@ -138,14 +138,14 @@ async function fetchUsage(token, version) {
   if (!response.ok) {
     // API failed — update timestamp to prevent retry for TTL duration
    writeCache(cache?.data ?? {});
-    return
+    return undefined;
   }
 
   const data = await response.json();
 
   if (data == null || typeof data !== "object") {
     writeCache(cache?.data ?? {});
-    return
+    return undefined;
   }
 
   writeCache(data);
@@ -153,7 +153,7 @@ async function fetchUsage(token, version) {
   } catch {
     // Network error — update timestamp to prevent retry for TTL duration
     writeCache(cache?.data ?? {});
-    return
+    return undefined;
   }
 }
 
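The TTL guard added to `fetchUsage` (`Date.now() - cache.timestamp < CACHE_TTL_MS`) is a common caching pattern. A minimal in-memory sketch, assuming the `{ timestamp, data }` entry shape implied by `readCache`/`writeCache` (the helper names here are illustrative):

```typescript
const CACHE_TTL_MS = 60_000; // 1 minute

interface CacheEntry<T> {
  timestamp: number; // ms since epoch, set on write
  data: T;
}

let cache: CacheEntry<string> | undefined;

function writeCache(data: string): void {
  cache = { timestamp: Date.now(), data };
}

// Mirrors the guard from the diff: serve cached data while the entry
// is younger than the TTL, otherwise signal that a refetch is needed.
function readCached(): string | undefined {
  if (cache != null && Date.now() - cache.timestamp < CACHE_TTL_MS) {
    return cache.data;
  }
  return undefined; // caller fetches, then calls writeCache again
}
```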
@@ -8,32 +8,97 @@ model: sonnet
 
 ## Overview
 
-Compare a library/module's public API names against industry standards and review internal consistency, producing a standardization report. **
+Compare a library/module's public API names against industry standards and review internal consistency, producing a standardization report. Uses **sd-explore** to extract the API surface, then dispatches research agents for industry comparison.
+
+**Analysis only — no code modifications.**
+
+## Principles
+
+- **Breaking changes are irrelevant**: Do NOT dismiss findings because renaming would cause a breaking change.
+- **Internal consistency first**: Internal naming consistency takes priority over external standards.
+
+## Usage
+
+- `/sd-api-name-review packages/solid` — full naming review
+- `/sd-api-name-review packages/orm-common` — review specific package
+- `/sd-api-name-review` — if no argument, ask the user for the target path
 
 ## Target Selection
 
-
-
+- With argument: review source code at the given path
+- Without argument: ask the user for the target path
+
+**Important:** Review ALL source files under the target path. Do not use git status or git diff to limit scope.
+
+## Workflow
+
+### Step 1: Prepare Context
+
+Read these files:
+- `CLAUDE.md` — project overview
+- `.claude/rules/sd-refs-linker.md` — reference guide
+- Target's `package.json` — version (v12/v13)
+
+Based on version and target, read all applicable reference files (e.g., `sd-code-conventions.md`, `sd-solid.md`).
+
+Keep the collected conventions in memory — they will inform the analysis in later steps.
+
+### Step 2: API Extraction (via sd-explore)
+
+Follow the **sd-explore** workflow to extract the target's public API surface.
 
-
+**sd-explore input:**
 
-
+- **Target path**: the review target directory
+- **Name**: `api-name-review`
+- **File patterns**: `**/*.ts`, `**/*.tsx` (exclude `node_modules`, `dist`)
+- **Analysis instructions**:
 
+"For each file, extract its public API surface:
 - All exported identifiers (functions, classes, types, constants, etc.)
 - Names and types of user-facing parameters/options/config
 - Naming pattern classification (prefixes, suffixes, verb/adjective/noun usage, abbreviations, etc.)
 
-
+Output format:
+```
+# API Surface: [directory names]
 
-
+## Exports
+- `path/to/file.ts` — `exportName`: type (function/class/type/const), signature summary
+
+## Naming Patterns
+- Pattern: description (e.g., 'create-' prefix for factory functions)
+- Examples: list of identifiers using this pattern
+```
+"
+
+### Step 3: Industry Standard Research
+
+Based on Step 2 results:
 
 1. Identify **recurring naming patterns** from the extracted API
 2. Determine the target's domain and tech stack to **select comparable libraries**
-3.
+3. Dispatch **parallel agents** to web-search/fetch official docs for each comparable library, investigating naming conventions for the same pattern categories
+
+Each research agent receives:
+
+```
+Research naming conventions in [library name] for these pattern categories:
+[list of patterns from Step 2]
+
+For each pattern, document:
+- What naming convention the library uses
+- Specific examples from the API
+- Any documented rationale for the convention
+
+Write results to: .tmp/api-name-review/research-{library_name}.md
+```
 
-
+### Step 4: Comparative Analysis & Verification
 
-Cross-compare
+Cross-compare Step 2 (API surface) and Step 3 (industry research) results.
+
+Classify each naming pattern:
 
 | Priority | Criteria |
 | -------- | ------------------------------------------------------ |
@@ -42,9 +107,11 @@ Cross-compare Phase 1 and Phase 2 results and classify each item:
 | **P2**   | Better industry term exists (optional) |
 | **Keep** | Already aligned with standards |
 
-
+**MANDATORY: Read actual code for EVERY finding.** For each finding, `Read` the file at the referenced location before finalizing. Do NOT rely on explore descriptions alone — verify against the actual code.
+
+Each finding includes: current name, recommended change, rationale (usage patterns per library).
 
-
+### Step 5: Report & User Confirmation
 
 Present **Keep** items to the user as a summary.
 
@@ -57,12 +124,31 @@ For each finding, explain:
 
 Collect only findings the user confirms. If the user skips all findings, report that and end.
 
-
+### Step 6: Brainstorm Handoff
+
+Invoke **sd-brainstorm** with all user-confirmed findings as context:
+
+_
+"Design naming changes for the following review findings.
+
+**For each finding, you MUST:**
+1. Review it thoroughly — examine the code, understand the context, assess the real impact
+2. If any aspect is unclear or ambiguous, ask the user (one question at a time, per brainstorm rules)
+3. If a finding has low cost-benefit (adds complexity for marginal gain, pure style preference, scope too small), drop it. After triage, briefly list all dropped findings with one-line reasons (no user confirmation needed).
+4. For findings worth fixing, explore approaches and design solutions
+
+Findings that survive your triage become the design scope. Apply your normal brainstorm process (gap review → approaches → design presentation) to the surviving findings as a group.
 
-
+<include all confirmed findings with their priority, file:line, current name, recommended name, and rationale>"
 
-sd-brainstorm
+sd-brainstorm then owns the full cycle: triage (with user input as needed) → design.
 
-##
+## Common Mistakes
 
-
+| Mistake | Fix |
+|---------|-----|
+| Using git diff to limit scope | Review ALL source files under target |
+| Skipping context preparation | Always read conventions and refs before analysis |
+| Skipping verification | Always verify findings against actual code |
+| Dismissing findings due to breaking changes | Breaking changes are irrelevant — report the naming issue |
+| Not writing research results to files | Research agents MUST write to disk — prevents context bloat |
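The naming-pattern classification that Step 2 asks the explore agent to produce can be sketched mechanically; `classifyNames` and its pattern list are illustrative, not the skill's implementation:

```typescript
// Bucket exported identifiers by recurring naming pattern, e.g.
// "create-" prefixed factories or "-Sync"/"-Async" suffixed functions.
function classifyNames(identifiers: string[]): Map<string, string[]> {
  const patterns = new Map<string, string[]>();
  const add = (pattern: string, name: string) => {
    patterns.set(pattern, [...(patterns.get(pattern) ?? []), name]);
  };
  for (const name of identifiers) {
    if (name.startsWith("create")) add("create- prefix (factory)", name);
    if (name.endsWith("Sync")) add("-Sync suffix", name);
    if (name.endsWith("Async")) add("-Async suffix", name);
  }
  return patterns;
}
```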
@@ -14,7 +14,7 @@ Start by understanding the current project context, then ask questions one at a
 ## The Process
 
 **Understanding the idea:**
-- Check out the current project state first (files, docs, recent commits).
+- Check out the current project state first (files, docs, recent commits).
 - Ask questions one at a time to refine the idea
 - Prefer multiple choice questions when possible, but open-ended is fine too
 - Only one question per message - if a topic needs more exploration, break it into multiple questions
@@ -22,24 +22,16 @@ Start by understanding the current project context, then ask questions one at a
 
 **When a main design document is provided as context:**
 
-```
-
-
-
-
-
-
-
-
-
-has_main -> normal [label="no"];
-has_main -> section_specified [label="yes"];
-section_specified -> show_progress [label="no"];
-section_specified -> prereqs_ok [label="yes"];
-prereqs_ok -> proceed [label="yes"];
-prereqs_ok -> warn [label="no"];
-warn -> proceed [label="user: proceed"];
-}
+```mermaid
+flowchart TD
+  A{"Main design with<br>section plan in context?"}
+  A -->|no| B[Normal brainstorm]
+  A -->|yes| C{Section specified?}
+  C -->|no| D["Show section progress<br>Ask which section<br>(suggest next incomplete)"]
+  C -->|yes| E{"Prerequisites<br>complete?"}
+  E -->|yes| F[Proceed with section]
+  E -->|no| G["Warn prerequisites incomplete<br>Ask: proceed anyway<br>or complete first?"]
+  G -->|"user: proceed"| F
 ```
 
 When proceeding with a section:
@@ -101,37 +93,31 @@ If your first gap review shows all ✅:
 - Present options conversationally with your recommendation and reasoning
 - Lead with your recommended option and explain why
 
-**Presenting the design:**
-- Once you believe you understand what you're building, present the design
-- Break it into sections of 200-300 words
-- Ask after each section whether it looks right so far
-- Cover: architecture, components, data flow, error handling, testing
-- Be ready to go back and clarify if something doesn't make sense
-
 **Scale assessment:**
 
-After the
-
-```
-
-
-manageable [
-
-
-
-
-
-
-assess -> manageable [label="manageable"];
-assess -> propose [label="large"];
-propose -> user_choice;
-user_choice -> manageable [label="proceed as-is"];
-user_choice -> division [label="split"];
-division -> save [label="user selects"];
-save -> guide;
-}
+After the approach is selected, assess scale (file count, logic complexity, number of distinct subsystems, scope of impact):
+
+```mermaid
+flowchart TD
+  A{"Assess design scale"}
+  A -->|manageable| B["Proceed to<br>After the Design<br>(Path A/B)"]
+  A -->|large| C["Propose to user:<br>proceed as-is OR<br>split into sections"]
+  C --> D{"User choice?"}
+  D -->|"proceed as-is"| B
+  D -->|split| E["Propose 2-3 section<br>division approaches<br>(by feature/layer/dependency)"]
+  E -->|"user selects"| F["Append section plan<br>to design doc<br>Save + commit"]
+  F --> G["Show section guide<br>Brainstorm ENDS"]
 ```
 
+**How to present the split proposal:**
+
+When proposing the split to the user, you MUST clearly explain what "section split" means:
+
+- **Section split** = the design document is divided into sections, and each section goes through its own **separate brainstorm → plan → plan-dev → check → commit cycle**.
+- This is NOT about implementation phasing (doing some changes before others). It's about breaking the design work itself into independently deliverable chunks.
+- Explain: "Splitting into sections means each section goes through its own brainstorm → plan → plan-dev cycle. Complete and commit one section before moving to the next."
+- Contrast with: "Proceeding as-is means this single design document goes straight to plan → plan-dev."
+
 **Section plan format** (append to existing design content as-is):
 
 ```markdown
@@ -226,5 +212,4 @@ You can start from any step or skip steps as needed.
 - **Multiple choice preferred** - Easier to answer than open-ended when possible
 - **YAGNI ruthlessly** - Remove unnecessary features from all designs
 - **Explore alternatives** - Always propose 2-3 approaches before settling
-- **Incremental validation** - Present design in sections, validate each
 - **Be flexible** - Go back and clarify when something doesn't make sense
@@ -33,26 +33,24 @@ Multiple types: `--type typecheck,lint`. No path = full project. No type = all c
 
 ## Workflow
 
-```
-
-
-
-"
-
-
-
-
-"Run check" -> "All passed?";
-"All passed?" -> "Report results → done" [label="yes"];
-"All passed?" -> "Fix errors\n(typecheck → lint → test)" [label="no"];
-"Fix errors\n(typecheck → lint → test)" -> "Stuck after 2-3 tries?";
-"Stuck after 2-3 tries?" -> "Run check" [label="no"];
-"Stuck after 2-3 tries?" -> "Recommend /sd-debug" [label="yes"];
-}
+```mermaid
+flowchart TD
+  A[Run check] --> B{All passed?}
+  B -->|yes| C[Report results → done]
+  B -->|no| D["Fix errors (typecheck → lint → test)"]
+  D --> E{Stuck after 2-3 tries?}
+  E -->|no| A
+  E -->|yes| F[Recommend /sd-debug]
 ```
 
 **Run command:** `$PM run check [path] [--type type]` (timeout: 600000)
 
+- **Output capture:** Bash truncates long output. Always redirect to a file and read it:
+  ```bash
+  mkdir -p .tmp && $PM run check [path] [--type type] > .tmp/check-output.txt 2>&1; echo "EXIT:$?"
+  ```
+  Then use the **Read** tool on `.tmp/check-output.txt` to see the full result. Check `EXIT:0` for success or non-zero for failure.
+
 **Fixing errors:**
 - **Before fixing any code**: Read `.claude/refs/sd-code-conventions.md` and check `.claude/rules/sd-refs-linker.md` for additional refs relevant to the affected code area (e.g., `sd-solid.md` for SolidJS, `sd-orm.md` for ORM). Fixing errors does NOT exempt you from following project conventions.
 - Test failures: **MUST** run `git log` to decide — update test or fix source
@@ -49,7 +49,7 @@ type(scope): short description
 | ------------- | ---------------------------------------------------------------------------- |
 | `type`        | `feat`, `fix`, `refactor`, `docs`, `test`, `chore`, `build`, `style`, `perf` |
 | `scope`       | package name or area (e.g., `solid`, `core-common`, `orm-node`) |
-| `description` |
+| `description` | English, imperative, lowercase, no period at end |
 
 Examples:
 
@@ -57,8 +57,6 @@ Examples:
 - `fix(orm-node): handle null values in bulk insert`
 - `docs: update README with new API examples`
 
-> **Note:** The examples above are in English for reference only. The actual description MUST be written in the system's configured language.
-
 Use a HEREDOC for multi-line messages when needed.
 
 ## Execution
@@ -197,17 +197,11 @@ You MUST complete each phase before proceeding to the next.
 
 4. **If Fix Doesn't Work**
 
-```
-
-"Fix failed?"
-"
-"
-"STOP: Question Architecture\n→ Discuss with user first" [shape=box];
-
-"Fix failed?" -> "Attempts < 3?";
-"Attempts < 3?" -> "Phase 1: Re-analyze\nwith new information" [label="yes"];
-"Attempts < 3?" -> "STOP: Question Architecture\n→ Discuss with user first" [label="no (≥3)"];
-}
+```mermaid
+flowchart TD
+  A{"Fix failed?"} --> B{"Attempts < 3?"}
+  B -->|yes| C["Phase 1: Re-analyze<br>with new information"]
+  B -->|"no (≥3)"| D["STOP: Question Architecture<br>→ Discuss with user first"]
 ```
 
 **Signs of architectural problem (≥3 failures):**
@@ -8,17 +8,11 @@ Flaky tests often guess at timing with arbitrary delays. This creates race condi
 
 ## When to Use
 
-```
-
-"Test uses setTimeout/sleep?"
-
-
-"Use condition-based waiting" [shape=box];
-
-"Test uses setTimeout/sleep?" -> "Testing timing behavior?" [label="yes"];
-"Testing timing behavior?" -> "Document WHY timeout needed" [label="yes"];
-"Testing timing behavior?" -> "Use condition-based waiting" [label="no"];
-}
+```mermaid
+flowchart TD
+  A{"Test uses setTimeout/sleep?"} -->|yes| B{"Testing timing behavior?"}
+  B -->|yes| C[Document WHY timeout needed]
+  B -->|no| D[Use condition-based waiting]
 ```
 
 **Use when:**
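The "condition-based waiting" this file describes is typically a polling helper. A minimal sketch; the `waitFor` name and its defaults are assumptions, not the skill's actual code:

```typescript
// Poll a condition until it holds, instead of sleeping a fixed time.
// Fails loudly on timeout so a hung test never passes silently.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 1000,
  intervalMs = 10,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) {
      throw new Error(`Condition not met within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

In a test, `await waitFor(() => server.isReady)` replaces an arbitrary `await sleep(500)`: the test proceeds as soon as the condition holds and fails with a clear message when it never does.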
@@ -8,19 +8,12 @@ Bugs often manifest deep in the call stack (git init in wrong directory, file cr
 
 ## When to Use
 
-```
-
-"Bug appears deep in stack?"
-
-"Fix at symptom point
-"
-"BETTER: Also add defense-in-depth" [shape=box];
-
-"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
-"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
-"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
-"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
-}
+```mermaid
+flowchart TD
+  A{"Bug appears deep in stack?"} -->|yes| B{"Can trace backwards?"}
+  B -->|yes| C[Trace to original trigger]
+  B -->|"no - dead end"| D[Fix at symptom point]
+  C --> E["BETTER: Also add defense-in-depth"]
 ```
 
 **Use when:**
@@ -142,26 +135,18 @@ Runs tests one-by-one, stops at first polluter. See script for usage.
 
 ## Key Principle
 
-```
-
-"Found immediate cause"
-
-"
-"Is this the source?"
-"
-"
-"
-
-
-
-"Can trace one level up?" -> "Trace backwards" [label="yes"];
-"Can trace one level up?" -> "NEVER fix just the symptom" [label="no"];
-"Trace backwards" -> "Is this the source?";
-"Is this the source?" -> "Trace backwards" [label="no - keeps going"];
-"Is this the source?" -> "Fix at source" [label="yes"];
-"Fix at source" -> "Add validation at each layer";
-"Add validation at each layer" -> "Bug impossible";
-}
+```mermaid
+flowchart TD
+  A(["Found immediate cause"]) --> B{"Can trace one level up?"}
+  B -->|yes| C["Trace backwards"]
+  B -->|no| D["NEVER fix just the symptom"]:::danger
+  C --> E{"Is this the source?"}
+  E -->|"no - keeps going"| C
+  E -->|yes| F["Fix at source"]
+  F --> G["Add validation at each layer"]
+  G --> H(("Bug impossible"))
+
+  classDef danger fill:#f00,color:#fff
 ```
 
 **NEVER fix just where the error appears.** Trace back to find the original trigger.