@kennethsolomon/shipkit 3.19.0 → 3.20.0
This diff compares the contents of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between versions as they appear in the public registry.
- package/README.md +25 -3
- package/package.json +1 -1
- package/skills/sk:brainstorming/SKILL.md +19 -128
- package/skills/sk:debug/SKILL.md +44 -111
- package/skills/sk:e2e/SKILL.md +45 -97
- package/skills/sk:features/SKILL.md +44 -99
- package/skills/sk:frontend-design/SKILL.md +16 -32
- package/skills/sk:lint/SKILL.md +42 -62
- package/skills/sk:mvp/SKILL.md +81 -134
- package/skills/sk:perf/SKILL.md +24 -43
- package/skills/sk:review/SKILL.md +57 -93
- package/skills/sk:security-check/SKILL.md +37 -43
- package/skills/sk:seo-audit/SKILL.md +75 -96
- package/skills/sk:setup-claude/SKILL.md +103 -0
- package/skills/sk:setup-claude/references/skill-profiles.md +201 -0
- package/skills/sk:setup-claude/templates/CLAUDE.md.template +102 -247
- package/skills/sk:setup-claude/templates/commands/brainstorm.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/execute-plan.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/finish-feature.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/security-check.md.template +1 -1
- package/skills/sk:setup-claude/templates/commands/write-plan.md.template +1 -1
- package/skills/sk:setup-optimizer/SKILL.md +85 -14
- package/skills/sk:skill-creator/SKILL.md +115 -226
- package/skills/sk:website/SKILL.md +81 -149
- package/skills/sk:write-tests/SKILL.md +44 -110
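Totaling the per-file counts above: the release adds 1246 lines and removes 1693 (net −447). A quick sketch to verify, with the (added, removed) pairs transcribed from the list above and the five one-line `templates/commands/*.md.template` changes collapsed into one entry:

```python
# (added, removed) per file, transcribed from the file list above.
counts = {
    "README.md": (25, 3),
    "package.json": (1, 1),
    "sk:brainstorming/SKILL.md": (19, 128),
    "sk:debug/SKILL.md": (44, 111),
    "sk:e2e/SKILL.md": (45, 97),
    "sk:features/SKILL.md": (44, 99),
    "sk:frontend-design/SKILL.md": (16, 32),
    "sk:lint/SKILL.md": (42, 62),
    "sk:mvp/SKILL.md": (81, 134),
    "sk:perf/SKILL.md": (24, 43),
    "sk:review/SKILL.md": (57, 93),
    "sk:security-check/SKILL.md": (37, 43),
    "sk:seo-audit/SKILL.md": (75, 96),
    "sk:setup-claude/SKILL.md": (103, 0),
    "sk:setup-claude/references/skill-profiles.md": (201, 0),
    "sk:setup-claude/templates/CLAUDE.md.template": (102, 247),
    "sk:setup-claude/templates/commands (5 files)": (5, 5),
    "sk:setup-optimizer/SKILL.md": (85, 14),
    "sk:skill-creator/SKILL.md": (115, 226),
    "sk:website/SKILL.md": (81, 149),
    "sk:write-tests/SKILL.md": (44, 110),
}
added = sum(a for a, _ in counts.values())
removed = sum(r for _, r in counts.values())
print(f"+{added} -{removed} net {added - removed:+d}")  # → +1246 -1693 net -447
```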
package/skills/sk:skill-creator/SKILL.md

@@ -5,40 +5,11 @@ description: Create new skills, modify and improve existing skills, and measure
 
 # Skill Creator
 
-
+Create and iteratively improve skills via: draft → test → evaluate → improve → repeat.
 
-
+Assess where the user is in this loop and jump in accordingly. If they already have a draft, skip to eval. If they want to skip evals entirely, that's fine too. After the skill is done, offer to run description optimization.
 
-
-- Write a draft of the skill
-- Create a few test prompts and run claude-with-access-to-the-skill on them
-- Help the user evaluate the results both qualitatively and quantitatively
-- While the runs happen in the background, draft some quantitative evals if there aren't any (if there are some, you can either use as is or modify if you feel something needs to change about them). Then explain them to the user (or if they already existed, explain the ones that already exist)
-- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
-- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
-- Repeat until you're satisfied
-- Expand the test set and try again at larger scale
-
-Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
-
-On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
-
-Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
-
-Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
-
-Cool? Cool.
-
-## Communicating with the user
-
-The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
-
-So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
-
-- "evaluation" and "benchmark" are borderline, but OK
-- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
-
-It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it.
+Adapt communication to user familiarity — briefly define "JSON", "assertion", etc. if context suggests unfamiliarity.
 
 ---
 
@@ -46,27 +17,24 @@ It's OK to briefly explain terms if you're in doubt, and feel free to clarify te
 
 ### Capture Intent
 
-
+If the current conversation already contains a workflow to capture, extract tools, steps, corrections, and I/O formats from history first. User confirms before proceeding.
 
 1. What should this skill enable Claude to do?
-2. When should
+2. When should it trigger? (phrases/contexts)
 3. What's the expected output format?
-4.
+4. Do we need test cases? Suggest based on skill type: objectively verifiable outputs (transforms, extraction, codegen, fixed workflows) → yes. Subjective outputs (writing style, art) → usually no.
 
 ### Interview and Research
 
-
-
-Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
+Ask about edge cases, I/O formats, example files, success criteria, and dependencies before writing test prompts. Check available MCPs for research; run parallel subagents if available.
 
 ### Write the SKILL.md
 
-
-
+Fill in:
 - **name**: Skill identifier
-- **description**: When to trigger
-- **compatibility**: Required tools
-- **
+- **description**: When to trigger + what it does. Primary triggering mechanism — include both function and contexts. All "when to use" info goes here, not in the body. Make descriptions slightly "pushy" to counter undertriggering: e.g., "Use this whenever the user mentions dashboards, data visualization, or wants to display any company data, even if they don't explicitly ask for a 'dashboard.'"
+- **compatibility**: Required tools/dependencies (optional, rarely needed)
+- **body**: Instructions
 
 ### Skill Writing Guide
 
@@ -85,19 +53,16 @@ skill-name/
 
 #### Progressive Disclosure
 
-
-1. **Metadata** (name + description)
-2. **SKILL.md body**
-3. **Bundled resources**
-
-These word counts are approximate and you can feel free to go longer if needed.
+Three-level loading:
+1. **Metadata** (name + description) — always in context (~100 words)
+2. **SKILL.md body** — in context when skill triggers (<500 lines ideal)
+3. **Bundled resources** — loaded as needed (unlimited; scripts can execute without loading)
 
-
--
-- Reference files clearly from SKILL.md with guidance on when to read them
+- Keep SKILL.md under 500 lines; if approaching limit, add hierarchy with clear pointers to follow-up files
+- Reference bundled files clearly with guidance on when to read them
 - For large reference files (>300 lines), include a table of contents
 
-**Domain organization**: When a skill supports multiple domains/frameworks
+**Domain organization**: When a skill supports multiple domains/frameworks:
 ```
 cloud-deploy/
 ├── SKILL.md (workflow + selection)
@@ -106,17 +71,16 @@ cloud-deploy/
 ├── gcp.md
 └── azure.md
 ```
-Claude reads only the relevant reference file.
 
-####
+#### Security
 
-
+Skills must not contain malware, exploit code, or anything that could compromise system security. Don't create misleading skills or skills designed for unauthorized access, data exfiltration, or other malicious purposes.
 
 #### Writing Patterns
 
-
+Use imperative form. Explain *why* behind instructions rather than heavy-handed MUSTs — LLMs perform better with reasoning than rote commands.
 
-**
+**Output format:**
 ```markdown
 ## Report structure
 ALWAYS use this exact template:
@@ -126,7 +90,7 @@ ALWAYS use this exact template:
 ## Recommendations
 ```
 
-**Examples
+**Examples:**
 ```markdown
 ## Commit message format
 **Example 1:**
@@ -134,15 +98,11 @@ Input: Added user authentication with JWT tokens
 Output: feat(auth): implement JWT-based authentication
 ```
 
-### Writing Style
-
-Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
-
 ### Test Cases
 
-After
+After the skill draft, write 2-3 realistic test prompts. Share with user for confirmation, then run them.
 
-Save
+Save to `evals/evals.json` (no assertions yet — draft those while runs are in progress):
 
 ```json
 {
@@ -158,34 +118,35 @@ Save test cases to `evals/evals.json`. Don't write assertions yet — just the p
 }
 ```
 
-See `references/schemas.md` for the full schema
+See `references/schemas.md` for the full schema including the `assertions` field.
+
+---
 
 ## Running and evaluating test cases
 
-
+One continuous sequence — do not stop partway. Do NOT use `/skill-test` or any other testing skill.
 
-
+Organize results in `<skill-name>-workspace/` as a sibling to the skill directory, by iteration (`iteration-1/`, `iteration-2/`, etc.) and test case (`eval-0/`, `eval-1/`, etc.). Create directories as you go.
 
-### Step 1: Spawn all runs
+### Step 1: Spawn all runs in the same turn
 
-For each test case, spawn two subagents
+For each test case, spawn two subagents simultaneously — one with the skill, one without. Launch everything at once.
 
 **With-skill run:**
-
 ```
 Execute this task:
 - Skill path: <path-to-skill>
 - Task: <eval prompt>
 - Input files: <eval files if any, or "none">
 - Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
-- Outputs to save: <what the user cares about
+- Outputs to save: <what the user cares about>
 ```
 
-**Baseline run** (
-- **
-- **Improving
+**Baseline run** (context-dependent):
+- **New skill**: no skill at all — same prompt, no skill path, save to `without_skill/outputs/`
+- **Improving existing skill**: old version — snapshot first (`cp -r <skill-path> <workspace>/skill-snapshot/`), point baseline at snapshot, save to `old_skill/outputs/`
 
-Write
+Write `eval_metadata.json` per test case (assertions empty for now). Use descriptive names for directories — not just "eval-0":
 
 ```json
 {
@@ -196,17 +157,15 @@ Write an `eval_metadata.json` for each test case (assertions can be empty for no
 }
 ```
 
-### Step 2:
+### Step 2: Draft assertions while runs are in progress
 
-Don't
+Don't wait — draft quantitative assertions and explain them to the user. Good assertions are objectively verifiable and have descriptive names. For subjective skills, don't force assertions — use qualitative review.
 
-
+Update `eval_metadata.json` and `evals/evals.json` with assertions once drafted. Explain what the user will see in the viewer.
 
-
+### Step 3: Capture timing data as runs complete
 
-
-
-When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
+When each subagent completes, save timing data immediately to `timing.json` in the run directory — this data is only available in the task notification:
 
 ```json
 {
@@ -216,24 +175,19 @@ When each subagent task completes, you receive a notification containing `total_
 }
 ```
 
-This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
-
 ### Step 4: Grade, aggregate, and launch the viewer
 
-
-
-1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
+1. **Grade** — spawn a grader subagent reading `agents/grader.md`. Save `grading.json` per run directory. Required fields: `text`, `passed`, `evidence` (not `name`/`met`/`details`). Use scripts for programmatic assertions.
 
-2. **Aggregate
+2. **Aggregate** — run from the skill-creator directory:
 ```bash
 python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
 ```
-
-Put each with_skill version before its baseline counterpart.
+Produces `benchmark.json` and `benchmark.md`. Put each `with_skill` version before its baseline counterpart. See `references/schemas.md` for manual schema.
 
-3. **
+3. **Analyst pass** — read `agents/analyzer.md` ("Analyzing Benchmark Results") to surface non-discriminating assertions, high-variance evals, and time/token tradeoffs.
 
-4. **Launch the viewer
+4. **Launch the viewer:**
 ```bash
 nohup python <skill-creator-path>/eval-viewer/generate_review.py \
 <workspace>/iteration-N \
@@ -244,45 +198,32 @@ Put each with_skill version before its baseline counterpart.
 ```
 For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
 
-**Cowork / headless environments:**
+**Cowork / headless environments:** Use `--static <output_path>` for a standalone HTML file. Feedback downloads as `feedback.json` when user clicks "Submit All Reviews" — copy it into the workspace for the next iteration.
 
-
+Use `generate_review.py` — do not write custom HTML.
 
-5.
+5. Tell the user: "I've opened the results in your browser. 'Outputs' tab lets you review each test case and leave feedback; 'Benchmark' shows quantitative comparison. Come back when done."
 
-
+**Viewer layout:**
+- **Outputs tab**: Prompt, Output, Previous Output (iter 2+, collapsed), Formal Grades (collapsed), Feedback textbox, Previous Feedback (iter 2+)
+- **Benchmark tab**: Pass rates, timing, token usage per configuration, per-eval breakdowns, analyst observations
+- Navigation: prev/next or arrow keys; "Submit All Reviews" saves `feedback.json`
 
-
-- **Prompt**: the task that was given
-- **Output**: the files the skill produced, rendered inline where possible
-- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
-- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
-- **Feedback**: a textbox that auto-saves as they type
-- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
-
-The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
-
-Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
-
-### Step 5: Read the feedback
-
-When the user tells you they're done, read `feedback.json`:
+### Step 5: Read feedback
 
 ```json
 {
 "reviews": [
 {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
-{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."}
-{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
+{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."}
 ],
 "status": "complete"
 }
 ```
 
-Empty feedback
-
-Kill the viewer server when you're done with it:
+Empty feedback = the user thought it was fine. Focus on test cases with specific complaints.
 
+Kill the viewer when done:
 ```bash
 kill $VIEWER_PID 2>/dev/null
 ```
@@ -291,52 +232,43 @@ kill $VIEWER_PID 2>/dev/null
 
 ## Improving the skill
 
-
-
-### How to think about improvements
+### Improvement principles
 
-1. **Generalize
+1. **Generalize, don't overfit.** Skills run across millions of diverse prompts. Avoid fiddly or over-constrictive changes. If a stubborn issue persists, try different metaphors or working patterns.
 
-2. **Keep the prompt lean.** Remove
+2. **Keep the prompt lean.** Remove instructions that aren't pulling their weight. Read transcripts — if the model wastes time on unproductive steps, remove the instructions causing it.
 
-3. **Explain the why.**
+3. **Explain the why.** Write *why* something matters, not just *what* to do. Avoid all-caps ALWAYS/NEVER; reframe with reasoning instead. LLMs respond better to rationale than rigid commands.
 
-4. **
+4. **Bundle repeated work.** If all test cases resulted in subagents writing similar helper scripts, bundle the script in `scripts/` and reference it from the skill.
 
-
+### Iteration loop
 
-
+1. Apply improvements to the skill
+2. Rerun all test cases into `iteration-<N+1>/`, including baselines (new skill → `without_skill`; improving → use judgment on whether baseline is original or previous iteration)
+3. Launch viewer with `--previous-workspace` pointing at previous iteration
+4. Wait for user review, read feedback, improve again
 
-
-
-
-
-3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
-4. Wait for the user to review and tell you they're done
-5. Read the new feedback, improve again, repeat
-
-Keep going until:
-- The user says they're happy
-- The feedback is all empty (everything looks good)
-- You're not making meaningful progress
+Stop when:
+- User is satisfied
+- All feedback is empty
+- No meaningful progress is being made
 
 ---
 
 ## Advanced: Blind comparison
 
-For
-
-This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
+For rigorous A/B comparison between two skill versions, read `agents/comparator.md` and `agents/analyzer.md`. An independent agent judges outputs without knowing which version produced them. Optional, requires subagents — human review is usually sufficient.
 
 ---
 
 ## Description Optimization
 
-The description field
+The `description` field is the primary triggering mechanism. After creating or improving a skill, offer to optimize it.
 
 ### Step 1: Generate trigger eval queries
 
-Create 20 eval queries —
+Create 20 eval queries — mix of should-trigger and should-not-trigger. Save as JSON:
 
 ```json
 [
@@ -345,38 +277,28 @@ Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save
 ]
 ```
 
-
-
-Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
+Queries must be realistic and specific — include file paths, personal context, column names, company names, URLs, backstory, typos, casual speech, varying lengths. Focus on edge cases over clear-cut examples.
 
-
+**Bad:** `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
 
-
+**Good:** `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
 
-
-
-The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
+- **Should-trigger (8-10)**: varied phrasings of same intent — formal and casual; cases where user doesn't name the skill but clearly needs it; uncommon use cases; cases where this skill competes with another but should win
+- **Should-not-trigger (8-10)**: near-misses that share keywords but need something different; adjacent domains; ambiguous phrasing where naive keyword match would trigger but shouldn't. Do NOT make these obviously irrelevant.
 
 ### Step 2: Review with user
 
-
-
-
-
-
-- `__SKILL_NAME_PLACEHOLDER__` → the skill's name
-- `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
-3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
-4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
-5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
-
-This step matters — bad eval queries lead to bad descriptions.
+1. Read `assets/eval_review.html`
+2. Replace placeholders: `__EVAL_DATA_PLACEHOLDER__` → JSON array (no quotes, it's a JS variable), `__SKILL_NAME_PLACEHOLDER__`, `__SKILL_DESCRIPTION_PLACEHOLDER__`
+3. Write to `/tmp/eval_review_<skill-name>.html` and open it
+4. User edits queries, toggles should-trigger, adds/removes entries, clicks "Export Eval Set"
+5. File downloads to `~/Downloads/eval_set.json` — check for most recent version if duplicates exist (e.g., `eval_set (1).json`)
 
 ### Step 3: Run the optimization loop
 
-Tell the user: "This will take some time — I'll run the optimization loop in the background and check
+Tell the user: "This will take some time — I'll run the optimization loop in the background and check periodically."
 
-Save
+Save eval set to workspace, then run in background:
 
 ```bash
 python -m scripts.run_loop \
@@ -387,93 +309,60 @@ python -m scripts.run_loop \
 --verbose
 ```
 
-Use the model ID from your system prompt
+Use the model ID from your system prompt so the triggering test matches what the user actually experiences. Periodically tail output to give iteration/score updates.
 
-
+The loop: splits eval 60% train / 40% held-out test → evaluates current description (3 runs per query for reliability) → calls Claude with extended thinking to propose improvements → re-evaluates on train + test → iterates up to 5 times → returns `best_description` selected by test score (not train, to avoid overfitting).
 
-
-
-### How skill triggering works
-
-Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
-
-This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
+**How triggering works:** Claude sees skills in `available_skills` with name + description and decides whether to consult one. Claude only consults skills for tasks it can't handle alone — simple one-step queries won't trigger skills even with a perfect description match. Eval queries must be substantive enough that Claude would genuinely benefit from a skill.
 
 ### Step 4: Apply the result
 
-Take `best_description` from
+Take `best_description` from JSON output, update the skill's frontmatter. Show before/after and report scores.
 
 ---
 
 ### Package and Present (only if `present_files` tool is available)
 
-Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
-
 ```bash
 python -m scripts.package_skill <path/to/skill-folder>
 ```
 
-
+Direct user to the resulting `.skill` file path for installation.
 
 ---
 
 ## Claude.ai-specific instructions
 
-
-
-**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
-
-**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
+Core workflow is the same (draft → test → review → improve → repeat), but adapt mechanics:
 
-
-
-
-
-
-
-
-
-
+| Feature | Claude.ai behavior |
+|---|---|
+| Test case runs | No subagents — run sequentially, following SKILL.md yourself. Skip baselines. |
+| Results review | No browser — present results inline in conversation. Share file paths for downloadable outputs. Ask for feedback inline. |
+| Benchmarking | Skip — no meaningful baselines without subagents |
+| Iteration loop | Same — improve, rerun, ask for feedback. Organize results into iteration directories if filesystem available. |
+| Description optimization | Skip — requires `claude -p` CLI, only available in Claude Code |
+| Blind comparison | Skip — requires subagents |
+| Packaging | Works anywhere with Python + filesystem |
 
 ---
 
-## Cowork-
+## Cowork-specific instructions
 
-
-
-
-
-
-
-
-
+| Feature | Cowork behavior |
+|---|---|
+| Subagents | Available — main workflow works. Fall back to serial if timeouts are severe. |
+| Viewer | No display — use `--static <output_path>`. Provide a link for user to open HTML in browser. |
+| Feedback | No running server — "Submit All Reviews" downloads `feedback.json`. Read from Downloads (may need to request access). |
+| Eval viewer timing | ALWAYS generate the eval viewer BEFORE evaluating inputs yourself — get outputs in front of the user first. Add "Create evals JSON and run `eval-viewer/generate_review.py`" to TodoList. |
+| Description optimization | Works — `run_loop.py` uses `claude -p` via subprocess. Run only after skill is finalized and user agrees it's in good shape. |
+| Packaging | Works |
 
 ---
 
 ## Reference files
 
-
-
-- `agents/
-- `
-- `agents/analyzer.md` — How to analyze why one version beat another
-
-The references/ directory has additional documentation:
-- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
-
----
-
-Repeating one more time the core loop here for emphasis:
-
-- Figure out what the skill is about
-- Draft or edit the skill
-- Run claude-with-access-to-the-skill on test prompts
-- With the user, evaluate the outputs:
-- Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
-- Run quantitative evals
-- Repeat until you and the user are satisfied
-- Package the final skill and return it to the user.
-
-Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
-
-Good luck!
+- `agents/grader.md` — Evaluate assertions against outputs
+- `agents/comparator.md` — Blind A/B comparison between outputs
+- `agents/analyzer.md` — Analyze why one version beat another
+- `references/schemas.md` — JSON structures for evals.json, grading.json, benchmark.json, etc.
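The rewritten skill-creator SKILL.md pins the `grading.json` expectation fields to exactly `text`, `passed`, and `evidence`, warning against the `name`/`met`/`details` variants the viewer can't read. A minimal validation sketch; only those field names come from the diff, while the surrounding `expectations`-array structure is an assumption based on its description there:

```python
# Field names the eval viewer expects, per the skill-creator diff.
REQUIRED_FIELDS = {"text", "passed", "evidence"}
DISALLOWED_VARIANTS = {"name", "met", "details"}

def check_expectations(grading: dict) -> list[str]:
    """Return a list of problems; an empty list means the expectations
    entries use the exact field names the viewer depends on."""
    problems = []
    for i, exp in enumerate(grading.get("expectations", [])):
        missing = REQUIRED_FIELDS - exp.keys()
        if missing:
            problems.append(f"expectations[{i}] missing {sorted(missing)}")
        variants = DISALLOWED_VARIANTS & exp.keys()
        if variants:
            problems.append(f"expectations[{i}] uses variant fields {sorted(variants)}")
    return problems

# Hypothetical entries: one well-formed, one using the disallowed variants.
good = {"expectations": [
    {"text": "chart has axis labels", "passed": True, "evidence": "labels found"},
]}
bad = {"expectations": [
    {"name": "axis labels", "met": False, "details": "none found"},
]}
```

`check_expectations(good)` returns an empty list; the `bad` entry is flagged twice, once for the missing required fields and once for the `name`/`met`/`details` variants.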
|