@kennethsolomon/shipkit 3.19.0 → 3.20.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -5,40 +5,11 @@ description: Create new skills, modify and improve existing skills, and measure
 
  # Skill Creator
 
- A skill for creating new skills and iteratively improving them.
+ Create and iteratively improve skills via: draft → test → evaluate → improve → repeat.
 
- At a high level, the process of creating a skill goes like this:
+ Assess where the user is in this loop and jump in accordingly. If they already have a draft, skip to eval. If they want to skip evals entirely, that's fine too. After the skill is done, offer to run description optimization.
 
- - Decide what you want the skill to do and roughly how it should do it
- - Write a draft of the skill
- - Create a few test prompts and run claude-with-access-to-the-skill on them
- - Help the user evaluate the results both qualitatively and quantitatively
- - While the runs happen in the background, draft some quantitative evals if there aren't any (if there are some, you can either use as is or modify if you feel something needs to change about them). Then explain them to the user (or if they already existed, explain the ones that already exist)
- - Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- - Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- - Repeat until you're satisfied
- - Expand the test set and try again at larger scale
-
- Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
-
- On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
-
- Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
-
- Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
-
- Cool? Cool.
-
- ## Communicating with the user
-
- The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
-
- So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
-
- - "evaluation" and "benchmark" are borderline, but OK
- - for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
-
- It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it.
+ Adapt communication to user familiarity; briefly define "JSON", "assertion", etc. if context suggests unfamiliarity.
 
  ---
 
@@ -46,27 +17,24 @@ It's OK to briefly explain terms if you're in doubt, and feel free to clarify te
 
  ### Capture Intent
 
- Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
+ If the current conversation already contains a workflow to capture, extract tools, steps, corrections, and I/O formats from history first. The user confirms before proceeding.
 
  1. What should this skill enable Claude to do?
- 2. When should this skill trigger? (what user phrases/contexts)
+ 2. When should it trigger? (phrases/contexts)
  3. What's the expected output format?
- 4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
+ 4. Do we need test cases? Suggest based on skill type: yes for objectively verifiable outputs (transforms, extraction, codegen, fixed workflows); usually no for subjective outputs (writing style, art).
 
  ### Interview and Research
 
- Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
-
- Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
+ Ask about edge cases, I/O formats, example files, success criteria, and dependencies before writing test prompts. Check available MCPs for research; run parallel subagents if available.
 
  ### Write the SKILL.md
 
- Based on the user interview, fill in these components:
-
+ Fill in:
  - **name**: Skill identifier
- - **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- - **compatibility**: Required tools, dependencies (optional, rarely needed)
- - **the rest of the skill :)**
+ - **description**: When to trigger + what it does. The primary triggering mechanism — include both function and contexts. All "when to use" info goes here, not in the body. Make descriptions slightly "pushy" to counter undertriggering: e.g., "Use this whenever the user mentions dashboards, data visualization, or wants to display any company data, even if they don't explicitly ask for a 'dashboard.'"
+ - **compatibility**: Required tools/dependencies (optional, rarely needed)
+ - **body**: Instructions
 
  ### Skill Writing Guide
 
@@ -85,19 +53,16 @@ skill-name/
 
  #### Progressive Disclosure
 
- Skills use a three-level loading system:
- 1. **Metadata** (name + description) - Always in context (~100 words)
- 2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
- 3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
-
- These word counts are approximate and you can feel free to go longer if needed.
+ Three-level loading:
+ 1. **Metadata** (name + description) — always in context (~100 words)
+ 2. **SKILL.md body** — in context when skill triggers (<500 lines ideal)
+ 3. **Bundled resources** — loaded as needed (unlimited; scripts can execute without loading)
 
- **Key patterns:**
- - Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- - Reference files clearly from SKILL.md with guidance on when to read them
+ - Keep SKILL.md under 500 lines; if approaching the limit, add hierarchy with clear pointers to follow-up files
+ - Reference bundled files clearly with guidance on when to read them
  - For large reference files (>300 lines), include a table of contents
 
- **Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
+ **Domain organization**: When a skill supports multiple domains/frameworks:
  ```
  cloud-deploy/
  ├── SKILL.md (workflow + selection)
@@ -106,17 +71,16 @@ cloud-deploy/
  ├── gcp.md
  └── azure.md
  ```
- Claude reads only the relevant reference file.
 
- #### Principle of Lack of Surprise
+ #### Security
 
- This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user in their intent if described. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
+ Skills must not contain malware, exploit code, or anything that could compromise system security. Don't create misleading skills or skills designed for unauthorized access, data exfiltration, or other malicious purposes.
 
  #### Writing Patterns
 
- Prefer using the imperative form in instructions.
+ Use the imperative form. Explain the *why* behind instructions rather than heavy-handed MUSTs — LLMs perform better with reasoning than rote commands.
 
- **Defining output formats** - You can do it like this:
+ **Output format:**
  ```markdown
  ## Report structure
  ALWAYS use this exact template:
@@ -126,7 +90,7 @@ ALWAYS use this exact template:
  ## Recommendations
  ```
 
- **Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
+ **Examples:**
  ```markdown
  ## Commit message format
  **Example 1:**
@@ -134,15 +98,11 @@ Input: Added user authentication with JWT tokens
  Output: feat(auth): implement JWT-based authentication
  ```
 
- ### Writing Style
-
- Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
-
  ### Test Cases
 
- After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
+ After the skill draft, write 2-3 realistic test prompts. Share them with the user for confirmation, then run them.
 
- Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
+ Save to `evals/evals.json` (no assertions yet — draft those while runs are in progress):
 
  ```json
  {
@@ -158,34 +118,35 @@ Save test cases to `evals/evals.json`. Don't write assertions yet — just the p
  }
  ```
 
- See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
+ See `references/schemas.md` for the full schema including the `assertions` field.
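A sketch of what saving those prompts might look like. The `"id"`/`"prompt"` field names here are illustrative assumptions, not the authoritative schema — check `references/schemas.md` before relying on them:

```python
import json
from pathlib import Path

# Hypothetical test prompts; field names are assumptions —
# references/schemas.md holds the real evals.json schema.
evals = {
    "evals": [
        {"id": "profit-margin-column", "prompt": "add a profit margin column to my Q4 sales spreadsheet"},
        {"id": "totals-extraction", "prompt": "pull the totals row out of this report and sum it"},
    ]
}

Path("evals").mkdir(exist_ok=True)
Path("evals/evals.json").write_text(json.dumps(evals, indent=2))
```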
+
+ ---
 
  ## Running and evaluating test cases
 
- This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
+ One continuous sequence — do not stop partway. Do NOT use `/skill-test` or any other testing skill.
 
- Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
+ Organize results in `<skill-name>-workspace/` as a sibling to the skill directory, by iteration (`iteration-1/`, `iteration-2/`, etc.) and test case (`eval-0/`, `eval-1/`, etc.). Create directories as you go.
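Concretely, the layout described above looks something like this (eval directory names are illustrative; the per-run files come from later steps):

```
my-skill-workspace/
├── skill-snapshot/              # only when improving an existing skill
├── iteration-1/
│   ├── chart-from-csv/          # one directory per test case
│   │   ├── eval_metadata.json
│   │   ├── with_skill/
│   │   │   ├── outputs/
│   │   │   ├── timing.json
│   │   │   └── grading.json
│   │   └── without_skill/       # or old_skill/ when improving
│   │       └── ...
│   └── totals-extraction/
└── iteration-2/
```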
 
- ### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
+ ### Step 1: Spawn all runs in the same turn
 
- For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
+ For each test case, spawn two subagents simultaneously — one with the skill, one without. Launch everything at once.
 
  **With-skill run:**
-
  ```
  Execute this task:
  - Skill path: <path-to-skill>
  - Task: <eval prompt>
  - Input files: <eval files if any, or "none">
  - Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- - Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
+ - Outputs to save: <what the user cares about>
  ```
 
- **Baseline run** (same prompt, but the baseline depends on context):
- - **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- - **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
+ **Baseline run** (context-dependent):
+ - **New skill**: no skill at all — same prompt, no skill path, save to `without_skill/outputs/`
+ - **Improving existing skill**: old version — snapshot first (`cp -r <skill-path> <workspace>/skill-snapshot/`), point the baseline at the snapshot, save to `old_skill/outputs/`
 
- Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
+ Write `eval_metadata.json` per test case (assertions empty for now). Use descriptive names for directories — not just "eval-0":
 
  ```json
  {
@@ -196,17 +157,15 @@ Write an `eval_metadata.json` for each test case (assertions can be empty for no
  }
  ```
 
- ### Step 2: While runs are in progress, draft assertions
+ ### Step 2: Draft assertions while runs are in progress
 
- Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
+ Don't wait — draft quantitative assertions and explain them to the user. Good assertions are objectively verifiable and have descriptive names. For subjective skills, don't force assertions — use qualitative review.
 
- Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
+ Update `eval_metadata.json` and `evals/evals.json` with assertions once drafted. Explain what the user will see in the viewer.
 
- Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
+ ### Step 3: Capture timing data as runs complete
 
- ### Step 3: As runs complete, capture timing data
-
- When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
+ When each subagent completes, save timing data immediately to `timing.json` in the run directory — this data is only available in the task notification:
 
  ```json
  {
@@ -216,24 +175,19 @@ When each subagent task completes, you receive a notification containing `total_
  }
  ```
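A minimal sketch of persisting those two fields the moment a notification arrives (the helper name and run-directory path are illustrative):

```python
import json
from pathlib import Path

def save_timing(run_dir: str, total_tokens: int, duration_ms: int) -> None:
    """Write the stats from a subagent's completion notification to timing.json."""
    path = Path(run_dir)
    path.mkdir(parents=True, exist_ok=True)
    (path / "timing.json").write_text(
        json.dumps({"total_tokens": total_tokens, "duration_ms": duration_ms}, indent=2)
    )

# e.g., as each notification arrives:
save_timing("workspace/iteration-1/chart-from-csv/with_skill", 48213, 95000)
```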
 
- This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
-
  ### Step 4: Grade, aggregate, and launch the viewer
 
- Once all runs are done:
-
- 1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
+ 1. **Grade** — spawn a grader subagent reading `agents/grader.md`. Save `grading.json` per run directory. Required fields: `text`, `passed`, `evidence` (not `name`/`met`/`details`). Use scripts for programmatic assertions.
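The required shape can be sketched like this (assertion text and evidence are invented; only the `expectations` array and the `text`/`passed`/`evidence` field names come from the viewer's contract):

```python
import json
from pathlib import Path

# Invented example content; the field names are the load-bearing part —
# the viewer requires exactly "text", "passed", and "evidence".
grading = {
    "expectations": [
        {
            "text": "Output spreadsheet contains a profit margin column",
            "passed": True,
            "evidence": "Found 'Profit Margin %' header in outputs/result.xlsx",
        }
    ]
}

run_dir = Path("workspace/iteration-1/chart-from-csv/with_skill")
run_dir.mkdir(parents=True, exist_ok=True)
(run_dir / "grading.json").write_text(json.dumps(grading, indent=2))
```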
 
- 2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:
+ 2. **Aggregate** — run from the skill-creator directory:
  ```bash
  python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
  ```
- This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
- Put each with_skill version before its baseline counterpart.
+ Produces `benchmark.json` and `benchmark.md`. Put each `with_skill` version before its baseline counterpart. If generating `benchmark.json` manually, see `references/schemas.md` for the exact schema.
 
- 3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
+ 3. **Analyst pass** — read `agents/analyzer.md` ("Analyzing Benchmark Results") to surface non-discriminating assertions, high-variance evals, and time/token tradeoffs.
 
- 4. **Launch the viewer** with both qualitative outputs and quantitative data:
+ 4. **Launch the viewer:**
  ```bash
  nohup python <skill-creator-path>/eval-viewer/generate_review.py \
  <workspace>/iteration-N \
@@ -244,45 +198,32 @@
  ```
  For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
 
- **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.
+ **Cowork / headless environments:** Use `--static <output_path>` for a standalone HTML file. Feedback downloads as `feedback.json` when the user clicks "Submit All Reviews" — copy it into the workspace for the next iteration.
 
- Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
+ Use `generate_review.py` — do not write custom HTML.
 
- 5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
+ 5. Tell the user: "I've opened the results in your browser. The 'Outputs' tab lets you review each test case and leave feedback; 'Benchmark' shows the quantitative comparison. Come back when done."
 
- ### What the user sees in the viewer
+ **Viewer layout:**
+ - **Outputs tab**: Prompt, Output, Previous Output (iter 2+, collapsed), Formal Grades (collapsed), Feedback textbox, Previous Feedback (iter 2+)
+ - **Benchmark tab**: Pass rates, timing, token usage per configuration, per-eval breakdowns, analyst observations
+ - Navigation: prev/next or arrow keys; "Submit All Reviews" saves `feedback.json`
 
- The "Outputs" tab shows one test case at a time:
- - **Prompt**: the task that was given
- - **Output**: the files the skill produced, rendered inline where possible
- - **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- - **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- - **Feedback**: a textbox that auto-saves as they type
- - **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
-
- The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
-
- Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
-
- ### Step 5: Read the feedback
-
- When the user tells you they're done, read `feedback.json`:
+ ### Step 5: Read feedback
 
  ```json
  {
  "reviews": [
  {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
- {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
- {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
+ {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."}
  ],
  "status": "complete"
  }
  ```
 
- Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
-
- Kill the viewer server when you're done with it:
+ Empty feedback = the user thought it was fine. Focus on test cases with specific complaints.
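Reading the file back and keeping only the runs that drew complaints can be sketched as (the sample data mirrors the example above):

```python
import json
from pathlib import Path

# Sample feedback.json matching the structure shown above
# (normally the viewer writes this file).
Path("feedback.json").write_text(json.dumps({
    "reviews": [
        {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
        {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    ],
    "status": "complete",
}))

feedback = json.loads(Path("feedback.json").read_text())
# Empty feedback strings mean the user was satisfied; focus on the rest.
needs_work = [r for r in feedback["reviews"] if r["feedback"].strip()]
```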
 
+ Kill the viewer when done:
  ```bash
  kill $VIEWER_PID 2>/dev/null
  ```
@@ -291,52 +232,43 @@ kill $VIEWER_PID 2>/dev/null
 
  ## Improving the skill
 
- This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
-
- ### How to think about improvements
+ ### Improvement principles
 
- 1. **Generalize from the feedback.** The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than put in fiddly overfitty changes, or oppressively constrictive MUSTs, if there's some stubborn issue, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great.
+ 1. **Generalize, don't overfit.** Skills run across millions of diverse prompts. Avoid fiddly or over-constrictive changes. If a stubborn issue persists, try different metaphors or working patterns.
 
- 2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
+ 2. **Keep the prompt lean.** Remove instructions that aren't pulling their weight. Read transcripts — if the model wastes time on unproductive steps, remove the instructions causing it.
 
- 3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task and why the user is writing what they wrote, and what they actually wrote, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
+ 3. **Explain the why.** Write *why* something matters, not just *what* to do. Avoid all-caps ALWAYS/NEVER; reframe with reasoning instead. LLMs respond better to rationale than rigid commands.
 
- 4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
+ 4. **Bundle repeated work.** If all test cases result in subagents writing similar helper scripts, bundle the script in `scripts/` and reference it from the skill.
 
- This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
+ ### Iteration loop
 
- ### The iteration loop
+ 1. Apply improvements to the skill
+ 2. Rerun all test cases into `iteration-<N+1>/`, including baselines (new skill → `without_skill`; improving → use judgment on whether the baseline is the original or the previous iteration)
+ 3. Launch the viewer with `--previous-workspace` pointing at the previous iteration
+ 4. Wait for user review, read feedback, improve again
 
- After improving the skill:
-
- 1. Apply your improvements to the skill
- 2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
- 3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
- 4. Wait for the user to review and tell you they're done
- 5. Read the new feedback, improve again, repeat
-
- Keep going until:
- - The user says they're happy
- - The feedback is all empty (everything looks good)
- - You're not making meaningful progress
+ Stop when:
+ - The user is satisfied
+ - All feedback is empty
+ - No meaningful progress is being made
 
  ---
 
  ## Advanced: Blind comparison
 
- For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
-
- This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
+ For rigorous A/B comparison between two skill versions, read `agents/comparator.md` and `agents/analyzer.md`. An independent agent judges outputs without knowing which version produced them. Optional and requires subagents — human review is usually sufficient.
 
  ---
 
  ## Description Optimization
 
- The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
+ The `description` field is the primary triggering mechanism. After creating or improving a skill, offer to optimize it.
 
  ### Step 1: Generate trigger eval queries
 
- Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
+ Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
 
  ```json
  [
@@ -345,38 +277,28 @@ Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save 
  ]
  ```
 
348
- The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
-
- Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
+ Queries must be realistic and specific: include file paths, personal context, column names, company names, URLs, backstory, typos, casual speech, varying lengths. Focus on edge cases over clear-cut examples.
 
- Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
+ **Bad:** `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
 
- For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent: some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
+ **Good:** `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
 
- For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses: queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
-
- The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
+ - **Should-trigger (8-10)**: varied phrasings of the same intent, formal and casual; cases where the user doesn't name the skill but clearly needs it; uncommon use cases; cases where this skill competes with another but should win
+ - **Should-not-trigger (8-10)**: near-misses that share keywords but need something different; adjacent domains; ambiguous phrasing where a naive keyword match would trigger but shouldn't. Do NOT make these obviously irrelevant.
 
 ### Step 2: Review with user
 
- Present the eval set to the user for review using the HTML template:
-
- 1. Read the template from `assets/eval_review.html`
- 2. Replace the placeholders:
-    - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it; it's a JS variable assignment)
-    - `__SKILL_NAME_PLACEHOLDER__` → the skill's name
-    - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
- 3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
- 4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
- 5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
-
- This step matters — bad eval queries lead to bad descriptions.
+ 1. Read `assets/eval_review.html`
+ 2. Replace placeholders: `__EVAL_DATA_PLACEHOLDER__` → JSON array (no quotes; it's a JS variable), `__SKILL_NAME_PLACEHOLDER__`, `__SKILL_DESCRIPTION_PLACEHOLDER__`
+ 3. Write to `/tmp/eval_review_<skill-name>.html` and open it
+ 4. User edits queries, toggles should-trigger, adds/removes entries, clicks "Export Eval Set"
+ 5. File downloads to `~/Downloads/eval_set.json`; check for the most recent version if duplicates exist (e.g., `eval_set (1).json`)
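The substitution in steps 1-3 can be sketched as follows. This assumes the three placeholder strings appear verbatim in the template; `render_review` is an illustrative name, not a function shipped with the skill:

```python
# Sketch of the Step 2 placeholder substitution.
import json
from pathlib import Path

def render_review(template_path, eval_items, skill_name, skill_description):
    html = Path(template_path).read_text()
    # The JSON array is injected bare — it becomes a JS variable assignment
    html = html.replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_items))
    html = html.replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
    html = html.replace("__SKILL_DESCRIPTION_PLACEHOLDER__", skill_description)
    out = Path(f"/tmp/eval_review_{skill_name}.html")
    out.write_text(html)
    return out  # open this file in the browser for the user
```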
 ### Step 3: Run the optimization loop
 
- Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
+ Tell the user: "This will take some time — I'll run the optimization loop in the background and check periodically."
 
- Save the eval set to the workspace, then run in the background:
+ Save eval set to workspace, then run in background:
 
 ```bash
 python -m scripts.run_loop \
@@ -387,93 +309,60 @@ python -m scripts.run_loop \
   --verbose
 ```
 
- Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
+ Use the model ID from your system prompt so the triggering test matches what the user actually experiences. Periodically tail output to give iteration/score updates.
 
- While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
+ The loop: splits the eval set 60% train / 40% held-out test → evaluates the current description (3 runs per query for reliability) → calls Claude with extended thinking to propose improvements → re-evaluates on train + test → iterates up to 5 times → returns `best_description`, selected by test score (not train, to avoid overfitting).
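The split-and-select logic described above can be sketched as follows. The 60/40 split and the "pick by held-out test score" rule come from this description; the data shapes are illustrative, not `run_loop.py`'s actual internals:

```python
# Sketch of the optimization loop's split and selection rules.
import random

def split_eval_set(items, train_frac=0.6, seed=0):
    # 60% train / 40% held-out test, shuffled deterministically
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]  # (train, test)

def pick_best(candidates):
    # candidates: [{"description": str, "train": float, "test": float}, ...]
    # Select by test score, not train, to avoid overfitting to train queries.
    return max(candidates, key=lambda c: c["test"])["description"]
```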
 
- This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude with extended thinking to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
-
- ### How skill triggering works
-
- Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
-
- This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
+ **How triggering works:** Claude sees skills in `available_skills` with name + description and decides whether to consult one. Claude only consults skills for tasks it can't handle alone; simple one-step queries won't trigger skills even with a perfect description match. Eval queries must be substantive enough that Claude would genuinely benefit from a skill.
 ### Step 4: Apply the result
 
- Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
+ Take `best_description` from JSON output, update the skill's frontmatter. Show before/after and report scores.
 
 ---
 
 ### Package and Present (only if `present_files` tool is available)
 
- Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
-
  ```bash
  python -m scripts.package_skill <path/to/skill-folder>
  ```
 
- After packaging, direct the user to the resulting `.skill` file path so they can install it.
+ Direct user to the resulting `.skill` file path for installation.
 
 ---
 
 ## Claude.ai-specific instructions
 
- In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
-
- **Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
-
- **Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
+ Core workflow is the same (draft → test → review → improve → repeat), but adapt mechanics:
 
- **Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
-
- **The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
-
- **Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.
-
- **Blind comparison**: Requires subagents. Skip it.
-
- **Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.
+ | Feature | Claude.ai behavior |
+ |---|---|
+ | Test case runs | No subagents; run sequentially, following SKILL.md yourself. Skip baselines. |
+ | Results review | No browser — present results inline in conversation. Share file paths for downloadable outputs. Ask for feedback inline. |
+ | Benchmarking | Skip — no meaningful baselines without subagents |
+ | Iteration loop | Same — improve, rerun, ask for feedback. Organize results into iteration directories if filesystem available. |
+ | Description optimization | Skip — requires `claude -p` CLI, only available in Claude Code |
+ | Blind comparison | Skip — requires subagents |
+ | Packaging | Works anywhere with Python + filesystem |
 
 ---
 
- ## Cowork-Specific Instructions
+ ## Cowork-specific instructions
 
- If you're in Cowork, the main things to know are:
-
- - You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.)
- - You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- - For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so just to reiterate: whether you're in Cowork or in Claude Code, after running tests, you should always generate the eval viewer for the human to look at examples before revising the skill yourself and trying to make corrections, using `generate_review.py` (not writing your own boutique html code). Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating inputs yourself. You want to get them in front of the human ASAP!
- - Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- - Packaging works — `package_skill.py` just needs Python and a filesystem.
- - Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
+ | Feature | Cowork behavior |
+ |---|---|
+ | Subagents | Available — main workflow works. Fall back to serial if timeouts are severe. |
+ | Viewer | No display — use `--static <output_path>`. Provide a link for the user to open the HTML in their browser. |
+ | Feedback | No running server — "Submit All Reviews" downloads `feedback.json`. Read it from Downloads (may need to request access). |
+ | Eval viewer timing | ALWAYS generate the eval viewer BEFORE evaluating outputs yourself — get them in front of the user first. Add "Create evals JSON and run `eval-viewer/generate_review.py`" to the TodoList. |
+ | Description optimization | Works — `run_loop.py` uses `claude -p` via subprocess. Run only after the skill is finalized and the user agrees it's in good shape. |
+ | Packaging | Works |
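Both the eval-set export and the Cowork feedback download can leave duplicates like `eval_set (1).json` in `~/Downloads`. A minimal sketch for picking the freshest copy by modification time; the glob pattern and Downloads path are assumptions to adapt to the actual environment:

```python
# Pick the most recently modified download matching a stem,
# so "eval_set (1).json" beats a stale "eval_set.json".
from pathlib import Path

def newest_download(stem, downloads=Path.home() / "Downloads"):
    matches = list(downloads.glob(f"{stem}*.json"))
    if not matches:
        return None
    return max(matches, key=lambda p: p.stat().st_mtime)
```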
 
 ---
 
 ## Reference files
 
- The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
-
- - `agents/grader.md` — How to evaluate assertions against outputs
- - `agents/comparator.md` — How to do blind A/B comparison between two outputs
- - `agents/analyzer.md` — How to analyze why one version beat another
-
- The references/ directory has additional documentation:
- - `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
-
- ---
-
- Repeating one more time the core loop here for emphasis:
-
- - Figure out what the skill is about
- - Draft or edit the skill
- - Run claude-with-access-to-the-skill on test prompts
- - With the user, evaluate the outputs:
-   - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
-   - Run quantitative evals
- - Repeat until you and the user are satisfied
- - Package the final skill and return it to the user.
-
- Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
-
- Good luck!
+ - `agents/grader.md` — Evaluate assertions against outputs
+ - `agents/comparator.md` — Blind A/B comparison between outputs
+ - `agents/analyzer.md` — Analyze why one version beat another
+ - `references/schemas.md` — JSON structures for evals.json, grading.json, benchmark.json, etc.