@cremini/skillpack 1.1.8 → 1.1.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (22)
  1. package/README.md +5 -1
  2. package/dist/cli.js +126 -40
  3. package/package.json +1 -1
  4. package/templates/builtin-skills/skill-creator/LICENSE.txt +202 -0
  5. package/templates/builtin-skills/skill-creator/SKILL.md +171 -0
  6. package/templates/builtin-skills/skill-creator/agents/analyzer.md +274 -0
  7. package/templates/builtin-skills/skill-creator/agents/comparator.md +202 -0
  8. package/templates/builtin-skills/skill-creator/agents/grader.md +223 -0
  9. package/templates/builtin-skills/skill-creator/assets/eval_review.html +146 -0
  10. package/templates/builtin-skills/skill-creator/eval-viewer/generate_review.py +471 -0
  11. package/templates/builtin-skills/skill-creator/eval-viewer/viewer.html +1325 -0
  12. package/templates/builtin-skills/skill-creator/references/schemas.md +430 -0
  13. package/templates/builtin-skills/skill-creator/scripts/__init__.py +0 -0
  14. package/templates/builtin-skills/skill-creator/scripts/aggregate_benchmark.py +401 -0
  15. package/templates/builtin-skills/skill-creator/scripts/generate_report.py +326 -0
  16. package/templates/builtin-skills/skill-creator/scripts/improve_description.py +247 -0
  17. package/templates/builtin-skills/skill-creator/scripts/package_skill.py +136 -0
  18. package/templates/builtin-skills/skill-creator/scripts/quick_validate.py +103 -0
  19. package/templates/builtin-skills/skill-creator/scripts/run_eval.py +310 -0
  20. package/templates/builtin-skills/skill-creator/scripts/run_loop.py +328 -0
  21. package/templates/builtin-skills/skill-creator/scripts/utils.py +47 -0
  22. package/web/js/chat.js +8 -8
package/templates/builtin-skills/skill-creator/SKILL.md
@@ -0,0 +1,171 @@
1
+ ---
2
+ name: skill-creator
3
+ description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
4
+ ---
5
+
6
+ # Skill Creator
7
+
8
+ A skill for creating new skills and iteratively improving them inside this SkillPack.
9
+
10
+ At a high level, the process of creating a skill goes like this:
11
+
12
+ - Decide what the skill should do and when it should trigger.
13
+ - Write a draft of the skill.
14
+ - Create a few realistic test prompts.
15
+ - Run the tests, review the results with the user, and improve the skill.
16
+ - Repeat until the skill is good enough for the user's needs.
17
+
18
+ Your job when using this skill is to figure out where the user is in this process and help them move forward without overcomplicating things.
19
+
20
+ ## Communicating with the user
21
+
22
+ Adjust your language to the user's level of familiarity. Avoid unnecessary jargon. Briefly explain terms like "frontmatter", "assertion", or "benchmark" if the user does not appear comfortable with them.
23
+
24
+ If the user clearly wants a lightweight collaboration rather than a full evaluation loop, keep things simple and iterate directly with them.
25
+
26
+ ## Pack-specific rules
27
+
28
+ This SkillPack uses a fixed project-level skills directory and config file:
29
+
30
+ - Skills directory: `{{SKILLS_PATH}}`
31
+ - SkillPack config: `{{PACK_CONFIG_PATH}}`
32
+
33
+ These paths override any generic advice you may know from other environments.
34
+
35
+ When creating or updating skills in this SkillPack:
36
+
37
+ - Always place the skill under `{{SKILLS_PATH}}/<skill-name>/`.
38
+ - Always write the main skill file to `{{SKILLS_PATH}}/<skill-name>/SKILL.md`.
39
+ - Treat `skill-name` as the canonical directory name unless the user explicitly asks to preserve an existing directory layout.
40
+ - Never create new skills inside the current workspace directory just because the active cwd happens to be outside the SkillPack; always place them under `{{SKILLS_PATH}}`.
41
+
42
+ ## Creating a skill
43
+
44
+ ### Capture intent
45
+
46
+ Start by understanding the user's intent. The current conversation may already contain the workflow the user wants to capture. Extract answers from the conversation first, then fill the gaps with targeted questions.
47
+
48
+ Confirm these points before writing the first draft:
49
+
50
+ 1. What should this skill enable the model to do?
51
+ 2. When should this skill trigger?
52
+ 3. What output should it produce?
53
+ 4. Does the user want a lightweight draft, or a tested and iterated skill?
54
+
55
+ ### Interview and research
56
+
57
+ Ask about:
58
+
59
+ - edge cases
60
+ - input/output formats
61
+ - example prompts or files
62
+ - success criteria
63
+ - dependencies or required tools
64
+
65
+ Hold off on writing test prompts until these basics are reasonably clear.
66
+
67
+ ### Write the skill
68
+
69
+ Create the skill directory at `{{SKILLS_PATH}}/<skill-name>/`.
70
+
71
+ Create `SKILL.md` with YAML frontmatter. The frontmatter must include:
72
+
73
+ - `name`
74
+ - `description`
75
+
76
+ The `description` is the primary triggering mechanism. Make it concrete and slightly "pushy": include both what the skill does and the situations where it should be used.
77
+
78
+ Keep the skill practical:
79
+
80
+ - Put "when to use" information in the `description`, not buried in the body.
81
+ - Keep the body focused on the workflow, decisions, and output expectations.
82
+ - If the skill needs deterministic helpers, place them under `scripts/`.
83
+ - If the skill needs long reference material, place it under `references/` and tell the model when to read it.
84
+
85
+ ### Required save location
86
+
87
+ For a newly created skill named `example-skill`, the target layout must be:
88
+
89
+ ```text
90
+ {{SKILLS_PATH}}/example-skill/
91
+ {{SKILLS_PATH}}/example-skill/SKILL.md
92
+ ```
93
+
94
+ If the user is improving an existing skill, preserve the existing skill name unless they explicitly request a rename.
95
+
96
+ ### Update skillpack.json
97
+
98
+ After you create or update a skill, you must sync `{{PACK_CONFIG_PATH}}`.
99
+
100
+ Do not guess the metadata from memory. Instead:
101
+
102
+ 1. Read the final `SKILL.md`.
103
+ 2. Parse the YAML frontmatter.
104
+ 3. Extract:
105
+ - `name`
106
+ - `description`
107
+ 4. Upsert an entry into the `skills` array in `{{PACK_CONFIG_PATH}}`:
108
+
109
+ ```json
110
+ {
111
+ "name": "<frontmatter.name>",
112
+ "description": "<frontmatter.description>",
113
+ "source": "./skills/<frontmatter.name>"
114
+ }
115
+ ```
116
+
117
+ Rules for this update:
118
+
119
+ - `name` must come from `frontmatter.name`.
120
+ - `description` must come from `frontmatter.description`.
121
+ - `source` must be `./skills/<frontmatter.name>`.
122
+ - If an entry for the same skill already exists, update it instead of creating a duplicate.
123
+
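+ A minimal Python sketch of this sync step (illustrative only, not one of the packaged scripts; it assumes PyYAML is available and that `{{PACK_CONFIG_PATH}}` is JSON with a top-level `skills` array):
+
+ ```python
+ import json
+ import yaml  # assumes PyYAML is installed
+
+ def sync_skill_entry(skill_md_path, pack_config_path):
+     # Parse the YAML frontmatter between the leading '---' markers.
+     text = open(skill_md_path, encoding="utf-8").read()
+     frontmatter = yaml.safe_load(text.split("---")[1])
+     entry = {
+         "name": frontmatter["name"],
+         "description": frontmatter["description"],
+         "source": f"./skills/{frontmatter['name']}",
+     }
+
+     with open(pack_config_path, encoding="utf-8") as f:
+         config = json.load(f)
+
+     # Upsert: replace an existing entry with the same name, else append.
+     skills = config.setdefault("skills", [])
+     for i, existing in enumerate(skills):
+         if existing.get("name") == entry["name"]:
+             skills[i] = entry
+             break
+     else:
+         skills.append(entry)
+
+     with open(pack_config_path, "w", encoding="utf-8") as f:
+         json.dump(config, f, indent=2)
+ ```
+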
124
+ ### Writing guide
125
+
126
+ Prefer imperative, clear instructions. Explain why important constraints exist. Avoid overly rigid language unless strict behavior is actually required.
127
+
128
+ Useful structure:
129
+
130
+ - purpose
131
+ - trigger guidance
132
+ - required inputs
133
+ - step-by-step workflow
134
+ - output format
135
+ - edge cases
136
+
137
+ If the skill supports multiple domains or frameworks, organize the references by variant and tell the model how to choose the right one.
138
+
139
+ ## Test and iterate
140
+
141
+ After drafting the skill, propose 2-3 realistic test prompts. The prompts should sound like something a real user would actually say.
142
+
143
+ If the user wants evaluation:
144
+
145
+ - run the test prompts with the skill
146
+ - compare the outputs against the user's expectations
147
+ - note what worked and what failed
148
+ - revise the skill
149
+
150
+ If the user does not want a heavy evaluation loop, do at least a lightweight sanity check before calling the skill complete.
151
+
152
+ ## Improving an existing skill
153
+
154
+ When updating an existing skill:
155
+
156
+ - preserve its canonical `name` unless the user explicitly asks to rename it
157
+ - keep the directory aligned with the canonical skill name
158
+ - update `SKILL.md` first
159
+ - then re-read the final frontmatter and sync `{{PACK_CONFIG_PATH}}`
160
+
161
+ Focus on general improvements rather than overfitting to one example. Keep the prompt lean and remove instructions that are not earning their place.
162
+
163
+ ## Completion checklist
164
+
165
+ Before you say the work is done, verify all of the following:
166
+
167
+ - the skill exists under `{{SKILLS_PATH}}/<skill-name>/SKILL.md`
168
+ - `SKILL.md` has `name` and `description` frontmatter
169
+ - `{{PACK_CONFIG_PATH}}` has a matching entry in `skills`
170
+ - the `source` field is `./skills/<skill-name>`
171
+ - you have either tested the skill or explicitly told the user what remains untested
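+
+ If you want to script this check, a lightweight sanity-check sketch (hypothetical helper, same PyYAML and JSON assumptions as the sync sketch above; the final item about testing remains a manual judgment) could look like:
+
+ ```python
+ import json
+ import pathlib
+ import yaml  # assumes PyYAML is installed
+
+ def check_skill(skills_path, pack_config_path, skill_name):
+     skill_md = pathlib.Path(skills_path) / skill_name / "SKILL.md"
+     assert skill_md.exists(), f"missing {skill_md}"
+
+     # Frontmatter must carry both required fields.
+     frontmatter = yaml.safe_load(skill_md.read_text(encoding="utf-8").split("---")[1])
+     assert frontmatter.get("name") and frontmatter.get("description"), "frontmatter incomplete"
+
+     # The pack config must have a matching entry with the expected source path.
+     config = json.loads(pathlib.Path(pack_config_path).read_text(encoding="utf-8"))
+     entry = next((s for s in config.get("skills", []) if s.get("name") == skill_name), None)
+     assert entry is not None, "no matching entry in skills"
+     assert entry.get("source") == f"./skills/{skill_name}", "source field mismatch"
+ ```
+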
package/templates/builtin-skills/skill-creator/agents/analyzer.md
@@ -0,0 +1,274 @@
1
+ # Post-hoc Analyzer Agent
2
+
3
+ Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.
4
+
5
+ ## Role
6
+
7
+ After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
8
+
9
+ ## Inputs
10
+
11
+ You receive these parameters in your prompt:
12
+
13
+ - **winner**: "A" or "B" (from blind comparison)
14
+ - **winner_skill_path**: Path to the skill that produced the winning output
15
+ - **winner_transcript_path**: Path to the execution transcript for the winner
16
+ - **loser_skill_path**: Path to the skill that produced the losing output
17
+ - **loser_transcript_path**: Path to the execution transcript for the loser
18
+ - **comparison_result_path**: Path to the blind comparator's output JSON
19
+ - **output_path**: Where to save the analysis results
20
+
21
+ ## Process
22
+
23
+ ### Step 1: Read Comparison Result
24
+
25
+ 1. Read the blind comparator's output at comparison_result_path
26
+ 2. Note the winning side (A or B), the reasoning, and any scores
27
+ 3. Understand what the comparator valued in the winning output
28
+
29
+ ### Step 2: Read Both Skills
30
+
31
+ 1. Read the winner skill's SKILL.md and key referenced files
32
+ 2. Read the loser skill's SKILL.md and key referenced files
33
+ 3. Identify structural differences:
34
+ - Instructions clarity and specificity
35
+ - Script/tool usage patterns
36
+ - Example coverage
37
+ - Edge case handling
38
+
39
+ ### Step 3: Read Both Transcripts
40
+
41
+ 1. Read the winner's transcript
42
+ 2. Read the loser's transcript
43
+ 3. Compare execution patterns:
44
+ - How closely did each follow their skill's instructions?
45
+ - What tools were used differently?
46
+ - Where did the loser diverge from optimal behavior?
47
+ - Did either encounter errors or make recovery attempts?
48
+
49
+ ### Step 4: Analyze Instruction Following
50
+
51
+ For each transcript, evaluate:
52
+ - Did the agent follow the skill's explicit instructions?
53
+ - Did the agent use the skill's provided tools/scripts?
54
+ - Were there missed opportunities to leverage skill content?
55
+ - Did the agent add unnecessary steps not in the skill?
56
+
57
+ Score instruction following 1-10 and note specific issues.
58
+
59
+ ### Step 5: Identify Winner Strengths
60
+
61
+ Determine what made the winner better:
62
+ - Clearer instructions that led to better behavior?
63
+ - Better scripts/tools that produced better output?
64
+ - More comprehensive examples that guided edge cases?
65
+ - Better error handling guidance?
66
+
67
+ Be specific. Quote from skills/transcripts where relevant.
68
+
69
+ ### Step 6: Identify Loser Weaknesses
70
+
71
+ Determine what held the loser back:
72
+ - Ambiguous instructions that led to suboptimal choices?
73
+ - Missing tools/scripts that forced workarounds?
74
+ - Gaps in edge case coverage?
75
+ - Poor error handling that caused failures?
76
+
77
+ ### Step 7: Generate Improvement Suggestions
78
+
79
+ Based on the analysis, produce actionable suggestions for improving the loser skill:
80
+ - Specific instruction changes to make
81
+ - Tools/scripts to add or modify
82
+ - Examples to include
83
+ - Edge cases to address
84
+
85
+ Prioritize by impact. Focus on changes that would have changed the outcome.
86
+
87
+ ### Step 8: Write Analysis Results
88
+
89
+ Save structured analysis to `{output_path}`.
90
+
91
+ ## Output Format
92
+
93
+ Write a JSON file with this structure:
94
+
95
+ ```json
96
+ {
97
+ "comparison_summary": {
98
+ "winner": "A",
99
+ "winner_skill": "path/to/winner/skill",
100
+ "loser_skill": "path/to/loser/skill",
101
+ "comparator_reasoning": "Brief summary of why comparator chose winner"
102
+ },
103
+ "winner_strengths": [
104
+ "Clear step-by-step instructions for handling multi-page documents",
105
+ "Included validation script that caught formatting errors",
106
+ "Explicit guidance on fallback behavior when OCR fails"
107
+ ],
108
+ "loser_weaknesses": [
109
+ "Vague instruction 'process the document appropriately' led to inconsistent behavior",
110
+ "No script for validation, agent had to improvise and made errors",
111
+ "No guidance on OCR failure, agent gave up instead of trying alternatives"
112
+ ],
113
+ "instruction_following": {
114
+ "winner": {
115
+ "score": 9,
116
+ "issues": [
117
+ "Minor: skipped optional logging step"
118
+ ]
119
+ },
120
+ "loser": {
121
+ "score": 6,
122
+ "issues": [
123
+ "Did not use the skill's formatting template",
124
+ "Invented own approach instead of following step 3",
125
+ "Missed the 'always validate output' instruction"
126
+ ]
127
+ }
128
+ },
129
+ "improvement_suggestions": [
130
+ {
131
+ "priority": "high",
132
+ "category": "instructions",
133
+ "suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
134
+ "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
135
+ },
136
+ {
137
+ "priority": "high",
138
+ "category": "tools",
139
+ "suggestion": "Add validate_output.py script similar to winner skill's validation approach",
140
+ "expected_impact": "Would catch formatting errors before final output"
141
+ },
142
+ {
143
+ "priority": "medium",
144
+ "category": "error_handling",
145
+ "suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
146
+ "expected_impact": "Would prevent early failure on difficult documents"
147
+ }
148
+ ],
149
+ "transcript_insights": {
150
+ "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
151
+ "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
152
+ }
153
+ }
154
+ ```
155
+
156
+ ## Guidelines
157
+
158
+ - **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
159
+ - **Be actionable**: Suggestions should be concrete changes, not vague advice
160
+ - **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
161
+ - **Prioritize by impact**: Which changes would most likely have changed the outcome?
162
+ - **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
163
+ - **Stay objective**: Analyze what happened, don't editorialize
164
+ - **Think about generalization**: Would this improvement help on other evals too?
165
+
166
+ ## Categories for Suggestions
167
+
168
+ Use these categories to organize improvement suggestions:
169
+
170
+ | Category | Description |
171
+ |----------|-------------|
172
+ | `instructions` | Changes to the skill's prose instructions |
173
+ | `tools` | Scripts, templates, or utilities to add/modify |
174
+ | `examples` | Example inputs/outputs to include |
175
+ | `error_handling` | Guidance for handling failures |
176
+ | `structure` | Reorganization of skill content |
177
+ | `references` | External docs or resources to add |
178
+
179
+ ## Priority Levels
180
+
181
+ - **high**: Would likely change the outcome of this comparison
182
+ - **medium**: Would improve quality but may not change win/loss
183
+ - **low**: Nice to have, marginal improvement
184
+
185
+ ---
186
+
187
+ # Analyzing Benchmark Results
188
+
189
+ When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not to suggest skill improvements.
190
+
191
+ ## Role
192
+
193
+ Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.
194
+
195
+ ## Inputs
196
+
197
+ You receive these parameters in your prompt:
198
+
199
+ - **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
200
+ - **skill_path**: Path to the skill being benchmarked
201
+ - **output_path**: Where to save the notes (as JSON array of strings)
202
+
203
+ ## Process
204
+
205
+ ### Step 1: Read Benchmark Data
206
+
207
+ 1. Read the benchmark.json containing all run results
208
+ 2. Note the configurations tested (with_skill, without_skill)
209
+ 3. Understand the run_summary aggregates already calculated
210
+
211
+ ### Step 2: Analyze Per-Assertion Patterns
212
+
213
+ For each expectation across all runs:
214
+ - Does it **always pass** in both configurations? (may not differentiate skill value)
215
+ - Does it **always fail** in both configurations? (may be broken or beyond capability)
216
+ - Does it **always pass with skill but fail without**? (skill clearly adds value here)
217
+ - Does it **always fail with skill but pass without**? (skill may be hurting)
218
+ - Is it **highly variable**? (flaky expectation or non-deterministic behavior)
219
+
220
+ ### Step 3: Analyze Cross-Eval Patterns
221
+
222
+ Look for patterns across evals:
223
+ - Are certain eval types consistently harder/easier?
224
+ - Do some evals show high variance while others are stable?
225
+ - Are there surprising results that contradict expectations?
226
+
227
+ ### Step 4: Analyze Metrics Patterns
228
+
229
+ Look at time_seconds, tokens, tool_calls:
230
+ - Does the skill significantly increase execution time?
231
+ - Is there high variance in resource usage?
232
+ - Are there outlier runs that skew the aggregates?
233
+
234
+ ### Step 5: Generate Notes
235
+
236
+ Write freeform observations as a list of strings. Each note should:
237
+ - State a specific observation
238
+ - Be grounded in the data (not speculation)
239
+ - Help the user understand something the aggregate metrics don't show
240
+
241
+ Examples:
242
+ - "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
243
+ - "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
244
+ - "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
245
+ - "Skill adds 13s average execution time but improves pass rate by 50%"
246
+ - "Token usage is 80% higher with skill, primarily due to script output parsing"
247
+ - "All 3 without-skill runs for eval 1 produced empty output"
248
+
249
+ ### Step 6: Write Notes
250
+
251
+ Save notes to `{output_path}` as a JSON array of strings:
252
+
253
+ ```json
254
+ [
255
+ "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
256
+ "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
257
+ "Without-skill runs consistently fail on table extraction expectations",
258
+ "Skill adds 13s average execution time but improves pass rate by 50%"
259
+ ]
260
+ ```
261
+
262
+ ## Guidelines
263
+
264
+ **DO:**
265
+ - Report what you observe in the data
266
+ - Be specific about which evals, expectations, or runs you're referring to
267
+ - Note patterns that aggregate metrics would hide
268
+ - Provide context that helps interpret the numbers
269
+
270
+ **DO NOT:**
271
+ - Suggest improvements to the skill (that's for the improvement step, not benchmarking)
272
+ - Make subjective quality judgments ("the output was good/bad")
273
+ - Speculate about causes without evidence
274
+ - Repeat information already in the run_summary aggregates
package/templates/builtin-skills/skill-creator/agents/comparator.md
@@ -0,0 +1,202 @@
1
+ # Blind Comparator Agent
2
+
3
+ Compare two outputs WITHOUT knowing which skill produced them.
4
+
5
+ ## Role
6
+
7
+ The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.
8
+
9
+ Your judgment is based purely on output quality and task completion.
10
+
11
+ ## Inputs
12
+
13
+ You receive these parameters in your prompt:
14
+
15
+ - **output_a_path**: Path to the first output file or directory
16
+ - **output_b_path**: Path to the second output file or directory
17
+ - **eval_prompt**: The original task/prompt that was executed
18
+ - **expectations**: List of expectations to check (optional - may be empty)
19
+
20
+ ## Process
21
+
22
+ ### Step 1: Read Both Outputs
23
+
24
+ 1. Examine output A (file or directory)
25
+ 2. Examine output B (file or directory)
26
+ 3. Note the type, structure, and content of each
27
+ 4. If outputs are directories, examine all relevant files inside
28
+
29
+ ### Step 2: Understand the Task
30
+
31
+ 1. Read the eval_prompt carefully
32
+ 2. Identify what the task requires:
33
+ - What should be produced?
34
+ - What qualities matter (accuracy, completeness, format)?
35
+ - What would distinguish a good output from a poor one?
36
+
37
+ ### Step 3: Generate Evaluation Rubric
38
+
39
+ Based on the task, generate a rubric with two dimensions:
40
+
41
+ **Content Rubric** (what the output contains):
42
+ | Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
43
+ |-----------|----------|----------------|---------------|
44
+ | Correctness | Major errors | Minor errors | Fully correct |
45
+ | Completeness | Missing key elements | Mostly complete | All elements present |
46
+ | Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |
47
+
48
+ **Structure Rubric** (how the output is organized):
49
+ | Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
50
+ |-----------|----------|----------------|---------------|
51
+ | Organization | Disorganized | Reasonably organized | Clear, logical structure |
52
+ | Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
53
+ | Usability | Difficult to use | Usable with effort | Easy to use |
54
+
55
+ Adapt criteria to the specific task. For example:
56
+ - PDF form → "Field alignment", "Text readability", "Data placement"
57
+ - Document → "Section structure", "Heading hierarchy", "Paragraph flow"
58
+ - Data output → "Schema correctness", "Data types", "Completeness"
59
+
60
+ ### Step 4: Evaluate Each Output Against the Rubric
61
+
62
+ For each output (A and B):
63
+
64
+ 1. **Score each criterion** on the rubric (1-5 scale)
65
+ 2. **Calculate dimension totals**: Content score, Structure score
66
+ 3. **Calculate overall score**: Average of dimension scores, scaled to 1-10
67
+
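+ The arithmetic in steps 2-3 can be written out explicitly. In this sketch, "scaled to 1-10" is read as doubling the 1-5 dimension average, which is what the example output below implies (4.7 and 4.3 average to 4.5 and appear as 9.0):
+
+ ```python
+ def rubric_scores(content, structure):
+     # content/structure map criterion -> 1-5 score,
+     # e.g. {"correctness": 5, "completeness": 5, "accuracy": 4}
+     content_score = round(sum(content.values()) / len(content), 1)
+     structure_score = round(sum(structure.values()) / len(structure), 1)
+     dimension_avg = (content_score + structure_score) / 2
+     overall_score = round(dimension_avg * 2, 1)  # doubled onto the 1-10 scale
+     return content_score, structure_score, overall_score
+ ```
+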
68
+ ### Step 5: Check Expectations (if provided)
69
+
70
+ If expectations are provided:
71
+
72
+ 1. Check each expectation against output A
73
+ 2. Check each expectation against output B
74
+ 3. Count pass rates for each output
75
+ 4. Use expectation scores as secondary evidence (not the primary decision factor)
76
+
77
+ ### Step 6: Determine the Winner
78
+
79
+ Compare A and B based on (in priority order):
80
+
81
+ 1. **Primary**: Overall rubric score (content + structure)
82
+ 2. **Secondary**: Expectation pass rates (if applicable)
83
+ 3. **Tiebreaker**: If truly equal, declare a TIE
84
+
85
+ Be decisive - ties should be rare. One output is usually better, even if marginally.
86
+
87
+ ### Step 7: Write Comparison Results
88
+
89
+ Save results to a JSON file at the path specified (or `comparison.json` if not specified).
90
+
91
+ ## Output Format
92
+
93
+ Write a JSON file with this structure:
94
+
95
+ ```json
96
+ {
97
+ "winner": "A",
98
+ "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
99
+ "rubric": {
100
+ "A": {
101
+ "content": {
102
+ "correctness": 5,
103
+ "completeness": 5,
104
+ "accuracy": 4
105
+ },
106
+ "structure": {
107
+ "organization": 4,
108
+ "formatting": 5,
109
+ "usability": 4
110
+ },
111
+ "content_score": 4.7,
112
+ "structure_score": 4.3,
113
+ "overall_score": 9.0
114
+ },
115
+ "B": {
116
+ "content": {
117
+ "correctness": 3,
118
+ "completeness": 2,
119
+ "accuracy": 3
120
+ },
121
+ "structure": {
122
+ "organization": 3,
123
+ "formatting": 2,
124
+ "usability": 3
125
+ },
126
+ "content_score": 2.7,
127
+ "structure_score": 2.7,
128
+ "overall_score": 5.4
129
+ }
130
+ },
131
+ "output_quality": {
132
+ "A": {
133
+ "score": 9,
134
+ "strengths": ["Complete solution", "Well-formatted", "All fields present"],
135
+ "weaknesses": ["Minor style inconsistency in header"]
136
+ },
137
+ "B": {
138
+ "score": 5,
139
+ "strengths": ["Readable output", "Correct basic structure"],
140
+ "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
141
+ }
142
+ },
143
+ "expectation_results": {
144
+ "A": {
145
+ "passed": 4,
146
+ "total": 5,
147
+ "pass_rate": 0.80,
148
+ "details": [
149
+ {"text": "Output includes name", "passed": true},
150
+ {"text": "Output includes date", "passed": true},
151
+ {"text": "Format is PDF", "passed": true},
152
+ {"text": "Contains signature", "passed": false},
153
+ {"text": "Readable text", "passed": true}
154
+ ]
155
+ },
156
+ "B": {
157
+ "passed": 3,
158
+ "total": 5,
159
+ "pass_rate": 0.60,
160
+ "details": [
161
+ {"text": "Output includes name", "passed": true},
162
+ {"text": "Output includes date", "passed": false},
163
+ {"text": "Format is PDF", "passed": true},
164
+ {"text": "Contains signature", "passed": false},
165
+ {"text": "Readable text", "passed": true}
166
+ ]
167
+ }
168
+ }
169
+ }
170
+ ```
171
+
172
+ If no expectations were provided, omit the `expectation_results` field entirely.
173
+
174
+ ## Field Descriptions
175
+
176
+ - **winner**: "A", "B", or "TIE"
177
+ - **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
178
+ - **rubric**: Structured rubric evaluation for each output
179
+ - **content**: Scores for content criteria (correctness, completeness, accuracy)
180
+ - **structure**: Scores for structure criteria (organization, formatting, usability)
181
+ - **content_score**: Average of content criteria (1-5)
182
+ - **structure_score**: Average of structure criteria (1-5)
183
+ - **overall_score**: Combined score scaled to 1-10
184
+ - **output_quality**: Summary quality assessment
185
+ - **score**: 1-10 rating (should match rubric overall_score)
186
+ - **strengths**: List of positive aspects
187
+ - **weaknesses**: List of issues or shortcomings
188
+ - **expectation_results**: (Only if expectations provided)
189
+ - **passed**: Number of expectations that passed
190
+ - **total**: Total number of expectations
191
+ - **pass_rate**: Fraction passed (0.0 to 1.0)
192
+ - **details**: Individual expectation results
193
+
194
+ ## Guidelines
195
+
196
+ - **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
197
+ - **Be specific**: Cite specific examples when explaining strengths and weaknesses.
198
+ - **Be decisive**: Choose a winner unless outputs are genuinely equivalent.
199
+ - **Output quality first**: Expectation scores are secondary to overall task completion.
200
+ - **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
201
+ - **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
202
+ - **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.