goiabaseeds 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +173 -0
- package/bin/goiabaseeds.js +98 -0
- package/eslint.config.js +14 -0
- package/package.json +61 -0
- package/skills/README.md +60 -0
- package/skills/apify/SKILL.md +55 -0
- package/skills/blotato/SKILL.md +63 -0
- package/skills/canva/SKILL.md +60 -0
- package/skills/goiabaseeds-agent-creator/SKILL.md +192 -0
- package/skills/goiabaseeds-skill-creator/SKILL.md +407 -0
- package/skills/goiabaseeds-skill-creator/agents/analyzer.md +274 -0
- package/skills/goiabaseeds-skill-creator/agents/comparator.md +202 -0
- package/skills/goiabaseeds-skill-creator/agents/grader.md +223 -0
- package/skills/goiabaseeds-skill-creator/assets/eval_review.html +146 -0
- package/skills/goiabaseeds-skill-creator/eval-viewer/generate_review.py +471 -0
- package/skills/goiabaseeds-skill-creator/eval-viewer/viewer.html +1325 -0
- package/skills/goiabaseeds-skill-creator/references/schemas.md +430 -0
- package/skills/goiabaseeds-skill-creator/references/skill-format.md +235 -0
- package/skills/goiabaseeds-skill-creator/scripts/__init__.py +0 -0
- package/skills/goiabaseeds-skill-creator/scripts/aggregate_benchmark.py +401 -0
- package/skills/goiabaseeds-skill-creator/scripts/quick_validate.py +103 -0
- package/skills/goiabaseeds-skill-creator/scripts/run_eval.py +310 -0
- package/skills/goiabaseeds-skill-creator/scripts/utils.py +47 -0
- package/skills/image-creator/SKILL.md +155 -0
- package/skills/image-fetcher/SKILL.md +91 -0
- package/skills/image-generator/SKILL.md +124 -0
- package/skills/image-generator/scripts/generate.py +175 -0
- package/skills/instagram-publisher/SKILL.md +118 -0
- package/skills/instagram-publisher/scripts/publish.js +164 -0
- package/src/agent-session.js +110 -0
- package/src/agents-cli.js +158 -0
- package/src/agents.js +134 -0
- package/src/bundle-detector.js +75 -0
- package/src/bundle.js +286 -0
- package/src/context.js +142 -0
- package/src/export.js +52 -0
- package/src/i18n.js +48 -0
- package/src/init.js +367 -0
- package/src/locales/en.json +72 -0
- package/src/locales/es.json +71 -0
- package/src/locales/pt-BR.json +71 -0
- package/src/logger.js +38 -0
- package/src/models-cli.js +165 -0
- package/src/pipeline-runner.js +478 -0
- package/src/prompt.js +46 -0
- package/src/provider.js +156 -0
- package/src/readme/README.md +181 -0
- package/src/run.js +100 -0
- package/src/runs.js +90 -0
- package/src/skills-cli.js +157 -0
- package/src/skills.js +146 -0
- package/src/state-manager.js +280 -0
- package/src/tools.js +158 -0
- package/src/update.js +140 -0
- package/templates/_goiabaseeds/.goiabaseeds-version +1 -0
- package/templates/_goiabaseeds/_investigations/.gitkeep +0 -0
- package/templates/_goiabaseeds/config/playwright.config.json +11 -0
- package/templates/_goiabaseeds/core/architect.agent.yaml +1141 -0
- package/templates/_goiabaseeds/core/best-practices/_catalog.yaml +116 -0
- package/templates/_goiabaseeds/core/best-practices/blog-post.md +132 -0
- package/templates/_goiabaseeds/core/best-practices/blog-seo.md +127 -0
- package/templates/_goiabaseeds/core/best-practices/copywriting.md +428 -0
- package/templates/_goiabaseeds/core/best-practices/data-analysis.md +401 -0
- package/templates/_goiabaseeds/core/best-practices/email-newsletter.md +118 -0
- package/templates/_goiabaseeds/core/best-practices/email-sales.md +110 -0
- package/templates/_goiabaseeds/core/best-practices/image-design.md +349 -0
- package/templates/_goiabaseeds/core/best-practices/instagram-feed.md +235 -0
- package/templates/_goiabaseeds/core/best-practices/instagram-reels.md +112 -0
- package/templates/_goiabaseeds/core/best-practices/instagram-stories.md +107 -0
- package/templates/_goiabaseeds/core/best-practices/linkedin-article.md +116 -0
- package/templates/_goiabaseeds/core/best-practices/linkedin-post.md +121 -0
- package/templates/_goiabaseeds/core/best-practices/researching.md +347 -0
- package/templates/_goiabaseeds/core/best-practices/review.md +269 -0
- package/templates/_goiabaseeds/core/best-practices/social-networks-publishing.md +294 -0
- package/templates/_goiabaseeds/core/best-practices/strategist.md +344 -0
- package/templates/_goiabaseeds/core/best-practices/technical-writing.md +363 -0
- package/templates/_goiabaseeds/core/best-practices/twitter-post.md +105 -0
- package/templates/_goiabaseeds/core/best-practices/twitter-thread.md +122 -0
- package/templates/_goiabaseeds/core/best-practices/whatsapp-broadcast.md +107 -0
- package/templates/_goiabaseeds/core/best-practices/youtube-script.md +122 -0
- package/templates/_goiabaseeds/core/best-practices/youtube-shorts.md +112 -0
- package/templates/_goiabaseeds/core/prompts/auguste.dupin.prompt.md +1008 -0
- package/templates/_goiabaseeds/core/runner.pipeline.md +467 -0
- package/templates/_goiabaseeds/core/skills.engine.md +381 -0
- package/templates/dashboard/index.html +12 -0
- package/templates/dashboard/package-lock.json +2082 -0
- package/templates/dashboard/package.json +28 -0
- package/templates/dashboard/src/App.tsx +46 -0
- package/templates/dashboard/src/components/DepartmentCard.tsx +47 -0
- package/templates/dashboard/src/components/DepartmentSelector.tsx +61 -0
- package/templates/dashboard/src/components/StatusBadge.tsx +32 -0
- package/templates/dashboard/src/components/StatusBar.tsx +97 -0
- package/templates/dashboard/src/hooks/useDepartmentSocket.ts +84 -0
- package/templates/dashboard/src/lib/formatTime.ts +16 -0
- package/templates/dashboard/src/lib/normalizeState.ts +25 -0
- package/templates/dashboard/src/main.tsx +10 -0
- package/templates/dashboard/src/office/AgentDesk.tsx +151 -0
- package/templates/dashboard/src/office/HandoffEnvelope.tsx +108 -0
- package/templates/dashboard/src/office/OfficeScene.tsx +147 -0
- package/templates/dashboard/src/office/drawDesk.ts +263 -0
- package/templates/dashboard/src/office/drawFurniture.ts +129 -0
- package/templates/dashboard/src/office/drawRoom.ts +51 -0
- package/templates/dashboard/src/office/palette.ts +181 -0
- package/templates/dashboard/src/office/textures.ts +254 -0
- package/templates/dashboard/src/plugin/departmentWatcher.ts +210 -0
- package/templates/dashboard/src/store/useDepartmentStore.ts +56 -0
- package/templates/dashboard/src/styles/globals.css +36 -0
- package/templates/dashboard/src/types/state.ts +64 -0
- package/templates/dashboard/src/vite-env.d.ts +1 -0
- package/templates/dashboard/tsconfig.json +24 -0
- package/templates/dashboard/vite.config.ts +13 -0
- package/templates/departments/.gitkeep +0 -0
- package/templates/ide-templates/antigravity/.agent/rules/goiabaseeds.md +55 -0
- package/templates/ide-templates/antigravity/.agent/workflows/goiabaseeds.md +102 -0
- package/templates/ide-templates/claude-code/.claude/skills/goiabaseeds/SKILL.md +182 -0
- package/templates/ide-templates/claude-code/.mcp.json +8 -0
- package/templates/ide-templates/claude-code/CLAUDE.md +43 -0
- package/templates/ide-templates/codex/.agents/skills/goiabaseeds/SKILL.md +6 -0
- package/templates/ide-templates/codex/AGENTS.md +105 -0
- package/templates/ide-templates/cursor/.cursor/commands/goiabaseeds.md +9 -0
- package/templates/ide-templates/cursor/.cursor/mcp.json +8 -0
- package/templates/ide-templates/cursor/.cursor/rules/goiabaseeds.mdc +48 -0
- package/templates/ide-templates/cursor/.cursorignore +3 -0
- package/templates/ide-templates/opencode/.opencode/commands/goiabaseeds.md +9 -0
- package/templates/ide-templates/opencode/AGENTS.md +105 -0
- package/templates/ide-templates/vscode-copilot/.github/prompts/goiabaseeds.prompt.md +201 -0
- package/templates/ide-templates/vscode-copilot/.vscode/mcp.json +8 -0
- package/templates/ide-templates/vscode-copilot/.vscode/settings.json +3 -0
- package/templates/package.json +8 -0
@@ -0,0 +1,192 @@
---
name: "Best-Practice Creator"
description: >
  Guides creation and maintenance of best-practice files for the GoiabaSeeds best-practices library.
  Handles format validation, cross-references, versioning, and catalog consistency.
description_pt-BR: >
  Guia a criação e manutenção de arquivos de best-practice na biblioteca de best-practices do GoiabaSeeds.
  Cuida de validação de formato, referências cruzadas, versionamento e consistência do catálogo.
description_es: >
  Guía la creación y mantenimiento de archivos de best-practice en la biblioteca de best-practices de GoiabaSeeds.
  Maneja validación de formato, referencias cruzadas, versionamiento y consistencia del catálogo.
type: prompt
version: "1.0.0"
---

# Best-Practice Creator — Workflow

Use this workflow when creating a new best-practice file for the `_goiabaseeds/core/best-practices/` library.

## Pre-flight Checks

1. **Scan existing best-practice files**: Read `_goiabaseeds/core/best-practices/_catalog.yaml`. Extract `id`, `name`, `whenToUse`, `file` from each entry.
2. **Check for overlap**: Verify the new best-practice file doesn't duplicate an existing entry's `whenToUse` scope. If there's overlap, clarify the differentiation before proceeding.
3. **List available skills**: Read all `skills/*/SKILL.md` files. Extract `name`, `description`, `type` from each — these may inform the best-practice file's content.
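The overlap check in step 2 can be sketched as a small helper. This is only an illustrative heuristic, not part of GoiabaSeeds: it assumes the catalog has already been parsed (e.g., with PyYAML) into a list of entry dicts, and `find_overlaps` is a hypothetical name.

```python
def find_overlaps(new_when_to_use: str, catalog_entries: list[dict]) -> list[str]:
    """Return ids of existing entries whose whenToUse shares scope words with the new one."""
    new_words = set(new_when_to_use.lower().split())
    overlapping = []
    for entry in catalog_entries:
        existing_words = set(entry["whenToUse"].lower().split())
        # crude heuristic: flag entries sharing 3+ words for manual review
        if len(new_words & existing_words) >= 3:
            overlapping.append(entry["id"])
    return overlapping
```

Any flagged entry still needs human judgment; the point is only to surface candidates for the differentiation discussion.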
## Creation Checklist

For each new best-practice file, ensure ALL of the following:

### Frontmatter (YAML)

- [ ] `id`: lowercase kebab-case (e.g., `copywriting`)
- [ ] `name`: Display name for catalog listing (e.g., `"Copywriting & Persuasive Writing"`)
- [ ] `whenToUse`: Multi-line with positive scope AND "NOT for: ..." negative scope referencing other best-practice IDs
- [ ] `version`: `"1.0.0"` for new best-practice files

### Body (Markdown) — All sections mandatory

- [ ] **Core Principles**: 6+ numbered domain-specific decision rules, each with a bold title and detailed explanation
- [ ] **Techniques & Frameworks**: Concrete methods, models, or processes practitioners use in this discipline (e.g., diagnostic steps, framework selections, structural patterns)
- [ ] **Quality Criteria**: 4+ checkable criteria as a `- [ ]` list that can be used to evaluate output
- [ ] **Output Examples**: 2+ complete examples, 15+ lines each, realistic NOT template-like
- [ ] **Anti-Patterns**: Never Do (4+) + Always Do (3+), each with explanation
- [ ] **Vocabulary Guidance**: Terms/phrases to Always Use (5+), Terms/phrases to Never Use (3+), Tone Rules (2+)

### Quality Minimums

| Section | Minimum |
|---------|---------|
| Total file lines | 200+ |
| Core Principles | 6+ numbered rules |
| Techniques & Frameworks | 3+ concrete techniques |
| Vocabulary Always Use | 5+ terms |
| Vocabulary Never Use | 3+ terms |
| Output Examples | 2 complete, 15+ lines each |
| Anti-Patterns (Never Do) | 4+ |
| Anti-Patterns (Always Do) | 3+ |
| Quality Criteria | 4+ checkable items |

## Post-Creation Steps

### 1. Update existing best-practice files' `whenToUse`

For each existing best-practice file whose scope overlaps with the new one:
- Add a "NOT for: {overlapping-scope} → See {new-best-practice-id}" line to their `whenToUse`
- Bump their version (patch increment)

### 2. Update `_catalog.yaml`

Add a new entry to `_goiabaseeds/core/best-practices/_catalog.yaml` with:
- `id`: matching the frontmatter `id`
- `name`: matching the frontmatter `name`
- `whenToUse`: single-line summary of the scope (positive only, no "NOT for")
- `file`: `{id}.md`

Place it under the appropriate section comment (Discipline or Platform best practices).

### 3. File placement

Save to `_goiabaseeds/core/best-practices/{id}.md`.

### 4. Validation

Re-read the created file and verify:
- [ ] All checklist items above are present
- [ ] YAML frontmatter parses correctly (no syntax errors)
- [ ] `whenToUse` references only existing best-practice IDs
- [ ] Output examples are realistic, not template placeholders
- [ ] File exceeds 200 lines
- [ ] Corresponding entry exists in `_catalog.yaml`

---

# Best-Practice Updater — Workflow

Use this workflow when updating best-practice files in the `_goiabaseeds/core/best-practices/` library.

## Versioning Rules (Semver)

| Change Type | Version Bump | Examples |
|-------------|--------------|----------|
| **Patch** (x.x.X) | Fix typos, adjust wording, minor refinements | Fix anti-pattern phrasing, correct a vocabulary term |
| **Minor** (x.X.0) | Add new content, extend capabilities | Add new principle, new output example, new technique |
| **Major** (X.0.0) | Rewrite or restructure significantly | Rewrite core principles, fundamentally change scope |

Always update the `version` field in the YAML frontmatter after any change.
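The bump rules in the table reduce to a few lines of code. A minimal sketch (`bump_version` is a hypothetical helper name, not something shipped in the package):

```python
def bump_version(version: str, change: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string per the semver rules above."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "patch":   # typo fixes, wording tweaks
        return f"{major}.{minor}.{patch + 1}"
    if change == "minor":   # new principle, example, or technique
        return f"{major}.{minor + 1}.0"
    if change == "major":   # rewrite or fundamental scope change
        return f"{major + 1}.0.0"
    raise ValueError(f"unknown change type: {change}")
```

Note that minor and major bumps reset the lower components to zero, as semver requires.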
## Update Scenarios

### When a best-practice file is removed from the library

1. Get the removed best-practice file's `id`
2. Remove its entry from `_goiabaseeds/core/best-practices/_catalog.yaml`
3. Scan ALL remaining best-practice files in `_goiabaseeds/core/best-practices/*.md`
4. For each file, check if the removed ID is referenced in `whenToUse`
   - Look for patterns: "NOT for: ... → See {removed-id}"
5. If found, remove that "NOT for" line
6. Bump the affected files' version (patch: x.x.X)

### When a new best-practice file is added to the library

The Best-Practice Creator workflow (above) handles the initial `whenToUse` cross-references during creation. This section is only needed if cross-references were missed or need adjustment after the fact.

1. Read the new best-practice file's `whenToUse` — identify its scope
2. Scan existing best-practice files for overlapping scope
3. Add "NOT for: {new-scope} → See {new-id}" where appropriate
4. Bump affected files' version (patch)
5. Ensure the new entry exists in `_catalog.yaml`

### When updating a best-practice file's content

1. Make the content changes
2. Verify ALL mandatory sections still exist:
   - [ ] Core Principles (6+ rules)
   - [ ] Techniques & Frameworks (3+ techniques)
   - [ ] Quality Criteria (4+ checkable items)
   - [ ] Output Examples (2+ complete examples)
   - [ ] Anti-Patterns (Never Do + Always Do)
   - [ ] Vocabulary Guidance (Always Use, Never Use, Tone Rules)
3. Bump version according to semver rules above
4. If the `whenToUse` scope changed, update cross-references in other best-practice files and in `_catalog.yaml`

### When updating a best-practice file's `whenToUse` scope

This is the most impactful change — it affects how the Architect selects best practices during department creation.

1. Document the old scope and new scope
2. Update the best-practice file's `whenToUse` field
3. Scan ALL other best-practice files' `whenToUse` for references to this ID
4. Update cross-references to reflect the new scope
5. Update the `whenToUse` summary in `_catalog.yaml`
6. Bump version (minor if scope expanded, patch if scope narrowed)

## Validation Checklist

After ANY update, verify:

- [ ] Version was bumped correctly (patch/minor/major per rules above)
- [ ] All mandatory sections still present and non-empty
- [ ] `whenToUse` cross-references are consistent across ALL best-practice files
- [ ] No broken cross-references to removed best-practice IDs
- [ ] Output examples are still realistic and complete
- [ ] File still exceeds the 200-line minimum
- [ ] `_catalog.yaml` entry is in sync with frontmatter (`id`, `name`, `whenToUse`)

## Bulk Operations

### Verify catalog consistency

```
Read _goiabaseeds/core/best-practices/_catalog.yaml
For each entry in catalog:
  1. Verify _goiabaseeds/core/best-practices/{entry.file} exists
  2. Read the file's frontmatter
  3. Verify entry.id matches frontmatter id
  4. Verify entry.name matches frontmatter name
  5. Flag any mismatches

For each .md file in _goiabaseeds/core/best-practices/ (excluding _catalog.yaml):
  1. Verify a corresponding entry exists in _catalog.yaml
  2. Flag any orphaned files with no catalog entry
```
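The catalog check above is mostly set comparison once parsing is done. A sketch, assuming the catalog entries and each file's frontmatter have already been loaded into dicts (e.g., with PyYAML); the helper name is illustrative:

```python
def check_catalog(entries: list[dict], frontmatter_by_file: dict[str, dict]) -> list[str]:
    """Compare catalog entries against per-file frontmatter; return problem descriptions."""
    problems = []
    for entry in entries:
        fm = frontmatter_by_file.get(entry["file"])
        if fm is None:
            problems.append(f"missing file: {entry['file']}")
            continue
        for key in ("id", "name"):
            if entry[key] != fm.get(key):
                problems.append(f"{entry['file']}: {key} mismatch")
    # files on disk that no catalog entry points at
    listed = {entry["file"] for entry in entries}
    problems += [f"orphaned file: {f}" for f in frontmatter_by_file if f not in listed]
    return problems
```

An empty return value means the catalog and the files agree.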
### Verify cross-reference consistency

```
For each best-practice file A in _goiabaseeds/core/best-practices/*.md:
  For each "NOT for: ... → See {id}" in A.whenToUse:
    1. Verify _goiabaseeds/core/best-practices/{id}.md exists
    2. Verify {id}'s whenToUse covers the referenced scope
    3. Flag inconsistencies
```
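The "NOT for" scan lends itself to a regex. A sketch, assuming the exact arrow format shown above (the pattern and function name are illustrative):

```python
import re

# matches: NOT for: <anything> → See <kebab-case-id>
NOT_FOR = re.compile(r"NOT for:.*?→ See ([a-z0-9-]+)")

def broken_refs(when_to_use: str, known_ids: set[str]) -> list[str]:
    """Return referenced best-practice ids that don't exist in the library."""
    return [ref for ref in NOT_FOR.findall(when_to_use) if ref not in known_ids]
```

Running it over every file's `whenToUse` with the set of ids from `_catalog.yaml` surfaces dangling references after a removal.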
@@ -0,0 +1,407 @@
---
name: goiabaseeds-skill-creator
description: Create new GoiabaSeeds skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill for their departments, update or optimize an existing skill, run evals to test a skill, or benchmark skill performance. Supports all GoiabaSeeds skill types: MCP integrations, custom scripts, hybrid, and behavioral prompts.
---

# GoiabaSeeds Skill Creator

A skill for creating new GoiabaSeeds skills and iteratively improving them.

At a high level, the process of creating a skill goes like this:

- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run a GoiabaSeeds agent with the skill injected into its context
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft quantitative evals if there aren't any (if some exist, use them as-is or modify them as needed), then explain them to the user
- Use the `eval-viewer/generate_review.py` script to show the user the results, and let them look at the quantitative metrics as well
- Rewrite the skill based on the user's feedback on the results (and on any glaring flaws the quantitative benchmarks reveal)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale

Your job when using this skill is to figure out where the user is in this process, then jump in and help them progress through these stages. For instance, if they say "I want to make a skill for X", you can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.

On the other hand, maybe they already have a draft of the skill. In that case you can go straight to the eval/iterate part of the loop.

Of course, you should always be flexible: if the user says "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.

Cool? Cool.
## Communicating with the user

The skill creator will be used by people with a wide range of familiarity with coding jargon. There's a recent trend of Claude inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.

So please pay attention to context cues to understand how to phrase your communication! In the default case, to give you some idea:

- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion", look for clear cues that the user knows these terms before using them unexplained

If you're unsure whether the user will get a term, it's fine to add a brief definition.
---

## Creating a skill

### Capture Intent

Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. Ask the user to fill any gaps and to confirm before proceeding to the next step.

1. What should this skill enable agents to do?
2. When should this skill be used? (what user phrases/contexts/department scenarios)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
5. What type of skill is this?
   - **MCP** — Connects to an external API via MCP server (e.g., Canva, Apify)
   - **Script** — Runs a custom script (Node.js, Python, Bash)
   - **Hybrid** — Both MCP and script components
   - **Prompt** — Pure behavioral instructions for agents (no external integration)

For MCP skills, also ask:
- What MCP server command? (e.g., `npx -y @package/name`)
- What transport? (stdio or http)
- If http: what URL?
- What environment variables are needed?
- Any authentication headers?

For Script skills, also ask:
- What runtime? (Node.js, Python, Bash)
- What dependencies?
- What's the invocation command?

For Hybrid: ask both sets of questions.
For Prompt: skip — proceed directly to writing the skill body.

### Interview and Research

Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.

Check available MCPs: if any are useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.

### Write the SKILL.md

After the interview, generate the SKILL.md with:
- YAML frontmatter following the schema in `references/skill-format.md` for the chosen type
- Markdown body with instructions for agents

Refer to `references/skill-format.md` for the exact frontmatter schema per skill type.

### Skill Writing Guide
#### Anatomy of a Skill

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter (name, description, type, version required)
│   └── Markdown instructions
└── Bundled Resources (optional)
    ├── scripts/    - Executable code for deterministic/repetitive tasks
    ├── references/ - Docs loaded into context as needed
    └── assets/     - Files used in output (templates, icons, fonts)
```

#### Progressive Disclosure

Skills use a three-level loading system:
1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever skill is active (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)

These limits are approximate; feel free to go longer if needed.

**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents

**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
    ├── aws.md
    ├── gcp.md
    └── azure.md
```
The agent reads only the relevant reference file.

#### Principle of Least Surprise

This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents, if described to the user, should not surprise them. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like "roleplay as an XYZ" are OK though.

#### Writing Patterns

Prefer the imperative form in instructions.

**Defining output formats**: You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```

**Examples pattern**: It's useful to include examples. You can format them like this (though if "Input" and "Output" appear in the examples themselves, you might want to deviate a little):
```markdown
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```

### Writing Style

Explain to the model why things are important instead of leaning on heavy-handed MUSTs. Use theory of mind, and keep the skill general rather than narrowly tailored to specific examples. Start by writing a draft, then look at it with fresh eyes and improve it.
### Test Cases

After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.

Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.

```json
{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's task prompt",
      "expected_output": "Description of expected result",
      "files": []
    }
  ]
}
```

See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
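Before running anything, it's cheap to sanity-check the file's shape. The package bundles `scripts/quick_validate.py` for real validation; the standalone sketch below only checks the minimal fields shown in the example above, and its name is illustrative:

```python
import json

REQUIRED = ("id", "prompt", "expected_output", "files")

def validate_evals(text: str) -> list[str]:
    """Return a list of problems with an evals.json document; empty means it looks OK."""
    data = json.loads(text)
    problems = []
    if "skill_name" not in data:
        problems.append("missing skill_name")
    for i, ev in enumerate(data.get("evals", [])):
        for key in REQUIRED:
            if key not in ev:
                problems.append(f"eval {i}: missing {key}")
    return problems
```

Catching a missing field here is much cheaper than discovering it after the subagent runs have finished.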
|
|
181
|
+
## Running and evaluating test cases
|
|
182
|
+
|
|
183
|
+
This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
|
|
184
|
+
|
|
185
|
+
Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
|
|
186
|
+
|
|
187
|
+
### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
|
|
188
|
+
|
|
189
|
+
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
|
|
190
|
+
|
|
191
|
+
**With-skill run:**
|
|
192
|
+
|
|
193
|
+
The with-skill run simulates how a GoiabaSeeds agent operates with this skill injected into its context. The skill's SKILL.md body gets appended to the agent's instructions, just like the pipeline runner does during actual department execution.
|
|
194
|
+
|
|
195
|
+
```
|
|
196
|
+
Execute this task:
|
|
197
|
+
- Skill path: <path-to-skill>
|
|
198
|
+
- Task: <eval prompt>
|
|
199
|
+
- Input files: <eval files if any, or "none">
|
|
200
|
+
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
|
|
201
|
+
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
|
|
202
|
+
- Instructions: Read the skill's SKILL.md and follow its instructions as if you were a GoiabaSeeds agent with this skill active.
|
|
203
|
+
```
|
|
204
|
+
|
|
205
|
+
**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.

Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.

```json
{
  "eval_id": 0,
  "eval_name": "descriptive-name-here",
  "prompt": "The user's task prompt",
  "assertions": []
}
```

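As a sketch of this bookkeeping in Python (the workspace path, eval names, and prompts below are hypothetical examples, not values from the pipeline), the per-eval directories and metadata files can be created like this:

```python
import json
from pathlib import Path

# Hypothetical workspace and test cases -- substitute your own.
workspace = Path("workspace/iteration-1")
evals = [
    {"eval_id": 0, "eval_name": "chart-from-csv", "prompt": "Make a bar chart from data.csv"},
    {"eval_id": 1, "eval_name": "docx-report", "prompt": "Write a two-page report as a .docx"},
]

for ev in evals:
    eval_dir = workspace / f"eval-{ev['eval_name']}"
    # One output directory per configuration (with-skill and baseline).
    for variant in ("with_skill", "without_skill"):
        (eval_dir / variant / "outputs").mkdir(parents=True, exist_ok=True)
    # Assertions start empty; they get drafted in Step 2 while the runs execute.
    metadata = {**ev, "assertions": []}
    (eval_dir / "eval_metadata.json").write_text(json.dumps(metadata, indent=2))
```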
### Step 2: While runs are in progress, draft assertions

Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.

Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.

Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.

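For example, a drafted set of assertions for a hypothetical chart-generation eval might look like the following (illustrative only: check `references/schemas.md` for the exact assertion format the grader and viewer expect):

```json
{
  "eval_id": 0,
  "eval_name": "chart-from-csv",
  "prompt": "Make a bar chart from data.csv",
  "assertions": [
    "output-contains-png: outputs/ contains at least one .png file",
    "chart-has-axis-labels: both the x and y axes are labeled",
    "all-rows-plotted: the chart shows one bar per row of data.csv"
  ]
}
```

Each assertion here is objectively verifiable and named so it reads clearly in the benchmark viewer.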
### Step 3: As runs complete, capture timing data

When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:

```json
{
  "total_tokens": 84852,
  "duration_ms": 23332,
  "total_duration_seconds": 23.3
}
```

This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.

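As a minimal sketch of that capture step (assuming the notification fields are already in hand as a dict; the run directory below is a hypothetical example):

```python
import json
from pathlib import Path

def save_timing(run_dir: str, notification: dict) -> dict:
    """Persist token and duration data from a task notification before it's lost."""
    timing = {
        "total_tokens": notification["total_tokens"],
        "duration_ms": notification["duration_ms"],
        # Derived convenience field, rounded for the benchmark tables.
        "total_duration_seconds": round(notification["duration_ms"] / 1000, 1),
    }
    out = Path(run_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "timing.json").write_text(json.dumps(timing, indent=2))
    return timing

# Example with the sample values shown above.
save_timing("demo_timing_run", {"total_tokens": 84852, "duration_ms": 23332})
```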
### Step 4: Grade, aggregate, and launch the viewer

Once all runs are done:

1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The `grading.json` expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.

2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:

   ```bash
   python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
   ```

   This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean +/- stddev and the delta. If generating `benchmark.json` manually, see `references/schemas.md` for the exact schema the viewer expects. Put each with_skill version before its baseline counterpart.

3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.

4. **Launch the viewer** with both qualitative outputs and quantitative data:

   ```bash
   nohup python <skill-creator-path>/eval-viewer/generate_review.py \
     <workspace>/iteration-N \
     --skill-name "my-skill" \
     --benchmark <workspace>/iteration-N/benchmark.json \
     > /dev/null 2>&1 &
   VIEWER_PID=$!
   ```

   For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.

   **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.

   Note: use `generate_review.py` to create the viewer; there's no need to write custom HTML.

5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."

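To make step 1 concrete, here is a sketch of a programmatic grading script (the assertion, file names, and run directory are made up for illustration; the `text`/`passed`/`evidence` field names are the ones the viewer requires):

```python
import json
from pathlib import Path

def grade_run(run_dir: str) -> dict:
    """Check one illustrative assertion against a run's outputs and write grading.json."""
    outputs = Path(run_dir) / "outputs"
    pngs = sorted(outputs.glob("*.png")) if outputs.is_dir() else []
    expectations = [{
        # The viewer depends on exactly these field names.
        "text": "outputs/ contains at least one .png file",
        "passed": len(pngs) > 0,
        "evidence": f"found {len(pngs)} .png file(s): {[p.name for p in pngs]}",
    }]
    grading = {"expectations": expectations}
    (Path(run_dir) / "grading.json").write_text(json.dumps(grading, indent=2))
    return grading

# Demo against a hypothetical run directory.
demo = Path("demo_run/outputs")
demo.mkdir(parents=True, exist_ok=True)
(demo / "chart.png").write_bytes(b"")
result = grade_run("demo_run")
```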
### What the user sees in the viewer

The "Outputs" tab shows one test case at a time:

- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox

The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.

Navigation is via prev/next buttons or arrow keys. When done, the user clicks "Submit All Reviews", which saves all feedback to `feedback.json`.

### Step 5: Read the feedback

When the user tells you they're done, read `feedback.json`:

```json
{
  "reviews": [
    {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
    {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
  ],
  "status": "complete"
}
```

Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.

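A small sketch for triaging the feedback (the sample data mirrors the example above; the helper name and demo file path are hypothetical):

```python
import json
from pathlib import Path

def triage_feedback(path: str) -> list:
    """Return (run_id, feedback) pairs needing attention; empty feedback means 'fine'."""
    data = json.loads(Path(path).read_text())
    return [(r["run_id"], r["feedback"])
            for r in data.get("reviews", [])
            if r.get("feedback", "").strip()]

# Sample file mirroring the structure shown above.
sample = {
    "reviews": [
        {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels"},
        {"run_id": "eval-1-with_skill", "feedback": ""},
    ],
    "status": "complete",
}
Path("demo_feedback.json").write_text(json.dumps(sample))
pending = triage_feedback("demo_feedback.json")  # only eval-0 needs work
```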
Kill the viewer server when you're done with it:

```bash
kill $VIEWER_PID 2>/dev/null
```

---

## Improving the skill

This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.

### How to think about improvements

1. **Generalize from the feedback.** The big picture is that we're trying to create skills that can be used a million times (maybe literally, maybe more) across many different prompts. You and the user are iterating on only a few examples over and over because it helps move faster: the user knows these examples in and out, and it's quick for them to assess new outputs. But if the skill you're codeveloping works only for those examples, it's useless. Rather than putting in fiddly, overfit changes or oppressively constrictive MUSTs, if there's some stubborn issue, try branching out: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.

2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs: if the skill is making the model waste time on unproductive work, try removing the parts of the skill that cause it and see what happens.

3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and, given a good harness, can go beyond rote instructions and really make things happen. Even if the user's feedback is terse or frustrated, work to understand the task, what the user actually wrote, and why they wrote it — then transmit that understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag: where possible, reframe and explain the reasoning so the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.

4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice whether the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.

This task is pretty important (we're trying to create billions a year in economic value here!), and your thinking time is not the blocker: take your time and really mull things over. I'd suggest writing a draft revision, then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.

### The iteration loop

After improving the skill:

1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat

Keep going until:

- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress

---

## Advanced: Blind comparison

For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.

This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.

---

## Claude.ai-specific instructions

In Claude.ai, the core workflow is the same (draft -> test -> review -> improve -> repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:

**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.

**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"

**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons, which aren't meaningful without subagents. Focus on qualitative feedback from the user.

**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.

**Blind comparison**: Requires subagents. Skip it.

---

## Cowork-Specific Instructions

If you're in Cowork, the main things to know are:

- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than in parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then offer a link the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so to reiterate: whether you're in Cowork or Claude Code, after running tests, always generate the eval viewer with `generate_review.py` (not your own boutique HTML) so the human can look at the examples before you start revising the skill and making corrections yourself. Sorry in advance, but I'm going all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get them in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).

---

## Reference files

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.

- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another

The references/ directory has additional documentation:

- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
- `references/skill-format.md` — GoiabaSeeds SKILL.md frontmatter schema per skill type

---

Repeating the core loop one more time for emphasis:

- Figure out what the skill is about
- Draft or edit the skill
- Run a GoiabaSeeds agent with the skill injected into its context on test prompts
- With the user, evaluate the outputs:
  - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
  - Run quantitative evals
- Repeat until you and the user are satisfied

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so the human can review test cases" in your TodoList to make sure it happens.

Good luck!