@comate/zulu 1.2.1-beta.1 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (169)
  1. package/comate-engine/assets/skills/auto-commit-comate/SKILL.md +260 -0
  2. package/comate-engine/assets/skills/auto-commit-comate/references/data_structures.md +189 -0
  3. package/comate-engine/assets/skills/auto-commit-comate/references/new_version_instruction.md +209 -0
  4. package/comate-engine/assets/skills/auto-commit-comate/references/old_version_instruction.md +208 -0
  5. package/comate-engine/assets/skills/auto-commit-comate/scripts/git_diff_cli.py +196 -0
  6. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/git_utils.py +20 -10
  7. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/icafe/client.py +69 -40
  8. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/icafe/farseer.py +8 -9
  9. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/icafe/matching.py +65 -9
  10. package/comate-engine/assets/skills/auto-commit-comate/scripts/match_card_cli.py +37 -0
  11. package/comate-engine/assets/skills/cnap-comate/SKILL.md +157 -0
  12. package/comate-engine/assets/skills/cnap-comate/references/cases.md +198 -0
  13. package/comate-engine/assets/skills/cnap-comate/references/deploy-troubleshoot.md +15 -0
  14. package/comate-engine/assets/skills/cnap-comate/references/install.md +43 -0
  15. package/comate-engine/assets/skills/cnap-comate/references/kubectl.md +55 -0
  16. package/comate-engine/assets/skills/cnap-comate/references/login.md +125 -0
  17. package/comate-engine/assets/skills/cnap-comate/references/oncall.md +24 -0
  18. package/comate-engine/assets/skills/cnap-comate/scripts/install_cnap_cli.sh +36 -0
  19. package/comate-engine/assets/skills/code-security/SKILL.md +176 -0
  20. package/comate-engine/assets/skills/code-security/references/credential_hosting.md +102 -0
  21. package/comate-engine/assets/skills/code-security/references/vul_repair_sensitive.md +219 -0
  22. package/comate-engine/assets/skills/code-security/scripts/build_repair_info.py +0 -0
  23. package/comate-engine/assets/skills/code-security/scripts/credential_hosting.py +99 -0
  24. package/comate-engine/assets/skills/code-security/scripts/credential_poll.py +350 -0
  25. package/comate-engine/assets/skills/code-security/scripts/http_client.py +173 -0
  26. package/comate-engine/assets/skills/code-security/scripts/parse_scan_result.py +301 -0
  27. package/comate-engine/assets/skills/code-security/scripts/repair_vulnerability.py +261 -0
  28. package/comate-engine/assets/skills/code-security/scripts/report_chat.py +198 -0
  29. package/comate-engine/assets/skills/code-security/scripts/scan_vulnerability.py +316 -0
  30. package/comate-engine/assets/skills/code-security-comate/SKILL.md +219 -0
  31. package/comate-engine/assets/skills/code-security-comate/references/credential_hosting.md +102 -0
  32. package/comate-engine/assets/skills/code-security-comate/references/vul_repair-go_sql_injection.md +399 -0
  33. package/comate-engine/assets/skills/code-security-comate/references/vul_repair-java_sql_injection.md +591 -0
  34. package/comate-engine/assets/skills/code-security-comate/references/vul_repair-php_sql_injection.md +318 -0
  35. package/comate-engine/assets/skills/code-security-comate/references/vul_repair-python_sql_injection.md +198 -0
  36. package/comate-engine/assets/skills/code-security-comate/references/vul_repair_sensitive.md +219 -0
  37. package/comate-engine/assets/skills/code-security-comate/scripts/credential_hosting.py +87 -0
  38. package/comate-engine/assets/skills/code-security-comate/scripts/credential_poll.py +345 -0
  39. package/comate-engine/assets/skills/code-security-comate/scripts/http_client.py +173 -0
  40. package/comate-engine/assets/skills/code-security-comate/scripts/parse_scan_result.py +392 -0
  41. package/comate-engine/assets/skills/code-security-comate/scripts/repair_vulnerability.py +245 -0
  42. package/comate-engine/assets/skills/code-security-comate/scripts/report_chat.py +145 -0
  43. package/comate-engine/assets/skills/code-security-comate/scripts/scan_vulnerability.py +444 -0
  44. package/comate-engine/assets/skills/code-security-comate/scripts/utils.py +153 -0
  45. package/comate-engine/assets/skills/comate-docs-comate/SKILL.md +148 -0
  46. package/comate-engine/assets/skills/comate-docs-comate/references/doc-map-extended.md +78 -0
  47. package/comate-engine/assets/skills/comate-docs-comate/references/models-and-billing.md +51 -0
  48. package/comate-engine/assets/skills/comate-docs-comate/references/product-overview.md +73 -0
  49. package/comate-engine/assets/skills/comate-docs-comate/references/query_content.md +83 -0
  50. package/comate-engine/assets/skills/comate-docs-comate/references/query_repo.md +57 -0
  51. package/comate-engine/assets/skills/comate-docs-comate/scripts/ku_operator.py +1575 -0
  52. package/comate-engine/assets/skills/create-image-comate/SKILL.md +278 -0
  53. package/comate-engine/assets/skills/create-skill-comate/SKILL.md +308 -217
  54. package/comate-engine/assets/skills/create-skill-comate/agents/analyzer.md +274 -0
  55. package/comate-engine/assets/skills/create-skill-comate/agents/comparator.md +202 -0
  56. package/comate-engine/assets/skills/create-skill-comate/agents/grader.md +223 -0
  57. package/comate-engine/assets/skills/create-skill-comate/assets/eval_review.html +146 -0
  58. package/comate-engine/assets/skills/create-skill-comate/eval-viewer/generate_review.py +489 -0
  59. package/comate-engine/assets/skills/create-skill-comate/eval-viewer/viewer.html +1325 -0
  60. package/comate-engine/assets/skills/create-skill-comate/references/schemas.md +430 -0
  61. package/comate-engine/assets/skills/create-skill-comate/scripts/__init__.py +0 -0
  62. package/comate-engine/assets/skills/create-skill-comate/scripts/__pycache__/__init__.cpython-311.pyc +0 -0
  63. package/comate-engine/assets/skills/create-skill-comate/scripts/__pycache__/aggregate_benchmark.cpython-311.pyc +0 -0
  64. package/comate-engine/assets/skills/create-skill-comate/scripts/aggregate_benchmark.py +412 -0
  65. package/comate-engine/assets/skills/create-skill-comate/scripts/generate_report.py +334 -0
  66. package/comate-engine/assets/skills/create-skill-comate/scripts/package_skill.py +140 -0
  67. package/comate-engine/assets/skills/create-skill-comate/scripts/utils.py +53 -0
  68. package/comate-engine/assets/skills/find-skills-comate/SKILL.md +15 -12
  69. package/comate-engine/assets/skills/find-skills-comate/scripts/fetch_skills.py +32 -3
  70. package/comate-engine/assets/skills/get-ugate-token-comate/SKILL.md +159 -0
  71. package/comate-engine/assets/skills/get-ugate-token-comate/getUgateToken.py +150 -0
  72. package/comate-engine/assets/skills/icafe-comate/SKILL.md +240 -0
  73. package/comate-engine/assets/skills/icafe-comate/references/ai-workflows.md +233 -0
  74. package/comate-engine/assets/skills/icafe-comate/references/commands.md +1147 -0
  75. package/comate-engine/assets/skills/icafe-comate/references/error-handling.md +164 -0
  76. package/comate-engine/assets/skills/icafe-comate/references/git-auto-bindcard-workflow.md +201 -0
  77. package/comate-engine/assets/skills/icafe-comate/references/git-bindcard-workflow.md +327 -0
  78. package/comate-engine/assets/skills/icafe-comate/references/iql-syntax.md +327 -0
  79. package/comate-engine/assets/skills/icafe-comate/references/platform-concepts.md +317 -0
  80. package/comate-engine/assets/skills/icafe-comate/references/smart-create-workflow.md +171 -0
  81. package/comate-engine/assets/skills/icafe-comate/references/smart-find-workflow.md +127 -0
  82. package/comate-engine/assets/skills/icafe-comate/references/smart-update-workflow.md +118 -0
  83. package/comate-engine/assets/skills/icode-comate/SKILL.md +366 -0
  84. package/comate-engine/assets/skills/icode-comate/references/api/add_reviewers.md +44 -0
  85. package/comate-engine/assets/skills/icode-comate/references/api/build_fetch_command.md +89 -0
  86. package/comate-engine/assets/skills/icode-comate/references/api/check_repo_permission.md +89 -0
  87. package/comate-engine/assets/skills/icode-comate/references/api/create_branch.md +79 -0
  88. package/comate-engine/assets/skills/icode-comate/references/api/create_draft_comment.md +109 -0
  89. package/comate-engine/assets/skills/icode-comate/references/api/get_ai_cr_result.md +190 -0
  90. package/comate-engine/assets/skills/icode-comate/references/api/get_ai_review.md +97 -0
  91. package/comate-engine/assets/skills/icode-comate/references/api/get_diff_content.md +92 -0
  92. package/comate-engine/assets/skills/icode-comate/references/api/get_diff_file.md +88 -0
  93. package/comate-engine/assets/skills/icode-comate/references/api/get_machine_check.md +73 -0
  94. package/comate-engine/assets/skills/icode-comate/references/api/get_my_reviews.md +115 -0
  95. package/comate-engine/assets/skills/icode-comate/references/api/get_person_commit.md +89 -0
  96. package/comate-engine/assets/skills/icode-comate/references/api/get_person_repo.md +63 -0
  97. package/comate-engine/assets/skills/icode-comate/references/api/get_repo_branch.md +62 -0
  98. package/comate-engine/assets/skills/icode-comate/references/api/get_repo_config.md +91 -0
  99. package/comate-engine/assets/skills/icode-comate/references/api/get_repo_members.md +118 -0
  100. package/comate-engine/assets/skills/icode-comate/references/api/get_repo_reviews.md +91 -0
  101. package/comate-engine/assets/skills/icode-comate/references/api/get_review_comments.md +87 -0
  102. package/comate-engine/assets/skills/icode-comate/references/api/get_review_info.md +81 -0
  103. package/comate-engine/assets/skills/icode-comate/references/api/get_submit_settings.md +105 -0
  104. package/comate-engine/assets/skills/icode-comate/references/api/icode-api.md +86 -0
  105. package/comate-engine/assets/skills/icode-comate/references/api/publish_comments.md +72 -0
  106. package/comate-engine/assets/skills/icode-comate/references/api/set_review_score.md +58 -0
  107. package/comate-engine/assets/skills/icode-comate/references/api/start_ai_review.md +77 -0
  108. package/comate-engine/assets/skills/icode-comate/references/api/submit_review.md +50 -0
  109. package/comate-engine/assets/skills/icode-comate/references/api/trigger_ai_cr.md +63 -0
  110. package/comate-engine/assets/skills/icode-comate/references/feature/add-reviewer.md +92 -0
  111. package/comate-engine/assets/skills/icode-comate/references/feature/fix-machine-check.md +144 -0
  112. package/comate-engine/assets/skills/icode-comate/references/feature/merge-cr.md +100 -0
  113. package/comate-engine/assets/skills/icode-comate/references/feature/ssh-setup.md +106 -0
  114. package/comate-engine/assets/skills/icode-comate/references/feature/submit-acr.md +135 -0
  115. package/comate-engine/assets/skills/icode-comate/references/feature/submit-cr.md +123 -0
  116. package/comate-engine/assets/skills/icode-comate/references/git/clone.md +67 -0
  117. package/comate-engine/assets/skills/icode-comate/references/git/icode-git.md +68 -0
  118. package/comate-engine/assets/skills/icode-comate/references/git/push.md +64 -0
  119. package/comate-engine/assets/skills/icode-comate/references/git/push_cr.md +103 -0
  120. package/comate-engine/assets/skills/icode-comate/references/install.md +144 -0
  121. package/comate-engine/assets/skills/icode-comate/references/login.md +111 -0
  122. package/comate-engine/assets/skills/icode-comate/scripts/add-reviewer.sh +154 -0
  123. package/comate-engine/assets/skills/icode-comate/scripts/common.sh +145 -0
  124. package/comate-engine/assets/skills/icode-comate/scripts/fix-machine-check.sh +131 -0
  125. package/comate-engine/assets/skills/icode-comate/scripts/merge-cr.sh +105 -0
  126. package/comate-engine/assets/skills/icode-comate/scripts/ssh-setup.sh +159 -0
  127. package/comate-engine/assets/skills/icode-comate/scripts/submit-acr.sh +236 -0
  128. package/comate-engine/assets/skills/icode-comate/scripts/submit-cr.sh +104 -0
  129. package/comate-engine/assets/skills/icode-comate/scripts/test-preflight.sh +89 -0
  130. package/comate-engine/assets/skills/ku-operator-comate/SKILL.md +121 -0
  131. package/comate-engine/assets/skills/ku-operator-comate/examples.md +190 -0
  132. package/comate-engine/assets/skills/ku-operator-comate/references/add_member.md +49 -0
  133. package/comate-engine/assets/skills/ku-operator-comate/references/change_scope.md +38 -0
  134. package/comate-engine/assets/skills/ku-operator-comate/references/copy_doc.md +50 -0
  135. package/comate-engine/assets/skills/ku-operator-comate/references/create_doc.md +61 -0
  136. package/comate-engine/assets/skills/ku-operator-comate/references/delete_doc.md +31 -0
  137. package/comate-engine/assets/skills/ku-operator-comate/references/edit_content.md +568 -0
  138. package/comate-engine/assets/skills/ku-operator-comate/references/move_doc.md +45 -0
  139. package/comate-engine/assets/skills/ku-operator-comate/references/query_comment.md +79 -0
  140. package/comate-engine/assets/skills/ku-operator-comate/references/query_content.md +83 -0
  141. package/comate-engine/assets/skills/ku-operator-comate/references/query_flowchart.md +84 -0
  142. package/comate-engine/assets/skills/ku-operator-comate/references/query_permission.md +38 -0
  143. package/comate-engine/assets/skills/ku-operator-comate/references/query_recent_view.md +67 -0
  144. package/comate-engine/assets/skills/ku-operator-comate/references/query_repo.md +57 -0
  145. package/comate-engine/assets/skills/ku-operator-comate/references/query_user_info.md +37 -0
  146. package/comate-engine/assets/skills/ku-operator-comate/references/update_member.md +41 -0
  147. package/comate-engine/assets/skills/ku-operator-comate/references/upload_attachment.md +52 -0
  148. package/comate-engine/assets/skills/ku-operator-comate/scripts/ku_operator.py +1575 -0
  149. package/comate-engine/node_modules/better-sqlite3/node_modules/.bin/prebuild-install +2 -2
  150. package/comate-engine/node_modules/tree-sitter-bash/node_modules/.bin/node-gyp-build +2 -2
  151. package/comate-engine/node_modules/tree-sitter-bash/node_modules/.bin/node-gyp-build-optional +2 -2
  152. package/comate-engine/node_modules/tree-sitter-bash/node_modules/.bin/node-gyp-build-test +2 -2
  153. package/comate-engine/package.json +2 -0
  154. package/comate-engine/server.js +263 -79
  155. package/dist/bundle/index.js +8 -8
  156. package/package.json +1 -1
  157. package/comate-engine/assets/skills/figma2code-comate/codeConnect.md +0 -37
  158. package/comate-engine/assets/skills/figma2code-comate/designToken.md +0 -3
  159. package/comate-engine/assets/skills/figma2code-comate/f2cMcp.md +0 -59
  160. package/comate-engine/assets/skills/smart-commit/SKILL.md +0 -646
  161. package/comate-engine/node_modules/@comate/plugin-host/dist/index-AZIho4HV.js +0 -1
  162. package/comate-engine/node_modules/@comate/plugin-host/dist/user-BIpzRUfb.js +0 -44
  163. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/references/issue_type_mapping.json +0 -0
  164. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/references/query_reference.md +0 -0
  165. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/compat.py +0 -0
  166. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/create_card_cli.py +0 -0
  167. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/icafe/__init__.py +0 -0
  168. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/logger.py +0 -0
  169. package/comate-engine/assets/skills/{smart-commit → auto-commit-comate}/scripts/recognize_card_cli.py +0 -0
@@ -1,358 +1,449 @@
1
1
  ---
2
2
  name: create-skill
3
- description: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Comate's capabilities with specialized knowledge, workflows, or tool integrations.
4
- license: Complete terms in LICENSE.txt
3
+ description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
4
+ version: v2
5
5
  ---
6
6
 
7
7
  # Skill Creator
8
8
 
9
- This skill provides guidance for creating effective skills.
9
+ A skill for creating new skills and iteratively improving them.
10
10
 
11
- ## About Skills
11
+ At a high level, the process of creating a skill goes like this:
12
12
 
13
- Skills are modular, self-contained packages that extend Comate's capabilities by providing
14
- specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific
15
- domains or tasks—they transform Comate from a general-purpose agent into a specialized agent
16
- equipped with procedural knowledge that no model can fully possess.
13
+ - Decide what you want the skill to do and roughly how it should do it
14
+ - After the Interview and Research phase, present the proposed skill and test cases to the user for review — do not proceed until they have confirmed both.
15
+ - Write a draft of the skill
16
+ - Create a few test prompts and run comate-with-access-to-the-skill on them
17
+ - Help the user evaluate the results both qualitatively and quantitatively
18
+ - While the runs happen in the background, draft some quantitative evals if there aren't any (if there are some, you can either use as is or modify if you feel something needs to change about them). Then explain them to the user (or if they already existed, explain the ones that already exist)
19
+ - Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
20
+ - Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
21
+ - Repeat until you're satisfied
22
+ - Expand the test set and try again at larger scale
17
23
 
18
- ### What Skills Provide
24
+ Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
19
25
 
20
- 1. Specialized workflows - Multi-step procedures for specific domains
21
- 2. Tool integrations - Instructions for working with specific file formats or APIs
22
- 3. Domain expertise - Company-specific knowledge, schemas, business logic
23
- 4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks
26
+ On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
24
27
 
25
- ## Core Principles
28
+ Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
26
29
 
27
- ### Concise is Key
30
+ Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
28
31
 
29
- The context window is a public good. Skills share the context window with everything else Comate needs: system prompt, conversation history, other Skills' metadata, and the actual user request.
32
+ Cool? Cool.
30
33
 
31
- **Default assumption: Comate is already very smart.** Only add context Comate doesn't already have. Challenge each piece of information: "Does Comate really need this explanation?" and "Does this paragraph justify its token cost?"
34
+ ## Communicating with the user
32
35
 
33
- Prefer concise examples over verbose explanations.
36
+ The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Comate is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
34
37
 
35
- ### Set Appropriate Degrees of Freedom
38
+ So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
36
39
 
37
- Match the level of specificity to the task's fragility and variability:
40
+ - "evaluation" and "benchmark" are borderline, but OK
41
+ - for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
38
42
 
39
- **High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.
43
+ It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it.
40
44
 
41
- **Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.
45
+ ---
46
+
47
+ ## Creating a skill
48
+
49
+ ### Capture Intent
50
+
51
+ Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
52
+
53
+ 1. What should this skill enable Comate to do?
54
+ 2. When should this skill trigger? (what user phrases/contexts)
55
+ 3. What's the expected output format?
56
+ 4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
57
+ 5. Should this be a personal skill (~/.comate/skills/) or project skill (.comate/skills/)?
58
+
59
+ Storage:
60
+
61
+ | Type | Path | Scope |
62
+ |------|------|-------|
63
+ | Personal | ~/.comate/skills/skill-name/ | Available across all your projects |
64
+ | Project | .comate/skills/skill-name/ | Shared with anyone using the repository |
65
+
66
+ **IMPORTANT**: Never create or update skills in `~/.comate/skills/skill-name-comate/`. This directory is reserved for Comate's internal built-in skills and is managed automatically by the system.
67
+
68
+ ### Interview and Research
69
+
70
+ Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
71
+
72
+ Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
73
+
74
+ After the Interview and Research phase, present the proposed skill and test cases to the user for review — do not proceed until they have confirmed both.
42
75
 
43
- **Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.
76
+ ### Write the SKILL.md
44
77
 
45
- Think of Comate as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).
78
+ Based on the user interview, fill in these components:
46
79
 
47
- ### Anatomy of a Skill
80
+ - **name**: Skill identifier
81
+ - **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Comate has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
82
+ - **compatibility**: Required tools, dependencies (optional, rarely needed)
83
+ - **the rest of the skill :)**
48
84
 
49
- Every skill consists of a required SKILL.md file and optional bundled resources:
85
+ ### Skill Writing Guide
86
+
87
+ #### Anatomy of a Skill
50
88
 
51
89
  ```
52
90
  skill-name/
53
91
  ├── SKILL.md (required)
54
- │ ├── YAML frontmatter metadata (required)
55
- │ ├── name: (required)
56
- │ │ ├── description: (required)
57
- │ │ └── compatibility: (optional, rarely needed)
58
- │ └── Markdown instructions (required)
92
+ │ ├── YAML frontmatter (name, description required)
93
+ └── Markdown instructions
59
94
  └── Bundled Resources (optional)
60
- ├── scripts/ - Executable code (Python/Bash/etc.)
61
- ├── references/ - Documentation intended to be loaded into context as needed
62
- └── assets/ - Files used in output (templates, icons, fonts, etc.)
95
+ ├── scripts/ - Executable code for deterministic/repetitive tasks
96
+ ├── references/ - Docs loaded into context as needed
97
+ └── assets/ - Files used in output (templates, icons, fonts)
63
98
  ```
64
99
 
65
- #### SKILL.md (required)
66
-
67
- Every SKILL.md consists of:
100
+ #### Progressive Disclosure
68
101
 
69
- - **Frontmatter** (YAML): Contains `name` and `description` fields (required), plus optional fields like `license`, `metadata`, and `compatibility`. Only `name` and `description` are read by Comate to determine when the skill triggers, so be clear and comprehensive about what the skill is and when it should be used. The `compatibility` field is for noting environment requirements (target product, system packages, etc.) but most skills don't need it.
70
- - **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).
102
+ Skills use a three-level loading system:
103
+ 1. **Metadata** (name + description) - Always in context (~100 words)
104
+ 2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
105
+ 3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
71
106
 
72
- #### Bundled Resources (optional)
107
+ These word counts are approximate and you can feel free to go longer if needed.
73
108
 
74
- ##### Scripts (`scripts/`)
109
+ **Key patterns:**
110
+ - Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
111
+ - Reference files clearly from SKILL.md with guidance on when to read them
112
+ - For large reference files (>300 lines), include a table of contents
75
113
 
76
- Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.
114
+ **Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
115
+ ```
116
+ cloud-deploy/
117
+ ├── SKILL.md (workflow + selection)
118
+ └── references/
119
+ ├── aws.md
120
+ ├── gcp.md
121
+ └── azure.md
122
+ ```
123
+ Comate reads only the relevant reference file.
77
124
 
78
- - **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
79
- - **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
80
- - **Benefits**: Token efficient, deterministic, may be executed without loading into context
81
- - **Note**: Scripts may still need to be read by Comate for patching or environment-specific adjustments
125
+ #### Principle of Lack of Surprise
82
126
 
83
- ##### References (`references/`)
127
+ This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user in their intent if described. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
84
128
 
85
- Documentation and reference material intended to be loaded as needed into context to inform Comate's process and thinking.
129
+ #### Writing Patterns
86
130
 
87
- - **When to include**: For documentation that Comate should reference while working
88
- - **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
89
- - **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
90
- - **Benefits**: Keeps SKILL.md lean, loaded only when Comate determines it's needed
91
- - **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
92
- - **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.
131
+ Prefer using the imperative form in instructions.
93
132
 
94
- ##### Assets (`assets/`)
133
+ **Defining output formats** - You can do it like this:
134
+ ```markdown
135
+ ## Report structure
136
+ ALWAYS use this exact template:
137
+ # [Title]
138
+ ## Executive summary
139
+ ## Key findings
140
+ ## Recommendations
141
+ ```
95
142
 
96
- Files not intended to be loaded into context, but rather used within the output Comate produces.
143
+ **Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
144
+ ```markdown
145
+ ## Commit message format
146
+ **Example 1:**
147
+ Input: Added user authentication with JWT tokens
148
+ Output: feat(auth): implement JWT-based authentication
149
+ ```
97
150
 
98
- - **When to include**: When the skill needs files that will be used in the final output
99
- - **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
100
- - **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
101
- - **Benefits**: Separates output resources from documentation, enables Comate to use files without loading them into context
151
+ ### Writing Style
102
152
 
103
- #### What to Not Include in a Skill
153
- Try to explain to the model why things are important instead of leaning on heavy-handed MUSTs. Use theory of mind, and try to make the skill general rather than narrowly tailored to specific examples. Start by writing a draft, then look at it with fresh eyes and improve it.
104
154
 
105
- A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:
155
+ ### Test Cases
106
156
 
107
- - README.md
108
- - INSTALLATION_GUIDE.md
109
- - QUICK_REFERENCE.md
110
- - CHANGELOG.md
111
- - etc.
157
+ After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
112
158
 
113
- The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxilary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.
159
- Save test cases to `evals/evals.json`. Don't write assertions yet; just the prompts. You'll draft assertions in the next step while the runs are in progress.
114
160
 
115
- ### Progressive Disclosure Design Principle
161
+ ```json
162
+ {
163
+ "skill_name": "example-skill",
164
+ "evals": [
165
+ {
166
+ "id": 1,
167
+ "prompt": "User's task prompt",
168
+ "expected_output": "Description of expected result",
169
+ "files": []
170
+ }
171
+ ]
172
+ }
173
+ ```
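The scaffolding step above can be sketched as a small script. This is a sketch only; the prompts here are hypothetical stand-ins for whatever you drafted with the user:

```python
import json
from pathlib import Path

# Hypothetical test prompts drafted for an example skill.
prompts = [
    "Turn sales.csv into a bar chart of monthly revenue",
    "Summarize these meeting notes into action items",
]

evals = {
    "skill_name": "example-skill",
    "evals": [
        # expected_output stays a plain description for now;
        # assertions get drafted later while the runs are in progress.
        {"id": i, "prompt": p, "expected_output": "", "files": []}
        for i, p in enumerate(prompts, start=1)
    ],
}

Path("evals").mkdir(exist_ok=True)
Path("evals/evals.json").write_text(json.dumps(evals, indent=2))
```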
116
174
 
117
- Skills use a three-level loading system to manage context efficiently:
175
+ See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
118
176
 
119
- 1. **Metadata (name + description)** - Always in context (~100 words)
120
- 2. **SKILL.md body** - When skill triggers (<5k words)
121
- 3. **Bundled resources** - As needed by Comate (Unlimited because scripts can be executed without reading into context window)
177
+ ## Running and evaluating test cases
122
178
 
123
- #### Progressive Disclosure Patterns
179
+ This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
124
180
 
125
- Keep SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting out content into other files, it is very important to reference them from SKILL.md and describe clearly when to read them, to ensure the reader of the skill knows they exist and when to use them.
181
- Put results in `<skill-name>-workspace/` as a sibling to the skill directory (personal or project). Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.), and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront; create directories as you go.
126
182
 
127
- **Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.
183
+ ### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
128
184
 
129
- **Pattern 1: High-level guide with references**
185
+ For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
130
186
 
131
- ```markdown
132
- # PDF Processing
187
+ **With-skill run:**
133
188
 
134
- ## Quick start
189
+ ```
190
+ Execute this task:
191
+ - Skill path: <path-to-skill>
192
+ - Task: <eval prompt>
193
+ - Input files: <eval files if any, or "none">
194
+ - Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
195
+ - Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
196
+ ```
135
197
 
136
- Extract text with pdfplumber:
137
- [code example]
198
+ **Baseline run** (same prompt, but the baseline depends on context):
199
+ - **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
200
+ - **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
138
201
 
139
- ## Advanced features
202
+ Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
140
203
 
141
- - **Form filling**: See [FORMS.md](FORMS.md) for complete guide
142
- - **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
143
- - **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
204
+ ```json
205
+ {
206
+ "eval_id": 0,
207
+ "eval_name": "descriptive-name-here",
208
+ "prompt": "The user's task prompt",
209
+ "assertions": []
210
+ }
144
211
  ```
145
212
 
146
- Comate loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
213
+ ### Step 2: While runs are in progress, draft assertions
147
214
 
148
- **Pattern 2: Domain-specific organization**
215
+ Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
149
216
 
150
- For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:
217
- Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively; don't force assertions onto things that need human judgment.
151
218
 
152
- ```
153
- bigquery-skill/
154
- ├── SKILL.md (overview and navigation)
155
- └── reference/
156
- ├── finance.md (revenue, billing metrics)
157
- ├── sales.md (opportunities, pipeline)
158
- ├── product.md (API usage, features)
159
- └── marketing.md (campaigns, attribution)
160
- ```
219
+ Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
161
220
 
162
- When a user asks about sales metrics, Comate only reads sales.md.
221
+ ### Step 3: As runs complete, capture timing data
163
222
 
164
- Similarly, for skills supporting multiple frameworks or variants, organize by variant:
223
+ When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
165
224
 
225
+ ```json
226
+ {
227
+ "total_tokens": 84852,
228
+ "duration_ms": 23332,
229
+ "total_duration_seconds": 23.3
230
+ }
166
231
  ```
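Capturing a notification into `timing.json` might look like the sketch below. The notification dict shape is assumed from the fields above; the function and path names are illustrative:

```python
import json
from pathlib import Path

def save_timing(run_dir, notification):
    """Persist timing data from a completed subagent's task notification.

    `notification` is assumed to be a dict carrying `total_tokens`
    and `duration_ms`; process each one as it arrives, since the
    data isn't persisted anywhere else.
    """
    timing = {
        "total_tokens": notification["total_tokens"],
        "duration_ms": notification["duration_ms"],
        "total_duration_seconds": round(notification["duration_ms"] / 1000, 1),
    }
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "timing.json").write_text(json.dumps(timing, indent=2))
    return timing
```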
167
- cloud-deploy/
168
- ├── SKILL.md (workflow + provider selection)
169
- └── references/
170
- ├── aws.md (AWS deployment patterns)
171
- ├── gcp.md (GCP deployment patterns)
172
- └── azure.md (Azure deployment patterns)
173
- ```
174
-
175
- When the user chooses AWS, Comate only reads aws.md.
176
232
 
177
- **Pattern 3: Conditional details**
233
+ This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
178
234
 
179
- Show basic content, link to advanced content:
235
+ ### Step 4: Grade, aggregate, and launch the viewer
180
236
 
181
- ```markdown
182
- # DOCX Processing
237
+ Once all runs are done:
183
238
 
184
- ## Creating documents
239
+ 1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
185
240
 
186
- Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
241
- 2. **Aggregate into benchmark**: run the aggregation script from the skill-creator directory:
242
+ ```bash
243
+ python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
244
+ ```
245
+ This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
246
+ Put each with_skill version before its baseline counterpart.
187
247
 
188
- ## Editing documents
248
+ 3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
189
249
 
190
- For simple edits, modify the XML directly.
250
+ 4. **Launch the viewer** with both qualitative outputs and quantitative data:
251
+ ```bash
252
+ nohup python <skill-creator-path>/eval-viewer/generate_review.py \
253
+ <workspace>/iteration-N \
254
+ --skill-name "my-skill" \
255
+ --benchmark <workspace>/iteration-N/benchmark.json \
256
+ > /dev/null 2>&1 &
257
+ VIEWER_PID=$!
258
+ ```
259
+ For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
191
260
 
192
- **For tracked changes**: See [REDLINING.md](REDLINING.md)
193
- **For OOXML details**: See [OOXML.md](OOXML.md)
194
- ```
195
261
 
196
- Comate reads REDLINING.md or OOXML.md only when the user needs those features.
262
+ Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
197
263
 
198
- **Important guidelines:**
264
+ 5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
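As a concrete sketch of the `grading.json` shape from the grading step: the assertion texts below are made up, and the surrounding structure is a guess (see `references/schemas.md` for the authoritative schema), but the `text`/`passed`/`evidence` field names in the expectations array are the ones the viewer expects:

```json
{
  "expectations": [
    {
      "text": "Output CSV contains a profit_margin column",
      "passed": true,
      "evidence": "Header row: revenue,costs,profit_margin"
    },
    {
      "text": "All margin values are formatted as percentages",
      "passed": false,
      "evidence": "Row 3 contains the raw ratio 0.42"
    }
  ]
}
```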
199
265
 
200
- - **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
201
- - **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Comate can see the full scope when previewing.
266
+ ### What the user sees in the viewer
202
267
 
203
- ## Skill Creation Process
268
+ The "Outputs" tab shows one test case at a time:
269
+ - **Prompt**: the task that was given
270
+ - **Output**: the files the skill produced, rendered inline where possible
271
+ - **Previous Output** (iteration 2+): collapsed section showing last iteration's output
272
+ - **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
273
+ - **Feedback**: a textbox that auto-saves as they type
274
+ - **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
204
275
 
205
- Skill creation involves these steps:
276
+ The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
206
277
 
207
- 1. Understand the skill with concrete examples
208
- 2. Plan reusable skill contents (scripts, references, assets)
209
- 3. Initialize the skill (run init_skill.py)
210
- 4. Edit the skill (implement resources and write SKILL.md)
211
- 5. Validate the skill (run quick_validate.py)
212
- 6. Iterate based on real usage
278
+ Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
213
279
 
214
- Follow these steps in order, skipping only if there is a clear reason why they are not applicable.
280
+ ### Step 5: Read the feedback
215
281
 
216
- ### Step 1: Understanding the Skill with Concrete Examples
282
+ When the user tells you they're done, read `feedback.json`:
217
283
 
218
- Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.
284
+ ```json
285
+ {
286
+ "reviews": [
287
+ {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
288
+ {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
289
+ {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
290
+ ],
291
+ "status": "complete"
292
+ }
293
+ ```
219
294
 
220
- To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
295
+ Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
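A minimal sketch of that filtering (the function name and path argument are illustrative):

```python
import json
from pathlib import Path

def complaints(feedback_path):
    """Return only the reviews with non-empty feedback.

    Empty feedback means the user thought that run was fine,
    so the non-empty ones are the test cases to focus
    improvements on.
    """
    data = json.loads(Path(feedback_path).read_text())
    return [r for r in data["reviews"] if r["feedback"].strip()]
```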
221
296
 
222
- For example, when building an image-editor skill, relevant questions include:
297
+ Kill the viewer server when you're done with it:
223
298
 
224
- - "What functionality should the image-editor skill support? Editing, rotating, anything else?"
225
- - "Can you give some examples of how this skill would be used?"
226
- - "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
227
- - "What would a user say that should trigger this skill?"
299
+ ```bash
300
+ kill $VIEWER_PID 2>/dev/null
301
+ ```
228
302
 
229
- To avoid overwhelming users, avoid asking too many questions in a single message. Start with the most important questions and follow up as needed for better effectiveness.
303
+ ---
230
304
 
231
- Conclude this step when there is a clear sense of the functionality the skill should support.
305
+ ## Improving the skill
232
306
 
233
- ### Step 2: Planning the Reusable Skill Contents
307
+ This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
234
308
 
235
- To turn concrete examples into an effective skill, analyze each example by:
309
+ ### How to think about improvements
236
310
 
237
- 1. Considering how to execute on the example from scratch
238
- 2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly
311
- 1. **Generalize from the feedback.** The big-picture thing happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples inside and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than putting in fiddly, overfitted changes or oppressively constrictive MUSTs, if there's some stubborn issue, try branching out: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.
239
312
 
240
- Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:
313
+ 2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
241
314
 
242
- 1. Rotating a PDF requires re-writing the same code each time
243
- 2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill
315
- 3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and, when given a good harness, can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task: what the user wrote, and why they wrote it. Then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super-rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
244
316
 
245
- Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:
317
+ 4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
246
318
 
247
- 1. Writing a frontend webapp requires the same boilerplate HTML/React each time
248
- 2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill
319
+ This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
249
320
 
250
- Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:
321
+ ### The iteration loop
251
322
 
252
- 1. Querying BigQuery requires re-discovering the table schemas and relationships each time
253
- 2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill
323
+ After improving the skill:
254
324
 
255
- To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.
325
+ 1. Apply your improvements to the skill
326
+ 2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
327
+ 3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
328
+ 4. Wait for the user to review and tell you they're done
329
+ 5. Read the new feedback, improve again, repeat
256
330
 
257
- ### Step 3: Initializing the Skill
331
+ Keep going until:
332
+ - The user says they're happy
333
+ - The feedback is all empty (everything looks good)
334
+ - You're not making meaningful progress
258
335
 
259
- At this point, it is time to actually create the skill.
336
+ ---
260
337
 
261
- Skip this step only if the skill being developed already exists. In this case, continue to the next step.
338
+ ## Advanced: Blind comparison
262
339
 
263
- Before creating a skill, gather essential information from the user about:
340
+ For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
264
341
 
265
- 1. **Purpose and scope**: What specific task or workflow should this skill help with?
266
- 2. **Target location**: Should this be a personal skill (~/.comate/skills/) or project skill (.comate/skills/)?
267
- 3. **Trigger scenarios**: When should the agent automatically apply this skill?
268
- 4. **Key domain knowledge**: What specialized information does the agent need that it wouldn't already know?
269
- 5. **Output format preferences**: Are there specific templates, formats, or styles required?
270
- 6. **Existing patterns**: Are there existing examples or conventions to follow?
342
+ This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
271
343
 
272
- When creating a new skill from scratch, always run the `init_skill.py` script. The script conveniently generates a new template skill directory that automatically includes everything a skill requires, making the skill creation process much more efficient and reliable.
344
+ ---
273
345
 
274
- Storage:
346
+ ## Description Optimization
275
347
 
276
- | Type | Path | Scope |
277
- | -------- | ---------------------------- | --------------------------------------- |
278
- | Personal | ~/.comate/skills/skill-name/ | Available across all your projects |
279
- | Project | .comate/skills/skill-name/ | Shared with anyone using the repository |
348
+ The description field in SKILL.md frontmatter is the primary mechanism that determines whether Comate invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
280
349
 
281
- **IMPORTANT**: Never create or update skills in `~/.comate/skills/skill-name-comate/`. This directory is reserved for Comate's internal built-in skills and is managed automatically by the system.
350
+ ### Step 1: Generate trigger eval queries
282
351
 
283
- Usage:
352
+ Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
284
353
 
285
- ```bash
286
- scripts/init_skill.py <skill-name> --path <output-directory>
354
+ ```json
355
+ [
356
+ {"query": "the user prompt", "should_trigger": true},
357
+ {"query": "another prompt", "should_trigger": false}
358
+ ]
287
359
  ```
288
360
 
289
- The script:
361
+ The queries must be realistic and something a Comate user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
290
362
 
291
- - Creates the skill directory at the specified path
292
- - Generates a SKILL.md template with proper frontmatter and TODO placeholders
293
- - Creates example resource directories: `scripts/`, `references/`, and `assets/`
294
- - Adds example files in each directory that can be customized or deleted
363
+ Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
295
364
 
296
- After initialization, customize or remove the generated SKILL.md and example files as needed.
365
+ Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
297
366
 
298
- ### Step 4: Edit the Skill
367
+ For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
299
368
 
300
- When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Comate to use. Include information that would be beneficial and non-obvious to Comate. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Comate instance execute these tasks more effectively.
369
+ For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
301
370
 
302
- #### Learn Proven Design Patterns
371
+ The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
303
372
 
304
- Consult these helpful guides based on your skill's needs:
373
+ ### Step 2: Review with user
305
374
 
306
- - **Multi-step processes**: See references/workflows.md for sequential workflows and conditional logic
307
- - **Specific output formats or quality standards**: See references/output-patterns.md for template and example patterns
375
+ Present the eval set to the user for review using the HTML template:
308
376
 
309
- These files contain established best practices for effective skill design.
377
+ 1. Read the template from `assets/eval_review.html`
378
+ 2. Replace the placeholders:
379
+ - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
380
+ - `__SKILL_NAME_PLACEHOLDER__` → the skill's name
381
+ - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
382
+ 3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
383
+ 4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
384
+ 5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
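The placeholder substitution above can be sketched as follows. The function and argument names are illustrative; the placeholder strings are the ones listed in the steps:

```python
import json
from pathlib import Path

def render_eval_review(template_path, skill_name, description, evals, out_path):
    """Fill the eval_review.html placeholders and write the review page."""
    html = Path(template_path).read_text()
    # Raw JS value, so no quotes around the JSON array.
    html = html.replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(evals))
    html = html.replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
    html = html.replace("__SKILL_DESCRIPTION_PLACEHOLDER__", description)
    Path(out_path).write_text(html)
    return out_path
```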
310
385
 
311
- #### Start with Reusable Skill Contents
386
- This step matters: bad eval queries lead to bad descriptions.
312
387
 
313
- To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.
388
+ ### Step 3: Run the optimization loop
314
389
 
315
- Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.
390
- Tell the user: "This will take some time; I'll run the optimization loop in the background and check on it periodically."
316
391
 
317
- Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.
392
- Save the eval set to the workspace, then run the optimization loop in the background with parallel subagents.
318
393
 
319
- #### Update SKILL.md
394
+ Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
320
395
 
321
- **Writing Guidelines:** Always use imperative/infinitive form.
396
+ While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
322
397
 
323
- ##### Frontmatter
398
+ This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Comate to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
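The train/test split described above could be sketched like this. The 60/40 ratio comes from the description; the seed handling and names are illustrative:

```python
import random

def split_evals(evals, train_frac=0.6, seed=0):
    """Shuffle eval queries and split into train and held-out test sets.

    Picking the winning description by *test* score, as the loop does,
    guards against overfitting the description to the train queries.
    """
    rng = random.Random(seed)
    shuffled = list(evals)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```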
324
399
 
325
- Write the YAML frontmatter with `name` and `description`:
400
+ ### How skill triggering works
326
401
 
327
- - `name`: The skill name
328
- - `description`: This is the primary triggering mechanism for your skill, and helps Comate understand when to use the skill.
329
- - Include both what the Skill does and specific triggers/contexts for when to use it.
330
- - Include all "when to use" information here - Not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body are not helpful to Comate.
331
- - Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Comate needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"
402
+ Understanding the triggering mechanism helps design better eval queries. Skills appear in Comate's `available_skills` list with their name + description, and Comate decides whether to consult a skill based on that description. The important thing to know is that Comate only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Comate can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
332
403
 
333
- Do not include any other fields in YAML frontmatter.
404
+ This means your eval queries should be substantive enough that Comate would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
334
405
 
335
- ##### Body
406
+ ### Step 4: Apply the result
336
407
 
337
- Write instructions for using the skill and its bundled resources.
408
+ Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
 
- ### Step 5: Validate the Skill
+ ---
+
+ ### Package and Present (only if `present_files` tool is available)
 
- Once development of the skill is complete, validate the skill folder to catch basic issues early:
+ Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the resulting `.skill` file to the user:
 
 ```bash
- scripts/quick_validate.py <path/to/skill-folder>
+ python -m scripts.package_skill <path/to/skill-folder>
 ```
 
- The validation script checks YAML frontmatter format, required fields, and naming rules. If validation fails, fix the reported issues and run the command again.
+ After packaging, direct the user to the resulting `.skill` file path so they can install it.
 
- ### Step 6: Iterate
+ ---
+
+ ## Reference files
+
+ The `agents/` directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
+
+ - `agents/grader.md` — How to evaluate assertions against outputs
+ - `agents/comparator.md` — How to do blind A/B comparison between two outputs
+ - `agents/analyzer.md` — How to analyze why one version beat another
+
+ The `references/` directory has additional documentation:
+ - `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
+
+ ---
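As a rough illustration of what a file like evals.json contains, each test query is paired with gradeable assertions. The structure below is hypothetical; the authoritative schemas live in `references/schemas.md`:

```python
import json

# Hypothetical structure for illustration only; consult
# references/schemas.md for the real evals.json schema.
evals = {
    "evals": [
        {
            "id": "docx-tracked-changes-1",
            "query": "Apply tracked-change edits to report.docx and summarize them",
            "assertions": [
                "The edits are recorded as tracked changes, not plain text",
                "The summary lists every edit that was made",
            ],
        }
    ]
}

serialized = json.dumps(evals, indent=2)
```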

- After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.
+ To close, here is the core loop one more time, for emphasis:

- **Iteration workflow:**
+ - Figure out what the skill is about
+ - After the Interview and Research phase, present the proposed skill and test cases to the user for review; do not proceed until they have confirmed both
+ - Draft or edit the skill
+ - Run comate-with-access-to-the-skill on the test prompts
+ - With the user, evaluate the outputs:
+   - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
+   - Run quantitative evals
+ - Repeat until you and the user are satisfied
+ - Package the final skill and return it to the user
 
- 1. Use the skill on real tasks
- 2. Notice struggles or inefficiencies
- 3. Identify how SKILL.md or bundled resources should be updated
- 4. Implement changes and test again
+ If you maintain a TodoList, add these steps to it so that none are forgotten.