@cleocode/skills 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dispatch-config.json +404 -0
- package/index.d.ts +178 -0
- package/index.js +405 -0
- package/package.json +14 -0
- package/profiles/core.json +7 -0
- package/profiles/full.json +10 -0
- package/profiles/minimal.json +7 -0
- package/profiles/recommended.json +7 -0
- package/provider-skills-map.json +97 -0
- package/skills/_shared/cleo-style-guide.md +84 -0
- package/skills/_shared/manifest-operations.md +810 -0
- package/skills/_shared/placeholders.json +433 -0
- package/skills/_shared/skill-chaining-patterns.md +237 -0
- package/skills/_shared/subagent-protocol-base.md +223 -0
- package/skills/_shared/task-system-integration.md +232 -0
- package/skills/_shared/testing-framework-config.md +110 -0
- package/skills/ct-cleo/SKILL.md +490 -0
- package/skills/ct-cleo/references/anti-patterns.md +19 -0
- package/skills/ct-cleo/references/loom-lifecycle.md +136 -0
- package/skills/ct-cleo/references/orchestrator-constraints.md +55 -0
- package/skills/ct-cleo/references/session-protocol.md +162 -0
- package/skills/ct-codebase-mapper/SKILL.md +82 -0
- package/skills/ct-contribution/SKILL.md +521 -0
- package/skills/ct-contribution/templates/contribution-init.json +21 -0
- package/skills/ct-dev-workflow/SKILL.md +423 -0
- package/skills/ct-docs-lookup/SKILL.md +66 -0
- package/skills/ct-docs-review/SKILL.md +175 -0
- package/skills/ct-docs-write/SKILL.md +108 -0
- package/skills/ct-documentor/SKILL.md +231 -0
- package/skills/ct-epic-architect/SKILL.md +305 -0
- package/skills/ct-epic-architect/references/bug-epic-example.md +172 -0
- package/skills/ct-epic-architect/references/commands.md +201 -0
- package/skills/ct-epic-architect/references/feature-epic-example.md +210 -0
- package/skills/ct-epic-architect/references/migration-epic-example.md +244 -0
- package/skills/ct-epic-architect/references/output-format.md +92 -0
- package/skills/ct-epic-architect/references/patterns.md +284 -0
- package/skills/ct-epic-architect/references/refactor-epic-example.md +412 -0
- package/skills/ct-epic-architect/references/research-epic-example.md +226 -0
- package/skills/ct-epic-architect/references/shell-escaping.md +86 -0
- package/skills/ct-epic-architect/references/skill-aware-execution.md +195 -0
- package/skills/ct-grade/SKILL.md +230 -0
- package/skills/ct-grade/agents/analysis-reporter.md +203 -0
- package/skills/ct-grade/agents/blind-comparator.md +157 -0
- package/skills/ct-grade/agents/scenario-runner.md +134 -0
- package/skills/ct-grade/eval-viewer/__pycache__/generate_grade_review.cpython-314.pyc +0 -0
- package/skills/ct-grade/eval-viewer/generate_grade_review.py +1138 -0
- package/skills/ct-grade/eval-viewer/generate_grade_viewer.py +544 -0
- package/skills/ct-grade/eval-viewer/generate_review.py +283 -0
- package/skills/ct-grade/eval-viewer/grade-review.html +1574 -0
- package/skills/ct-grade/eval-viewer/viewer.html +219 -0
- package/skills/ct-grade/evals/evals.json +94 -0
- package/skills/ct-grade/references/ab-test-methodology.md +150 -0
- package/skills/ct-grade/references/domains.md +137 -0
- package/skills/ct-grade/references/grade-spec.md +236 -0
- package/skills/ct-grade/references/scenario-playbook.md +234 -0
- package/skills/ct-grade/references/token-tracking.md +120 -0
- package/skills/ct-grade/scripts/__pycache__/audit_analyzer.cpython-314.pyc +0 -0
- package/skills/ct-grade/scripts/__pycache__/run_ab_test.cpython-314.pyc +0 -0
- package/skills/ct-grade/scripts/__pycache__/run_all.cpython-314.pyc +0 -0
- package/skills/ct-grade/scripts/__pycache__/token_tracker.cpython-314.pyc +0 -0
- package/skills/ct-grade/scripts/audit_analyzer.py +279 -0
- package/skills/ct-grade/scripts/generate_report.py +283 -0
- package/skills/ct-grade/scripts/run_ab_test.py +504 -0
- package/skills/ct-grade/scripts/run_all.py +287 -0
- package/skills/ct-grade/scripts/setup_run.py +183 -0
- package/skills/ct-grade/scripts/token_tracker.py +630 -0
- package/skills/ct-grade-v2-1/SKILL.md +237 -0
- package/skills/ct-grade-v2-1/agents/analysis-reporter.md +203 -0
- package/skills/ct-grade-v2-1/agents/blind-comparator.md +157 -0
- package/skills/ct-grade-v2-1/agents/scenario-runner.md +179 -0
- package/skills/ct-grade-v2-1/evals/evals.json +74 -0
- package/skills/ct-grade-v2-1/grade-viewer/__pycache__/build_op_stats.cpython-314.pyc +0 -0
- package/skills/ct-grade-v2-1/grade-viewer/__pycache__/generate_grade_review.cpython-314.pyc +0 -0
- package/skills/ct-grade-v2-1/grade-viewer/build_op_stats.py +174 -0
- package/skills/ct-grade-v2-1/grade-viewer/eval-analysis.json +41 -0
- package/skills/ct-grade-v2-1/grade-viewer/eval-report.md +34 -0
- package/skills/ct-grade-v2-1/grade-viewer/generate_grade_review.py +1023 -0
- package/skills/ct-grade-v2-1/grade-viewer/generate_grade_viewer.py +548 -0
- package/skills/ct-grade-v2-1/grade-viewer/grade-review-eval.html +613 -0
- package/skills/ct-grade-v2-1/grade-viewer/grade-review.html +1532 -0
- package/skills/ct-grade-v2-1/grade-viewer/viewer.html +620 -0
- package/skills/ct-grade-v2-1/manifest-entry.json +31 -0
- package/skills/ct-grade-v2-1/references/ab-testing.md +233 -0
- package/skills/ct-grade-v2-1/references/domains-ssot.md +156 -0
- package/skills/ct-grade-v2-1/references/grade-spec-v2.md +167 -0
- package/skills/ct-grade-v2-1/references/playbook-v2.md +393 -0
- package/skills/ct-grade-v2-1/references/token-tracking.md +202 -0
- package/skills/ct-grade-v2-1/scripts/generate_report.py +419 -0
- package/skills/ct-grade-v2-1/scripts/run_ab_test.py +493 -0
- package/skills/ct-grade-v2-1/scripts/run_scenario.py +396 -0
- package/skills/ct-grade-v2-1/scripts/setup_run.py +207 -0
- package/skills/ct-grade-v2-1/scripts/token_tracker.py +175 -0
- package/skills/ct-memory/SKILL.md +84 -0
- package/skills/ct-orchestrator/INSTALL.md +61 -0
- package/skills/ct-orchestrator/README.md +69 -0
- package/skills/ct-orchestrator/SKILL.md +380 -0
- package/skills/ct-orchestrator/manifest-entry.json +19 -0
- package/skills/ct-orchestrator/orchestrator-prompt.txt +17 -0
- package/skills/ct-orchestrator/references/SUBAGENT-PROTOCOL-BLOCK.md +66 -0
- package/skills/ct-orchestrator/references/autonomous-operation.md +167 -0
- package/skills/ct-orchestrator/references/lifecycle-gates.md +98 -0
- package/skills/ct-orchestrator/references/orchestrator-compliance.md +271 -0
- package/skills/ct-orchestrator/references/orchestrator-handoffs.md +85 -0
- package/skills/ct-orchestrator/references/orchestrator-patterns.md +164 -0
- package/skills/ct-orchestrator/references/orchestrator-recovery.md +113 -0
- package/skills/ct-orchestrator/references/orchestrator-spawning.md +271 -0
- package/skills/ct-orchestrator/references/orchestrator-tokens.md +180 -0
- package/skills/ct-research-agent/SKILL.md +226 -0
- package/skills/ct-skill-creator/.cleo/.context-state.json +13 -0
- package/skills/ct-skill-creator/.cleo/logs/cleo.2026-03-07.1.log +24 -0
- package/skills/ct-skill-creator/.cleo/tasks.db +0 -0
- package/skills/ct-skill-creator/SKILL.md +356 -0
- package/skills/ct-skill-creator/agents/analyzer.md +276 -0
- package/skills/ct-skill-creator/agents/comparator.md +204 -0
- package/skills/ct-skill-creator/agents/grader.md +225 -0
- package/skills/ct-skill-creator/assets/eval_review.html +146 -0
- package/skills/ct-skill-creator/eval-viewer/__pycache__/generate_review.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/eval-viewer/generate_review.py +471 -0
- package/skills/ct-skill-creator/eval-viewer/viewer.html +1325 -0
- package/skills/ct-skill-creator/manifest-entry.json +17 -0
- package/skills/ct-skill-creator/references/dynamic-context.md +228 -0
- package/skills/ct-skill-creator/references/frontmatter.md +83 -0
- package/skills/ct-skill-creator/references/invocation-control.md +165 -0
- package/skills/ct-skill-creator/references/output-patterns.md +86 -0
- package/skills/ct-skill-creator/references/provider-deployment.md +175 -0
- package/skills/ct-skill-creator/references/schemas.md +430 -0
- package/skills/ct-skill-creator/references/workflows.md +28 -0
- package/skills/ct-skill-creator/scripts/__init__.py +1 -0
- package/skills/ct-skill-creator/scripts/__pycache__/__init__.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/aggregate_benchmark.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/generate_report.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/improve_description.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/init_skill.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/quick_validate.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/run_eval.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/run_loop.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/__pycache__/utils.cpython-314.pyc +0 -0
- package/skills/ct-skill-creator/scripts/aggregate_benchmark.py +401 -0
- package/skills/ct-skill-creator/scripts/generate_report.py +326 -0
- package/skills/ct-skill-creator/scripts/improve_description.py +247 -0
- package/skills/ct-skill-creator/scripts/init_skill.py +306 -0
- package/skills/ct-skill-creator/scripts/package_skill.py +110 -0
- package/skills/ct-skill-creator/scripts/quick_validate.py +97 -0
- package/skills/ct-skill-creator/scripts/run_eval.py +310 -0
- package/skills/ct-skill-creator/scripts/run_loop.py +328 -0
- package/skills/ct-skill-creator/scripts/utils.py +47 -0
- package/skills/ct-skill-validator/SKILL.md +178 -0
- package/skills/ct-skill-validator/agents/ecosystem-checker.md +151 -0
- package/skills/ct-skill-validator/assets/valid-skill-example.md +13 -0
- package/skills/ct-skill-validator/evals/eval_set.json +14 -0
- package/skills/ct-skill-validator/evals/evals.json +52 -0
- package/skills/ct-skill-validator/manifest-entry.json +20 -0
- package/skills/ct-skill-validator/references/cleo-ecosystem-rules.md +163 -0
- package/skills/ct-skill-validator/references/validation-rules.md +168 -0
- package/skills/ct-skill-validator/scripts/__init__.py +0 -0
- package/skills/ct-skill-validator/scripts/__pycache__/audit_body.cpython-314.pyc +0 -0
- package/skills/ct-skill-validator/scripts/__pycache__/check_ecosystem.cpython-314.pyc +0 -0
- package/skills/ct-skill-validator/scripts/__pycache__/generate_validation_report.cpython-314.pyc +0 -0
- package/skills/ct-skill-validator/scripts/__pycache__/validate.cpython-314.pyc +0 -0
- package/skills/ct-skill-validator/scripts/audit_body.py +242 -0
- package/skills/ct-skill-validator/scripts/check_ecosystem.py +169 -0
- package/skills/ct-skill-validator/scripts/check_manifest.py +172 -0
- package/skills/ct-skill-validator/scripts/generate_validation_report.py +442 -0
- package/skills/ct-skill-validator/scripts/validate.py +422 -0
- package/skills/ct-spec-writer/SKILL.md +189 -0
- package/skills/ct-stickynote/README.md +14 -0
- package/skills/ct-stickynote/SKILL.md +46 -0
- package/skills/ct-task-executor/SKILL.md +296 -0
- package/skills/ct-validator/SKILL.md +216 -0
- package/skills/manifest.json +469 -0
- package/skills.json +281 -0

package/skills/ct-skill-validator/SKILL.md
@@ -0,0 +1,178 @@
---
name: ct-skill-validator
description: Validates an existing skill folder against the full CLEO standard and ecosystem. Use when auditing skills for structural compliance, verifying a skill fits into the CLEO ecosystem and constitution, running quality A/B evals, or preparing a skill for distribution. Runs a 3-phase validation loop — structural, ecosystem fit, and quality eval — then presents all findings as an HTML report opened in the user's browser. Iterates until all required phases pass.
disable-model-invocation: true
allowed-tools: Bash(python *)
---

# CLEO Skill Validator

Full 3-phase validation loop for CLEO skills. Every phase must reach PASS before the skill
is considered ecosystem-ready. Run the phases in order and iterate on failures.

**Always end with the HTML report** — the final deliverable to the user is the combined report
opened in their browser, not terminal output.

---

## Phase 1: Structural Compliance (Iterate to Zero Errors)

Run `validate.py` until the result is `PASS` or `PASS (with warnings)` with 0 errors.
Warnings are acceptable; errors are not. Fix errors and re-run.

```bash
# Full gauntlet — text output
python ${CLAUDE_SKILL_DIR}/scripts/validate.py <skill-dir>

# With manifest checks (Tier 4):
python ${CLAUDE_SKILL_DIR}/scripts/validate.py <skill-dir> \
  --manifest <manifest.json> --dispatch-config <dispatch-config.json>

# JSON output (for scripting):
python ${CLAUDE_SKILL_DIR}/scripts/validate.py <skill-dir> --json

# Deep body quality audit (optional, run alongside validate.py):
python ${CLAUDE_SKILL_DIR}/scripts/audit_body.py <skill-dir>

# Manifest alignment check:
python ${CLAUDE_SKILL_DIR}/scripts/check_manifest.py <skill-dir> <manifest.json>
```

**Iteration rule**: If errors > 0, fix them in the skill's SKILL.md, re-run `validate.py`.
Repeat until errors = 0. Do not proceed to Phase 2 while errors remain.
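The iteration gate can be scripted against the `--json` output. A minimal sketch, assuming the JSON result exposes an `errors` list — the field name is illustrative, not the script's confirmed schema:

```python
import json
import subprocess

def phase1_passes(result_json: str) -> bool:
    """Return True when structural validation reports zero errors.

    Warnings are acceptable; errors block Phase 2. Assumes the JSON
    output carries an `errors` list (illustrative field name).
    """
    result = json.loads(result_json)
    return len(result.get("errors", [])) == 0

def run_gate(skill_dir: str, scripts_dir: str) -> bool:
    """Hypothetical driver: one validate.py --json run, then gate on errors."""
    out = subprocess.run(
        ["python", f"{scripts_dir}/validate.py", skill_dir, "--json"],
        capture_output=True, text=True,
    )
    return phase1_passes(out.stdout)
```

A caller would loop: fix SKILL.md, call `run_gate` again, and only move on once it returns `True`.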

**Validation tiers:**
- Tier 1 — Structure: SKILL.md exists, frontmatter parseable, no CLEO-only fields
- Tier 2 — Frontmatter Quality: name matches dir, description has trigger indicators
- Tier 3 — Body Quality: length, no placeholder text, file references exist on disk
- Tier 4 — CLEO Integration: manifest and dispatch-config alignment (optional)
- Tier 5 — Provider Compatibility: provider-skills-map check (optional)

See [references/validation-rules.md](references/validation-rules.md) for full rule set.

---

## Phase 2: CLEO Ecosystem Compliance (Iterate to PASS)

Checks whether the skill's intent and purpose fit into the CLEO ecosystem — the 10 canonical
domains, canonical verbs, RCASD-IVTR+C lifecycle, and the CLEO Operation Constitution.

**Step 1: Extract skill context**
```bash
python ${CLAUDE_SKILL_DIR}/scripts/check_ecosystem.py <skill-dir> --output context.json
```

This extracts: CLEO operations referenced, domains mentioned, lifecycle stages, deprecated
verb usage, and direct data manipulation patterns.

**Step 2: Run the ecosystem-checker agent**

Invoke the ecosystem-checker agent with the context package:

```
Inputs:
- context.json (from Step 1)
- references/cleo-ecosystem-rules.md (the 8 rules)
- The skill's SKILL.md (for full body reading)

Agent file: ${CLAUDE_SKILL_DIR}/agents/ecosystem-checker.md

Output: ecosystem-check.json
```

The checker evaluates 8 rules from [references/cleo-ecosystem-rules.md](references/cleo-ecosystem-rules.md):

1. **Domain Fit** — Does the skill serve at least one of the 10 canonical CLEO domains?
2. **MCP Operation Syntax** — Are CLEO operations referenced with valid `domain.operation` format?
3. **Canonical Verb Compliance** — No deprecated verbs (create, get, search as verb)
4. **Non-Duplication** — Skill isn't a thin wrapper over a single existing CLEO operation
5. **Data Integrity** — No direct `.cleo/` file editing instructions
6. **Lifecycle Alignment** — Skill aligns with relevant RCASD-IVTR+C stages
7. **Purpose Clarity** — Skill has a specific, bounded, genuinely useful purpose
8. **Tools Alignment** — `allowed-tools` matches what the skill actually needs

**Iteration rule**: If ecosystem-check.json contains `"verdict": "FAIL"`, address each ERROR-severity
rule finding, fix the skill content, re-run check_ecosystem.py, re-run the ecosystem-checker agent.
Repeat until verdict is `PASS` or `PASS_WITH_WARNINGS`. WARN is acceptable; ERROR is not.
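The Phase 2 gate can likewise be checked mechanically. A sketch that reads ecosystem-check.json, applies the WARN-is-acceptable rule, and collects the ERROR findings to address (field names follow the Output Format shown in the ecosystem-checker agent spec):

```python
import json

ACCEPTED_VERDICTS = {"PASS", "PASS_WITH_WARNINGS"}

def phase2_gate(report_text: str) -> tuple[bool, list[str]]:
    """Return (passes, ERROR findings to address) from ecosystem-check.json.

    WARN-level rules do not block; any ERROR-level rule does.
    """
    report = json.loads(report_text)
    errors = [r["finding"] for r in report.get("rules", [])
              if r.get("status") == "ERROR"]
    return report.get("verdict") in ACCEPTED_VERDICTS, errors
```

Each string in the returned list maps to one fix-and-rerun iteration of the loop above.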

---

## Phase 3: Quality A/B Eval

Tests whether the skill actually improves agent output quality vs. no skill context.
Uses the eval infrastructure from ct-skill-creator.

**Trigger accuracy** — does the skill description trigger correctly?
```bash
python ${CLAUDE_SKILL_DIR}/../ct-skill-creator/scripts/run_eval.py \
  --eval-set ${CLAUDE_SKILL_DIR}/evals/eval_set.json \
  --skill-path ${CLAUDE_SKILL_DIR}
```

**Optimize description** (if trigger accuracy < 80%):
```bash
python ${CLAUDE_SKILL_DIR}/../ct-skill-creator/scripts/run_loop.py \
  --eval-set ${CLAUDE_SKILL_DIR}/evals/eval_set.json \
  --skill-path ${CLAUDE_SKILL_DIR} \
  --model claude-sonnet-4-6 \
  --max-iterations 5
```
`run_loop.py` opens a live HTML accuracy report in the browser automatically.

**Quality eval** (with/without skill A/B):
1. Spawn two agents in the SAME turn: one WITH skill context loaded, one WITHOUT (baseline)
2. Give both the same task prompt from [evals/evals.json](evals/evals.json)
3. Grade each with the grader agent → `grading.json`:
   `${CLAUDE_SKILL_DIR}/../ct-skill-creator/agents/grader.md`
4. Blind A/B comparison with the comparator agent → `comparison.json`:
   `${CLAUDE_SKILL_DIR}/../ct-skill-creator/agents/comparator.md`
5. Post-hoc analysis with the analyzer agent → `analysis.json`:
   `${CLAUDE_SKILL_DIR}/../ct-skill-creator/agents/analyzer.md`
6. Serve the full eval review:
   `python ${CLAUDE_SKILL_DIR}/../ct-skill-creator/eval-viewer/generate_review.py <workspace-dir>`
   (Opens browser at localhost:3117)
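A small sketch of how the blind comparison results might be tallied once `comparison.json` exists. The `winner` values here ("with_skill", "without_skill", "tie") are illustrative assumptions — the comparator agent's actual schema is defined in ct-skill-creator's schemas.md:

```python
from collections import Counter

def tally_comparisons(comparisons: list[dict]) -> Counter:
    """Count blind A/B winners across eval runs.

    Missing or unknown winner values are treated as ties, so the
    totals always sum to the number of runs.
    """
    return Counter(c.get("winner", "tie") for c in comparisons)
```

A majority of "with_skill" wins is the signal that the skill genuinely improves output quality over the baseline.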

See [references/validation-rules.md](references/validation-rules.md) and
`${CLAUDE_SKILL_DIR}/../ct-skill-creator/references/schemas.md` for JSON output schemas.

---

## Final: Generate and Present HTML Report

After completing all phases, generate the unified report and open it in the browser.

```bash
# Minimum — Phase 1 only:
python ${CLAUDE_SKILL_DIR}/scripts/generate_validation_report.py <skill-dir> --no-open --output report.html

# With ecosystem check:
python ${CLAUDE_SKILL_DIR}/scripts/generate_validation_report.py <skill-dir> \
  --ecosystem-check ecosystem-check.json --no-open --output report.html

# Full 3-phase report:
python ${CLAUDE_SKILL_DIR}/scripts/generate_validation_report.py <skill-dir> \
  --ecosystem-check ecosystem-check.json \
  --grading grading.json \
  --comparison comparison.json \
  --audit \
  --output report.html
```

**Tell the user:**
- The path to report.html (so they can revisit or share it)
- The Phase 1/2/3 verdict for each phase
- Which specific errors or warnings remain
- What to fix if any phase is FAIL

Open the report in the browser: omit `--no-open` (the default behaviour opens the browser automatically).

---

## Self-Validation

This skill validates itself. To validate ct-skill-validator:

```bash
python ${CLAUDE_SKILL_DIR}/scripts/validate.py ${CLAUDE_SKILL_DIR}
python ${CLAUDE_SKILL_DIR}/scripts/check_ecosystem.py ${CLAUDE_SKILL_DIR} | cat
```
package/skills/ct-skill-validator/agents/ecosystem-checker.md
@@ -0,0 +1,151 @@
# CLEO Ecosystem Compliance Checker

Evaluate whether a skill fits into and properly uses the CLEO ecosystem.

## Role

You are an ecosystem compliance auditor for CLEO skills. You receive a structured context
package describing a skill's content and usage patterns. You evaluate the skill against the
CLEO ecosystem rules and produce a structured compliance report.

You MUST be specific: cite exact text from the skill when flagging issues, and cite the exact
rule number being violated. Vague findings are useless.

## Inputs

You receive a JSON context package (from `check_ecosystem.py`) with:

- `skill_name`: The skill's directory name
- `frontmatter`: Parsed frontmatter fields
- `description`: The skill's trigger description
- `body`: The full SKILL.md body text
- `allowed_tools`: The allowed-tools field value
- `cleo_operations_referenced`: List of detected CLEO operations (domain.operation strings)
- `domains_mentioned`: Canonical domain names found in body
- `lifecycle_stages_mentioned`: RCASD-IVTR+C stage names found in body
- `deprecated_verbs_found`: Any deprecated verb patterns found
- `body_line_count`: Number of body lines

You also MUST read `references/cleo-ecosystem-rules.md` for the full rule definitions.

## Process

### Step 1: Read the Rules

Read `${CLAUDE_SKILL_DIR}/references/cleo-ecosystem-rules.md` in full.

### Step 2: Evaluate Each Rule

For each rule (1 through 8), determine: OK, WARN, ERROR, or SKIP.

**Rule 1 — Domain Fit:**
- Look at `domains_mentioned`, `description`, and `body` for domain signals
- Classify the skill's primary domain(s)
- ERROR if no domain connection; WARN if too scattered

**Rule 2 — MCP Operation Syntax:**
- Check each entry in `cleo_operations_referenced`
- Validate against the known valid operations in cleo-ecosystem-rules.md §Rule 2
- ERROR for any invalid domain.operation reference
- SKIP if no CLEO operations are referenced

**Rule 3 — Canonical Verb Compliance:**
- Check `deprecated_verbs_found` and scan `body` text for deprecated verb usage when describing CLEO operations
- WARN (not ERROR) for deprecated verb usage

**Rule 4 — Non-Duplication:**
- Read the `description` and `body` to understand what the skill does
- Compare against CLEO's built-in capabilities
- ERROR if skill is purely a thin wrapper over a single existing operation
- Use judgment — most skills are fine

**Rule 5 — Data Integrity:**
- Scan `body` for direct `.cleo/` file path editing instructions
- Look for patterns like "edit tasks.db", "modify .cleo/config.json directly", "open brain.db"
- ERROR if found

**Rule 6 — RCASD-IVTR+C Lifecycle Alignment:**
- Check if skill touches pipeline/lifecycle operations
- Verify it references the relevant lifecycle stages
- WARN (not ERROR) if alignment is missing

**Rule 7 — Purpose Clarity:**
- Evaluate the `description` and `body` for clarity and boundedness
- Is the scope specific? Is the value proposition clear?
- ERROR if purpose is contradictory or completely undefined
- WARN if scope is too broad

**Rule 8 — Tools Alignment:**
- Compare `allowed_tools` against what the skill's body actually needs
- WARN if mismatched

### Step 3: Compute Overall Verdict

- `PASS` — No ERROR rules
- `PASS_WITH_WARNINGS` — No ERROR rules, but 1+ WARN rules
- `FAIL` — 1+ ERROR rules
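The verdict logic above is mechanical and can be expressed directly. A minimal sketch (the function name is ours, not part of the checker's spec):

```python
def overall_verdict(statuses: list[str]) -> str:
    """Step 3 verdict: any ERROR fails outright; otherwise any WARN
    downgrades PASS to PASS_WITH_WARNINGS; SKIP and OK never block."""
    if "ERROR" in statuses:
        return "FAIL"
    if "WARN" in statuses:
        return "PASS_WITH_WARNINGS"
    return "PASS"
```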

### Step 4: Write ecosystem-check.json

Save to the path specified in your prompt (default: `ecosystem-check.json` in the workspace).

## Output Format

```json
{
  "skill_name": "ct-skill-validator",
  "verdict": "PASS|PASS_WITH_WARNINGS|FAIL",
  "rules": [
    {
      "rule_id": 1,
      "rule_name": "Domain Fit",
      "status": "OK|WARN|ERROR|SKIP",
      "finding": "Skill clearly serves the 'tools' and 'check' domains — its purpose is validating skill structure and ecosystem compliance.",
      "evidence": "Body references 'tools.skill.verify', description mentions auditing skills, references validation rules."
    },
    {
      "rule_id": 2,
      "rule_name": "MCP Operation Syntax",
      "status": "ERROR",
      "finding": "Skill references 'tools.skill.validate' which is not a valid CLEO operation. The correct operation is 'tools.skill.verify'.",
      "evidence": "Line: 'Run `query tools.skill.validate <skill-name>`'"
    }
  ],
  "summary": {
    "errors": 1,
    "warnings": 0,
    "skipped": 2,
    "passed": 5
  },
  "primary_domain": "tools",
  "lifecycle_stages_served": ["Validation"],
  "recommendations": [
    "Replace 'tools.skill.validate' with 'tools.skill.verify' throughout the body",
    "Add explicit mention of which lifecycle stage this skill supports"
  ]
}
```

## Field Descriptions

- **verdict**: Overall compliance result
- **rules[]**: One entry per rule evaluated (1-8)
  - **rule_id**: Integer 1-8
  - **rule_name**: Short rule name
  - **status**: OK / WARN / ERROR / SKIP
  - **finding**: What you found (specific, actionable)
  - **evidence**: Exact text quoted from the skill that supports the finding
- **summary**: Count of each status
- **primary_domain**: The main CLEO domain this skill serves
- **lifecycle_stages_served**: Which RCASD-IVTR+C stages this skill touches
- **recommendations**: Ordered list of fixes, most important first

## Guidelines

- **Be specific**: Quote the exact text that is problematic. "The body mentions X" is not enough.
- **One finding per rule**: Don't split a single rule into multiple entries.
- **Distinguish ERROR from WARN**: ERROR means the skill cannot be deployed as-is. WARN means it should be improved but is not blocking.
- **Give credit**: If a skill does something well, say so in the `finding` for that rule.
- **No false positives**: Only flag real violations. A skill that doesn't use CLEO operations
  at all should get SKIP on Rule 2, not ERROR.
- **Actionable recommendations**: Every ERROR must have a concrete fix in recommendations.
package/skills/ct-skill-validator/assets/valid-skill-example.md
@@ -0,0 +1,13 @@
# Valid SKILL.md Example (for reference)

A minimal v2-compliant SKILL.md. Use when writing test fixtures or verifying validator output.

---
name: example-skill
description: "Does X and Y. Use when the user needs Z."
allowed-tools: Read, Bash(python *)
---

# Example Skill

Brief body. Under 400 lines. No CLEO-only fields in frontmatter.
package/skills/ct-skill-validator/evals/eval_set.json
@@ -0,0 +1,14 @@
[
  {"query": "validate my skill against v2 standard", "should_trigger": true},
  {"query": "check if this skill folder is v2 compliant", "should_trigger": true},
  {"query": "audit the skill structure and frontmatter", "should_trigger": true},
  {"query": "run v2 compliance check on ct-skill-creator", "should_trigger": true},
  {"query": "is my SKILL.md missing required fields", "should_trigger": true},
  {"query": "verify skill passes all CLEO validation tiers", "should_trigger": true},
  {"query": "prepare skill for distribution, check for issues", "should_trigger": true},
  {"query": "create a new skill from scratch", "should_trigger": false},
  {"query": "write a Python script to parse JSON", "should_trigger": false},
  {"query": "run the test suite for my TypeScript project", "should_trigger": false},
  {"query": "commit my changes to git", "should_trigger": false},
  {"query": "help me debug this import error in my code", "should_trigger": false}
]
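Each entry in this eval set pairs a query with the expected trigger decision. A sketch of how trigger accuracy could be scored from it — how `run_eval.py` actually obtains the trigger decisions is not shown here, so `predictions` is a hypothetical input mapping query to observed outcome:

```python
import json

def trigger_accuracy(eval_set_text: str, predictions: dict[str, bool]) -> float:
    """Fraction of eval queries whose observed trigger decision matches
    should_trigger. `predictions` maps query -> whether the skill fired."""
    cases = json.loads(eval_set_text)
    hits = sum(1 for c in cases
               if predictions.get(c["query"]) == c["should_trigger"])
    return hits / len(cases)
```

Against the 80% threshold mentioned in the SKILL.md, a return value below 0.8 would send the description back through the optimization loop.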

package/skills/ct-skill-validator/evals/evals.json
@@ -0,0 +1,52 @@
{
  "skill_name": "ct-skill-validator",
  "evals": [
    {
      "id": 1,
      "prompt": "Validate the ct-skill-creator skill against the v2 standard and show me a full report",
      "expected_output": "A complete tier-by-tier validation report showing pass/fail/warn status for ct-skill-creator, with an HTML report opened in the browser",
      "expectations": [
        "Claude runs validate.py against ct-skill-creator",
        "The output shows results for all 5 tiers",
        "Claude runs generate_validation_report.py to produce an HTML report",
        "Claude tells the user the report path or opens it in the browser",
        "The report correctly identifies PASS or FAIL based on actual skill contents"
      ]
    },
    {
      "id": 2,
      "prompt": "Check if this skill folder is v2 compliant: packages/ct-skills/skills/ct-orchestrator",
      "expected_output": "Validation results for ct-orchestrator with tier breakdown, and an HTML report",
      "expectations": [
        "Claude validates ct-orchestrator not some other skill",
        "Tier 1 structure check runs",
        "Tier 2 frontmatter quality check runs",
        "Tier 3 body quality check runs",
        "An HTML report is generated and presented to the user"
      ]
    },
    {
      "id": 3,
      "prompt": "Audit the ct-skill-validator skill itself — run all checks and give me a full findings report",
      "expected_output": "Full self-audit of ct-skill-validator including validate gauntlet and audit_body, with HTML report",
      "expectations": [
        "Claude runs validate.py against ct-skill-validator",
        "Claude runs audit_body.py for the deep body quality audit",
        "An HTML report combining all findings is generated",
        "Claude opens or links the HTML report for the user",
        "The result correctly reflects ct-skill-validator's actual compliance status"
      ]
    },
    {
      "id": 4,
      "prompt": "Run the manifest alignment check for ct-skill-validator against the CLEO manifest",
      "expected_output": "Manifest alignment results showing whether ct-skill-validator is registered correctly in manifest.json and dispatch-config.json",
      "expectations": [
        "Claude passes --manifest to validate.py or check_manifest.py",
        "The manifest.json path is correctly resolved",
        "The output shows Tier 4 CLEO Integration results",
        "Claude reports whether the skill is found in manifest.json"
      ]
    }
  ]
}
@@ -0,0 +1,20 @@
{
  "_comment": "CLEO-only metadata -- add to packages/ct-skills/skills/manifest.json",
  "name": "ct-skill-validator",
  "version": "1.0.0",
  "tier": 2,
  "token_budget": 6000,
  "capabilities": {
    "inputs": [],
    "outputs": [],
    "dispatch_triggers": [],
    "compatible_subagent_types": ["general-purpose"],
    "chains_to": [],
    "dispatch_keywords": { "primary": [], "secondary": [] }
  },
  "constraints": {
    "max_context_tokens": 60000,
    "requires_session": false,
    "requires_epic": false
  }
}
@@ -0,0 +1,163 @@
# CLEO Ecosystem Rules for Skill Compliance

This reference defines the rules a skill must meet to be valid within the CLEO ecosystem.
Used by the ecosystem-checker agent. Derived from CLEO-OPERATION-CONSTITUTION.md and CLEO-VISION.md.

---

## Rule 1: Domain Fit (REQUIRED)

The skill must serve or extend at least one of CLEO's **10 canonical domains**:

| Domain | Purpose |
|--------|---------|
| `tasks` | Task hierarchy, CRUD, dependencies, work tracking |
| `session` | Session lifecycle, decisions, assumptions, context |
| `memory` | Cognitive memory: observations, decisions, patterns, learnings (brain.db) |
| `check` | Schema validation, protocol compliance, test execution |
| `pipeline` | RCASD-IVTR+C lifecycle stages, manifest ledger, release management |
| `orchestrate` | Multi-agent coordination, wave planning, parallel execution |
| `tools` | Skills, providers, issues, CAAMP catalog |
| `admin` | Configuration, backup, migration, diagnostics, ADRs |
| `nexus` | Cross-project coordination, registry, dependency graph |
| `sticky` | Ephemeral project-wide capture, quick notes |

**Fail condition**: Skill has no clear connection to any canonical domain.
**Warn condition**: Skill touches multiple domains without a clear primary domain.

---

## Rule 2: MCP Operation Syntax (REQUIRED if CLEO ops referenced)

Any CLEO MCP operations referenced in the skill body must use the canonical format:

```
query { domain: "...", operation: "..." }
mutate { domain: "...", operation: "..." }
```

Or the abbreviated shorthand: `query tasks.show`, `mutate memory.observe`.

**Invalid references**: Operations not listed in CLEO-OPERATION-CONSTITUTION.md are errors.

**Common valid operations to recognize:**
- `query tasks.show`, `query tasks.find`, `query tasks.list`, `query tasks.next`
- `mutate tasks.add`, `mutate tasks.update`, `mutate tasks.complete`
- `query session.status`, `mutate session.start`, `mutate session.end`
- `query memory.find`, `query memory.timeline`, `query memory.fetch`, `mutate memory.observe`
- `query admin.dash`, `query admin.health`, `query admin.help`
- `query check.schema`, `mutate check.test.run`
- `query pipeline.stage.status`, `mutate pipeline.manifest.append`
- `query tools.skill.list`, `query tools.skill.show`
- `query orchestrate.status`, `mutate orchestrate.spawn`

**Fail condition**: Skill references a domain.operation that does not exist in the constitution.

---

## Rule 3: Canonical Verb Compliance (REQUIRED)

Skills must use canonical verbs when describing CLEO operations or commands:

**Approved verbs**: add, show, find, list, update, delete, archive, restore, complete,
start, stop, end, status, record, resume, suspend, reset, init, enable, disable, backup,
migrate, inject, run, link, observe, store, fetch, plan, sync, verify, validate, timeline,
convert, unlink

**Deprecated verbs** (must NOT appear when describing CLEO operations):
- `create` → use `add`
- `get` → use `show` or `fetch`
- `search` → use `find`
- `query` as a verb (e.g., "query the tasks") → use `find` or `list`

**Warn condition**: Skill uses deprecated verbs in its own instructions for CLEO operations.
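
The deprecated-verb warn check can be sketched as a whole-word scan. The mapping below covers only the first three substitutions from this rule (the `query`-as-verb case needs context and is omitted); the function name and message format are illustrative.

```python
import re

# Deprecated verb -> suggested canonical replacement (from the table above).
DEPRECATED = {"create": "add", "get": "show or fetch", "search": "find"}

def deprecated_verb_warnings(text: str) -> list:
    """Return WARN messages for deprecated verbs used in skill instructions."""
    warnings = []
    for verb, replacement in DEPRECATED.items():
        # Whole-word, case-insensitive match so "created" does not trip "create".
        if re.search(rf"\b{verb}\b", text, re.IGNORECASE):
            warnings.append(f"WARN: deprecated verb '{verb}' -- use {replacement}")
    return warnings
```

Because this rule is warn-level, hits should be reported but must not fail the skill on their own.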

---

## Rule 4: Non-Duplication (REQUIRED)

Skills must not re-implement functionality already provided by CLEO's MCP operations.

**Check**: If a skill's primary function is to do something CLEO can already do via a
single `query` or `mutate` call, that is duplication. Skills add value by composing
multiple operations, providing domain expertise, or automating multi-step workflows.

**Valid**: "Run 5 CLEO operations in sequence with business logic between them"
**Invalid**: "Calls `tasks.show` and returns the result" (already exists as `query tasks.show`)

**Fail condition**: Skill is a thin wrapper over a single existing CLEO operation with no added logic.

---

## Rule 5: Data Integrity (REQUIRED if touching .cleo/ data)

If the skill reads or writes `.cleo/` data stores:
- Reads must use the `query` gateway
- Writes must use the `mutate` gateway
- Direct file editing of `.cleo/*.json`, `tasks.db`, or `brain.db` is NOT acceptable
- Skills must not bypass CLEO's atomic write requirements

**Fail condition**: Skill instructs direct editing of `.cleo/` data files.
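
A heuristic for this fail condition pairs a write verb with a protected data file on the same line. This is a sketch only: the verb list and file patterns are illustrative assumptions, and a real checker would also need to ignore legitimate gateway calls quoted in examples.

```python
import re

# Data stores that must never be edited directly (from the rule above).
PROTECTED = [r"\.cleo/[\w.-]*\.json", r"tasks\.db", r"brain\.db"]
# Verbs/tools that suggest a direct write in skill instructions; heuristic only.
WRITE_HINT = re.compile(r"\b(edit|write|sed|overwrite|modify)\b", re.IGNORECASE)

def data_integrity_errors(body: str) -> list:
    """Flag lines that pair a write verb with a protected .cleo/ data file."""
    errors = []
    for line in body.splitlines():
        if WRITE_HINT.search(line) and any(re.search(p, line) for p in PROTECTED):
            errors.append(f"ERROR: possible direct data edit: {line.strip()}")
    return errors
```

Any hit is an ERROR: the fix is to route the read through `query` or the write through `mutate` instead.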

---

## Rule 6: RCASD-IVTR+C Lifecycle Alignment (RECOMMENDED)

Skills that interact with project work should align with CLEO's lifecycle pipeline stages:

| Stage | Meaning |
|-------|---------|
| Research (R) | Gather information |
| Consensus (C) | Validate recommendations |
| Architecture Decision (A) | Document choices (ADRs) |
| Specification (S) | Formal requirements |
| Decomposition (D) | Break into tasks |
| Implementation (I) | Write code |
| Validation (V) | Verify implementation |
| Testing (T) | Test coverage |
| Release (R) | Ship with provenance |

**Warn condition**: A skill that touches pipeline/lifecycle operations doesn't reference the relevant stages.

---

## Rule 7: Purpose Clarity (REQUIRED)

The skill must have a **specific, bounded purpose** that is genuinely useful within CLEO workflows.

Questions to evaluate:
- What specific problem does this skill solve for a CLEO user?
- Is the scope clearly bounded, or is it trying to do everything?
- Would a CLEO user know when to invoke this skill vs. using a different tool?
- Does the skill description (frontmatter) accurately convey its purpose and trigger conditions?

**Fail condition**: Skill purpose is vague, contradictory, or so broad it provides no focused value.
**Warn condition**: Skill scope is wider than needed for its stated purpose.

---

## Rule 8: Tools Alignment (RECOMMENDED)

The `allowed-tools` frontmatter should match the skill's actual needs:

| Skill type | Expected tools |
|-----------|----------------|
| Read-only CLEO data | `Bash` (for the `cleo` CLI) or implicit MCP query |
| CLEO data modification | Includes write-capable tools |
| File system operations | `Read`, `Write`, `Edit`, `Glob`, `Grep` |
| Python scripts | `Bash(python *)` |
| Agent orchestration | No tools, or `Agent` |
| Validation/compliance | `Bash(python *)` for validators |

**Warn condition**: `allowed-tools` is overly broad (e.g., `Bash` with no restrictions for a read-only skill).
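
The "overly broad" warn case above can be sketched as a set-membership check. Everything here is illustrative: frontmatter parsing is assumed to have happened already, the `read_only` flag is a hypothetical input, and the set of unrestricted tool names is a guess at what "overly broad" means.

```python
def tools_alignment_warnings(allowed_tools: list, read_only: bool) -> list:
    """Warn when a read-only skill declares unrestricted write/exec tools."""
    # Bare tool names with no restriction, e.g. "Bash" vs "Bash(python *)".
    unrestricted = {"Bash", "Write", "Edit"}
    warnings = []
    if read_only:
        for tool in allowed_tools:
            if tool in unrestricted:
                warnings.append(f"WARN: read-only skill declares unrestricted '{tool}'")
    return warnings
```

Note that a restricted form like `Bash(python *)` passes, since the restriction itself is the alignment signal.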

---

## Severity Levels

| Level | Meaning |
|-------|---------|
| `ERROR` | Hard failure — skill must be fixed before it is valid for the CLEO ecosystem |
| `WARN` | Non-blocking issue — skill can still be used but should be addressed |
| `OK` | Passes this rule |
| `SKIP` | Rule not applicable to this skill type |
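
Per-rule results roll up naturally into an overall verdict. The aggregation policy below (fail on any ERROR, pass otherwise while surfacing warnings) is a sketch of one reasonable reading of these levels, not a documented CLEO behavior.

```python
# Rank the four severity levels defined above.
SEVERITY_ORDER = {"SKIP": 0, "OK": 1, "WARN": 2, "ERROR": 3}

def overall_status(rule_results: dict) -> str:
    """Return FAIL if any rule produced an ERROR, PASS otherwise."""
    worst = max(rule_results.values(), key=lambda s: SEVERITY_ORDER[s], default="OK")
    return "FAIL" if worst == "ERROR" else "PASS"
```

Under this policy, WARN findings never block a skill on their own, which matches the "non-blocking" wording in the table.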