claudecode-omc 4.3.5 → 4.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,21 +1,26 @@
  ---
  name: skill-quality-analyzer
- description: Analyzes Codex skill quality with 6-dimension scoring system similar to CodeDNA, providing actionable improvement recommendations
+ description: Analyzes Claude Code skill quality with 6-dimension scoring system, providing actionable improvement recommendations for descriptions, structure, examples, and triggers.
  ---
 
  # Skill Quality Analyzer
 
- Comprehensive quality analysis system for Codex skills using a 6-dimension scoring framework inspired by CodeDNA. Identifies quality issues, provides improvement recommendations, and generates detailed quality reports.
+ Comprehensive quality analysis system for Claude Code skills using a 6-dimension scoring framework. Identifies quality issues, provides improvement recommendations, and generates detailed quality reports.
 
  ## Agent Workflow
 
  To perform a high-quality analysis, follow this **Hybrid Workflow**:
 
- 1. **Run Static Analysis**: Execute `python3 analyzer.py --skill-path [TARGET_SKILL_PATH]` to get objective metrics for the **target skill**.
- 2. **Read Target Skill Content**: Use `view_file` to read the `SKILL.md` and `HOW_TO_USE.md` of the **skill you are analyzing** (the target skill). *Do not read this analyzer skill's own files.*
- 3. **Synthesize Report**: Combine the script's findings (hard metrics) with your manual review of the **target skill's content** (soft metrics) to generate the final report.
- * *Script says*: "Description too short." -> *You check*: "Is it just short, or is it actual nonsense?"
- * *Script says*: "100/100 score." -> *You check*: "Format is perfect, but does the logic make sense?"
+ 1. **Hard Metrics (tools)**: Use `Glob` and `Read` tools to gather objective facts about the **target skill**.
+ - Does `SKILL.md` exist? Check with `Glob("skills/<name>/SKILL.md")`
+ - Does the description contain quoted trigger phrases? Check with `Read` on the frontmatter
+ - Are referenced `references/` files present? Verify each exists
+ - If `analyzer.py` is present in this skill's directory, run it: `python3 skills/skill-quality-analyzer/analyzer.py --skill-path [TARGET_SKILL_PATH]`
+ 2. **Soft Metrics (judgment)**: Use `Read` to read the target skill's `SKILL.md` content and assess semantics.
+ *Do not read this analyzer skill's own files — read only the skill being analyzed.*
+ - *Script says*: "Description too short." → *You check*: "Is it just short, or actually nonsense?"
+ - *Script says*: "100/100 score." → *You check*: "Format is perfect, but does the logic make sense?"
+ 3. **Synthesize Report**: Combine hard metrics + soft judgment into a severity-ranked report.
 
  ## Capabilities
 
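The Hard Metrics step added in this hunk can be sketched in plain Python. This is a minimal illustration, not the real `analyzer.py` or the agent's `Glob`/`Read` tools; `hard_metrics` and its returned keys are hypothetical names, assuming the `skills/<name>/SKILL.md` layout:

```python
# Illustrative sketch of the "Hard Metrics" checks (hypothetical helper,
# not the analyzer.py API): SKILL.md existence, quoted trigger phrases in
# the frontmatter, and presence of referenced references/ files.
from pathlib import Path

def hard_metrics(skill_path):
    """Gather objective facts about a target skill directory."""
    skill_path = Path(skill_path)
    skill_md = skill_path / "SKILL.md"
    facts = {
        "skill_md_exists": skill_md.is_file(),
        "has_quoted_triggers": False,
        "missing_references": [],
    }
    if facts["skill_md_exists"]:
        text = skill_md.read_text(encoding="utf-8")
        # Frontmatter sits between the first pair of '---' markers.
        parts = text.split("---")
        frontmatter = parts[1] if len(parts) >= 3 else ""
        facts["has_quoted_triggers"] = '"' in frontmatter
        # Any references/... path mentioned in the body should exist on disk.
        for line in text.splitlines():
            for token in line.split():
                cleaned = token.strip("`(),.")
                if cleaned.startswith("references/"):
                    if not (skill_path / cleaned).is_file():
                        facts["missing_references"].append(cleaned)
    return facts
```

The output of a sketch like this would feed step 3 (the synthesized report), alongside the soft-metric judgment calls.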
@@ -29,7 +34,7 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
  ## Input Requirements
 
  **Single Skill Analysis**:
- - Skill folder path (e.g., `~/.codex/skills/my-skill/`)
+ - Skill folder path (e.g., `~/.claude/skills/my-skill/` or `skills/my-skill/`)
  - Or SKILL.md file path directly
 
  **Batch Analysis**:
@@ -109,7 +114,7 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
  - <50: No examples or sample data
 
  ### 4. Trigger Detection (15%)
- - **What it measures**: How easily Codex can determine when to invoke this skill
+ - **What it measures**: How easily Claude can determine when to invoke this skill
  - **Key indicators**:
  - Clear "When to use" section
  - Specific trigger keywords identified
@@ -122,13 +127,13 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
  - <50: No clear triggers
 
  ### 5. Best Practices (15%)
- - **What it measures**: Adherence to Codex skill development standards
+ - **What it measures**: Adherence to Claude Code skill development standards
  - **Key indicators**:
- - Follows Codex naming conventions
+ - Follows Claude Code naming conventions (kebab-case folder = YAML name)
  - Proper Python structure (if applicable)
- - README.md and HOW_TO_USE.md present
- - No backup files or __pycache__
- - Proper file organization
+ - No backup files or `__pycache__`
+ - Proper file organization (SKILL.md + optional references/, scripts/)
+ - No platform-specific branding (e.g., "Codex" in a Claude Code skill)
  - **Scoring**:
  - 90-100: Exemplary adherence
  - 70-89: Minor deviations
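Per-dimension scores like the ones above feed a weighted total. As a hedged sketch: only the Trigger Detection and Best Practices weights (15% each) appear in this diff, so the other four weights below are illustrative placeholders chosen to sum with them to 100%, and `overall_score` is a hypothetical name:

```python
# Hedged sketch of a weighted 6-dimension total. Only the 15% weights for
# trigger_detection and best_practices come from the skill text; the rest
# are placeholders, not the analyzer's real values.
WEIGHTS = {
    "description": 0.20,        # placeholder
    "structure": 0.20,          # placeholder
    "examples": 0.15,           # placeholder
    "trigger_detection": 0.15,  # from the text (15%)
    "best_practices": 0.15,     # from the text (15%)
    "maintenance": 0.15,        # placeholder
}

def overall_score(dimension_scores):
    """Weighted average of per-dimension scores (each 0-100)."""
    return round(sum(WEIGHTS[d] * s for d, s in dimension_scores.items()), 1)
```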
@@ -153,35 +158,36 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
 
  **Basic Analysis**:
  ```
- "Analyze the quality of my skill-creator skill"
- "What's the quality score for ~/.codex/skills/code-review/"
- "Run quality analysis on the aws-solution-architect skill"
+ "Analyze the quality of my skill-development skill"
+ "What's the quality score for ~/.claude/skills/code-review/"
+ "Run quality analysis on the skill-debugger skill"
  ```
 
  **Detailed Report**:
  ```
  "Generate a detailed quality report for skill-debugger"
- "Analyze ~/.codex/skills/prompt-factory/ and create improvement recommendations"
+ "Analyze skills/skill-development/ and create improvement recommendations"
  ```
 
  **Batch Analysis**:
  ```
- "Analyze all skills in ~/.codex/skills/ and rank them by quality"
+ "Analyze all skills in skills/ and rank them by quality"
  "Compare quality scores across all my custom skills"
  ```
 
  **Comparative Analysis**:
  ```
- "Compare my code-review skill against Antigravity's best practices"
- "How does skill-tester compare to official skills in quality?"
+ "Compare my code-review skill against best practices"
+ "How does skill-tester compare to the skill-development skill in quality?"
  ```
 
  ## Scripts
 
  - `analyzer.py`: Core 6-dimension quality analysis engine
- - Usage: `python3 analyzer.py --skill-path /path/to/skill`
+ - Usage: `python3 skills/skill-quality-analyzer/analyzer.py --skill-path /path/to/skill`
+ - Run this as the **Hard Metrics** step before manual review
  - `validator.py`: YAML frontmatter and structure validation (merged into analyzer.py)
- - `best_practices_checker.py`: Checks adherence to Codex standards (merged into analyzer.py)
+ - `best_practices_checker.py`: Checks adherence to Claude Code standards (merged into analyzer.py)
 
  ## Best Practices
 
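The batch-analysis use case above (analyze all skills, rank by quality) can be illustrated with a small helper. `rank_skills` is a hypothetical name, not part of `analyzer.py`, and the tier cutoffs reuse the 90/70/50 bands from the scoring sections as an assumption:

```python
# Illustrative batch ranking (hypothetical helper): sort skills by score,
# highest first, and tag each with a tier based on the 90/70/50 bands
# assumed from the per-dimension scoring rubric.
def rank_skills(scores):
    """Rank a {skill_name: score} mapping, returning (name, score, tier) tuples."""
    def tier(score):
        if score >= 90:
            return "exemplary"
        if score >= 70:
            return "good"
        if score >= 50:
            return "needs work"
        return "poor"
    return [(name, score, tier(score))
            for name, score in sorted(scores.items(), key=lambda kv: -kv[1])]
```

For example, scores of 95, 72, and 60 would rank in that order with tiers "exemplary", "good", and "needs work".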
@@ -192,22 +198,15 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
  5. **Batch Analysis for Consistency**: Use batch mode to ensure consistent quality across all your skills
  6. **Compare Against Examples**: Use comparative analysis to learn from official skills
 
- ## Integration with Quality Systems
+ ## Integration with Skill Pipeline
 
- **Agent-KB Integration**:
- - Automatically records quality patterns from high-scoring skills
- - Learns common issues from low-scoring skills
- - Suggests improvements based on historical data
+ **With skill-development**: Use after creating a new skill to validate it meets the standard before shipping.
 
- **CodeDNA Alignment**:
- - Uses similar 6-dimension framework
- - Consistent scoring methodology
- - Shares best practices database
+ **With skill-debugger**: Run quality-analyzer first (structural issues), then skill-debugger (trigger issues).
 
- **CI/CD Integration**:
- - Can be used in pre-commit hooks
- - Quality gates for skill deployment
- - Automated quality regression testing
+ **With skill-tester**: Quality-analyzer checks static quality; skill-tester validates behavioral correctness.
+
+ **Recommended pipeline**: `skill-development` → `skill-quality-analyzer` → `skill-debugger` → `skill-tester`
 
  ## Limitations
 
@@ -220,10 +219,9 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
 
  ## When NOT to Use This Skill
 
- - **Testing Functional Correctness**: Use skill-tester instead
- - **Runtime Debugging**: Use skill-debugger for execution issues
- - **Documentation Generation**: Use skill-doc-generator for creating docs
- - **Initial Skill Creation**: Use skill-creator or templates first, then analyze
+ - **Testing trigger behavior**: Use skill-debugger instead (why isn't it triggering?)
+ - **Verifying behavioral correctness**: Use skill-tester instead (does it produce the right output?)
+ - **Creating a new skill**: Use skill-development first, then analyze
 
  ## Quality Thresholds
 
@@ -243,5 +241,5 @@ To perform a high-quality analysis, follow this **Hybrid Workflow**:
  | Structure | Code organization | Section organization & YAML |
  | Examples | Test coverage | Usage examples & sample data |
  | Patterns | Design patterns | Trigger detection |
- | Standards | Coding standards | Codex best practices |
+ | Standards | Coding standards | Claude Code best practices |
  | Maintenance | Cyclomatic complexity | File cleanliness & modularity |