claude-skill-lint 0.3.0 → 0.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +4 -30
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -16,41 +16,15 @@ Caught in under two seconds. No LLM calls. Deterministic.
 
  "Tolerate" is not "work correctly."
 
- ## Two Layers
+ ## Companion: /te-review
 
- | | `claude-skill-lint` | `/te-review` |
- |-|-------------|-------------|
- | **What** | Structural validation | Token efficiency audit |
- | **Speed** | <2s, every PR | ~60s, on demand |
- | **Approach** | Deterministic CI — no LLM | LLM-powered deep analysis |
- | **Catches** | Broken refs, orphans, cycles, collisions, parse errors, size violations, quality regressions | Redundant content, output format waste, instruction bloat, model routing, architecture-level optimization |
+ This repo also includes [`/te-review`](skills/te-review.md), an LLM-powered deep audit skill that goes beyond structural validation — analyzing token efficiency, redundancy, output constraints, and instruction quality. It produces a scored assessment (0-24) with estimated token savings per fix.
 
- claude-skill-lint enforces the structural foundation. `/te-review` (included in [`skills/te-review.md`](skills/te-review.md)) goes deeper — four-pass analysis across architecture, efficiency, and instruction quality, producing a scored assessment (0-24) with a prioritized optimization plan and estimated token savings per fix.
-
- We ran te-review against itself. It scored 20/24. The main finding: the skill didn't constrain its own output the way it tells others to. After adding "top 5 findings per category" and "each subagent returns top 10 findings only," estimated output token savings on large suite reviews dropped by ~50%. The tool practices what it preaches.
-
- Install the deep audit skill:
+ claude-skill-lint catches structural bugs deterministically in CI. `/te-review` is an on-demand complement for deeper optimization.
 
  ```bash
- cp node_modules/claude-skill-lint/skills/te-review.md ~/.claude/commands/te-review.md
- ```
-
- Then in any Claude Code session:
-
+ curl -o ~/.claude/commands/te-review.md https://raw.githubusercontent.com/shinytoyrobots/claude-skills-linter/main/skills/te-review.md
  ```
- /te-review suite # Full suite audit (scored 0-24)
- /te-review audit my-skill # Single skill deep dive
- /te-review compare old.md new.md # Before/after token impact
- ```
-
- ### What te-review checks
-
- Four sequential passes, each producing up to 5 findings per severity level:
-
- 1. **Structural Audit** — CLAUDE.md bloat, unused tool declarations, reference depth, subagent model routing, file size
- 2. **Redundancy Detection** — skill-context overlap, cross-skill duplication, CLAUDE.md-skill overlap. High-fanout context files are flagged as highest-ROI optimization targets.
- 3. **Output Efficiency** — missing format constraints, missing conciseness directives, unbounded output sections, prose where structured would suffice. Output tokens cost 5x input — this pass often finds the biggest savings.
- 4. **Instruction Quality** — motivational fluff, politeness tokens, filler phrases, default-behavior instructions, conflicting constraints, emphasis overuse
 
  ## What It Actually Found
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "claude-skill-lint",
- "version": "0.3.0",
+ "version": "0.4.1",
  "description": "A CLI tool that validates Claude Code skill files for structural correctness",
  "type": "module",
  "main": "dist/index.js",