oh-my-customcode 0.33.1 → 0.35.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. package/README.md +19 -19
  2. package/package.json +1 -1
  3. package/templates/.claude/hooks/hooks.json +10 -0
  4. package/templates/.claude/hooks/scripts/cost-cap-advisor.sh +71 -0
  5. package/templates/.claude/hooks/scripts/stuck-detector.sh +62 -2
  6. package/templates/.claude/hooks/scripts/task-outcome-recorder.sh +34 -7
  7. package/templates/.claude/rules/MUST-agent-design.md +2 -2
  8. package/templates/.claude/skills/analysis/SKILL.md +2 -2
  9. package/templates/.claude/skills/audit-agents/SKILL.md +1 -1
  10. package/templates/.claude/skills/create-agent/SKILL.md +1 -1
  11. package/templates/.claude/skills/dev-refactor/SKILL.md +76 -0
  12. package/templates/.claude/skills/dev-review/SKILL.md +76 -0
  13. package/templates/.claude/skills/evaluator-optimizer/SKILL.md +256 -0
  14. package/templates/.claude/skills/fix-refs/SKILL.md +1 -1
  15. package/templates/.claude/skills/help/SKILL.md +2 -2
  16. package/templates/.claude/skills/lists/SKILL.md +2 -2
  17. package/templates/.claude/skills/monitoring-setup/SKILL.md +1 -1
  18. package/templates/.claude/skills/npm-audit/SKILL.md +1 -1
  19. package/templates/.claude/skills/npm-publish/SKILL.md +1 -1
  20. package/templates/.claude/skills/npm-version/SKILL.md +1 -1
  21. package/templates/.claude/skills/research/SKILL.md +116 -0
  22. package/templates/.claude/skills/sauron-watch/SKILL.md +1 -1
  23. package/templates/.claude/skills/status/SKILL.md +2 -2
  24. package/templates/.claude/skills/task-decomposition/SKILL.md +13 -0
  25. package/templates/.claude/skills/update-docs/SKILL.md +1 -1
  26. package/templates/.claude/skills/update-external/SKILL.md +1 -1
  27. package/templates/.claude/skills/worker-reviewer-pipeline/SKILL.md +10 -0
  28. package/templates/.claude/statusline.sh +6 -0
  29. package/templates/CLAUDE.md.en +21 -21
  30. package/templates/CLAUDE.md.ko +21 -21
  31. package/templates/guides/claude-code/12-workflow-patterns.md +182 -0
  32. package/templates/manifest.json +3 -3
@@ -0,0 +1,256 @@
+ ---
+ name: evaluator-optimizer
+ description: Parameterized evaluator-optimizer loop for quality-critical output with configurable rubrics
+ scope: core
+ user-invocable: false
+ ---
+
+ # Evaluator-Optimizer Skill
+
+ ## Purpose
+
+ General-purpose iterative refinement loop. A generator agent produces output, an evaluator agent scores it against a configurable rubric, and the loop continues until the quality gate is met or max iterations are reached.
+
+ This skill generalizes the worker-reviewer-pipeline pattern beyond code review to any domain requiring quality-critical output: documentation, architecture decisions, test plans, configurations, and more.
+
+ ## Configuration Schema
+
+ ```yaml
+ evaluator-optimizer:
+   generator:
+     agent: {subagent_type}   # Agent that produces output
+     model: sonnet            # Default model
+   evaluator:
+     agent: {subagent_type}   # Agent that reviews output
+     model: opus              # Evaluator benefits from stronger reasoning
+   rubric:
+     - criterion: {name}
+       weight: {0.0-1.0}
+       description: {what to evaluate}
+   quality_gate:
+     type: all_pass | majority_pass | score_threshold
+     threshold: 0.8           # For score_threshold type
+   max_iterations: 3          # Default, hard cap: 5
+ ```
+
+ ### Parameter Details
+
+ | Parameter | Required | Default | Description |
+ |-----------|----------|---------|-------------|
+ | `generator.agent` | Yes | — | Subagent type that produces output |
+ | `generator.model` | No | `sonnet` | Model for generation |
+ | `evaluator.agent` | Yes | — | Subagent type that evaluates output |
+ | `evaluator.model` | No | `opus` | Model for evaluation (stronger reasoning preferred) |
+ | `rubric` | Yes | — | List of evaluation criteria with weights |
+ | `quality_gate.type` | No | `score_threshold` | Gate strategy |
+ | `quality_gate.threshold` | No | `0.8` | Score threshold (for `score_threshold` type) |
+ | `max_iterations` | No | `3` | Max refinement loops (hard cap: 5) |
+
+ ## Quality Gate Types
+
+ | Type | Behavior |
+ |------|----------|
+ | `all_pass` | Every rubric criterion must pass |
+ | `majority_pass` | >50% of weighted criteria pass |
+ | `score_threshold` | Weighted average score >= threshold |
+
+ ### Gate Evaluation Logic
+
+ - **all_pass**: Each criterion scored individually. All must receive `pass: true`.
+ - **majority_pass**: Sum weights of passing criteria. If > 0.5 of total weight, gate passes.
+ - **score_threshold**: Compute weighted average: `sum(score_i * weight_i) / sum(weight_i)`. Compare against threshold.
+
+ ## Workflow
+
+ ```
+ 1. Generator produces output
+    → Orchestrator spawns generator agent with task prompt
+    → Generator returns output artifact
+
+ 2. Evaluator scores against rubric
+    → Orchestrator spawns evaluator agent with:
+      - The output artifact
+      - The rubric criteria
+      - Instructions to produce verdict JSON
+    → Evaluator returns structured verdict
+
+ 3. Quality gate check:
+    - PASS → return output + final verdict
+    - FAIL → extract feedback, append to generator prompt → iteration N+1
+
+ 4. Max iterations reached → return best output + warning
+    → "Best" = output from iteration with highest weighted score
+ ```
+
+ ### Iteration Flow Diagram
+
+ ```
+ ┌─────────────────────────────────────────────────────┐
+ │                    Orchestrator                     │
+ │                                                     │
+ │   ┌──────────┐     ┌──────────┐     ┌──────────┐    │
+ │   │ Generate │────→│ Evaluate │────→│   Gate   │    │
+ │   │ (iter N) │     │          │     │  Check   │    │
+ │   └──────────┘     └──────────┘     └────┬─────┘    │
+ │        ↑                                 │          │
+ │        │     ┌──────────┐     FAIL       │ PASS     │
+ │        └─────│ Feedback │←───────────────┤          │
+ │              └──────────┘                ↓          │
+ │                                       Return        │
+ └─────────────────────────────────────────────────────┘
+ ```
+
+ ## Stopping Criteria Display
+
+ ```
+ [Evaluator-Optimizer]
+ ├── Generator: {agent}:{model}
+ ├── Evaluator: {agent}:{model}
+ ├── Max iterations: {max_iterations} (hard cap: 5)
+ ├── Quality gate: {type} (threshold: {threshold})
+ └── Rubric: {N} criteria
+ ```
+
+ Display this at the start of the loop to provide transparency into the refinement configuration.
+
+ ## Verdict Format
+
+ The evaluator MUST return a structured verdict in this format:
+
+ ```json
+ {
+   "status": "pass | fail",
+   "iteration": 2,
+   "score": 0.85,
+   "rubric_results": [
+     {"criterion": "clarity", "pass": true, "score": 0.9, "feedback": "Clear structure and logical flow"},
+     {"criterion": "accuracy", "pass": true, "score": 0.8, "feedback": "All facts verified, one minor imprecision in section 3"}
+   ],
+   "improvement_summary": "Section 3 terminology tightened. Examples added to section 2."
+ }
+ ```
+
+ ### Verdict Fields
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `status` | `pass` or `fail` | Overall quality gate result |
+ | `iteration` | number | Current iteration number (1-indexed) |
+ | `score` | number (0.0-1.0) | Weighted average score across all criteria |
+ | `rubric_results` | array | Per-criterion evaluation details |
+ | `improvement_summary` | string | Summary of changes from previous iteration (empty on iteration 1) |
+
+ ## Domain Examples
+
+ | Domain | Generator | Evaluator | Rubric Focus |
+ |--------|-----------|-----------|--------------|
+ | Code review | `lang-*-expert` | opus reviewer | Correctness, style, security |
+ | Documentation | `arch-documenter` | opus reviewer | Completeness, clarity, accuracy |
+ | Architecture | Plan agent | opus reviewer | No SPOFs, no circular deps |
+ | Test plans | `qa-planner` | `qa-engineer` | Coverage, edge cases, feasibility |
+ | Agent creation | `mgr-creator` | opus reviewer | Frontmatter validity, R006 compliance |
+ | Security audit | `sec-codeql-expert` | opus reviewer | Vulnerability coverage, false positive rate |
+
+ ### Example: Documentation Review
+
+ ```yaml
+ evaluator-optimizer:
+   generator:
+     agent: arch-documenter
+     model: sonnet
+   evaluator:
+     agent: general-purpose
+     model: opus
+   rubric:
+     - criterion: completeness
+       weight: 0.3
+       description: All sections present, no gaps in coverage
+     - criterion: clarity
+       weight: 0.3
+       description: Clear language, no ambiguity, proper examples
+     - criterion: accuracy
+       weight: 0.25
+       description: All technical details correct and verifiable
+     - criterion: consistency
+       weight: 0.15
+       description: Consistent terminology, formatting, and style
+   quality_gate:
+     type: score_threshold
+     threshold: 0.8
+   max_iterations: 3
+ ```
+
+ ### Example: Code Implementation
+
+ ```yaml
+ evaluator-optimizer:
+   generator:
+     agent: lang-typescript-expert
+     model: sonnet
+   evaluator:
+     agent: general-purpose
+     model: opus
+   rubric:
+     - criterion: correctness
+       weight: 0.35
+       description: Code compiles, logic is correct, edge cases handled
+     - criterion: style
+       weight: 0.2
+       description: Follows project conventions, clean and readable
+     - criterion: security
+       weight: 0.25
+       description: No injection risks, proper input validation
+     - criterion: performance
+       weight: 0.2
+       description: No unnecessary allocations, efficient algorithms
+   quality_gate:
+     type: all_pass
+   max_iterations: 3
+ ```
+
+ ## Integration
+
+ | Rule | Integration |
+ |------|-------------|
+ | R009 | Generator and evaluator run sequentially (dependent — evaluator needs generator output) |
+ | R010 | Orchestrator configures and invokes the loop; generator and evaluator agents execute via Agent tool |
+ | R007 | Each iteration displays agent identification for both generator and evaluator |
+ | R008 | Tool calls within generator/evaluator follow tool identification rules |
+ | R013 | Ecomode: return verdict summary only, skip per-criterion details |
+ | R015 | Display configuration block at loop start for intent transparency |
+
+ ## Ecomode Behavior
+
+ When ecomode is active (R013), compress output:
+
+ **Normal mode:**
+ ```
+ [Evaluator-Optimizer] Iteration 2/3
+ ├── Generator: lang-typescript-expert:sonnet → produced 45-line module
+ ├── Evaluator: general-purpose:opus → scored 0.85
+ ├── Rubric: correctness ✓(0.9), style ✓(0.8), security ✓(0.85), performance ✓(0.8)
+ └── Gate: score_threshold(0.8) → PASS
+ ```
+
+ **Ecomode:**
+ ```
+ [EO] iter 2/3 → 0.85 → PASS
+ ```
+
+ ## Error Handling
+
+ | Scenario | Action |
+ |----------|--------|
+ | Generator fails to produce output | Retry once with simplified prompt; if still fails, abort with error |
+ | Evaluator returns malformed verdict | Retry once; if still malformed, treat as fail with score 0.0 |
+ | Max iterations reached without passing | Return best-scored output with warning: "Quality gate not met after {N} iterations" |
+ | Rubric has zero total weight | Reject configuration, report error before starting loop |
+ | Hard cap exceeded in config | Clamp `max_iterations` to 5, emit warning |
+
+ ## Constraints
+
+ - This skill does NOT use `context: fork` — it operates within the caller's context
+ - Generator and evaluator MUST be different agent invocations (no self-review)
+ - The evaluator prompt MUST include the full rubric to ensure consistent scoring
+ - Iteration state (best score, best output) is tracked by the orchestrator
+ - The hard cap of 5 iterations prevents runaway refinement loops
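The gate strategies above are specified only in prose and one formula; as a non-authoritative sketch, the check might look like this in Python (the function name is illustrative, and criterion weights are assumed to be joined onto each rubric result, which the package's verdict format does not itself guarantee):

```python
# Sketch of the three quality-gate strategies from the evaluator-optimizer
# skill. Each rubric result is assumed to carry its config weight.
def gate_passes(rubric_results, gate_type="score_threshold", threshold=0.8):
    """Return True if the rubric results satisfy the quality gate."""
    total_weight = sum(r["weight"] for r in rubric_results)
    if total_weight == 0:
        # Per the error-handling table: zero total weight is rejected up front.
        raise ValueError("rubric has zero total weight")
    if gate_type == "all_pass":
        return all(r["pass"] for r in rubric_results)
    if gate_type == "majority_pass":
        # Sum of passing weights must exceed half the total weight.
        passing = sum(r["weight"] for r in rubric_results if r["pass"])
        return passing > 0.5 * total_weight
    # score_threshold: weighted average score compared against the threshold.
    score = sum(r["score"] * r["weight"] for r in rubric_results) / total_weight
    return score >= threshold
```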
@@ -1,5 +1,5 @@
  ---
- name: fix-refs
+ name: omcustom:fix-refs
  description: Fix broken agent references and symlinks
  scope: harness
  argument-hint: "[agent-name] [--all] [--dry-run]"
@@ -1,7 +1,7 @@
  ---
- name: help
+ name: omcustom:help
  description: Show help information for commands and system
- scope: core
+ scope: harness
  argument-hint: "[command] [--agents] [--rules]"
  ---
 
@@ -1,7 +1,7 @@
  ---
- name: lists
+ name: omcustom:lists
  description: Show all available commands
- scope: core
+ scope: harness
  argument-hint: "[--category <category>] [--verbose]"
  ---
 
@@ -1,5 +1,5 @@
  ---
- name: monitoring-setup
+ name: omcustom:monitoring-setup
  description: Enable/disable OpenTelemetry console monitoring for Claude Code usage tracking
  scope: package
  argument-hint: "[enable|disable|status]"
@@ -1,5 +1,5 @@
  ---
- name: npm-audit
+ name: omcustom:npm-audit
  description: Audit npm dependencies for security and updates
  scope: package
  argument-hint: "[--fix] [--production]"
@@ -1,5 +1,5 @@
  ---
- name: npm-publish
+ name: omcustom:npm-publish
  description: Publish package to npm registry with pre-checks
  scope: package
  argument-hint: "[--tag <tag>] [--dry-run]"
@@ -1,5 +1,5 @@
  ---
- name: npm-version
+ name: omcustom:npm-version
  description: Manage semantic versions for npm packages
  scope: package
  argument-hint: "<major|minor|patch> [--no-tag] [--no-commit]"
@@ -20,10 +20,124 @@ Orchestrates 10 parallel research teams for comprehensive deep analysis of any t
  /research Rust async runtime comparison
  ```
 
+ ## When NOT to Use
+
+ | Scenario | Better Alternative |
+ |----------|--------------------|
+ | Simple factual question | Direct answer or single WebSearch |
+ | Single-file code review | `/dev-review` with specific file |
+ | Known solution implementation | `/structured-dev-cycle` |
+ | Topic with < 3 comparison dimensions | Single Explore agent |
+
+ **Pre-execution check**: If the query can be answered with < 3 sources, skip 10-team research.
+
+ ## Pre-flight Guards
+
+ Before executing the 10-team research workflow, the agent MUST run these checks. Research is a high-cost operation (~$8-15) — these guards prevent wasteful execution.
+
+ ### Guard Levels
+
+ | Level | Meaning | Action |
+ |-------|---------|--------|
+ | PASS | No issues detected | Proceed with research |
+ | INFO | Minor suggestion | Log note, proceed |
+ | WARN | Potentially wasteful | Show warning with cost estimate, ask confirmation |
+ | GATE | Wrong tool — use simpler alternative | Block execution, suggest alternative |
+
+ ### Guard 1: Query Complexity Assessment
+
+ **Level**: GATE or PASS
+
+ **Check**: Assess if the query requires multi-team research
+
+ ```
+ # Simple factual questions → GATE
+ indicators_simple:
+   - Query is < 10 words
+   - Query asks "what is", "how to", "when was" (factual)
+   - Query has a single definitive answer
+   - Can be answered from a single documentation source
+
+ # Complex research questions → PASS
+ indicators_complex:
+   - Query involves comparison of 3+ alternatives
+   - Query requires analysis across multiple dimensions
+   - Query mentions "compare", "evaluate", "analyze", "research"
+   - Query references a repository or ecosystem for deep analysis
+ ```
+
+ **Action (GATE)**: `[Pre-flight] GATE: Query appears to be a simple factual question. Use direct answer or single WebSearch instead. 10-team research (~$8-15) would be wasteful. Override with /research --force if intended.`
+
+ ### Guard 2: Single-File Review Detection
+
+ **Level**: GATE
+
+ **Check**: If the query references a single file for review
+
+ ```
+ # Detection
+ - Query mentions a specific file path (e.g., src/main.go)
+ - Query asks to "review" or "analyze" a single file
+ - No broader context requested
+ ```
+
+ **Action**: `[Pre-flight] GATE: For single-file review, use /dev-review {file} instead. Research is for multi-source analysis.`
+
+ ### Guard 3: Known Solution Detection
+
+ **Level**: INFO
+
+ **Check**: If the query is about implementing a known solution
+
+ ```
+ # Detection
+ keywords: implement, build, create, add feature, 구현, 만들어
+ # AND the solution approach is well-known (not requiring research)
+ ```
+
+ **Action**: `[Pre-flight] INFO: If the implementation approach is already known, consider /structured-dev-cycle instead of research. Proceeding with research.`
+
+ ### Guard 4: Context Budget Check
+
+ **Level**: WARN
+
+ **Check**: Estimate context impact of 10-team research
+
+ ```bash
+ # Check current context usage from statusline data
+ CONTEXT_FILE="/tmp/.claude-context-$PPID"
+ if [ -f "$CONTEXT_FILE" ]; then
+   context_pct=$(cat "$CONTEXT_FILE")
+   if [ "$context_pct" -gt 40 ]; then
+     : # WARN — research will consume significant additional context
+   fi
+ fi
+ ```
+
+ **Action**: `[Pre-flight] WARN: Context usage at {pct}%. 10-team research typically adds 30-40% context. Consider /compact before proceeding, or results may be truncated.`
+
+ ### Display Format
+
+ ```
+ [Pre-flight] research
+ ├── Query complexity: PASS — multi-dimensional comparison detected
+ ├── Single-file review: PASS
+ ├── Known solution: PASS
+ └── Context budget: WARN — context at 45%, research adds ~35%
+ Result: PROCEED WITH CAUTION (0 GATE, 1 WARN, 0 INFO)
+ Cost estimate: ~$8-15 for 10-team parallel research
+ ```
+
+ If any GATE: block and suggest alternative. User can override with `--force`.
+ If any WARN: show warning with cost context, ask user to confirm.
+ If only PASS/INFO: proceed automatically.
+
  ## Architecture — 4 Phases
 
  ### Phase 1: Parallel Research (10 teams, batched per R009)
 
+ **Step 0**: Pre-flight guards pass (see Pre-flight Guards section)
+
  Teams operate in breadth/depth pairs across 5 domains:
 
  | Pair | Domain | Team | Role | Focus |
@@ -258,6 +372,8 @@ Before execution:
  └── Phase 4: Report + GitHub issue
 
  Estimated: {time} | Teams: 10 | Models: sonnet → opus → codex
+ Stopping: max 30 verification rounds, convergence at 0 contradictions
+ Cost: ~$8-15 (10 teams × sonnet + opus verification)
  Execute? [Y/n]
  ```
 
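The pre-flight decision rule in the research skill above (any GATE blocks, any WARN requires confirmation, PASS/INFO proceed) can be sketched as follows; the function name and return labels are illustrative assumptions, not identifiers from the package:

```python
# Sketch of guard-result aggregation for the research skill's pre-flight
# checks. Input is the list of guard levels produced by Guards 1-4.
def preflight_decision(guard_levels):
    """Map guard levels to an execution decision."""
    if "GATE" in guard_levels:
        return "BLOCK"    # suggest the simpler alternative; user may --force
    if "WARN" in guard_levels:
        return "CONFIRM"  # show the cost estimate and ask for confirmation
    return "PROCEED"      # only PASS/INFO present: run automatically
```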
@@ -1,5 +1,5 @@
  ---
- name: sauron-watch
+ name: omcustom:sauron-watch
  description: Full R017 verification (5+3 rounds) before commit
  scope: harness
  disable-model-invocation: true
@@ -1,7 +1,7 @@
  ---
- name: status
+ name: omcustom:status
  description: Show system status and health checks
- scope: core
+ scope: harness
  argument-hint: "[--verbose] [--health]"
  ---
 
@@ -20,6 +20,19 @@ Decomposition is **recommended** when any of these thresholds are met:
  | Domains involved | > 2 domains | Requires multiple specialists |
  | Agent types needed | > 2 types | Cross-specialty coordination |
 
+ ### Step 0: Pattern Selection
+
+ Before decomposing, select the appropriate workflow pattern:
+
+ | Pattern | When to Use | Primitive |
+ |---------|-------------|-----------|
+ | Sequential | Steps must execute in order, each depends on previous | dag-orchestration (linear) |
+ | Parallel | Independent subtasks with no shared state | Agent tool (R009) or Agent Teams (R018) |
+ | Evaluator-Optimizer | Quality-critical output needing iterative refinement | worker-reviewer-pipeline |
+ | Orchestrator | Complex multi-step with dynamic routing | Routing skills (secretary/dev-lead/de-lead/qa-lead) |
+
+ **Decision**: If the task has independent subtasks → Parallel. If quality-critical → add EO review cycle. If multi-step with dependencies → Sequential/Orchestrator.
+
  ## Decomposition Process
 
  ```
@@ -1,5 +1,5 @@
  ---
- name: update-docs
+ name: omcustom:update-docs
  description: Sync documentation with project structure
  scope: harness
  argument-hint: "[--check] [--target <path>]"
@@ -1,5 +1,5 @@
  ---
- name: update-external
+ name: omcustom:update-external
  description: Update agents from external sources (GitHub, docs, etc.)
  scope: harness
  argument-hint: "[agent-name] [--check] [--force]"
@@ -98,6 +98,16 @@ When Agent Teams is NOT available, falls back to sequential Agent tool calls:
  Agent(worker) → result → Agent(reviewer) → verdict → Agent(worker) → ...
  ```
 
+ ## Stopping Criteria Display
+
+ Before execution, display:
+ ```
+ [Worker-Reviewer Pipeline]
+ ├── Max iterations: {max_iterations} (default: 3, hard cap: 5)
+ ├── Quality gate: {pass_threshold}% approval required
+ └── Early stop: All reviewers approve → stop immediately
+ ```
+
  ## Display Format
 
  ```
@@ -69,6 +69,12 @@ IFS=$'\t' read -r model_name project_dir ctx_pct ctx_size cost_usd <<< "$(
  ] | @tsv'
  )"
 
+ # ---------------------------------------------------------------------------
+ # 4b. Cost & context data bridge — write to temp file for hooks
+ # ---------------------------------------------------------------------------
+ COST_BRIDGE_FILE="/tmp/.claude-cost-${PPID}"
+ printf '%s\t%s\t%s\n' "$cost_usd" "$ctx_pct" "$(date +%s)" > "$COST_BRIDGE_FILE" 2>/dev/null || true
+
  # ---------------------------------------------------------------------------
  # 5. Model display name + color (bash 3.2 compatible case pattern matching)
  # Model detection (kept for internal reference, not displayed in statusline)
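The statusline change above writes cost, context percentage, and a Unix timestamp as one tab-separated record. A hook consuming the bridge file might parse that record like this (a sketch under the assumption that the field order matches the `printf` above; the function name is hypothetical):

```python
# Sketch: parse one record from the statusline cost bridge file,
# written as: cost_usd <TAB> ctx_pct <TAB> unix_timestamp
def parse_cost_bridge(line):
    """Return (cost_usd, ctx_pct, timestamp) from a bridge-file line."""
    cost_usd, ctx_pct, ts = line.rstrip("\n").split("\t")
    return float(cost_usd), int(ctx_pct), int(ts)
```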
@@ -151,31 +151,31 @@ Violation = immediate correction. No exception for "small changes".
 
  | Command | Description |
  |---------|-------------|
- | `/analysis` | Analyze project and auto-configure customizations |
- | `/create-agent` | Create a new agent |
- | `/update-docs` | Sync documentation with project structure |
- | `/update-external` | Update agents from external sources |
- | `/audit-agents` | Audit agent dependencies |
- | `/fix-refs` | Fix broken references |
+ | `/omcustom:analysis` | Analyze project and auto-configure customizations |
+ | `/omcustom:create-agent` | Create a new agent |
+ | `/omcustom:update-docs` | Sync documentation with project structure |
+ | `/omcustom:update-external` | Update agents from external sources |
+ | `/omcustom:audit-agents` | Audit agent dependencies |
+ | `/omcustom:fix-refs` | Fix broken references |
  | `/dev-review` | Review code for best practices |
  | `/dev-refactor` | Refactor code |
  | `/memory-save` | Save session context to claude-mem |
  | `/memory-recall` | Search and recall memories |
- | `/monitoring-setup` | Enable/disable OTel console monitoring |
- | `/npm-publish` | Publish package to npm registry |
- | `/npm-version` | Manage semantic versions |
- | `/npm-audit` | Audit dependencies |
+ | `/omcustom:monitoring-setup` | Enable/disable OTel console monitoring |
+ | `/omcustom:npm-publish` | Publish package to npm registry |
+ | `/omcustom:npm-version` | Manage semantic versions |
+ | `/omcustom:npm-audit` | Audit dependencies |
  | `/codex-exec` | Execute Codex CLI prompts |
  | `/optimize-analyze` | Analyze bundle and performance |
  | `/optimize-bundle` | Optimize bundle size |
  | `/optimize-report` | Generate optimization report |
  | `/research` | 10-team parallel deep analysis and cross-verification |
  | `/deep-plan` | Research-validated planning (research → plan → verify) |
- | `/sauron-watch` | Full R017 verification |
+ | `/omcustom:sauron-watch` | Full R017 verification |
  | `/structured-dev-cycle` | 6-stage structured development cycle (Plan → Verify → Implement → Verify → Compound → Done) |
- | `/lists` | Show all available commands |
- | `/status` | Show system status |
- | `/help` | Show help information |
+ | `/omcustom:lists` | Show all available commands |
+ | `/omcustom:status` | Show system status |
+ | `/omcustom:help` | Show help information |
 
  ## Project Structure
 
@@ -184,7 +184,7 @@ project/
  +-- CLAUDE.md              # Entry point
  +-- .claude/
  |   +-- agents/            # Subagent definitions (44 files)
- |   +-- skills/            # Skills (70 directories)
+ |   +-- skills/            # Skills (71 directories)
  |   +-- rules/             # Global rules (R000-R019)
  |   +-- hooks/             # Hook scripts (memory, HUD)
  |   +-- contexts/          # Context files (ecomode)
@@ -250,15 +250,15 @@ Task tool + routing skills remain the fallback for simple/cost-sensitive tasks.
 
  ```bash
  # Project analysis
- /analysis
+ /omcustom:analysis
 
  # Show all commands
- /lists
+ /omcustom:lists
 
  # Agent management
- /create-agent my-agent
- /update-docs
- /audit-agents
+ /omcustom:create-agent my-agent
+ /omcustom:update-docs
+ /omcustom:audit-agents
 
  # Code review
  /dev-review src/main.go
@@ -268,7 +268,7 @@ Task tool + routing skills remain the fallback for simple/cost-sensitive tasks.
  /memory-recall authentication
 
  # Verification
- /sauron-watch
+ /omcustom:sauron-watch
  ```
 
  ## External Dependencies