oh-my-customcode 0.33.0 → 0.34.0
This diff shows the content changes between two publicly released versions of the package, as they appear in their respective public registries. It is provided for informational purposes only.
- package/README.md +22 -21
- package/package.json +1 -1
- package/templates/.claude/hooks/scripts/stuck-detector.sh +61 -1
- package/templates/.claude/hooks/scripts/task-outcome-recorder.sh +2 -1
- package/templates/.claude/rules/MUST-agent-design.md +2 -2
- package/templates/.claude/skills/analysis/SKILL.md +2 -2
- package/templates/.claude/skills/audit-agents/SKILL.md +1 -1
- package/templates/.claude/skills/create-agent/SKILL.md +1 -1
- package/templates/.claude/skills/deep-plan/SKILL.md +292 -0
- package/templates/.claude/skills/dev-refactor/SKILL.md +11 -0
- package/templates/.claude/skills/dev-review/SKILL.md +11 -0
- package/templates/.claude/skills/evaluator-optimizer/SKILL.md +256 -0
- package/templates/.claude/skills/fix-refs/SKILL.md +1 -1
- package/templates/.claude/skills/help/SKILL.md +2 -2
- package/templates/.claude/skills/lists/SKILL.md +2 -2
- package/templates/.claude/skills/monitoring-setup/SKILL.md +1 -1
- package/templates/.claude/skills/npm-audit/SKILL.md +1 -1
- package/templates/.claude/skills/npm-publish/SKILL.md +1 -1
- package/templates/.claude/skills/npm-version/SKILL.md +1 -1
- package/templates/.claude/skills/research/SKILL.md +13 -0
- package/templates/.claude/skills/sauron-watch/SKILL.md +1 -1
- package/templates/.claude/skills/status/SKILL.md +2 -2
- package/templates/.claude/skills/task-decomposition/SKILL.md +13 -0
- package/templates/.claude/skills/update-docs/SKILL.md +1 -1
- package/templates/.claude/skills/update-external/SKILL.md +1 -1
- package/templates/.claude/skills/worker-reviewer-pipeline/SKILL.md +10 -0
- package/templates/CLAUDE.md.en +22 -21
- package/templates/CLAUDE.md.ko +22 -21
- package/templates/guides/claude-code/12-workflow-patterns.md +182 -0
- package/templates/manifest.json +3 -3
package/templates/.claude/skills/evaluator-optimizer/SKILL.md
ADDED

````diff
@@ -0,0 +1,256 @@
+---
+name: evaluator-optimizer
+description: Parameterized evaluator-optimizer loop for quality-critical output with configurable rubrics
+scope: core
+user-invocable: false
+---
+
+# Evaluator-Optimizer Skill
+
+## Purpose
+
+General-purpose iterative refinement loop. A generator agent produces output, an evaluator agent scores it against a configurable rubric, and the loop continues until the quality gate is met or max iterations are reached.
+
+This skill generalizes the worker-reviewer-pipeline pattern beyond code review to any domain requiring quality-critical output: documentation, architecture decisions, test plans, configurations, and more.
+
+## Configuration Schema
+
+```yaml
+evaluator-optimizer:
+  generator:
+    agent: {subagent_type}        # Agent that produces output
+    model: sonnet                 # Default model
+  evaluator:
+    agent: {subagent_type}        # Agent that reviews output
+    model: opus                   # Evaluator benefits from stronger reasoning
+  rubric:
+    - criterion: {name}
+      weight: {0.0-1.0}
+      description: {what to evaluate}
+  quality_gate:
+    type: all_pass | majority_pass | score_threshold
+    threshold: 0.8                # For score_threshold type
+  max_iterations: 3               # Default, hard cap: 5
+```
+
+### Parameter Details
+
+| Parameter | Required | Default | Description |
+|-----------|----------|---------|-------------|
+| `generator.agent` | Yes | — | Subagent type that produces output |
+| `generator.model` | No | `sonnet` | Model for generation |
+| `evaluator.agent` | Yes | — | Subagent type that evaluates output |
+| `evaluator.model` | No | `opus` | Model for evaluation (stronger reasoning preferred) |
+| `rubric` | Yes | — | List of evaluation criteria with weights |
+| `quality_gate.type` | No | `score_threshold` | Gate strategy |
+| `quality_gate.threshold` | No | `0.8` | Score threshold (for `score_threshold` type) |
+| `max_iterations` | No | `3` | Max refinement loops (hard cap: 5) |
+
+## Quality Gate Types
+
+| Type | Behavior |
+|------|----------|
+| `all_pass` | Every rubric criterion must pass |
+| `majority_pass` | >50% of weighted criteria pass |
+| `score_threshold` | Weighted average score >= threshold |
+
+### Gate Evaluation Logic
+
+- **all_pass**: Each criterion scored individually. All must receive `pass: true`.
+- **majority_pass**: Sum weights of passing criteria. If > 0.5 of total weight, gate passes.
+- **score_threshold**: Compute weighted average: `sum(score_i * weight_i) / sum(weight_i)`. Compare against threshold.
+
+## Workflow
+
+```
+1. Generator produces output
+   → Orchestrator spawns generator agent with task prompt
+   → Generator returns output artifact
+
+2. Evaluator scores against rubric
+   → Orchestrator spawns evaluator agent with:
+     - The output artifact
+     - The rubric criteria
+     - Instructions to produce verdict JSON
+   → Evaluator returns structured verdict
+
+3. Quality gate check:
+   - PASS → return output + final verdict
+   - FAIL → extract feedback, append to generator prompt → iteration N+1
+
+4. Max iterations reached → return best output + warning
+   → "Best" = output from iteration with highest weighted score
+```
+
+### Iteration Flow Diagram
+
+```
+┌─────────────────────────────────────────────────┐
+│ Orchestrator                                    │
+│                                                 │
+│  ┌──────────┐    ┌──────────┐    ┌──────────┐   │
+│  │ Generate │───→│ Evaluate │───→│   Gate   │   │
+│  │ (iter N) │    │          │    │  Check   │   │
+│  └──────────┘    └──────────┘    └────┬─────┘   │
+│       ↑                               │         │
+│       │     ┌──────────┐    FAIL      │ PASS    │
+│       └─────│ Feedback │←─────────────┤         │
+│             └──────────┘              ↓         │
+│                                     Return      │
+└─────────────────────────────────────────────────┘
+```
+
+## Stopping Criteria Display
+
+```
+[Evaluator-Optimizer]
+├── Generator: {agent}:{model}
+├── Evaluator: {agent}:{model}
+├── Max iterations: {max_iterations} (hard cap: 5)
+├── Quality gate: {type} (threshold: {threshold})
+└── Rubric: {N} criteria
+```
+
+Display this at the start of the loop to provide transparency into the refinement configuration.
+
+## Verdict Format
+
+The evaluator MUST return a structured verdict in this format:
+
+```json
+{
+  "status": "pass | fail",
+  "iteration": 2,
+  "score": 0.85,
+  "rubric_results": [
+    {"criterion": "clarity", "pass": true, "score": 0.9, "feedback": "Clear structure and logical flow"},
+    {"criterion": "accuracy", "pass": true, "score": 0.8, "feedback": "All facts verified, one minor imprecision in section 3"}
+  ],
+  "improvement_summary": "Section 3 terminology tightened. Examples added to section 2."
+}
+```
+
+### Verdict Fields
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `status` | `pass` or `fail` | Overall quality gate result |
+| `iteration` | number | Current iteration number (1-indexed) |
+| `score` | number (0.0-1.0) | Weighted average score across all criteria |
+| `rubric_results` | array | Per-criterion evaluation details |
+| `improvement_summary` | string | Summary of changes from previous iteration (empty on iteration 1) |
+
+## Domain Examples
+
+| Domain | Generator | Evaluator | Rubric Focus |
+|--------|-----------|-----------|--------------|
+| Code review | `lang-*-expert` | opus reviewer | Correctness, style, security |
+| Documentation | `arch-documenter` | opus reviewer | Completeness, clarity, accuracy |
+| Architecture | Plan agent | opus reviewer | No SPOFs, no circular deps |
+| Test plans | `qa-planner` | `qa-engineer` | Coverage, edge cases, feasibility |
+| Agent creation | `mgr-creator` | opus reviewer | Frontmatter validity, R006 compliance |
+| Security audit | `sec-codeql-expert` | opus reviewer | Vulnerability coverage, false positive rate |
+
+### Example: Documentation Review
+
+```yaml
+evaluator-optimizer:
+  generator:
+    agent: arch-documenter
+    model: sonnet
+  evaluator:
+    agent: general-purpose
+    model: opus
+  rubric:
+    - criterion: completeness
+      weight: 0.3
+      description: All sections present, no gaps in coverage
+    - criterion: clarity
+      weight: 0.3
+      description: Clear language, no ambiguity, proper examples
+    - criterion: accuracy
+      weight: 0.25
+      description: All technical details correct and verifiable
+    - criterion: consistency
+      weight: 0.15
+      description: Consistent terminology, formatting, and style
+  quality_gate:
+    type: score_threshold
+    threshold: 0.8
+  max_iterations: 3
+```
+
+### Example: Code Implementation
+
+```yaml
+evaluator-optimizer:
+  generator:
+    agent: lang-typescript-expert
+    model: sonnet
+  evaluator:
+    agent: general-purpose
+    model: opus
+  rubric:
+    - criterion: correctness
+      weight: 0.35
+      description: Code compiles, logic is correct, edge cases handled
+    - criterion: style
+      weight: 0.2
+      description: Follows project conventions, clean and readable
+    - criterion: security
+      weight: 0.25
+      description: No injection risks, proper input validation
+    - criterion: performance
+      weight: 0.2
+      description: No unnecessary allocations, efficient algorithms
+  quality_gate:
+    type: all_pass
+  max_iterations: 3
+```
+
+## Integration
+
+| Rule | Integration |
+|------|-------------|
+| R009 | Generator and evaluator run sequentially (dependent — evaluator needs generator output) |
+| R010 | Orchestrator configures and invokes the loop; generator and evaluator agents execute via Agent tool |
+| R007 | Each iteration displays agent identification for both generator and evaluator |
+| R008 | Tool calls within generator/evaluator follow tool identification rules |
+| R013 | Ecomode: return verdict summary only, skip per-criterion details |
+| R015 | Display configuration block at loop start for intent transparency |
+
+## Ecomode Behavior
+
+When ecomode is active (R013), compress output:
+
+**Normal mode:**
+```
+[Evaluator-Optimizer] Iteration 2/3
+├── Generator: lang-typescript-expert:sonnet → produced 45-line module
+├── Evaluator: general-purpose:opus → scored 0.85
+├── Rubric: correctness ✓(0.9), style ✓(0.8), security ✓(0.85), performance ✓(0.8)
+└── Gate: score_threshold(0.8) → PASS
+```
+
+**Ecomode:**
+```
+[EO] iter 2/3 → 0.85 → PASS
+```
+
+## Error Handling
+
+| Scenario | Action |
+|----------|--------|
+| Generator fails to produce output | Retry once with simplified prompt; if still fails, abort with error |
+| Evaluator returns malformed verdict | Retry once; if still malformed, treat as fail with score 0.0 |
+| Max iterations reached without passing | Return best-scored output with warning: "Quality gate not met after {N} iterations" |
+| Rubric has zero total weight | Reject configuration, report error before starting loop |
+| Hard cap exceeded in config | Clamp `max_iterations` to 5, emit warning |
+
+## Constraints
+
+- This skill does NOT use `context: fork` — it operates within the caller's context
+- Generator and evaluator MUST be different agent invocations (no self-review)
+- The evaluator prompt MUST include the full rubric to ensure consistent scoring
+- Iteration state (best score, best output) is tracked by the orchestrator
+- The hard cap of 5 iterations prevents runaway refinement loops
````
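The three quality-gate strategies defined in the skill above reduce to a few lines of arithmetic. Here is an illustrative Python sketch (not part of the package): it assumes each per-criterion result has been joined with its rubric `weight`, alongside the `pass` and `score` fields from the verdict format.

```python
# Illustrative sketch of the evaluator-optimizer quality-gate logic.
# Each entry combines a verdict rubric_result ("pass", "score") with the
# rubric's "weight" for that criterion. Not part of the package.

def gate_passes(rubric_results, gate_type="score_threshold", threshold=0.8):
    total_weight = sum(r["weight"] for r in rubric_results)
    if total_weight == 0:
        # Per the error-handling table: reject before the loop starts.
        raise ValueError("rubric has zero total weight")
    if gate_type == "all_pass":
        return all(r["pass"] for r in rubric_results)
    if gate_type == "majority_pass":
        passing = sum(r["weight"] for r in rubric_results if r["pass"])
        return passing > 0.5 * total_weight
    if gate_type == "score_threshold":
        weighted = sum(r["score"] * r["weight"] for r in rubric_results)
        return weighted / total_weight >= threshold
    raise ValueError(f"unknown gate type: {gate_type}")

results = [
    {"criterion": "clarity",  "weight": 0.5, "pass": True,  "score": 0.9},
    {"criterion": "accuracy", "weight": 0.5, "pass": False, "score": 0.6},
]
print(gate_passes(results))  # False: weighted average 0.75 < 0.8
```

Note that `majority_pass` requires strictly more than half the total weight, so an exact 50/50 split fails the gate.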
package/templates/.claude/skills/research/SKILL.md
CHANGED

````diff
@@ -20,6 +20,17 @@ Orchestrates 10 parallel research teams for comprehensive deep analysis of any t
 /research Rust async runtime comparison
 ```
 
+## When NOT to Use
+
+| Scenario | Better Alternative |
+|----------|--------------------|
+| Simple factual question | Direct answer or single WebSearch |
+| Single-file code review | `/dev-review` with specific file |
+| Known solution implementation | `/structured-dev-cycle` |
+| Topic with < 3 comparison dimensions | Single Explore agent |
+
+**Pre-execution check**: If the query can be answered with < 3 sources, skip 10-team research.
+
 ## Architecture — 4 Phases
 
 ### Phase 1: Parallel Research (10 teams, batched per R009)
@@ -258,6 +269,8 @@ Before execution:
 └── Phase 4: Report + GitHub issue
 
 Estimated: {time} | Teams: 10 | Models: sonnet → opus → codex
+Stopping: max 30 verification rounds, convergence at 0 contradictions
+Cost: ~$8-15 (10 teams × sonnet + opus verification)
 Execute? [Y/n]
 ```
 
````
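The stopping rule added to the `/research` pre-execution display ("max 30 verification rounds, convergence at 0 contradictions") can be sketched as a bounded loop. This is hypothetical Python, not package code; `find_contradictions` and `resolve` are assumed callbacks standing in for the verification agents.

```python
# Illustrative sketch of the /research stopping rule: repeat verification
# until no contradictions remain, with a hard cap of 30 rounds.
# The callbacks are assumptions, not package API.

def verify_until_converged(find_contradictions, resolve, max_rounds=30):
    for round_no in range(1, max_rounds + 1):
        contradictions = find_contradictions()
        if not contradictions:        # convergence: 0 contradictions
            return round_no
        resolve(contradictions)
    return max_rounds                 # cap reached without convergence

# Toy run: two contradictions, one resolved per round, then a clean check.
pending = ["A vs B on latency", "C vs D on memory"]
rounds = verify_until_converged(lambda: pending, lambda cs: pending.pop())
print(rounds)  # 3: two resolution rounds plus the final empty check
```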
package/templates/.claude/skills/task-decomposition/SKILL.md
CHANGED

````diff
@@ -20,6 +20,19 @@ Decomposition is **recommended** when any of these thresholds are met:
 | Domains involved | > 2 domains | Requires multiple specialists |
 | Agent types needed | > 2 types | Cross-specialty coordination |
 
+### Step 0: Pattern Selection
+
+Before decomposing, select the appropriate workflow pattern:
+
+| Pattern | When to Use | Primitive |
+|---------|-------------|-----------|
+| Sequential | Steps must execute in order, each depends on previous | dag-orchestration (linear) |
+| Parallel | Independent subtasks with no shared state | Agent tool (R009) or Agent Teams (R018) |
+| Evaluator-Optimizer | Quality-critical output needing iterative refinement | worker-reviewer-pipeline |
+| Orchestrator | Complex multi-step with dynamic routing | Routing skills (secretary/dev-lead/de-lead/qa-lead) |
+
+**Decision**: If task has independent subtasks → Parallel. If quality-critical → add EO review cycle. If multi-step with dependencies → Sequential/Orchestrator.
+
 ## Decomposition Process
 
 ```
````
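The Step 0 decision rule added above can be encoded as a small selector. Illustrative Python only; the function name and boolean flags are assumptions for the sketch, not package API.

```python
# Illustrative encoding of the Step 0 decision rule from task-decomposition:
# independent subtasks → Parallel (plus an EO review cycle if quality-critical);
# otherwise quality-critical → Evaluator-Optimizer; dynamic routing →
# Orchestrator; ordered dependencies → Sequential. Not package code.

def select_pattern(independent_subtasks, quality_critical, dynamic_routing):
    if independent_subtasks:
        base = "Parallel"
        return base + (" + EO review cycle" if quality_critical else "")
    if quality_critical:
        return "Evaluator-Optimizer"
    return "Orchestrator" if dynamic_routing else "Sequential"

print(select_pattern(True, True, False))   # Parallel + EO review cycle
```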
package/templates/.claude/skills/worker-reviewer-pipeline/SKILL.md
CHANGED

````diff
@@ -98,6 +98,16 @@ When Agent Teams is NOT available, falls back to sequential Agent tool calls:
 Agent(worker) → result → Agent(reviewer) → verdict → Agent(worker) → ...
 ```
 
+## Stopping Criteria Display
+
+Before execution, display:
+```
+[Worker-Reviewer Pipeline]
+├── Max iterations: {max_iterations} (default: 3, hard cap: 5)
+├── Quality gate: {pass_threshold}% approval required
+└── Early stop: All reviewers approve → stop immediately
+```
+
 ## Display Format
 
 ```
````
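The sequential fallback shown in the hunk above (`Agent(worker) → result → Agent(reviewer) → verdict → ...`), combined with the new stopping criteria, can be sketched as a simple loop. Hypothetical Python; `worker` and `reviewer` stand in for Agent tool calls and are not package API.

```python
# Illustrative sketch of the worker-reviewer fallback loop: worker produces,
# reviewer returns a verdict, the verdict feeds back as feedback, with early
# stop on approval and max_iterations clamped to the hard cap of 5.
# Not the package implementation.

def pipeline(worker, reviewer, max_iterations=3):
    max_iterations = min(max_iterations, 5)   # hard cap: 5
    output = verdict = None
    for _ in range(max_iterations):
        output = worker(verdict)              # previous verdict = feedback
        verdict = reviewer(output)
        if verdict["approved"]:               # early stop on approval
            break
    return output, verdict

# Example: reviewer approves on the second pass.
verdicts = iter([{"approved": False, "feedback": "add tests"},
                 {"approved": True}])
out, verdict = pipeline(lambda fb: f"draft (feedback: {fb})",
                        lambda draft: next(verdicts))
print(verdict)  # {'approved': True}
```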
package/templates/CLAUDE.md.en
CHANGED

````diff
@@ -151,30 +151,31 @@ Violation = immediate correction. No exception for "small changes".
 
 | Command | Description |
 |---------|-------------|
-| `/analysis` | Analyze project and auto-configure customizations |
-| `/create-agent` | Create a new agent |
-| `/update-docs` | Sync documentation with project structure |
-| `/update-external` | Update agents from external sources |
-| `/audit-agents` | Audit agent dependencies |
-| `/fix-refs` | Fix broken references |
+| `/omcustom:analysis` | Analyze project and auto-configure customizations |
+| `/omcustom:create-agent` | Create a new agent |
+| `/omcustom:update-docs` | Sync documentation with project structure |
+| `/omcustom:update-external` | Update agents from external sources |
+| `/omcustom:audit-agents` | Audit agent dependencies |
+| `/omcustom:fix-refs` | Fix broken references |
 | `/dev-review` | Review code for best practices |
 | `/dev-refactor` | Refactor code |
 | `/memory-save` | Save session context to claude-mem |
 | `/memory-recall` | Search and recall memories |
-| `/monitoring-setup` | Enable/disable OTel console monitoring |
-| `/npm-publish` | Publish package to npm registry |
-| `/npm-version` | Manage semantic versions |
-| `/npm-audit` | Audit dependencies |
+| `/omcustom:monitoring-setup` | Enable/disable OTel console monitoring |
+| `/omcustom:npm-publish` | Publish package to npm registry |
+| `/omcustom:npm-version` | Manage semantic versions |
+| `/omcustom:npm-audit` | Audit dependencies |
 | `/codex-exec` | Execute Codex CLI prompts |
 | `/optimize-analyze` | Analyze bundle and performance |
 | `/optimize-bundle` | Optimize bundle size |
 | `/optimize-report` | Generate optimization report |
 | `/research` | 10-team parallel deep analysis and cross-verification |
-| `/
+| `/deep-plan` | Research-validated planning (research → plan → verify) |
+| `/omcustom:sauron-watch` | Full R017 verification |
 | `/structured-dev-cycle` | 6-stage structured development cycle (Plan → Verify → Implement → Verify → Compound → Done) |
-| `/lists` | Show all available commands |
-| `/status` | Show system status |
-| `/help` | Show help information |
+| `/omcustom:lists` | Show all available commands |
+| `/omcustom:status` | Show system status |
+| `/omcustom:help` | Show help information |
 
 ## Project Structure
 
@@ -183,7 +184,7 @@ project/
 +-- CLAUDE.md          # Entry point
 +-- .claude/
 |   +-- agents/        # Subagent definitions (44 files)
-|   +-- skills/        # Skills (
+|   +-- skills/        # Skills (71 directories)
 |   +-- rules/         # Global rules (R000-R019)
 |   +-- hooks/         # Hook scripts (memory, HUD)
 |   +-- contexts/      # Context files (ecomode)
@@ -249,15 +250,15 @@ Task tool + routing skills remain the fallback for simple/cost-sensitive tasks.
 
 ```bash
 # Project analysis
-/analysis
+/omcustom:analysis
 
 # Show all commands
-/lists
+/omcustom:lists
 
 # Agent management
-/create-agent my-agent
-/update-docs
-/audit-agents
+/omcustom:create-agent my-agent
+/omcustom:update-docs
+/omcustom:audit-agents
 
 # Code review
 /dev-review src/main.go
@@ -267,7 +268,7 @@ Task tool + routing skills remain the fallback for simple/cost-sensitive tasks.
 /memory-recall authentication
 
 # Verification
-/sauron-watch
+/omcustom:sauron-watch
 ```
 
 ## External Dependencies
````
package/templates/CLAUDE.md.ko
CHANGED

````diff
@@ -151,30 +151,31 @@ oh-my-customcode로 구동됩니다.
 
 | 커맨드 | 설명 |
 |--------|------|
-| `/analysis` | 프로젝트 분석 및 자동 커스터마이징 |
-| `/create-agent` | 새 에이전트 생성 |
-| `/update-docs` | 프로젝트 구조와 문서 동기화 |
-| `/update-external` | 외부 소스에서 에이전트 업데이트 |
-| `/audit-agents` | 에이전트 의존성 감사 |
-| `/fix-refs` | 깨진 참조 수정 |
+| `/omcustom:analysis` | 프로젝트 분석 및 자동 커스터마이징 |
+| `/omcustom:create-agent` | 새 에이전트 생성 |
+| `/omcustom:update-docs` | 프로젝트 구조와 문서 동기화 |
+| `/omcustom:update-external` | 외부 소스에서 에이전트 업데이트 |
+| `/omcustom:audit-agents` | 에이전트 의존성 감사 |
+| `/omcustom:fix-refs` | 깨진 참조 수정 |
 | `/dev-review` | 코드 베스트 프랙티스 리뷰 |
 | `/dev-refactor` | 코드 리팩토링 |
 | `/memory-save` | 세션 컨텍스트를 claude-mem에 저장 |
 | `/memory-recall` | 메모리 검색 및 리콜 |
-| `/monitoring-setup` | OTel 콘솔 모니터링 활성화/비활성화 |
-| `/npm-publish` | npm 레지스트리에 패키지 배포 |
-| `/npm-version` | 시맨틱 버전 관리 |
-| `/npm-audit` | 의존성 감사 |
+| `/omcustom:monitoring-setup` | OTel 콘솔 모니터링 활성화/비활성화 |
+| `/omcustom:npm-publish` | npm 레지스트리에 패키지 배포 |
+| `/omcustom:npm-version` | 시맨틱 버전 관리 |
+| `/omcustom:npm-audit` | 의존성 감사 |
 | `/codex-exec` | Codex CLI 프롬프트 실행 |
 | `/optimize-analyze` | 번들 및 성능 분석 |
 | `/optimize-bundle` | 번들 크기 최적화 |
 | `/optimize-report` | 최적화 리포트 생성 |
 | `/research` | 10-team 병렬 딥 분석 및 교차 검증 |
-| `/
+| `/deep-plan` | 연구 검증 기반 계획 수립 (research → plan → verify) |
+| `/omcustom:sauron-watch` | 전체 R017 검증 |
 | `/structured-dev-cycle` | 6단계 구조적 개발 사이클 (Plan → Verify → Implement → Verify → Compound → Done) |
-| `/lists` | 모든 사용 가능한 커맨드 표시 |
-| `/status` | 시스템 상태 표시 |
-| `/help` | 도움말 표시 |
+| `/omcustom:lists` | 모든 사용 가능한 커맨드 표시 |
+| `/omcustom:status` | 시스템 상태 표시 |
+| `/omcustom:help` | 도움말 표시 |
 
 ## 프로젝트 구조
 
@@ -183,7 +184,7 @@ project/
 +-- CLAUDE.md          # 진입점
 +-- .claude/
 |   +-- agents/        # 서브에이전트 정의 (44 파일)
-|   +-- skills/        # 스킬 (
+|   +-- skills/        # 스킬 (71 디렉토리)
 |   +-- rules/         # 전역 규칙 (R000-R019)
 |   +-- hooks/         # 훅 스크립트 (메모리, HUD)
 |   +-- contexts/      # 컨텍스트 파일 (ecomode)
@@ -249,15 +250,15 @@ Claude Code의 Agent Teams 기능이 활성화되어 있으면 (`CLAUDE_CODE_EXP
 
 ```bash
 # 프로젝트 분석
-/analysis
+/omcustom:analysis
 
 # 모든 커맨드 표시
-/lists
+/omcustom:lists
 
 # 에이전트 관리
-/create-agent my-agent
-/update-docs
-/audit-agents
+/omcustom:create-agent my-agent
+/omcustom:update-docs
+/omcustom:audit-agents
 
 # 코드 리뷰
 /dev-review src/main.go
@@ -267,7 +268,7 @@ Claude Code의 Agent Teams 기능이 활성화되어 있으면 (`CLAUDE_CODE_EXP
 /memory-recall authentication
 
 # 검증
-/sauron-watch
+/omcustom:sauron-watch
 ```
 
 ## 외부 의존성
````