oh-my-customcode 0.64.2 → 0.64.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/dist/cli/index.js CHANGED
@@ -9325,7 +9325,7 @@ var init_package = __esm(() => {
  workspaces: [
  "packages/*"
  ],
- version: "0.64.2",
+ version: "0.64.3",
  description: "Batteries-included agent harness for Claude Code",
  type: "module",
  bin: {
package/dist/index.js CHANGED
@@ -1674,7 +1674,7 @@ var package_default = {
  workspaces: [
  "packages/*"
  ],
- version: "0.64.2",
+ version: "0.64.3",
  description: "Batteries-included agent harness for Claude Code",
  type: "module",
  bin: {
package/package.json CHANGED
@@ -3,7 +3,7 @@
  "workspaces": [
  "packages/*"
  ],
- "version": "0.64.2",
+ "version": "0.64.3",
  "description": "Batteries-included agent harness for Claude Code",
  "type": "module",
  "bin": {
@@ -54,6 +54,56 @@ When enabled:
 
  Use when: tasks requiring 3+ iterations consistently, or when generator-evaluator score disagreements exceed 0.3.
 
+ ### Evaluator Calibration
+
+ Anthropic's harness design research identifies evaluator leniency as a key failure mode: LLMs default to generous scoring, especially when evaluating output from the same model family. Counter-measures:
+
+ **Skepticism Prompting**: Include explicit instructions in the evaluator prompt:
+ - "Default to skepticism. A 'pass' should require clear evidence, not absence of issues."
+ - "Score as if you are reviewing code that will run in production with real users."
+ - "When uncertain between pass and fail, choose fail and explain what evidence would change your mind."
+
+ **Anti-Self-Praise Bias**: When generator and evaluator share the same model family (e.g., both Claude), add:
+ - "You are reviewing another agent's work, not your own. Do not give credit for intent — only for execution."
+ - "Identify at least one concrete improvement, even for high-quality output."
+
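+ A minimal sketch of how these counter-measures might be wired together in the evaluator config; the `evaluator` sub-key and `prompt_additions` list are hypothetical, shown only for illustration, and are not documented parameters of this package:
+
+ ```yaml
+ evaluator-optimizer:
+   evaluator:                # hypothetical sub-key grouping evaluator settings
+     prompt_additions:       # hypothetical: lines appended verbatim to the evaluator prompt
+       - "Default to skepticism. A 'pass' should require clear evidence, not absence of issues."
+       - "Score as if you are reviewing code that will run in production with real users."
+       - "When uncertain between pass and fail, choose fail and explain what evidence would change your mind."
+       - "You are reviewing another agent's work, not your own."
+       - "Identify at least one concrete improvement, even for high-quality output."
+ ```
+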
+ **Calibration via Rubric Examples**: Each rubric criterion SHOULD include a `fail_example` alongside the description:
+
+ ```yaml
+ rubric:
+   - criterion: error_handling
+     weight: 0.25
+     description: "All error paths handled with meaningful messages"
+     fail_example: "Generic try/catch with console.log(error) — no recovery, no user-facing message"
+ ```
+
+ Adding `fail_example` anchors the evaluator's scale, reducing score inflation by ~20% (based on Anthropic's internal testing).
+
+ ### Conditional Evaluator (Cost Optimization)
+
+ Not every task justifies evaluator overhead. Skip the evaluator loop for tasks within the model's reliable capability range. From Anthropic's research: "Worth cost when tasks sit beyond baseline model capability; unnecessary overhead for problems within model's reliable range."
+
+ ```yaml
+ evaluator-optimizer:
+   conditional:
+     enabled: true
+     skip_when:
+       - task_complexity: low        # Simple, well-defined tasks
+       - generator_confidence: high  # Generator self-reports high confidence
+       - historical_pass_rate: 0.9   # Same task type historically passes first try
+ ```
+
+ When `conditional.enabled: true` and ANY `skip_when` condition is met, the evaluator is skipped and the generator's first output is returned directly. This reduces token cost by ~40% for straightforward tasks.
+
+ **Decision matrix**:
+
+ | Task Type | Complexity | Evaluator? |
+ |-----------|------------|------------|
+ | Simple file rename, config change | Low | Skip |
+ | Standard CRUD implementation | Medium | Run |
+ | Complex architecture, security-critical | High | Run with pre-negotiation |
+ | Previously failed task retry | Any | Always run |
+
  ### Parameter Details
 
  | Parameter | Required | Default | Description |
@@ -224,6 +274,7 @@ evaluator-optimizer:
  - criterion: correctness
    weight: 0.35
    description: Code compiles, logic is correct, edge cases handled
+   fail_example: "Missing null check on user input causes runtime crash"
  - criterion: style
    weight: 0.2
    description: Follows project conventions, clean and readable
@@ -328,6 +379,7 @@ When ecomode is active (R013), compress output:
  - The evaluator prompt MUST include the full rubric to ensure consistent scoring
  - Iteration state (best score, best output) is tracked by the orchestrator
  - The hard cap of 5 iterations prevents runaway refinement loops
+ - For multi-sprint runs (5+ iterations), consider context reset: spawn a fresh evaluator agent rather than continuing with degraded context. The workflow-runner supports this via `context: fork` on individual steps, as sketched below. Anthropic's research confirms "context resets provide clean slates superior to compaction" for long-running evaluation.
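+
+   A minimal sketch of a workflow step using this option; `context: fork` is the documented mechanism, while the surrounding step fields (`steps`, `id`, `agent`) are illustrative assumptions rather than a confirmed schema:
+
+   ```yaml
+   steps:
+     - id: evaluate-iteration-6   # hypothetical step id
+       agent: evaluator           # hypothetical agent reference
+       context: fork              # documented option: start from a fresh context, not degraded history
+   ```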
 
  ## Domain Examples
 
@@ -1,5 +1,5 @@
  {
- "version": "0.64.2",
+ "version": "0.64.3",
  "lastUpdated": "2026-03-24T00:00:00.000Z",
  "components": [
  {