opencode-swarm-plugin 0.40.0 → 0.42.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (59)
  1. package/.hive/analysis/eval-failure-analysis-2025-12-25.md +331 -0
  2. package/.hive/analysis/session-data-quality-audit.md +320 -0
  3. package/.hive/eval-results.json +481 -24
  4. package/.hive/issues.jsonl +65 -16
  5. package/.hive/memories.jsonl +159 -1
  6. package/.opencode/eval-history.jsonl +315 -0
  7. package/.turbo/turbo-build.log +5 -5
  8. package/CHANGELOG.md +155 -0
  9. package/README.md +2 -0
  10. package/SCORER-ANALYSIS.md +598 -0
  11. package/bin/eval-gate.test.ts +158 -0
  12. package/bin/eval-gate.ts +74 -0
  13. package/bin/swarm.test.ts +661 -732
  14. package/bin/swarm.ts +274 -0
  15. package/dist/compaction-hook.d.ts +7 -5
  16. package/dist/compaction-hook.d.ts.map +1 -1
  17. package/dist/compaction-prompt-scoring.d.ts +1 -0
  18. package/dist/compaction-prompt-scoring.d.ts.map +1 -1
  19. package/dist/eval-runner.d.ts +134 -0
  20. package/dist/eval-runner.d.ts.map +1 -0
  21. package/dist/hive.d.ts.map +1 -1
  22. package/dist/index.d.ts +29 -0
  23. package/dist/index.d.ts.map +1 -1
  24. package/dist/index.js +99741 -58858
  25. package/dist/memory-tools.d.ts +70 -2
  26. package/dist/memory-tools.d.ts.map +1 -1
  27. package/dist/memory.d.ts +37 -0
  28. package/dist/memory.d.ts.map +1 -1
  29. package/dist/observability-tools.d.ts +64 -0
  30. package/dist/observability-tools.d.ts.map +1 -1
  31. package/dist/plugin.js +99356 -58318
  32. package/dist/swarm-orchestrate.d.ts.map +1 -1
  33. package/dist/swarm-prompts.d.ts +32 -1
  34. package/dist/swarm-prompts.d.ts.map +1 -1
  35. package/docs/planning/ADR-009-oh-my-opencode-patterns.md +353 -0
  36. package/evals/ARCHITECTURE.md +1189 -0
  37. package/evals/example.eval.ts +3 -4
  38. package/evals/fixtures/compaction-prompt-cases.ts +6 -0
  39. package/evals/scorers/coordinator-discipline.ts +0 -253
  40. package/evals/swarm-decomposition.eval.ts +4 -2
  41. package/package.json +4 -3
  42. package/src/compaction-prompt-scorers.test.ts +10 -9
  43. package/src/compaction-prompt-scoring.ts +7 -5
  44. package/src/eval-runner.test.ts +128 -1
  45. package/src/eval-runner.ts +46 -0
  46. package/src/hive.ts +43 -42
  47. package/src/memory-tools.test.ts +84 -0
  48. package/src/memory-tools.ts +68 -3
  49. package/src/memory.test.ts +2 -112
  50. package/src/memory.ts +88 -49
  51. package/src/observability-tools.test.ts +13 -0
  52. package/src/observability-tools.ts +277 -0
  53. package/src/swarm-orchestrate.test.ts +162 -0
  54. package/src/swarm-orchestrate.ts +7 -5
  55. package/src/swarm-prompts.test.ts +168 -4
  56. package/src/swarm-prompts.ts +228 -7
  57. package/.env +0 -2
  58. package/.turbo/turbo-test.log +0 -481
  59. package/.turbo/turbo-typecheck.log +0 -1
@@ -0,0 +1,598 @@
+ # Scorer Implementation Analysis
+
+ **Date:** 2025-12-25
+ **Cell:** opencode-swarm-plugin--ys7z8-mjlk7jsrvls
+ **Scope:** All scorer implementations in `evals/scorers/`
+
+ ```
+ ┌────────────────────────────────────────────────────────────┐
+ │                                                            │
+ │   📊 SCORER AUDIT REPORT                                   │
+ │   ═══════════════════════                                  │
+ │                                                            │
+ │   Files Analyzed:                                          │
+ │   • index.ts (primary scorers)                             │
+ │   • coordinator-discipline.ts (11 scorers)                 │
+ │   • compaction-scorers.ts (5 scorers)                      │
+ │   • outcome-scorers.ts (5 scorers)                         │
+ │                                                            │
+ │   Total Scorers: 24                                        │
+ │   Composite Scorers: 3                                     │
+ │   LLM-as-Judge: 1                                          │
+ │                                                            │
+ └────────────────────────────────────────────────────────────┘
+ ```
+
+ ---
+
+ ## Executive Summary
+
+ **Overall Assessment:** ✅ Scorers are well-implemented with correct API usage. Found **3 critical issues** and **5 optimization opportunities**.
+
+ **Eval Performance Context:**
+ - compaction-prompt: 53% (LOW - needs investigation)
+ - coordinator-behavior: 77% (GOOD)
+ - coordinator-session: 66% (FAIR)
+ - compaction-resumption: 93% (EXCELLENT)
+ - swarm-decomposition: 70% (GOOD)
+ - example: 0% (expected - sanity check)
+
+ ---
+
+ ## 🔴 CRITICAL ISSUES
+
+ ### 1. **UNUSED SCORERS - Dead Code**
+
+ **Severity:** HIGH
+ **Impact:** Wasted development effort, misleading test coverage
+
+ #### Scorers Defined But Never Used in Evals
+
+ | Scorer | File | Lines | Status |
+ |--------|------|-------|--------|
+ | `researcherSpawnRate` | coordinator-discipline.ts | 345-378 | ❌ NEVER USED |
+ | `skillLoadingRate` | coordinator-discipline.ts | 388-421 | ❌ NEVER USED |
+ | `inboxMonitoringRate` | coordinator-discipline.ts | 433-484 | ❌ NEVER USED |
+ | `blockerResponseTime` | coordinator-discipline.ts | 499-588 | ❌ NEVER USED |
+
+ **Evidence:**
+ ```bash
+ grep -r "researcherSpawnRate\|skillLoadingRate\|inboxMonitoringRate\|blockerResponseTime" evals/*.eval.ts
+ # → No matches
+ ```
+
+ **Why This Matters:**
+ - These scorers represent ~250 lines of code (~38% of coordinator-discipline.ts)
+ - Tests exist for them but they don't influence eval results
+ - Maintenance burden without benefit
+ - Misleading signal that these metrics are being measured
+
+ **Recommendation:**
+ 1. **EITHER** add these scorers to `coordinator-session.eval.ts` scorers array
+ 2. **OR** remove them and their tests to reduce noise
+
+ **Probable Intent:**
+ These scorers were likely prototypes for expanded coordinator metrics but never integrated. The current 5-scorer set (violations, spawn, review, speed, reviewEfficiency) is sufficient for protocol adherence.
+
+ ---
+
+ ### 2. **reviewEfficiency vs reviewThoroughness - Potential Redundancy**
+
+ **Severity:** MEDIUM
+ **Impact:** Confusing metrics, potential double-penalization
+
+ #### What They Measure
+
+ | Scorer | Metric | Scoring |
+ |--------|--------|---------|
+ | `reviewThoroughness` | reviews / finished_workers | 0-1 (completeness) |
+ | `reviewEfficiency` | reviews / spawned_workers | penalizes >2:1 ratio |
+
+ **The Problem:**
+ ```typescript
+ // Scenario: 2 workers spawned, 2 finished, 4 reviews completed
+
+ // reviewThoroughness: 4/2 = 2.0 → clipped to 1.0 (perfect!)
+ // reviewEfficiency:   4/2 = 2.0 → 0.5 (threshold penalty)
+
+ // These contradict each other
+ ```
+
+ **Why This Exists:**
+ - `reviewThoroughness` added early to ensure coordinators review worker output
+ - `reviewEfficiency` added later to prevent over-reviewing (context waste)
+ - Both measure review behavior but from different angles
+
+ **Current Usage:**
+ - `coordinator-session.eval.ts` uses BOTH in scorers array
+ - `overallDiscipline` composite uses only `reviewThoroughness` (not efficiency)
+
+ **Recommendation:**
+ 1. **Short-term:** Document that these are intentionally complementary (thoroughness=quality gate, efficiency=resource optimization)
+ 2. **Long-term:** Consider composite "reviewQuality" scorer that balances both:
+ ```typescript
+ // Perfect: 1:1 ratio (one review per finished worker)
+ // Good: 0.8-1.2 ratio
+ // Bad: <0.5 or >2.0 ratio
+ ```
+
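+ A sketch of how such a composite might score those ratio bands, assuming plain numeric inputs (extracting `reviews` and `finished` counts from session events is out of scope here; the function shape is illustrative, not the real scorer API):
+ ```typescript
+ // Rewards ~1:1 reviews per finished worker; penalizes both
+ // under-reviewing (<0.5 ratio) and over-reviewing (>2.0 ratio).
+ function reviewQuality(reviews: number, finished: number): {
+   score: number;
+   message: string;
+ } {
+   if (finished === 0) {
+     return { score: 0, message: "No finished workers to review" };
+   }
+   const ratio = reviews / finished;
+   let score: number;
+   if (ratio >= 0.8 && ratio <= 1.2) score = 1.0;     // ideal band
+   else if (ratio < 0.5 || ratio > 2.0) score = 0.0;  // bad band
+   else if (ratio < 0.8) score = (ratio - 0.5) / 0.3; // ramp up from 0.5 to 0.8
+   else score = (2.0 - ratio) / 0.8;                  // ramp down from 1.2 to 2.0
+   return {
+     score,
+     message: `${reviews} reviews / ${finished} finished (ratio ${ratio.toFixed(2)})`,
+   };
+ }
+ ```
+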
+ ---
+
+ ### 3. **Arbitrary Normalization Thresholds**
+
+ **Severity:** LOW
+ **Impact:** Scores may not reflect reality, hard to tune
+
+ #### timeToFirstSpawn Thresholds
+
+ ```typescript
+ const EXCELLENT_MS = 60_000;  // < 60s = 1.0 (why 60s?)
+ const POOR_MS = 300_000;      // > 300s = 0.0 (why 5min?)
+ ```
+
+ **Question:** Are these evidence-based or arbitrary?
+
+ **From Real Data:** We don't know - no analysis of actual coordinator spawn times.
+
+ **Recommendation:**
+ 1. Add comment with rationale: "Based on X coordinator sessions, median spawn time is Y"
+ 2. OR make thresholds configurable via expected values
+ 3. OR use percentile-based normalization from real data
+
+ #### blockerResponseTime Thresholds
+
+ ```typescript
+ const EXCELLENT_MS = 5 * 60 * 1000;  // 5 min
+ const POOR_MS = 15 * 60 * 1000;      // 15 min
+ ```
+
+ **Same Issue:** No evidence these thresholds match real coordinator response patterns.
+
+ **Deeper Problem:**
+ ```typescript
+ // This scorer matches blockers to resolutions by subtask_id
+ const resolution = resolutions.find(
+   (r) => (r.payload as any).subtask_id === subtaskId
+ );
+
+ // BUT: If coordinator resolves blocker by reassigning task,
+ // the subtask_id might change. This would miss the resolution.
+ ```
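+
+ One possible hardening, sketched under assumptions: events carry `subtask_id`, a parent `task_id`, and a timestamp (these field names are hypothetical, not the verified event schema):
+ ```typescript
+ interface SwarmEvent {
+   payload: { subtask_id?: string; task_id?: string };
+   ts: number;
+ }
+
+ // Match by subtask_id first; fall back to the parent task so a
+ // reassigned subtask still counts as resolved.
+ function findResolution(
+   blocker: SwarmEvent,
+   resolutions: SwarmEvent[],
+ ): SwarmEvent | undefined {
+   const { subtask_id, task_id } = blocker.payload;
+   return (
+     resolutions.find((r) => r.payload.subtask_id === subtask_id) ??
+     resolutions.find(
+       (r) => r.payload.task_id === task_id && r.ts >= blocker.ts,
+     )
+   );
+ }
+ ```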
+
+ ---
+
+ ## ⚠️ CALIBRATION ISSUES
+
+ ### 1. **Composite Scorer Weight Inconsistency**
+
+ #### Current Weights
+
+ **overallDiscipline** (coordinator-discipline.ts:603):
+ ```typescript
+ const weights = {
+   violations: 0.3,  // 30% - "most critical"
+   spawn: 0.25,      // 25%
+   review: 0.25,     // 25%
+   speed: 0.2,       // 20%
+ };
+ ```
+
+ **compactionQuality** (compaction-scorers.ts:260):
+ ```typescript
+ const weights = {
+   confidence: 0.25,  // 25%
+   injection: 0.25,   // 25%
+   required: 0.3,     // 30% - "most critical"
+   forbidden: 0.2,    // 20%
+ };
+ ```
+
+ **overallCoordinatorBehavior** (coordinator-behavior.eval.ts:196):
+ ```typescript
+ const score =
+   (toolsResult.score ?? 0) * 0.3 +
+   (avoidsResult.score ?? 0) * 0.4 +  // 40% - "most important"
+   (mindsetResult.score ?? 0) * 0.3;
+ ```
+
+ **Pattern:** Each composite prioritizes different metrics, which is GOOD (domain-specific), but...
+
+ **Issue:** No documentation of WHY these weights were chosen.
+
+ **Recommendation:**
+ Add comments explaining weight rationale:
+ ```typescript
+ // Weights based on failure impact:
+ // - Violations (30%): Breaking protocol causes immediate harm
+ // - Spawn (25%): Delegation is core coordinator job
+ // - Review (25%): Quality gate prevents bad work propagating
+ // - Speed (20%): Optimization, not correctness
+ ```
+
+ ---
+
+ ### 2. **Binary vs Gradient Scoring Philosophy**
+
+ #### Binary Scorers (0 or 1 only)
+
+ - `subtaskIndependence` - either conflicts exist or they don't
+ - `executionSuccess` - either all succeeded or not
+ - `noRework` - either rework detected or not
+
+ #### Gradient Scorers (0-1 continuous)
+
+ - `timeBalance` - ratio-based
+ - `scopeAccuracy` - percentage-based
+ - `instructionClarity` - heuristic-based
+
+ #### LLM-as-Judge (0-1 via scoring prompt)
+
+ - `decompositionCoherence` - Claude Haiku scores 0-100, normalized to 0-1
+
+ **Question:** Should all outcome scorers be gradient, or is binary appropriate?
+
+ **Trade-off:**
+ - **Binary:** Clear pass/fail, easy to reason about, motivates fixes
+ - **Gradient:** More nuanced, rewards partial success, better for learning
+
+ **Current Mix:** Seems reasonable. Binary for critical invariants (no conflicts, no rework), gradient for optimization metrics (balance, accuracy).
+
+ **Recommendation:** Document this philosophy in scorer file headers.
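+
+ For illustration, the two shapes in miniature (function names and signatures are invented for this sketch, not the real scorer API):
+ ```typescript
+ // Binary: a critical invariant either holds or it doesn't.
+ function noReworkScore(reworkEvents: number): number {
+   return reworkEvents === 0 ? 1 : 0;
+ }
+
+ // Gradient: partial credit - what fraction of planned files were touched.
+ function scopeAccuracyScore(planned: string[], touched: string[]): number {
+   if (planned.length === 0) return 0;
+   const hit = planned.filter((f) => touched.includes(f)).length;
+   return hit / planned.length;
+ }
+ ```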
+
+ ---
+
+ ## ✅ WELL-CALIBRATED PATTERNS
+
+ ### 1. **Fallback Strategy Consistency**
+
+ From semantic memory:
+ > "When no baseline exists, prefer realistic fallback (1.0 if delegation happened) over arbitrary 0.5"
+
+ **Good Example - spawnEfficiency (lines 98-108):**
+ ```typescript
+ if (!decomp) {
+   // Fallback: if workers were spawned but no decomp event, assume they're doing work
+   if (spawned > 0) {
+     return {
+       score: 1.0, // Optimistic - work is happening
+       message: `${spawned} workers spawned (no decomposition event)`,
+     };
+   }
+   return {
+     score: 0,
+     message: "No decomposition event found",
+   };
+ }
+ ```
+
+ **Rationale:** Workers spawned = delegation happened = good coordinator behavior. The optimistic fallback avoids penalizing missing instrumentation.
+
+ **Contrast - decompositionCoherence fallback (lines 321-325):**
+ ```typescript
+ } catch (error) {
+   // Don't fail the eval if judge fails - return neutral score
+   return {
+     score: 0.5, // Neutral - can't determine quality
+     message: `LLM judge error: ${error instanceof Error ? error.message : String(error)}`,
+   };
+ }
+ ```
+
+ **Rationale:** LLM judge failure = unknown quality, not good or bad. Neutral 0.5 prevents biasing results.
+
+ **Consistency:** ✅ Both fallbacks match their semantic context.
+
+ ---
+
+ ### 2. **Test Coverage Philosophy**
+
+ #### Unit Tests (Bun test)
+ - **coordinator-discipline.evalite-test.ts** - Full functional tests with synthetic fixtures
+ - **outcome-scorers.evalite-test.ts** - Export verification only (integration tested via evalite)
+
+ #### Integration Tests (Evalite)
+ - **coordinator-session.eval.ts** - Real captured sessions + synthetic fixtures
+ - **swarm-decomposition.eval.ts** - Real LLM calls + fixtures
+
+ **Pattern:** Scorers with complex logic get unit tests. Simple scorers get integration tests only.
+
+ **Trade-off:**
+ - **Pro:** Faster iteration for complex scorers
+ - **Con:** No unit tests for outcome scorers (harder to debug failures)
+
+ **Recommendation:** Add characterization tests for outcome scorers (snapshot actual scores for known inputs).
+
+ ---
+
+ ## 📊 SCORER USAGE MATRIX
+
+ | Scorer | coordinator-session | swarm-decomposition | coordinator-behavior | compaction-resumption | compaction-prompt |
+ |--------|---------------------|---------------------|----------------------|-----------------------|-------------------|
+ | **violationCount** | ✅ | - | - | - | - |
+ | **spawnEfficiency** | ✅ | - | - | - | - |
+ | **reviewThoroughness** | ✅ | - | - | - | - |
+ | **reviewEfficiency** | ✅ | - | - | - | - |
+ | **timeToFirstSpawn** | ✅ | - | - | - | - |
+ | **overallDiscipline** | ✅ | - | - | - | - |
+ | **researcherSpawnRate** | ❌ | - | - | - | - |
+ | **skillLoadingRate** | ❌ | - | - | - | - |
+ | **inboxMonitoringRate** | ❌ | - | - | - | - |
+ | **blockerResponseTime** | ❌ | - | - | - | - |
+ | **subtaskIndependence** | - | ✅ | - | - | - |
+ | **coverageCompleteness** | - | ✅ | - | - | - |
+ | **instructionClarity** | - | ✅ | - | - | - |
+ | **decompositionCoherence** | - | ✅ | - | - | - |
+ | **mentionsCoordinatorTools** | - | - | ✅ | - | - |
+ | **avoidsWorkerBehaviors** | - | - | ✅ | - | - |
+ | **coordinatorMindset** | - | - | ✅ | - | - |
+ | **overallCoordinatorBehavior** | - | - | ✅ | - | - |
+ | **confidenceAccuracy** | - | - | - | ✅ | - |
+ | **contextInjectionCorrectness** | - | - | - | ✅ | - |
+ | **requiredPatternsPresent** | - | - | - | ✅ | - |
+ | **forbiddenPatternsAbsent** | - | - | - | ✅ | - |
+ | **compactionQuality** | - | - | - | ✅ | - |
+ | **compaction-prompt scorers** | - | - | - | - | ✅ |
+ | **outcome scorers** | - | - | - | - | - |
+
+ **Note:** Outcome scorers not used in any current eval (waiting for real execution data).
+
+ ---
+
+ ## 🎯 RECOMMENDATIONS
+
+ ### Immediate (Pre-Ship)
+
+ 1. **DECIDE:** Keep or remove unused coordinator scorers
+    - If keeping: Add to coordinator-session.eval.ts
+    - If removing: Delete scorers + tests, update exports
+
+ 2. **DOCUMENT:** Add weight rationale comments to composite scorers
+
+ 3. **CLARIFY:** Add docstring to reviewEfficiency explaining relationship with reviewThoroughness
+
+ ### Short-term (Next Sprint)
+
+ 4. **CALIBRATE:** Gather real coordinator session data, validate normalization thresholds
+    - Run 20+ real coordinator sessions
+    - Plot distribution of spawn times, blocker response times
+    - Adjust EXCELLENT_MS/POOR_MS based on percentiles
+
+ 5. **TEST:** Add characterization tests for outcome scorers
+ ```typescript
+ import { test, expect } from "bun:test";
+
+ test("scopeAccuracy with known input", async () => {
+   const result = await scopeAccuracy({ output: knownGoodOutput, ... });
+   expect(result.score).toMatchSnapshot();
+ });
+ ```
+
+ 6. **INVESTIGATE:** Why is compaction-prompt eval at 53%?
+    - Review fixtures in `compaction-prompt-cases.ts`
+    - Check if scorers are too strict or fixtures are wrong
+    - This is the LOWEST-performing eval (red flag)
+
+ ### Long-term (Future Iterations)
+
+ 7. **REFACTOR:** Consider `reviewQuality` composite that balances thoroughness + efficiency
+
+ 8. **ENHANCE:** Add percentile-based normalization for time-based scorers
+ ```typescript
+ function normalizeTime(valueMs: number, p50: number, p95: number): number {
+   // Values at p50 = 0.5, values at p95 = 0.0
+   // Self-calibrating from real data
+   if (p95 <= p50) return valueMs <= p50 ? 0.5 : 0; // degenerate distribution
+   if (valueMs <= p50) return 1 - 0.5 * (valueMs / p50); // 0ms → 1.0, p50 → 0.5
+   return Math.max(0, 0.5 * (1 - (valueMs - p50) / (p95 - p50))); // p95+ → 0.0
+ }
+ ```
+
+ 9. **INTEGRATE:** Use outcome scorers once real swarm execution data exists
+    - Currently no eval uses executionSuccess, timeBalance, scopeAccuracy, scopeDrift, noRework
+    - These are outcome-based (require actual subtask execution)
+    - Valuable for learning which decomposition strategies work
+
+ ---
+
+ ## 📈 SCORING PHILOSOPHY PATTERNS
+
+ ### Pattern 1: "Perfect or Penalty" (Binary with Partial Credit)
+
+ **Example:** `instructionClarity` (index.ts:174-228)
+ ```typescript
+ let score = 0.5; // baseline
+ if (subtask.description && subtask.description.length > 20) score += 0.2;
+ if (subtask.files && subtask.files.length > 0) score += 0.2;
+ if (!isGeneric) score += 0.1;
+ return Math.min(1.0, score);
+ ```
+
+ **Philosophy:** Start at baseline, add points for quality signals, cap at 1.0
+
+ **Pro:** Rewards partial quality improvements
+ **Con:** Arbitrary baseline and increments
+
+ ---
+
+ ### Pattern 2: "Ratio Normalization" (Continuous Gradient)
+
+ **Example:** `timeBalance` (outcome-scorers.ts:73-141)
+ ```typescript
+ const ratio = maxDuration / minDuration;
+ if (ratio < 2.0) score = 1.0;       // well balanced
+ else if (ratio < 4.0) score = 0.5;  // moderately balanced
+ else score = 0.0;                   // poorly balanced
+ ```
+
+ **Philosophy:** Define thresholds for quality bands, linear interpolation between
+
+ **Pro:** Clear expectations, easy to reason about
+ **Con:** Threshold choices are subjective
+
+ ---
+
+ ### Pattern 3: "LLM-as-Judge" (Delegated Evaluation)
+
+ **Example:** `decompositionCoherence` (index.ts:245-328)
+ ```typescript
+ const { text } = await generateText({
+   model: gateway(JUDGE_MODEL),
+   prompt: `Evaluate on these criteria (be harsh)...
+ 1. INDEPENDENCE (25%)
+ 2. SCOPE (25%)
+ 3. COMPLETENESS (25%)
+ 4. CLARITY (25%)
+ Return ONLY valid JSON: {"score": <0-100>, "issues": [...]}`,
+ });
+ ```
+
+ **Philosophy:** Use an LLM for nuanced evaluation that simple heuristics can't capture
+
+ **Pro:** Catches semantic issues (hidden dependencies, ambiguous scope)
+ **Con:** Non-deterministic, slower, requires API key, costs money
+
+ ---
+
+ ### Pattern 4: "Composite Weighted Average"
+
+ **Example:** `overallDiscipline` (coordinator-discipline.ts:603-648)
+ ```typescript
+ const totalScore =
+   (scores.violations.score ?? 0) * weights.violations +
+   (scores.spawn.score ?? 0) * weights.spawn +
+   (scores.review.score ?? 0) * weights.review +
+   (scores.speed.score ?? 0) * weights.speed;
+ ```
+
+ **Philosophy:** Combine multiple signals with domain-specific weights
+
+ **Pro:** Single metric for "overall quality", weights encode priorities
+ **Con:** Weights are subjective, hides individual metric details
+
+ ---
+
+ ## 🔬 DEEP DIVE: compaction-prompt 53% Score
+
+ **Context:** This is the LOWEST-performing eval. Needs investigation.
+
+ **Hypothesis 1:** Scorers are too strict
+ - Check if perfect fixture actually scores 100% (has dedicated eval for this)
+ - If perfect scores <100%, scorers have bugs
+
+ **Hypothesis 2:** Fixtures are wrong
+ - Fixtures might not represent actual good prompts
+ - Need to compare against real coordinator resumption prompts
+
+ **Hypothesis 3:** Real implementation doesn't match fixture assumptions
+ - Fixtures assume certain prompt structure
+ - Actual implementation may have evolved differently
+
+ **Next Steps:**
+ 1. Run `Perfect Prompt Scores 100%` eval and check results
+ 2. If it scores <100%, debug scorer logic
+ 3. If it scores 100%, review other fixture expected values
+
+ ---
+
+ ## 💡 INSIGHTS FROM SEMANTIC MEMORY
+
+ ### 1. Evalite API Pattern (from memory c2bb8f11)
+
+ ```typescript
+ // CORRECT: Scorers are async functions
+ const result = await childScorer({ output, expected, input });
+ const score = result.score ?? 0;
+
+ // WRONG: .scorer property doesn't exist
+ const result = childScorer.scorer({ output, expected }); // ❌
+ ```
+
+ ✅ All current scorers follow the correct pattern.
+
+ ---
+
+ ### 2. Garbage Input Handling (from memory b0ef27d5)
+
+ > "When LLM receives garbage input, it correctly scores it 0 - this is the RIGHT behavior, not an error."
+
+ **Application:** `decompositionCoherence` should NOT return the 0.5 fallback for parse errors. It should let the LLM judge garbage as garbage.
+
+ **Current Implementation:** ❌ Returns 0.5 on error (line 324)
+
+ **Recommendation:** Distinguish between the two failure modes (see the sketch after this list):
+ - **LLM error** (API failure) → 0.5 fallback (can't judge)
+ - **Parse error** (invalid JSON output) → Pass raw output to LLM, let it judge as low quality
+
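+ A minimal sketch of that split, reusing the `generateText`/`gateway` context from the Pattern 3 snippet above (`buildJudgePrompt` is a hypothetical helper):
+ ```typescript
+ async function judgeDecomposition(rawOutput: string) {
+   // Deliberately no pre-parsing of rawOutput: garbage input goes to the
+   // judge as-is and should come back scored near 0.
+   try {
+     const { text } = await generateText({
+       model: gateway(JUDGE_MODEL),
+       prompt: buildJudgePrompt(rawOutput), // hypothetical prompt builder
+     });
+     const { score } = JSON.parse(text);
+     return { score: score / 100, message: "LLM judged" };
+   } catch (error) {
+     // Only an actual judge failure (API error, malformed judge reply)
+     // earns the neutral fallback
+     return { score: 0.5, message: `LLM judge error: ${String(error)}` };
+   }
+ }
+ ```
+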
+ ---
+
+ ### 3. Epic ID Pattern (from memory ba964b81)
+
+ > "Epic ID pattern is mjkw + 7 base36 chars = 11 chars total"
+
+ **Application:** `forbiddenPatternsAbsent` checks for "bd-xxx" placeholders, but should also check for other placeholder patterns:
+ - `<epic>`, `<path>`, `placeholder`, `YOUR_EPIC_ID`, etc.
+
+ **Current Implementation:** ✅ Already checks these (compaction-scorers.ts:200)
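+
+ If a positive shape check is ever wanted alongside the placeholder blacklist, the quoted memory implies something like the following (a sketch only; the `mjkw` prefix and base36 alphabet are taken from the memory, not verified against the real ID generator):
+ ```typescript
+ // "mjkw" + 7 base36 chars = 11 chars total, per memory ba964b81
+ const EPIC_ID_RE = /^mjkw[0-9a-z]{7}$/;
+
+ function looksLikeRealEpicId(id: string): boolean {
+   return EPIC_ID_RE.test(id);
+ }
+ ```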
+
+ ---
+
+ ## 🎨 ASCII ART SCORING DISTRIBUTION
+
+ ```
+ SCORER USAGE HEAT MAP
+ ═══════════════════════
+
+ coordinator-session    ██████  (6 scorers)
+ swarm-decomposition    ████    (4 scorers)
+ coordinator-behavior   ████    (4 scorers)
+ compaction-resumption  █████   (5 scorers)
+ compaction-prompt      █████   (5 scorers)
+
+ UNUSED SCORERS: 🗑️ (4 scorers, 250 LOC)
+ ```
+
+ ---
+
+ ## 📋 ACTION ITEMS
+
+ ### Critical (Do First)
+ - [ ] **Decide fate of unused scorers** (remove or integrate)
+ - [ ] **Investigate compaction-prompt 53% score** (lowest eval)
+ - [ ] **Add weight rationale comments** to composite scorers
+
+ ### High Priority
+ - [ ] **Document reviewEfficiency vs reviewThoroughness** relationship
+ - [ ] **Validate normalization thresholds** with real data
+ - [ ] **Add characterization tests** for outcome scorers
+
+ ### Medium Priority
+ - [ ] **Consider reviewQuality composite** (balances thorough + efficient)
+ - [ ] **Enhance blockerResponseTime** matching logic (handle reassignments)
+ - [ ] **Document binary vs gradient scoring philosophy** in file headers
+
+ ### Low Priority
+ - [ ] **Refactor garbage input handling** in decompositionCoherence
+ - [ ] **Add percentile-based normalization** for time scorers
+ - [ ] **Create scorer usage dashboard** (track which scorers impact results)
+
+ ---
+
+ ## 🏆 CONCLUSION
+
+ **Overall Quality:** 🟢 GOOD
+
+ **Strengths:**
+ - Correct Evalite API usage (no `.scorer` property bugs)
+ - Thoughtful fallback strategies (realistic vs neutral)
+ - Good separation of concerns (discipline, outcome, compaction)
+ - LLM-as-judge for complex evaluation
+
+ **Weaknesses:**
+ - 4 unused scorers (38% dead code in coordinator-discipline.ts)
+ - Arbitrary normalization thresholds (no evidence-based calibration)
+ - Undocumented weight rationale (composite scorers)
+ - Lowest eval score (compaction-prompt 53%) not investigated
+
+ **Priority:** Focus on **removing unused scorers** and **investigating compaction-prompt failure** before shipping.
+
+ ---
+
+ **Analysis by:** CoolOcean
+ **Cell:** opencode-swarm-plugin--ys7z8-mjlk7jsrvls
+ **Epic:** opencode-swarm-plugin--ys7z8-mjlk7js9bt1
+ **Timestamp:** 2025-12-25T17:30:00Z