opencode-hive 0.9.0 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (36)
  1. package/README.md +10 -0
  2. package/dist/agents/architect.d.ts +12 -0
  3. package/dist/agents/forager.d.ts +12 -0
  4. package/dist/agents/hive.d.ts +6 -24
  5. package/dist/agents/hygienic.d.ts +12 -0
  6. package/dist/agents/index.d.ts +60 -0
  7. package/dist/agents/scout.d.ts +12 -0
  8. package/dist/agents/swarm.d.ts +12 -0
  9. package/dist/background/agent-gate.d.ts +64 -0
  10. package/dist/background/concurrency.d.ts +102 -0
  11. package/dist/background/index.d.ts +27 -0
  12. package/dist/background/manager.d.ts +154 -0
  13. package/dist/background/poller.d.ts +131 -0
  14. package/dist/background/store.d.ts +100 -0
  15. package/dist/background/types.d.ts +239 -0
  16. package/dist/index.js +5196 -2081
  17. package/dist/mcp/ast-grep.d.ts +2 -0
  18. package/dist/mcp/context7.d.ts +2 -0
  19. package/dist/mcp/grep-app.d.ts +2 -0
  20. package/dist/mcp/index.d.ts +2 -0
  21. package/dist/mcp/types.d.ts +12 -0
  22. package/dist/mcp/websearch.d.ts +2 -0
  23. package/dist/skills/builtin.d.ts +1 -1
  24. package/dist/skills/registry.generated.d.ts +1 -1
  25. package/dist/skills/types.d.ts +1 -1
  26. package/dist/tools/background-tools.d.ts +24 -0
  27. package/package.json +7 -1
  28. package/skills/brainstorming/SKILL.md +52 -0
  29. package/skills/dispatching-parallel-agents/SKILL.md +180 -0
  30. package/skills/executing-plans/SKILL.md +76 -0
  31. package/skills/systematic-debugging/SKILL.md +296 -0
  32. package/skills/test-driven-development/SKILL.md +371 -0
  33. package/skills/verification-before-completion/SKILL.md +139 -0
  34. package/skills/writing-plans/SKILL.md +116 -0
  35. package/dist/utils/agent-selector.d.ts +0 -29
  36. package/skills/hive-execution/SKILL.md +0 -60
package/skills/systematic-debugging/SKILL.md
@@ -0,0 +1,296 @@
---
name: systematic-debugging
description: Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
---

# Systematic Debugging

## Overview

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

**Core principle:** ALWAYS find the root cause before attempting fixes. Symptom fixes are failure.

**Violating the letter of this process is violating the spirit of debugging.**

## The Iron Law

```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```

If you haven't completed Phase 1, you cannot propose fixes.

## When to Use

Use for ANY technical issue:
- Test failures
- Bugs in production
- Unexpected behavior
- Performance problems
- Build failures
- Integration issues

**Use this ESPECIALLY when:**
- Under time pressure (emergencies make guessing tempting)
- "Just one quick fix" seems obvious
- You've already tried multiple fixes
- The previous fix didn't work
- You don't fully understand the issue

**Don't skip when:**
- The issue seems simple (simple bugs have root causes too)
- You're in a hurry (rushing guarantees rework)
- Your manager wants it fixed NOW (systematic is faster than thrashing)

## The Four Phases

You MUST complete each phase before proceeding to the next.

### Phase 1: Root Cause Investigation

**BEFORE attempting ANY fix:**

1. **Read Error Messages Carefully**
   - Don't skip past errors or warnings
   - They often contain the exact solution
   - Read stack traces completely
   - Note line numbers, file paths, error codes

2. **Reproduce Consistently**
   - Can you trigger it reliably?
   - What are the exact steps?
   - Does it happen every time?
   - If not reproducible → gather more data, don't guess

3. **Check Recent Changes**
   - What changed that could cause this?
   - Git diff, recent commits
   - New dependencies, config changes
   - Environmental differences

4. **Gather Evidence in Multi-Component Systems**

   **WHEN the system has multiple components (CI → build → signing, API → service → database):**

   **BEFORE proposing fixes, add diagnostic instrumentation:**
   ```
   For EACH component boundary:
   - Log what data enters the component
   - Log what data exits the component
   - Verify environment/config propagation
   - Check state at each layer

   Run once to gather evidence showing WHERE it breaks
   THEN analyze the evidence to identify the failing component
   THEN investigate that specific component
   ```

   **Example (multi-layer system):**
   ```bash
   # Layer 1: Workflow
   echo "=== Secrets available in workflow: ==="
   # Report SET/UNSET without printing the secret itself
   echo "IDENTITY: $([ -n "$IDENTITY" ] && echo SET || echo UNSET)"

   # Layer 2: Build script
   echo "=== Env vars in build script: ==="
   env | grep -q IDENTITY && echo "IDENTITY in environment" || echo "IDENTITY not in environment"

   # Layer 3: Signing script
   echo "=== Keychain state: ==="
   security list-keychains
   security find-identity -v

   # Layer 4: Actual signing
   codesign --sign "$IDENTITY" --verbose=4 "$APP"
   ```

   **This reveals:** which layer fails (secrets → workflow ✓, workflow → build ✗)

5. **Trace Data Flow**

   **WHEN the error is deep in the call stack:**

   See `root-cause-tracing.md` in this directory for the complete backward-tracing technique.

   **Quick version:**
   - Where does the bad value originate?
   - What called this with the bad value?
   - Keep tracing up until you find the source
   - Fix at the source, not at the symptom

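The quick version above amounts to instrumenting each layer upward until you find the first one where the value is already wrong. A minimal hypothetical sketch (the three layer names are invented for illustration, not taken from any real codebase):

```typescript
// Hypothetical layers: config loader -> page builder -> renderer.
// The crash surfaces in render(), but the root cause is loadConfig().
function loadConfig(): { title?: string } {
  return {}; // BUG: title never set -- the true origin of the bad value
}

function buildPage(config: { title?: string }): string | undefined {
  console.log("buildPage received title:", config.title); // trace point 2
  return config.title;
}

function render(title: string | undefined): string {
  console.log("render received title:", title); // trace point 1 (symptom)
  if (title === undefined) throw new Error("title is undefined");
  return `<h1>${title}</h1>`;
}
```

Tracing upward shows the title is already `undefined` when it enters `buildPage`, so the fix belongs in `loadConfig`, not a null-check patch in `render`.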
### Phase 2: Pattern Analysis

**Find the pattern before fixing:**

1. **Find Working Examples**
   - Locate similar working code in the same codebase
   - What works that's similar to what's broken?

2. **Compare Against References**
   - If implementing a pattern, read the reference implementation COMPLETELY
   - Don't skim - read every line
   - Understand the pattern fully before applying it

3. **Identify Differences**
   - What's different between working and broken?
   - List every difference, however small
   - Don't assume "that can't matter"

4. **Understand Dependencies**
   - What other components does this need?
   - What settings, config, environment?
   - What assumptions does it make?

### Phase 3: Hypothesis and Testing

**Scientific method:**

1. **Form a Single Hypothesis**
   - State it clearly: "I think X is the root cause because Y"
   - Write it down
   - Be specific, not vague

2. **Test Minimally**
   - Make the SMALLEST possible change to test the hypothesis
   - One variable at a time
   - Don't fix multiple things at once

3. **Verify Before Continuing**
   - Did it work? Yes → Phase 4
   - Didn't work? Form a NEW hypothesis
   - DON'T add more fixes on top

4. **When You Don't Know**
   - Say "I don't understand X"
   - Don't pretend to know
   - Ask for help
   - Research more

### Phase 4: Implementation

**Fix the root cause, not the symptom:**

1. **Create a Failing Test Case**
   - Simplest possible reproduction
   - Automated test if possible
   - One-off test script if there's no framework
   - MUST exist before fixing
   - Use the `hive_skill:test-driven-development` skill for writing proper failing tests

2. **Implement a Single Fix**
   - Address the root cause you identified
   - ONE change at a time
   - No "while I'm here" improvements
   - No bundled refactoring

3. **Verify the Fix**
   - Does the test pass now?
   - Are any other tests broken?
   - Is the issue actually resolved?

4. **If the Fix Doesn't Work**
   - STOP
   - Count: how many fixes have you tried?
   - If < 3: return to Phase 1 and re-analyze with the new information
   - **If ≥ 3: STOP and question the architecture (step 5 below)**
   - DON'T attempt fix #4 without an architectural discussion

5. **If 3+ Fixes Failed: Question the Architecture**

   **Pattern indicating an architectural problem:**
   - Each fix reveals new shared state or coupling in a different place
   - Fixes require "massive refactoring" to implement
   - Each fix creates new symptoms elsewhere

   **STOP and question fundamentals:**
   - Is this pattern fundamentally sound?
   - Are we sticking with it through sheer inertia?
   - Should we refactor the architecture instead of continuing to fix symptoms?

   **Discuss with your human partner before attempting more fixes.**

   This is NOT a failed hypothesis - this is a wrong architecture.

## Red Flags - STOP and Follow Process

If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "Add multiple changes, run tests"
- "Skip the test, I'll manually verify"
- "It's probably X, let me fix that"
- "I don't fully understand this, but it might work"
- "The pattern says X, but I'll adapt it differently"
- "Here are the main problems: [lists fixes without investigation]"
- Proposing solutions before tracing data flow
- **"One more fix attempt" (when you've already tried 2+)**
- **Each fix reveals a new problem in a different place**

**ALL of these mean: STOP. Return to Phase 1.**

**If 3+ fixes failed:** question the architecture (see Phase 4, step 5)

## Your Human Partner's Signals You're Doing It Wrong

**Watch for these redirections:**
- "Is that not happening?" - You assumed without verifying
- "Will it show us...?" - You should have added evidence gathering
- "Stop guessing" - You're proposing fixes without understanding
- "Ultrathink this" - Question fundamentals, not just symptoms
- "We're stuck?" (frustrated) - Your approach isn't working

**When you see these:** STOP. Return to Phase 1.

## Common Rationalizations

| Excuse | Reality |
|--------|---------|
| "Issue is simple, don't need process" | Simple issues have root causes too. The process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | The first fix sets the pattern. Do it right from the start. |
| "I'll write the test after confirming the fix works" | Untested fixes don't stick. Test-first proves it. |
| "Multiple fixes at once saves time" | You can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding the root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question the pattern, don't fix again. |

## Quick Reference

| Phase | Key Activities | Success Criteria |
|-------|---------------|------------------|
| **1. Root Cause** | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| **2. Pattern** | Find working examples, compare | Identify differences |
| **3. Hypothesis** | Form theory, test minimally | Confirmed or new hypothesis |
| **4. Implementation** | Create test, fix, verify | Bug resolved, tests pass |

## When the Process Reveals "No Root Cause"

If systematic investigation reveals the issue is truly environmental, timing-dependent, or external:

1. You've completed the process
2. Document what you investigated
3. Implement appropriate handling (retry, timeout, error message)
4. Add monitoring/logging for future investigation

**But:** 95% of "no root cause" cases are incomplete investigation.

## Supporting Techniques

These techniques are part of systematic debugging and available in this directory:

- **`root-cause-tracing.md`** - Trace bugs backward through the call stack to find the original trigger
- **`defense-in-depth.md`** - Add validation at multiple layers after finding the root cause
- **`condition-based-waiting.md`** - Replace arbitrary timeouts with condition polling

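As a sketch of the condition-based-waiting idea (this is not the contents of `condition-based-waiting.md`; the `waitFor` helper name is invented for illustration): poll for the condition you actually care about instead of sleeping for a fixed, arbitrary duration.

```typescript
// Poll a predicate until it holds or a deadline passes, rather than
// sleeping a fixed amount and hoping the system is ready by then.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 5000,
  intervalMs = 50
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) {
      throw new Error(`condition not met within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Usage would look like `await waitFor(() => server.isReady())` in place of `await sleep(2000)`: the wait ends as soon as the condition holds, and a genuine failure surfaces as a timeout error instead of a flaky pass.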
**Related skills:**
- **hive_skill:test-driven-development** - For creating the failing test case (Phase 4, step 1)
- **hive_skill:verification-before-completion** - Verify the fix worked before claiming success

## Real-World Impact

From debugging sessions:
- Systematic approach: 15-30 minutes to fix
- Random-fixes approach: 2-3 hours of thrashing
- First-time fix rate: 95% vs 40%
- New bugs introduced: near zero vs common
package/skills/test-driven-development/SKILL.md
@@ -0,0 +1,371 @@
---
name: test-driven-development
description: Use when implementing any feature or bugfix, before writing implementation code
---

# Test-Driven Development (TDD)

## Overview

Write the test first. Watch it fail. Write minimal code to pass.

**Core principle:** If you didn't watch the test fail, you don't know if it tests the right thing.

**Violating the letter of the rules is violating the spirit of the rules.**

## When to Use

**Always:**
- New features
- Bug fixes
- Refactoring
- Behavior changes

**Exceptions (ask your human partner):**
- Throwaway prototypes
- Generated code
- Configuration files

Thinking "skip TDD just this once"? Stop. That's rationalization.

## The Iron Law

```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```

Wrote code before the test? Delete it. Start over.

**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete

Implement fresh from the tests. Period.

## Red-Green-Refactor

```dot
digraph tdd_cycle {
  rankdir=LR;
  red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
  verify_red [label="Verify fails\ncorrectly", shape=diamond];
  green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
  verify_green [label="Verify passes\nAll green", shape=diamond];
  refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
  next [label="Next", shape=ellipse];

  red -> verify_red;
  verify_red -> green [label="yes"];
  verify_red -> red [label="wrong\nfailure"];
  green -> verify_green;
  verify_green -> refactor [label="yes"];
  verify_green -> green [label="no"];
  refactor -> verify_green [label="stay\ngreen"];
  verify_green -> next;
  next -> red;
}
```

+
71
+ ### RED - Write Failing Test
72
+
73
+ Write one minimal test showing what should happen.
74
+
75
+ <Good>
76
+ ```typescript
77
+ test('retries failed operations 3 times', async () => {
78
+ let attempts = 0;
79
+ const operation = () => {
80
+ attempts++;
81
+ if (attempts < 3) throw new Error('fail');
82
+ return 'success';
83
+ };
84
+
85
+ const result = await retryOperation(operation);
86
+
87
+ expect(result).toBe('success');
88
+ expect(attempts).toBe(3);
89
+ });
90
+ ```
91
+ Clear name, tests real behavior, one thing
92
+ </Good>
93
+
94
+ <Bad>
95
+ ```typescript
96
+ test('retry works', async () => {
97
+ const mock = jest.fn()
98
+ .mockRejectedValueOnce(new Error())
99
+ .mockRejectedValueOnce(new Error())
100
+ .mockResolvedValueOnce('success');
101
+ await retryOperation(mock);
102
+ expect(mock).toHaveBeenCalledTimes(3);
103
+ });
104
+ ```
105
+ Vague name, tests mock not code
106
+ </Bad>
107
+
108
+ **Requirements:**
109
+ - One behavior
110
+ - Clear name
111
+ - Real code (no mocks unless unavoidable)
112
+
113
+ ### Verify RED - Watch It Fail
114
+
115
+ **MANDATORY. Never skip.**
116
+
117
+ ```bash
118
+ npm test path/to/test.test.ts
119
+ ```
120
+
121
+ Confirm:
122
+ - Test fails (not errors)
123
+ - Failure message is expected
124
+ - Fails because feature missing (not typos)
125
+
126
+ **Test passes?** You're testing existing behavior. Fix test.
127
+
128
+ **Test errors?** Fix error, re-run until it fails correctly.
129
+
130
+ ### GREEN - Minimal Code
131
+
132
+ Write simplest code to pass the test.
133
+
134
+ <Good>
135
+ ```typescript
136
+ async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
137
+ for (let i = 0; i < 3; i++) {
138
+ try {
139
+ return await fn();
140
+ } catch (e) {
141
+ if (i === 2) throw e;
142
+ }
143
+ }
144
+ throw new Error('unreachable');
145
+ }
146
+ ```
147
+ Just enough to pass
148
+ </Good>
149
+
150
+ <Bad>
151
+ ```typescript
152
+ async function retryOperation<T>(
153
+ fn: () => Promise<T>,
154
+ options?: {
155
+ maxRetries?: number;
156
+ backoff?: 'linear' | 'exponential';
157
+ onRetry?: (attempt: number) => void;
158
+ }
159
+ ): Promise<T> {
160
+ // YAGNI
161
+ }
162
+ ```
163
+ Over-engineered
164
+ </Bad>
165
+
166
+ Don't add features, refactor other code, or "improve" beyond the test.
167
+
168
+ ### Verify GREEN - Watch It Pass
169
+
170
+ **MANDATORY.**
171
+
172
+ ```bash
173
+ npm test path/to/test.test.ts
174
+ ```
175
+
176
+ Confirm:
177
+ - Test passes
178
+ - Other tests still pass
179
+ - Output pristine (no errors, warnings)
180
+
181
+ **Test fails?** Fix code, not test.
182
+
183
+ **Other tests fail?** Fix now.
184
+
185
+ ### REFACTOR - Clean Up
186
+
187
+ After green only:
188
+ - Remove duplication
189
+ - Improve names
190
+ - Extract helpers
191
+
192
+ Keep tests green. Don't add behavior.
193
+
194
+ ### Repeat
195
+
196
+ Next failing test for next feature.
197
+
198
+ ## Good Tests
199
+
200
+ | Quality | Good | Bad |
201
+ |---------|------|-----|
202
+ | **Minimal** | One thing. "and" in name? Split it. | `test('validates email and domain and whitespace')` |
203
+ | **Clear** | Name describes behavior | `test('test1')` |
204
+ | **Shows intent** | Demonstrates desired API | Obscures what code should do |
205
+
206
+ ## Why Order Matters
207
+
208
+ **"I'll write tests after to verify it works"**
209
+
210
+ Tests written after code pass immediately. Passing immediately proves nothing:
211
+ - Might test wrong thing
212
+ - Might test implementation, not behavior
213
+ - Might miss edge cases you forgot
214
+ - You never saw it catch the bug
215
+
216
+ Test-first forces you to see the test fail, proving it actually tests something.
217
+
218
+ **"I already manually tested all the edge cases"**
219
+
220
+ Manual testing is ad-hoc. You think you tested everything but:
221
+ - No record of what you tested
222
+ - Can't re-run when code changes
223
+ - Easy to forget cases under pressure
224
+ - "It worked when I tried it" ≠ comprehensive
225
+
226
+ Automated tests are systematic. They run the same way every time.
227
+
228
+ **"Deleting X hours of work is wasteful"**
229
+
230
+ Sunk cost fallacy. The time is already gone. Your choice now:
231
+ - Delete and rewrite with TDD (X more hours, high confidence)
232
+ - Keep it and add tests after (30 min, low confidence, likely bugs)
233
+
234
+ The "waste" is keeping code you can't trust. Working code without real tests is technical debt.
235
+
236
+ **"TDD is dogmatic, being pragmatic means adapting"**
237
+
238
+ TDD IS pragmatic:
239
+ - Finds bugs before commit (faster than debugging after)
240
+ - Prevents regressions (tests catch breaks immediately)
241
+ - Documents behavior (tests show how to use code)
242
+ - Enables refactoring (change freely, tests catch breaks)
243
+
244
+ "Pragmatic" shortcuts = debugging in production = slower.
245
+
246
+ **"Tests after achieve the same goals - it's spirit not ritual"**
247
+
248
+ No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"
249
+
250
+ Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones.
251
+
252
+ Tests-first force edge case discovery before implementing. Tests-after verify you remembered everything (you didn't).
253
+
254
+ 30 minutes of tests after ≠ TDD. You get coverage, lose proof tests work.
255
+
256
+ ## Common Rationalizations
257
+
258
+ | Excuse | Reality |
259
+ |--------|---------|
260
+ | "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
261
+ | "I'll test after" | Tests passing immediately prove nothing. |
262
+ | "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
263
+ | "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
264
+ | "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
265
+ | "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
266
+ | "Need to explore first" | Fine. Throw away exploration, start with TDD. |
267
+ | "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
268
+ | "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
269
+ | "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
270
+ | "Existing code has no tests" | You're improving it. Add tests for existing code. |
271
+
272
+ ## Red Flags - STOP and Start Over
273
+
274
+ - Code before test
275
+ - Test after implementation
276
+ - Test passes immediately
277
+ - Can't explain why test failed
278
+ - Tests added "later"
279
+ - Rationalizing "just this once"
280
+ - "I already manually tested it"
281
+ - "Tests after achieve the same purpose"
282
+ - "It's about spirit not ritual"
283
+ - "Keep as reference" or "adapt existing code"
284
+ - "Already spent X hours, deleting is wasteful"
285
+ - "TDD is dogmatic, I'm being pragmatic"
286
+ - "This is different because..."
287
+
288
+ **All of these mean: Delete code. Start over with TDD.**
289
+
290
+ ## Example: Bug Fix
291
+
292
+ **Bug:** Empty email accepted
293
+
294
+ **RED**
295
+ ```typescript
296
+ test('rejects empty email', async () => {
297
+ const result = await submitForm({ email: '' });
298
+ expect(result.error).toBe('Email required');
299
+ });
300
+ ```
301
+
302
+ **Verify RED**
303
+ ```bash
304
+ $ npm test
305
+ FAIL: expected 'Email required', got undefined
306
+ ```
307
+
308
+ **GREEN**
309
+ ```typescript
310
+ function submitForm(data: FormData) {
311
+ if (!data.email?.trim()) {
312
+ return { error: 'Email required' };
313
+ }
314
+ // ...
315
+ }
316
+ ```
317
+
318
+ **Verify GREEN**
319
+ ```bash
320
+ $ npm test
321
+ PASS
322
+ ```
323
+
324
+ **REFACTOR**
325
+ Extract validation for multiple fields if needed.
326
+
327
+ ## Verification Checklist
328
+
329
+ Before marking work complete:
330
+
331
+ - [ ] Every new function/method has a test
332
+ - [ ] Watched each test fail before implementing
333
+ - [ ] Each test failed for expected reason (feature missing, not typo)
334
+ - [ ] Wrote minimal code to pass each test
335
+ - [ ] All tests pass
336
+ - [ ] Output pristine (no errors, warnings)
337
+ - [ ] Tests use real code (mocks only if unavoidable)
338
+ - [ ] Edge cases and errors covered
339
+
340
+ Can't check all boxes? You skipped TDD. Start over.
341
+
342
+ ## When Stuck
343
+
344
+ | Problem | Solution |
345
+ |---------|----------|
346
+ | Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
347
+ | Test too complicated | Design too complicated. Simplify interface. |
348
+ | Must mock everything | Code too coupled. Use dependency injection. |
349
+ | Test setup huge | Extract helpers. Still complex? Simplify design. |
350
+
351
+ ## Debugging Integration
352
+
353
+ Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression.
354
+
355
+ Never fix bugs without a test.
356
+
357
+ ## Testing Anti-Patterns
358
+
359
+ When adding mocks or test utilities, read @testing-anti-patterns.md to avoid common pitfalls:
360
+ - Testing mock behavior instead of real behavior
361
+ - Adding test-only methods to production classes
362
+ - Mocking without understanding dependencies
363
+
364
+ ## Final Rule
365
+
366
+ ```
367
+ Production code → test exists and failed first
368
+ Otherwise → not TDD
369
+ ```
370
+
371
+ No exceptions without your human partner's permission.