@ia-ccun/code-agent-cli 0.0.15 → 0.0.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (75)
  1. package/bin/cli.js +153 -84
  2. package/config/agent/extensions/working-msg.ts +33 -8
  3. package/config/agent/models.json +41 -11
  4. package/config/agent/prompts/code-simplifier.md +52 -0
  5. package/config/agent/skills/brainstorming/SKILL.md +165 -0
  6. package/config/agent/skills/brainstorming/scripts/frame-template.html +214 -0
  7. package/config/agent/skills/brainstorming/scripts/helper.js +88 -0
  8. package/config/agent/skills/brainstorming/scripts/server.cjs +338 -0
  9. package/config/agent/skills/brainstorming/scripts/start-server.sh +153 -0
  10. package/config/agent/skills/brainstorming/scripts/stop-server.sh +55 -0
  11. package/config/agent/skills/brainstorming/spec-document-reviewer-prompt.md +49 -0
  12. package/config/agent/skills/brainstorming/visual-companion.md +286 -0
  13. package/config/agent/skills/dispatching-parallel-agents/SKILL.md +183 -0
  14. package/config/agent/skills/executing-plans/SKILL.md +71 -0
  15. package/config/agent/skills/finishing-a-development-branch/SKILL.md +201 -0
  16. package/config/agent/skills/owasp-security/SKILL.md +537 -0
  17. package/config/agent/skills/receiving-code-review/SKILL.md +214 -0
  18. package/config/agent/skills/requesting-code-review/SKILL.md +106 -0
  19. package/config/agent/skills/requesting-code-review/code-reviewer.md +146 -0
  20. package/config/agent/skills/skill-creator/SKILL.md +337 -213
  21. package/config/agent/skills/skill-creator/agents/analyzer.md +274 -0
  22. package/config/agent/skills/skill-creator/agents/comparator.md +202 -0
  23. package/config/agent/skills/skill-creator/agents/grader.md +223 -0
  24. package/config/agent/skills/skill-creator/assets/eval_review.html +146 -0
  25. package/config/agent/skills/skill-creator/eval-viewer/generate_review.py +471 -0
  26. package/config/agent/skills/skill-creator/eval-viewer/viewer.html +1325 -0
  27. package/config/agent/skills/skill-creator/references/schemas.md +430 -0
  28. package/config/agent/skills/skill-creator/scripts/__init__.py +0 -0
  29. package/config/agent/skills/skill-creator/scripts/aggregate_benchmark.py +401 -0
  30. package/config/agent/skills/skill-creator/scripts/generate_report.py +326 -0
  31. package/config/agent/skills/skill-creator/scripts/improve_description.py +248 -0
  32. package/config/agent/skills/skill-creator/scripts/package_skill.py +33 -7
  33. package/config/agent/skills/skill-creator/scripts/quick_validate.py +11 -3
  34. package/config/agent/skills/skill-creator/scripts/run_eval.py +310 -0
  35. package/config/agent/skills/skill-creator/scripts/run_loop.py +332 -0
  36. package/config/agent/skills/skill-creator/scripts/utils.py +47 -0
  37. package/config/agent/skills/subagent-driven-development/SKILL.md +278 -0
  38. package/config/agent/skills/subagent-driven-development/code-quality-reviewer-prompt.md +26 -0
  39. package/config/agent/skills/subagent-driven-development/implementer-prompt.md +113 -0
  40. package/config/agent/skills/subagent-driven-development/spec-reviewer-prompt.md +61 -0
  41. package/config/agent/skills/systematic-debugging/CREATION-LOG.md +119 -0
  42. package/config/agent/skills/systematic-debugging/SKILL.md +297 -0
  43. package/config/agent/skills/systematic-debugging/condition-based-waiting-example.ts +158 -0
  44. package/config/agent/skills/systematic-debugging/condition-based-waiting.md +115 -0
  45. package/config/agent/skills/systematic-debugging/defense-in-depth.md +122 -0
  46. package/config/agent/skills/systematic-debugging/find-polluter.sh +63 -0
  47. package/config/agent/skills/systematic-debugging/root-cause-tracing.md +169 -0
  48. package/config/agent/skills/systematic-debugging/test-academic.md +14 -0
  49. package/config/agent/skills/systematic-debugging/test-pressure-1.md +58 -0
  50. package/config/agent/skills/systematic-debugging/test-pressure-2.md +68 -0
  51. package/config/agent/skills/systematic-debugging/test-pressure-3.md +69 -0
  52. package/config/agent/skills/test-driven-development/SKILL.md +372 -0
  53. package/config/agent/skills/test-driven-development/testing-anti-patterns.md +299 -0
  54. package/config/agent/skills/using-git-worktrees/SKILL.md +219 -0
  55. package/config/agent/skills/using-superpowers/SKILL.md +116 -0
  56. package/config/agent/skills/using-superpowers/references/codex-tools.md +25 -0
  57. package/config/agent/skills/using-superpowers/references/gemini-tools.md +33 -0
  58. package/config/agent/skills/verification-before-completion/SKILL.md +140 -0
  59. package/config/agent/skills/writing-plans/SKILL.md +146 -0
  60. package/config/agent/skills/writing-plans/plan-document-reviewer-prompt.md +49 -0
  61. package/config/agent/skills/writing-skills/SKILL.md +667 -0
  62. package/config/agent/skills/writing-skills/anthropic-best-practices.md +1150 -0
  63. package/config/agent/skills/writing-skills/examples/CLAUDE_MD_TESTING.md +189 -0
  64. package/config/agent/skills/writing-skills/graphviz-conventions.dot +172 -0
  65. package/config/agent/skills/writing-skills/persuasion-principles.md +187 -0
  66. package/config/agent/skills/writing-skills/render-graphs.js +168 -0
  67. package/config/agent/skills/writing-skills/testing-skills-with-subagents.md +384 -0
  68. package/package.json +14 -7
  69. package/scripts/postinstall.js +4 -18
  70. package/config/agent/skills/github/SKILL.md +0 -47
  71. package/config/agent/skills/owasp/SKILL.md +0 -169
  72. package/config/agent/skills/pua/SKILL.md +0 -364
  73. package/config/agent/skills/skill-creator/references/output-patterns.md +0 -82
  74. package/config/agent/skills/skill-creator/references/workflows.md +0 -28
  75. package/config/agent/skills/skill-creator/scripts/init_skill.py +0 -303
@@ -0,0 +1,372 @@
---
name: test-driven-development
author: xujianjiang
description: Use when implementing any feature or bugfix, before writing implementation code
---

# Test-Driven Development (TDD)

## Overview

Write the test first. Watch it fail. Write minimal code to pass.

**Core principle:** If you didn't watch the test fail, you don't know whether it tests the right thing.

**Violating the letter of the rules is violating the spirit of the rules.**

## When to Use

**Always:**
- New features
- Bug fixes
- Refactoring
- Behavior changes

**Exceptions (ask your human partner):**
- Throwaway prototypes
- Generated code
- Configuration files

Thinking "skip TDD just this once"? Stop. That's a rationalization.

## The Iron Law

```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```

Wrote code before the test? Delete it. Start over.

**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete

Implement fresh from the tests. Period.
## Red-Green-Refactor

```dot
digraph tdd_cycle {
    rankdir=LR;
    red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
    verify_red [label="Verify fails\ncorrectly", shape=diamond];
    green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
    verify_green [label="Verify passes\nAll green", shape=diamond];
    refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
    next [label="Next", shape=ellipse];

    red -> verify_red;
    verify_red -> green [label="yes"];
    verify_red -> red [label="wrong\nfailure"];
    green -> verify_green;
    verify_green -> refactor [label="yes"];
    verify_green -> green [label="no"];
    refactor -> verify_green [label="stay\ngreen"];
    verify_green -> next;
    next -> red;
}
```
### RED - Write Failing Test

Write one minimal test showing what should happen.

<Good>
```typescript
test('retries failed operations 3 times', async () => {
  let attempts = 0;
  const operation = () => {
    attempts++;
    if (attempts < 3) throw new Error('fail');
    return 'success';
  };

  const result = await retryOperation(operation);

  expect(result).toBe('success');
  expect(attempts).toBe(3);
});
```
Clear name; exercises real behavior; tests one thing
</Good>

<Bad>
```typescript
test('retry works', async () => {
  const mock = jest.fn()
    .mockRejectedValueOnce(new Error())
    .mockRejectedValueOnce(new Error())
    .mockResolvedValueOnce('success');
  await retryOperation(mock);
  expect(mock).toHaveBeenCalledTimes(3);
});
```
Vague name; tests the mock, not the code
</Bad>

**Requirements:**
- One behavior
- Clear name
- Real code (no mocks unless unavoidable)
### Verify RED - Watch It Fail

**MANDATORY. Never skip.**

```bash
npm test path/to/test.test.ts
```

Confirm:
- The test fails (rather than errors)
- The failure message is what you expected
- It fails because the feature is missing (not because of a typo)

**Test passes?** You're testing existing behavior. Fix the test.

**Test errors?** Fix the error and re-run until it fails correctly.
### GREEN - Minimal Code

Write the simplest code that passes the test.

<Good>
```typescript
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
  for (let i = 0; i < 3; i++) {
    try {
      return await fn();
    } catch (e) {
      if (i === 2) throw e;
    }
  }
  throw new Error('unreachable');
}
```
Just enough to pass
</Good>

<Bad>
```typescript
async function retryOperation<T>(
  fn: () => Promise<T>,
  options?: {
    maxRetries?: number;
    backoff?: 'linear' | 'exponential';
    onRetry?: (attempt: number) => void;
  }
): Promise<T> {
  // YAGNI
}
```
Over-engineered
</Bad>

Don't add features, refactor other code, or "improve" beyond what the test demands.
### Verify GREEN - Watch It Pass

**MANDATORY.**

```bash
npm test path/to/test.test.ts
```

Confirm:
- The new test passes
- All other tests still pass
- Output is pristine (no errors, no warnings)

**Test fails?** Fix the code, not the test.

**Other tests fail?** Fix them now.
### REFACTOR - Clean Up

Only after green:
- Remove duplication
- Improve names
- Extract helpers

Keep the tests green. Don't add behavior.

### Repeat

Write the next failing test for the next feature.

## Good Tests

| Quality | Good | Bad |
|---------|------|-----|
| **Minimal** | Tests one thing. "and" in the name? Split it. | `test('validates email and domain and whitespace')` |
| **Clear** | Name describes the behavior | `test('test1')` |
| **Shows intent** | Demonstrates the desired API | Obscures what the code should do |
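To make the "Minimal" quality concrete, here is a sketch of splitting an "and" test into one behavior per test. The `validateEmail` helper is hypothetical, used only for illustration:

```typescript
// Hypothetical validator, used only to illustrate one-behavior-per-test.
function validateEmail(email: string): string | null {
  if (!email.trim()) return 'Email required';
  if (!email.includes('@')) return 'Invalid email';
  return null;
}

// Instead of one test('validates email and domain and whitespace'),
// each behavior gets its own minimal, clearly named check:
const whitespaceOnly = validateEmail('   ');   // rejects whitespace-only input
const missingAt = validateEmail('alice');      // rejects a missing @
const valid = validateEmail('a@b.example');    // accepts a plausible address
```

Each failing case then points at exactly one broken behavior instead of one vague combined test.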
## Why Order Matters

**"I'll write tests after to verify it works"**

Tests written after the code pass immediately, and passing immediately proves nothing:
- They might test the wrong thing
- They might test the implementation, not the behavior
- They might miss edge cases you forgot
- You never saw them catch a bug

Test-first forces you to see the test fail, proving it actually tests something.

**"I already manually tested all the edge cases"**

Manual testing is ad-hoc. You think you tested everything, but:
- There's no record of what you tested
- You can't re-run it when the code changes
- It's easy to forget cases under pressure
- "It worked when I tried it" ≠ comprehensive

Automated tests are systematic. They run the same way every time.

**"Deleting X hours of work is wasteful"**

Sunk cost fallacy. The time is already gone. Your choice now:
- Delete and rewrite with TDD (X more hours, high confidence)
- Keep it and add tests after (30 minutes, low confidence, likely bugs)

The real waste is keeping code you can't trust. Working code without real tests is technical debt.

**"TDD is dogmatic; being pragmatic means adapting"**

TDD IS pragmatic:
- It finds bugs before commit (faster than debugging after)
- It prevents regressions (tests catch breaks immediately)
- It documents behavior (tests show how to use the code)
- It enables refactoring (change freely; tests catch breaks)

"Pragmatic" shortcuts = debugging in production = slower.

**"Tests after achieve the same goals - it's spirit, not ritual"**

No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"

Tests-after are biased by your implementation. You test what you built, not what's required. You verify the edge cases you remembered, not the ones you would have discovered.

Tests-first force edge case discovery before implementing. Tests-after verify that you remembered everything (you didn't).

30 minutes of tests after ≠ TDD. You get coverage but lose the proof that the tests work.
## Common Rationalizations

| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. The test takes 30 seconds. |
| "I'll test after" | Tests that pass immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away the exploration, then start with TDD. |
| "Test hard = design unclear" | Listen to the test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD is faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual testing doesn't prove edge cases. You'll re-test on every change. |
| "Existing code has no tests" | You're improving it. Add tests for the existing code. |
## Red Flags - STOP and Start Over

- Code before test
- Test written after the implementation
- Test passes immediately
- Can't explain why the test failed
- Tests added "later"
- Rationalizing "just this once"
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit, not ritual"
- "Keep as reference" or "adapt existing code"
- "Already spent X hours, deleting is wasteful"
- "TDD is dogmatic, I'm being pragmatic"
- "This is different because..."

**All of these mean: Delete the code. Start over with TDD.**
## Example: Bug Fix

**Bug:** An empty email is accepted

**RED**
```typescript
test('rejects empty email', async () => {
  const result = await submitForm({ email: '' });
  expect(result.error).toBe('Email required');
});
```

**Verify RED**
```bash
$ npm test
FAIL: expected 'Email required', got undefined
```

**GREEN**
```typescript
function submitForm(data: FormData) {
  if (!data.email?.trim()) {
    return { error: 'Email required' };
  }
  // ...
}
```

**Verify GREEN**
```bash
$ npm test
PASS
```

**REFACTOR**
Extract the validation for multiple fields if needed.
## Verification Checklist

Before marking work complete:

- [ ] Every new function/method has a test
- [ ] Watched each test fail before implementing
- [ ] Each test failed for the expected reason (feature missing, not a typo)
- [ ] Wrote minimal code to pass each test
- [ ] All tests pass
- [ ] Output is pristine (no errors, no warnings)
- [ ] Tests use real code (mocks only if unavoidable)
- [ ] Edge cases and errors are covered

Can't check all the boxes? You skipped TDD. Start over.
## When Stuck

| Problem | Solution |
|---------|----------|
| Don't know how to test | Write the wished-for API. Write the assertion first. Ask your human partner. |
| Test too complicated | The design is too complicated. Simplify the interface. |
| Must mock everything | The code is too coupled. Use dependency injection. |
| Test setup huge | Extract helpers. Still complex? Simplify the design. |
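The "must mock everything" problem usually points at hidden dependencies. A minimal dependency-injection sketch (the `Clock` and `Throttler` names are hypothetical, not from this codebase): injecting the clock lets the test substitute a trivial fake instead of mocking `Date.now`.

```typescript
// Hypothetical example: a throttler that is hard to test when it reads
// the system clock directly, easy when the clock is injected.
interface Clock {
  now(): number;
}

class Throttler {
  private last = -Infinity;
  // Inject the dependency instead of calling Date.now() inline.
  constructor(private clock: Clock, private intervalMs: number) {}

  tryRun(): boolean {
    const t = this.clock.now();
    if (t - this.last < this.intervalMs) return false;
    this.last = t;
    return true;
  }
}

// In tests: a fake clock, no mocking framework needed.
let t = 0;
const fakeClock: Clock = { now: () => t };
const throttler = new Throttler(fakeClock, 1000);

const first = throttler.tryRun();   // allowed
const second = throttler.tryRun();  // same instant, throttled
t = 1000;
const third = throttler.tryRun();   // interval elapsed, allowed again
```

Production code passes a real clock (`{ now: () => Date.now() }`); tests control time directly.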
## Debugging Integration

Found a bug? Write a failing test that reproduces it, then follow the TDD cycle. The test proves the fix and prevents regression.

Never fix bugs without a test.
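As a sketch of that cycle on a concrete bug (the `parsePrice` helper and its falsy-zero bug are hypothetical): the failing test reproduces the bug first, then stays in the suite as a regression guard after the fix.

```typescript
// Hypothetical bug: parsePrice('$0') returned null because 0 is falsy
// (the buggy version ended with `return Number(match[1]) || null;`).
function parsePrice(input: string): number | null {
  const match = input.match(/^\$(\d+(?:\.\d{2})?)$/);
  if (!match) return null;
  return Number(match[1]); // fixed: 0 is a valid price
}

// The regression test, written first, failed against the buggy version:
const zeroPrice = parsePrice('$0');       // the reproduced bug case
const normalPrice = parsePrice('$19.99'); // still works after the fix
const noDollarSign = parsePrice('19.99'); // malformed input still rejected
```

If the fix ever regresses, the `$0` case fails again immediately.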
## Testing Anti-Patterns

When adding mocks or test utilities, read @testing-anti-patterns.md to avoid common pitfalls:
- Testing mock behavior instead of real behavior
- Adding test-only methods to production classes
- Mocking without understanding dependencies

## Final Rule

```
Production code → test exists and failed first
Otherwise → not TDD
```

No exceptions without your human partner's permission.
@@ -0,0 +1,299 @@
# Testing Anti-Patterns

**Load this reference when:** writing or changing tests, adding mocks, or tempted to add test-only methods to production code.

## Overview

Tests must verify real behavior, not mock behavior. Mocks are a means to isolate, not the thing being tested.

**Core principle:** Test what the code does, not what the mocks do.

**Following strict TDD prevents these anti-patterns.**

## The Iron Laws

```
1. NEVER test mock behavior
2. NEVER add test-only methods to production classes
3. NEVER mock without understanding dependencies
```
## Anti-Pattern 1: Testing Mock Behavior

**The violation:**
```typescript
// ❌ BAD: Testing that the mock exists
test('renders sidebar', () => {
  render(<Page />);
  expect(screen.getByTestId('sidebar-mock')).toBeInTheDocument();
});
```

**Why this is wrong:**
- You're verifying that the mock works, not that the component works
- The test passes when the mock is present and fails when it's not
- It tells you nothing about real behavior

**Your human partner's correction:** "Are we testing the behavior of a mock?"

**The fix:**
```typescript
// ✅ GOOD: Test the real component or don't mock it
test('renders sidebar', () => {
  render(<Page />); // Don't mock the sidebar
  expect(screen.getByRole('navigation')).toBeInTheDocument();
});

// OR if the sidebar must be mocked for isolation:
// Don't assert on the mock - test Page's behavior with the sidebar present
```

### Gate Function

```
BEFORE asserting on any mock element:
  Ask: "Am I testing real component behavior or just mock existence?"

  IF testing mock existence:
    STOP - Delete the assertion or unmock the component

  Test real behavior instead
```
## Anti-Pattern 2: Test-Only Methods in Production

**The violation:**
```typescript
// ❌ BAD: destroy() is only used in tests
class Session {
  async destroy() { // Looks like production API!
    await this._workspaceManager?.destroyWorkspace(this.id);
    // ... cleanup
  }
}

// In tests
afterEach(() => session.destroy());
```

**Why this is wrong:**
- The production class is polluted with test-only code
- Dangerous if accidentally called in production
- Violates YAGNI and separation of concerns
- Confuses object lifecycle with entity lifecycle

**The fix:**
```typescript
// ✅ GOOD: Test utilities handle test cleanup
// Session has no destroy() - it's stateless in production

// In test-utils/
export async function cleanupSession(session: Session) {
  const workspace = session.getWorkspaceInfo();
  if (workspace) {
    await workspaceManager.destroyWorkspace(workspace.id);
  }
}

// In tests
afterEach(() => cleanupSession(session));
```

### Gate Function

```
BEFORE adding any method to a production class:
  Ask: "Is this only used by tests?"

  IF yes:
    STOP - Don't add it
    Put it in test utilities instead

  Ask: "Does this class own this resource's lifecycle?"

  IF no:
    STOP - Wrong class for this method
```
## Anti-Pattern 3: Mocking Without Understanding

**The violation:**
```typescript
// ❌ BAD: The mock breaks the test's logic
test('detects duplicate server', async () => {
  // The mock prevents the config write that the test depends on!
  vi.mock('ToolCatalog', () => ({
    discoverAndCacheTools: vi.fn().mockResolvedValue(undefined)
  }));

  await addServer(config);
  await addServer(config); // Should throw - but won't!
});
```

**Why this is wrong:**
- The mocked method had a side effect the test depended on (writing config)
- Over-mocking to "be safe" breaks actual behavior
- The test passes for the wrong reason or fails mysteriously

**The fix:**
```typescript
// ✅ GOOD: Mock at the correct level
test('detects duplicate server', async () => {
  // Mock the slow part, preserve the behavior the test needs
  vi.mock('MCPServerManager'); // Just mock slow server startup

  await addServer(config); // Config written
  await addServer(config); // Duplicate detected ✓
});
```

### Gate Function

```
BEFORE mocking any method:
  STOP - Don't mock yet

  1. Ask: "What side effects does the real method have?"
  2. Ask: "Does this test depend on any of those side effects?"
  3. Ask: "Do I fully understand what this test needs?"

  IF the test depends on side effects:
    Mock at a lower level (the actual slow/external operation)
    OR use test doubles that preserve the necessary behavior
    NOT the high-level method the test depends on

  IF unsure what the test depends on:
    Run the test with the real implementation FIRST
    Observe what actually needs to happen
    THEN add minimal mocking at the right level

  Red flags:
  - "I'll mock this to be safe"
  - "This might be slow, better mock it"
  - Mocking without understanding the dependency chain
```
## Anti-Pattern 4: Incomplete Mocks

**The violation:**
```typescript
// ❌ BAD: Partial mock - only the fields you think you need
const mockResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' }
  // Missing: metadata that downstream code uses
};

// Later: breaks when code accesses response.metadata.requestId
```

**Why this is wrong:**
- **Partial mocks hide structural assumptions** - You only mocked the fields you know about
- **Downstream code may depend on fields you didn't include** - Silent failures
- **Tests pass but integration fails** - The mock is incomplete; the real API is complete
- **False confidence** - The test proves nothing about real behavior

**The Iron Rule:** Mock the COMPLETE data structure as it exists in reality, not just the fields your immediate test uses.

**The fix:**
```typescript
// ✅ GOOD: Mirror the real API's completeness
const mockResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' },
  metadata: { requestId: 'req-789', timestamp: 1234567890 }
  // All fields the real API returns
};
```

### Gate Function

```
BEFORE creating mock responses:
  Check: "What fields does the real API response contain?"

  Actions:
  1. Examine an actual API response from docs/examples
  2. Include ALL fields the system might consume downstream
  3. Verify the mock matches the real response schema completely

  Critical:
  If you're creating a mock, you must understand the ENTIRE structure
  Partial mocks fail silently when code depends on omitted fields

  If uncertain: Include all documented fields
```
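In TypeScript, one way to enforce this gate at compile time (the `ApiResponse` type is hypothetical, standing in for whatever the real schema is) is to annotate the mock with the real response type, so an omitted field is a compile error rather than a silent gap:

```typescript
// Hypothetical response type mirroring the real API schema.
interface ApiResponse {
  status: 'success' | 'error';
  data: { userId: string; name: string };
  metadata: { requestId: string; timestamp: number };
}

// Typing the mock as ApiResponse makes omitted fields a compile error,
// so the mock cannot silently drift from the real schema.
const typedMockResponse: ApiResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' },
  metadata: { requestId: 'req-789', timestamp: 1234567890 },
};

// Downstream code that reads metadata now works against the mock too.
const requestId = typedMockResponse.metadata.requestId;
```

An untyped object literal would have compiled with `metadata` missing; the annotation moves the failure from integration time to compile time.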
## Anti-Pattern 5: Integration Tests as Afterthought

**The violation:**
```
✅ Implementation complete
❌ No tests written
"Ready for testing"
```

**Why this is wrong:**
- Testing is part of the implementation, not an optional follow-up
- TDD would have caught this
- You can't claim completeness without tests

**The fix:**
```
TDD cycle:
1. Write a failing test
2. Implement to pass
3. Refactor
4. THEN claim complete
```

## When Mocks Become Too Complex

**Warning signs:**
- Mock setup is longer than the test logic
- Mocking everything just to make the test pass
- Mocks missing methods the real components have
- The test breaks when the mock changes

**Your human partner's question:** "Do we need to be using a mock here?"

**Consider:** Integration tests with real components are often simpler than complex mocks.
## TDD Prevents These Anti-Patterns

**Why TDD helps:**
1. **Write the test first** → Forces you to think about what you're actually testing
2. **Watch it fail** → Confirms the test exercises real behavior, not mocks
3. **Minimal implementation** → No test-only methods creep in
4. **Real dependencies** → You see what the test actually needs before mocking

**If you're testing mock behavior, you violated TDD** - you added mocks without watching the test fail against real code first.

## Quick Reference

| Anti-Pattern | Fix |
|--------------|-----|
| Asserting on mock elements | Test the real component or unmock it |
| Test-only methods in production | Move them to test utilities |
| Mocking without understanding | Understand the dependencies first, mock minimally |
| Incomplete mocks | Mirror the real API completely |
| Tests as afterthought | TDD - tests first |
| Over-complex mocks | Consider integration tests |

## Red Flags

- An assertion checks for `*-mock` test IDs
- Methods only called from test files
- Mock setup is >50% of the test
- The test fails when you remove the mock
- You can't explain why the mock is needed
- Mocking "just to be safe"

## The Bottom Line

**Mocks are tools for isolation, not things to test.**

If TDD reveals you're testing mock behavior, you've gone wrong.

Fix: Test the real behavior, or question why you're mocking at all.