tdd-claude-code 0.4.0 → 0.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/bin/install.js CHANGED
@@ -26,7 +26,7 @@ ${c.cyan} ████████╗██████╗ ██████
   ╚═╝ ╚═════╝ ╚═════╝${c.reset}
   `;

- const VERSION = '0.4.0';
+ const VERSION = '0.4.1';

  const COMMANDS = [
  'new-project.md',
package/build.md CHANGED
@@ -32,61 +32,60 @@ Check what's already set up:
  - `spec/` directory → RSpec
  - None found → Set up based on PROJECT.md stack (see framework defaults below)

- ### Step 3: Write Tests (Spawn Test Writer Agents)
+ ### Step 3: Plan Tests for Each Task

- For each plan, spawn an agent with this prompt:
+ Before writing any tests, create a test plan for each task in the phase.

- <agent_prompt>
- You are a TDD Test Writer. Write failing tests that define expected behavior BEFORE implementation.
+ For each task in the plan:

- ## Context
+ 1. **Read the task** — understand what behavior is being specified
+ 2. **Identify test cases:**
+    - Happy path (expected inputs → expected outputs)
+    - Edge cases mentioned in `<action>`
+    - Error conditions from `<verify>`
+ 3. **Create test plan entry**

- <project>
- {PROJECT.md contents}
- </project>
+ Create `.planning/phases/{phase}-TEST-PLAN.md`:

- <plan>
- {Current PLAN.md contents}
- </plan>
+ ```markdown
+ # Phase {N} Test Plan

- <test_framework>
- {Detected or default framework}
- </test_framework>
+ ## Task: {task-id} - {task-title}

- <existing_tests>
- {List any existing test files for patterns, or "None - this is a new project"}
- </existing_tests>
+ ### File: tests/{feature}.test.ts

- ## Your Task
+ | Test | Type | Expected Result |
+ |------|------|-----------------|
+ | user can log in with valid credentials | happy path | returns user object |
+ | login rejects invalid password | error | throws AuthError |
+ | login rejects empty email | edge case | throws ValidationError |

- For each `<task>` in the plan:
+ ### Dependencies to mock:
+ - database connection
+ - email service

- 1. **Understand the task** — What behavior is being specified?
+ ---

- 2. **Write test file(s)** that cover:
-    - Happy path (expected inputs → expected outputs)
-    - Edge cases mentioned in `<action>`
-    - Error conditions from `<verify>`
-
- 3. **Make tests runnable NOW** — even though implementation doesn't exist:
-    - Import from where the code WILL be (path in `<files>`)
-    - Tests should FAIL, not ERROR from missing imports
-    - Use mocks/stubs where needed for dependencies
-
- 4. **Use clear test names** that describe expected behavior:
-    ```
-    ✓ "user can log in with valid credentials"
-    ✓ "login rejects invalid password with 401"
-    ✓ "login returns httpOnly cookie on success"
-    ✗ "test login" (too vague)
-    ```
-
- ## Test File Patterns
+ ## Task: {task-id-2} - {task-title-2}
+ ...
+ ```
+
+ ### Step 4: Write Tests One Task at a Time
+
+ **For each task in the test plan, sequentially:**
+
+ #### 4a. Write test file for this task
+
+ Follow the project's test patterns. Test names should describe expected behavior:
+ ```
+ ✓ "user can log in with valid credentials"
+ ✓ "login rejects invalid password with 401"
+ ✗ "test login" (too vague)
+ ```

  **Vitest/Jest (TypeScript):**
  ```typescript
  import { describe, it, expect } from 'vitest'
- // Import from where code WILL exist
  import { login } from '../src/auth/login'

  describe('login', () => {
@@ -106,7 +105,6 @@ describe('login', () => {
  **pytest (Python):**
  ```python
  import pytest
- # Import from where code WILL exist
  from src.auth import login

  def test_login_returns_user_for_valid_credentials():
@@ -118,28 +116,40 @@ def test_login_raises_for_invalid_password():
  login("user@test.com", "wrong")
  ```

- ## Output
+ #### 4b. Run this test file
+
+ ```bash
+ npm test -- tests/auth/login.test.ts   # vitest
+ pytest tests/test_login.py             # pytest
+ ```
+
+ Verify:
+ - ✅ Tests execute (no syntax errors)
+ - ✅ Tests FAIL (not pass, not skip)
+ - ❌ If import errors, add mocks/stubs and retry
+
+ #### 4c. Commit this test file

- Create test files following the project's structure. Common locations:
- - `tests/{feature}.test.ts`
- - `src/{feature}/__tests__/{feature}.test.ts`
- - `tests/test_{feature}.py`
- - `spec/{feature}_spec.rb`
+ ```bash
+ git add tests/auth/login.test.ts
+ git commit -m "test: add login tests (red) - phase {N}"
+ ```

- After creating each test file, run it and confirm it FAILS (not errors).
+ #### 4d. Move to next task

- ## Critical Rules
+ Repeat 4a-4c for each task in the test plan.

+ **Critical Rules:**
  - Tests must be **syntactically valid** and **runnable**
  - Tests must **FAIL** because code doesn't exist yet
  - Tests must NOT **ERROR** from import issues — mock if needed
  - Do NOT write any implementation code
  - Do NOT skip or stub out the actual assertions
- </agent_prompt>
+ - **One task at a time, verify, commit, then next**

- ### Step 4: Verify All Tests Fail (Red)
+ ### Step 5: Verify All Tests Fail (Red)

- Run the test suite:
+ Run the full test suite:
  ```bash
  npm test   # or vitest run, pytest, etc.
  ```
@@ -150,7 +160,7 @@ Check output:
  - ❌ If tests error on imports, add mocks and retry
  - ❌ If tests pass, something's wrong — investigate

- ### Step 5: Create Test Summary
+ ### Step 6: Create Test Summary

  Create `.planning/phases/{phase}-TESTS.md`:

@@ -180,13 +190,13 @@ Status: ✅ All tests failing (Red)
  | session persists across requests | 01-task-2 |
  ```

- ### Step 6: Execute Implementation (Green)
+ ### Step 7: Execute Implementation (Green)

  Call `/gsd:execute-phase {phase_number}`

  GSD's executor implements the code. Tests provide concrete pass/fail targets.

- ### Step 7: Verify All Tests Pass (Green)
+ ### Step 8: Verify All Tests Pass (Green)

  After execution completes, run tests again:
  ```bash
@@ -197,7 +207,7 @@ Check output:
  - ✅ All tests PASS → Continue to verify
  - ❌ Some tests fail → Report which tasks need fixes

- ### Step 8: Update Test Summary
+ ### Step 9: Update Test Summary

  Update `.planning/phases/{phase}-TESTS.md`:

package/coverage.md CHANGED
@@ -125,12 +125,79 @@ Start writing tests now?

  **If "Yes":**

- For each file, write tests that capture current behavior:
- 1. Read the source file
- 2. Identify exported functions/classes
- 3. Write tests for each public interface
- 4. Run tests to verify they pass (code already exists)
- 5. Mark item complete in backlog
+ Write tests using GSD-style execution — one file at a time with verification and commits.
+
+ #### For each file in the backlog (sequentially):
+
+ **a) Plan tests for this file**
+
+ Read the source file and create a test plan:
+ ```markdown
+ ## File: src/services/payment.ts
+
+ ### Exports:
+ - createCharge(amount, customerId)
+ - refundCharge(chargeId)
+ - getPaymentHistory(customerId)
+
+ ### Test cases:
+ | Function | Test | Type |
+ |----------|------|------|
+ | createCharge | creates charge for valid customer | happy path |
+ | createCharge | rejects negative amount | edge case |
+ | createCharge | handles Stripe API error | error |
+ | refundCharge | refunds existing charge | happy path |
+ | refundCharge | fails for invalid chargeId | error |
+ ```
+
+ **b) Write test file**
+
+ Create tests that capture current behavior:
+ ```typescript
+ import { describe, it, expect } from 'vitest'
+ import { createCharge, refundCharge } from '../src/services/payment'
+
+ describe('createCharge', () => {
+   it('creates charge for valid customer', async () => {
+     const result = await createCharge(1000, 'cust_123')
+     expect(result.id).toBeDefined()
+     expect(result.amount).toBe(1000)
+   })
+
+   it('rejects negative amount', async () => {
+     await expect(createCharge(-100, 'cust_123'))
+       .rejects.toThrow()
+   })
+ })
+ ```
+
+ **c) Run tests for this file**
+
+ ```bash
+ npm test -- tests/services/payment.test.ts
+ ```
+
+ Verify:
+ - ✅ Tests PASS (code already exists)
+ - ❌ If tests fail, investigate — either the test is wrong or you found a bug
+
+ **d) Commit this test file**
+
+ ```bash
+ git add tests/services/payment.test.ts
+ git commit -m "test: add payment service tests"
+ ```
+
+ **e) Update backlog**
+
+ Mark the item complete in `.planning/TEST-BACKLOG.md`:
+ ```markdown
+ - [x] src/services/payment.ts - payment processing logic ✅
+ ```
+
+ **f) Move to next file**
+
+ Repeat a-e for each file in the backlog.

  ### 7. Report Summary

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "tdd-claude-code",
-   "version": "0.4.0",
+   "version": "0.4.1",
    "description": "TDD workflow for Claude Code - wraps GSD",
    "bin": {
      "tdd-claude-code": "./bin/install.js"