5-phase-workflow 1.4.2 → 1.4.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/install.js +81 -188
- package/package.json +1 -1
- package/src/commands/5/configure.md +122 -326
- package/src/commands/5/discuss-feature.md +7 -172
- package/src/commands/5/implement-feature.md +33 -151
- package/src/commands/5/plan-feature.md +2 -8
- package/src/commands/5/plan-implementation.md +0 -10
- package/src/commands/5/quick-implement.md +34 -141
- package/src/commands/5/review-code.md +2 -12
- package/src/commands/5/update.md +17 -0
- package/src/commands/5/verify-implementation.md +32 -199
- package/src/hooks/check-updates.js +11 -13
- package/src/hooks/config-guard.js +30 -0
- package/src/hooks/plan-guard.js +52 -28
- package/src/hooks/statusline.js +29 -3
- package/src/settings.json +11 -1
- package/src/skills/build-project/SKILL.md +4 -130
- package/src/skills/configure-project/SKILL.md +26 -215
- package/src/skills/run-tests/SKILL.md +5 -208
- package/src/templates/workflow/QUICK-PLAN.md +0 -17
--- a/package/src/skills/run-tests/SKILL.md
+++ b/package/src/skills/run-tests/SKILL.md
@@ -65,48 +65,7 @@ If commands are specified, use them with variable substitution. Otherwise, auto-
 
 ### 2. Detect Test Runner
 
-If no config,
-
-```bash
-# Check package.json for test configuration
-if [ -f "package.json" ]; then
-  # Check for test frameworks
-  if grep -q '"jest"' package.json || grep -q '"@jest"' package.json; then
-    TEST_RUNNER="jest"
-  elif grep -q '"vitest"' package.json; then
-    TEST_RUNNER="vitest"
-  elif grep -q '"mocha"' package.json; then
-    TEST_RUNNER="mocha"
-  else
-    TEST_RUNNER="npm" # Use npm test script
-  fi
-fi
-
-# Check for pytest
-if [ -f "pytest.ini" ] || [ -f "setup.py" ] || ls tests/*.py >/dev/null 2>&1; then
-  TEST_RUNNER="pytest"
-fi
-
-# Check for Cargo
-if [ -f "Cargo.toml" ]; then
-  TEST_RUNNER="cargo"
-fi
-
-# Check for Go
-if [ -f "go.mod" ]; then
-  TEST_RUNNER="go"
-fi
-
-# Check for Gradle
-if [ -f "build.gradle" ] || [ -f "build.gradle.kts" ]; then
-  TEST_RUNNER="gradle"
-fi
-
-# Check for Maven
-if [ -f "pom.xml" ]; then
-  TEST_RUNNER="mvn"
-fi
-```
+If no config, detect by checking project files: `package.json` (jest/vitest/mocha), `pytest.ini`/test files (pytest), `Cargo.toml` (cargo), `go.mod` (go), `build.gradle` (gradle), `pom.xml` (mvn).
 
 ### 3. Determine Test Command
 
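The condensed detection rule that 1.4.4 adds above (replacing the removed bash block) can still be exercised as a script. This is an illustrative sketch only, not code from the package: the `detect_test_runner` helper name is hypothetical, and unlike the removed 1.4.2 script (where later checks such as pytest overrode an earlier npm fallback), this version stops at the first match.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the condensed detection rule (first-match order).
# Echoes the detected test runner for the given directory (default: cwd).
detect_test_runner() {
  dir="${1:-.}"
  if [ -f "$dir/package.json" ]; then
    if grep -q '"jest"' "$dir/package.json" || grep -q '"@jest"' "$dir/package.json"; then
      echo "jest"
    elif grep -q '"vitest"' "$dir/package.json"; then
      echo "vitest"
    elif grep -q '"mocha"' "$dir/package.json"; then
      echo "mocha"
    else
      echo "npm"   # no known framework dependency: fall back to the npm test script
    fi
  elif [ -f "$dir/pytest.ini" ]; then
    echo "pytest"
  elif [ -f "$dir/Cargo.toml" ]; then
    echo "cargo"
  elif [ -f "$dir/go.mod" ]; then
    echo "go"
  elif [ -f "$dir/build.gradle" ] || [ -f "$dir/build.gradle.kts" ]; then
    echo "gradle"
  elif [ -f "$dir/pom.xml" ]; then
    echo "mvn"
  else
    echo "unknown"
  fi
}
```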
@@ -139,73 +98,9 @@ Execute the command and capture output.
 
 ### 5. Parse Test Output
 
-
-
-#### Jest/Vitest Output
-```
-Tests: 2 failed, 5 passed, 7 total
-```
-
-#### Pytest Output
-```
-====== 5 passed, 2 failed in 1.23s ======
-```
-
-#### Cargo Output
-```
-test result: FAILED. 5 passed; 2 failed; 0 ignored
-```
-
-#### Go Output
-```
-FAIL package/name 0.123s
-PASS package/other 0.456s
-```
-
-#### Gradle/Maven Output
-```
-Tests run: 7, Failures: 2, Errors: 0, Skipped: 0
-```
-
-Extract:
-- Total tests
-- Passed
-- Failed
-- Skipped/Ignored
-- Duration
-- Failed test names and error messages with file/line info
-
-### 6. Parse Failure Details
+Parse runner-specific output to extract: total tests, passed, failed, skipped, duration, and failed test names with error messages and file/line info.
 
-
-
-**Jest/Vitest:**
-```
-● TestSuite › test name
-
-  expect(received).toBe(expected)
-
-  at Object.<anonymous> (path/to/file.test.ts:42:5)
-```
-
-**Pytest:**
-```
-FAILED path/to/test_file.py::test_name - AssertionError: assert False
-```
-
-**Cargo:**
-```
----- test_name stdout ----
-thread 'test_name' panicked at 'assertion failed', src/lib.rs:42:5
-```
-
-**Go:**
-```
---- FAIL: TestName (0.00s)
-    file_test.go:42: expected 5, got 3
-```
-
-### 7. Format Output
+### 6. Format Output
 
 Provide structured response:
 
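The consolidated parsing step above ("parse runner-specific output to extract ... counts") can be illustrated with a small sketch. The variable names and sed patterns are assumptions for illustration, and only the jest-style summary line is handled; each other runner (pytest, cargo, go, gradle/maven) would need its own pattern.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: extract counts from a jest-style summary line.
summary='Tests: 2 failed, 5 passed, 7 total'
failed=$(printf '%s\n' "$summary" | sed -n 's/.*[^0-9]\([0-9][0-9]*\) failed.*/\1/p')
passed=$(printf '%s\n' "$summary" | sed -n 's/.*[^0-9]\([0-9][0-9]*\) passed.*/\1/p')
total=$(printf '%s\n' "$summary" | sed -n 's/.*[^0-9]\([0-9][0-9]*\) total.*/\1/p')
echo "passed=$passed failed=$failed total=$total"   # → passed=5 failed=2 total=7
```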
@@ -238,77 +133,6 @@ SUGGESTIONS:
 - Run specific failed tests individually to debug
 ```
 
-## Common Test Scenarios
-
-### Run All Tests Before Commit
-
-```
-Target: all
-Use: Verify all tests pass before pushing changes
-```
-
-### Test Specific Module After Changes
-
-```
-Target: module
-Module: user-service
-Use: Quick verification after modifying specific module
-```
-
-### Debug Single Failing Test
-
-```
-Target: test
-Test: UserService › should create user
-Use: Isolate and debug specific test failure
-```
-
-### Test File After Refactoring
-
-```
-Target: file
-File: src/services/user.test.ts
-Use: Verify tests in refactored file
-```
-
-## Common Test Issues
-
-### Tests Fail with "Module Not Found"
-
-**Indicator**: Import/require errors
-
-**Suggestions**:
-- Run `npm install` or equivalent
-- Check test file paths
-- Verify module resolution config
-
-### Tests Timeout
-
-**Indicator**: `Exceeded timeout` messages
-
-**Suggestions**:
-- Increase test timeout in config
-- Check for infinite loops or blocking operations
-- Review async code completion
-
-### Flaky Tests
-
-**Indicator**: Tests pass sometimes, fail other times
-
-**Suggestions**:
-- Check for time-dependent code (use mocked time)
-- Review concurrent code and race conditions
-- Ensure tests don't depend on execution order
-
-### Environment Issues
-
-**Indicator**: Tests fail in CI but pass locally
-
-**Suggestions**:
-- Check environment variables
-- Verify test database/services availability
-- Review CI-specific configurations
-
 ## Error Handling
 
 - If test runner cannot be detected, return error with detection attempted
@@ -325,38 +149,11 @@ Use: Verify tests in refactored file
 - DO NOT assume a specific test framework - always detect or use config
 - DO NOT truncate test output too aggressively (users need full error messages)
 
-##
-
-### Example 1: Auto-detect and run all tests
+## Example
 
 ```
 User: /run-tests
-
-Skill: [Detects package.json with jest]
-Skill: [Runs: jest]
-Skill: [Reports: 47 tests, 47 passed, 0 failed]
-```
-
-### Example 2: Run module tests
-
-```
-User: /run-tests target=module module=user-service
-
-Skill: [Detects pytest]
-Skill: [Runs: pytest tests/user-service]
-Skill: [Reports: 12 tests, 10 passed, 2 failed]
-Skill: [Lists failed test details]
-```
-
-### Example 3: Run specific test
-
-```
-User: /run-tests target=test test="should validate email format"
-
-Skill: [Detects jest]
-Skill: [Runs: jest -t "should validate email format"]
-Skill: [Reports: 1 test, 0 passed, 1 failed]
-Skill: [Shows assertion error with file:line]
+Skill: [Detects jest] → [Runs: jest] → [Reports: 47 passed, 0 failed]
 ```
 
 ## Related Documentation
--- a/package/src/templates/workflow/QUICK-PLAN.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Quick Implementation: {TICKET-ID}
-
-## Task
-{DESCRIPTION}
-
-## Components
-
-| # | Type | Name | Skill | Module |
-|---|------|------|-------|--------|
-| 1 | {type} | {name} | {skill} | {module} |
-
-## Affected Modules
-- {module-1}
-- {module-2}
-
-## Execution
-{parallel | sequential | direct}