claude-fsd 1.5.28 → 1.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/claudefsd-dev +155 -28
- package/package.json +1 -1
package/bin/claudefsd-dev
CHANGED
@@ -183,13 +183,22 @@ You are an elite AI developer working in an automated development environment. Y
     DEVELOPMENT_PROMPT="$DEVELOPMENT_PROMPT
 - $HOME/.claude/CLAUDE.md (global development principles)"
 fi
-
+
+# Check for human feedback file
+feedback_file=$(find_project_file "FEEDBACK.md" 2>/dev/null || echo "")
+if [ -n "$feedback_file" ]; then
+    DEVELOPMENT_PROMPT="$DEVELOPMENT_PROMPT
+- $feedback_file (URGENT: Human feedback requiring immediate attention)"
+fi
+
 DEVELOPMENT_PROMPT="$DEVELOPMENT_PROMPT
 
 **IMPORTANT:** Before starting ANY work, you MUST read and understand:
 1. The project's CLAUDE.md file (if it exists) - this contains project-specific instructions
-2. The user's global CLAUDE.md file at
-3.
+2. The user's global CLAUDE.md file at \$HOME/.claude/CLAUDE.md (if it exists) - this contains general development principles
+3. If FEEDBACK.md exists, READ IT FIRST - it contains urgent human feedback that takes priority
+4. Read the '## Test Infrastructure' section in $plan_file for test commands
+5. Ensure all your work follows the architectural and development guidelines from both files
 
 **CRITICAL ANTI-PATTERNS TO AVOID (from CLAUDE.md):**
 - NO CHEATING: Never disable tests, exclude files from compilation, or use silent fallbacks
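The FEEDBACK.md wiring added in this hunk is a common shell pattern: probe for a file, capture its path (or an empty string), and conditionally append a bullet to a prompt variable. A minimal self-contained sketch — note that `find_project_file` is claude-fsd's own helper, not shown in this diff, so a plain `find` in the current directory stands in for it here:

```shell
#!/bin/sh
# Sketch of the conditional prompt-append pattern from the hunk above.
# ASSUMPTION: a bare `find` substitutes for claude-fsd's find_project_file.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
echo "Please fix the login redirect" > FEEDBACK.md

DEVELOPMENT_PROMPT="You are an elite AI developer."
feedback_file=$(find . -maxdepth 1 -name "FEEDBACK.md" 2>/dev/null || echo "")
if [ -n "$feedback_file" ]; then
    # Append a newline-separated bullet so the prompt stays readable
    DEVELOPMENT_PROMPT="$DEVELOPMENT_PROMPT
- $feedback_file (URGENT: Human feedback requiring immediate attention)"
fi
printf '%s\n' "$DEVELOPMENT_PROMPT"
```

The `|| echo ""` fallback keeps a failed lookup from aborting the script; it simply yields an empty string, which the `-n` test then filters out.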
@@ -206,23 +215,30 @@ You are an elite AI developer working in an automated development environment. Y
 3. Complete tasks in the order they appear - don't skip ahead
 4. Identify if tasks can be done in parallel
 
-**PHASE 2: EXECUTION STRATEGY**
-
+**PHASE 2: EXECUTION STRATEGY (TDD REQUIRED)**
+You MUST follow Test-Driven Development:
+
+1. **RED**: Write a failing test first that defines the expected behavior
+2. **GREEN**: Write the minimum code to make the test pass
+3. **REFACTOR**: Clean up the code while keeping tests passing
+
+For each task:
+- Write the test BEFORE writing implementation code
+- Run tests using commands from '## Test Infrastructure' in $plan_file
+- Only mark task complete after tests pass
 
 **Option A: Single Focus Task** (for sequential dependencies or complex architectural work)
--
-- Update $plan_file to mark task as complete with [x]
+- Follow TDD cycle for the next task
+- Update $plan_file to mark task as complete with [x] ONLY after tests pass
 
 **Option B: Parallel Task Execution** (for independent tasks)
--
--
-- Each agent brief should include full project context and specific implementation goals
-- Coordinate the parallel work to ensure consistency
+- Each parallel agent must also follow TDD
+- Coordinate the parallel work to ensure test consistency
 
 **PHASE 3: COMPLETION CHECK**
 After completing work:
-1.
-2.
+1. Run ALL tests to verify nothing is broken
+2. Update $plan_file to reflect completed tasks (only if tests pass)
 3. Report on what was accomplished and what remains
 
 **EXECUTION GUIDELINES:**
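The RED → GREEN cycle this hunk mandates can be demonstrated end-to-end even in plain shell. A toy sketch (the file names `test.sh` and `greet.sh` are invented for illustration):

```shell
#!/bin/sh
# Toy RED -> GREEN demonstration of the TDD cycle described above.
workdir=$(mktemp -d)
cd "$workdir" || exit 1

# The test is written BEFORE any implementation exists.
cat > test.sh <<'EOF'
out=$(sh greet.sh 2>/dev/null)
[ "$out" = "hello" ]
EOF

# RED: greet.sh does not exist yet, so the test must fail.
if sh test.sh; then red=pass; else red=fail; fi

# GREEN: write the minimum implementation that satisfies the test.
echo 'echo hello' > greet.sh
if sh test.sh; then green=pass; else green=fail; fi

echo "RED phase: $red, GREEN phase: $green"
# prints: RED phase: fail, GREEN phase: pass
```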
@@ -265,8 +281,9 @@ IMPORTANT: You must ACTUALLY IMPLEMENT tasks, not just describe what should be d
 echo -e "\033[32m== REVIEWING/VERIFYING WORK\033[0m"
 echo -e "\033[32m==================================================================\033[0m"
 
-# Define the verifier prompt
-VERIFIER_PROMPT="You are an expert code reviewer
+# Define the verifier prompt - CODE REVIEW ONLY, no test running
+VERIFIER_PROMPT="You are an expert code reviewer. Your job is CODE REVIEW and GIT COMMITS only.
+DO NOT run tests - a separate Tester agent will handle that.
 
 **DEVELOPER'S OUTPUT:**
 $DEVELOPER_OUTPUT
@@ -274,14 +291,15 @@ $DEVELOPER_OUTPUT
 **YOUR TASKS:**
 1. Review what the developer claims to have done
 2. Verify the work was actually completed by checking files
-3. Look for
-4.
-5.
+3. Look for cheating patterns (disabled tests, silent fallbacks, mock data, etc.)
+4. Check code quality: proper error handling, no obvious bugs
+5. Create a git commit (see guidelines below)
 
-**
+**CODE REVIEW CHECKLIST:**
 - Did the developer actually implement code (not just analyze)?
--
--
+- Did the developer follow TDD (wrote tests before implementation)?
+- Are there any cheating patterns or anti-patterns?
+- Is the code well-structured and maintainable?
 - Is the task properly marked as complete in $plan_file?
 
 **GIT COMMIT GUIDELINES:**
@@ -291,12 +309,12 @@ $DEVELOPER_OUTPUT
 - Only skip commit if changes are truly destructive/terrible
 - Use descriptive commit messages that explain what was attempted
 
-**IMPORTANT:**
--
-- If
--
+**IMPORTANT:**
+- DO NOT run tests - the Tester agent will do that next
+- If you find code quality issues, describe them clearly in your review AND in the commit message
+- Focus on code review, not test verification
 
-Be thorough but concise in your
+Be thorough but concise in your code review."
 
 VERIFIER_LOGFILE="${LOGFILE}-verifier"
 echo "=== VERIFIER PROMPT ===" > $VERIFIER_LOGFILE
@@ -308,8 +326,91 @@ Be thorough but concise in your verification."
 # Run verifier
 time claude --model $CLAUDE_MODEL --dangerously-skip-permissions -p "$VERIFIER_PROMPT" 2>&1 | tee -a $VERIFIER_LOGFILE
 
-#
-
+# Extract verifier output for the tester
+VERIFIER_OUTPUT=$(sed -n '/=== OUTPUT ===/,$p' $VERIFIER_LOGFILE)
+
+echo -e "\033[32m==================================================================\033[0m"
+echo -e "\033[32m== TESTING PHASE\033[0m"
+echo -e "\033[32m==================================================================\033[0m"
+
+# Determine if this is an acceptance test iteration (every 4th, with megathinking)
+if [ $((LOOP_COUNTER % 4)) -eq 0 ]; then
+    ACCEPTANCE_TEST_MODE="
+**ACCEPTANCE TESTS (4th iteration - run these now):**
+Read the '## Acceptance Criteria' section in $plan_file.
+- Run each acceptance test (may include browser tests, integration tests)
+- Mark passing criteria with [x] in the plan file
+- If a previously-passing criterion now FAILS, add a bug task:
+\`- [ ] [BUG] <description of what regressed>\`
+- Report which acceptance criteria pass/fail
+"
+else
+    ACCEPTANCE_TEST_MODE="
+**ACCEPTANCE TESTS:** Skip this iteration (only run every 4th iteration).
+"
+fi
+
+# Define the tester prompt
+TESTER_PROMPT="You are an expert QA engineer. Your job is to RUN TESTS and verify the code works.
+
+**DEVELOPER'S OUTPUT:**
+$DEVELOPER_OUTPUT
+
+**VERIFIER'S OUTPUT:**
+$VERIFIER_OUTPUT
+
+**PROJECT FILES:**
+- $plan_file (check '## Test Infrastructure' for test commands)
+- $plan_file (check '## Acceptance Criteria' for acceptance tests)
+
+**YOUR TASKS:**
+
+**1. UNIT TESTS (run every iteration):**
+Read '## Test Infrastructure' section in $plan_file for test commands.
+- Run the test suite (pytest, npm test, cargo test, etc.)
+- Run linters if configured
+- Report: which tests passed/failed
+- If tests FAIL, development cannot continue - report the failure clearly
+
+**2. TEST INFRASTRUCTURE CHECK:**
+If no '## Test Infrastructure' section exists OR no tests can be found:
+- Add task: \`- [ ] [INFRA] Set up test infrastructure\`
+- Report that TDD cannot proceed without test infrastructure
+$ACCEPTANCE_TEST_MODE
+**OUTPUT FORMAT:**
+- **<unit_tests>**: PASS or FAIL
+- **<unit_details>**: What tests ran, any failures
+- **<acceptance_tests>**: PASS, FAIL, SKIPPED, or NO_CRITERIA
+- **<acceptance_details>**: Which criteria checked (if applicable)
+- **<infrastructure>**: OK or MISSING
+- **<bugs_added>**: List any [BUG] tasks added for regressions
+- **<all_tests_pass>**: YES or NO
+
+**CRITICAL:** If unit tests fail, output <TESTS_FAILED> so the loop knows to stop.
+If ALL tasks in the plan are complete AND all tests pass, output: <VERIFIED_ALL_DONE>
+"
+
+TESTER_LOGFILE="${LOGFILE}-tester"
+echo "=== TESTER PROMPT ===" > $TESTER_LOGFILE
+echo "$TESTER_PROMPT" >> $TESTER_LOGFILE
+echo "=== END PROMPT ===" >> $TESTER_LOGFILE
+echo "" >> $TESTER_LOGFILE
+echo "=== OUTPUT ===" >> $TESTER_LOGFILE
+
+# Run tester
+echo "Running tester with $CLAUDE_MODEL model..."
+time claude --model $CLAUDE_MODEL --dangerously-skip-permissions -p "$TESTER_PROMPT" 2>&1 | tee -a $TESTER_LOGFILE
+
+# Check if tests failed
+if sed -n '/=== OUTPUT ===/,$p' $TESTER_LOGFILE | grep -q "<TESTS_FAILED>"; then
+    echo -e "\033[31m==================================================================\033[0m"
+    echo -e "\033[31m== TESTS FAILED - Review tester output above\033[0m"
+    echo -e "\033[31m==================================================================\033[0m"
+    # Continue to next iteration - developer needs to fix the tests
+fi
+
+# Check if all done (tester confirmed)
+if sed -n '/=== OUTPUT ===/,$p' $TESTER_LOGFILE | grep -q "^<VERIFIED_ALL_DONE>$"; then
 echo -e "\033[32m==================================================================\033[0m"
 echo -e "\033[32m== PROJECT COMPLETE - ALL TASKS VERIFIED!\033[0m"
 echo -e "\033[32m==================================================================\033[0m"
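The control flow in this hunk hinges on two small mechanics: the every-4th-iteration modulo gate, and scanning only the portion of the log after the `=== OUTPUT ===` marker so that sentinel strings quoted inside the prompt section cannot trigger the check. Both can be sketched in isolation:

```shell
#!/bin/sh
# 1) Modulo gate: acceptance tests only when the counter is a multiple of 4.
LOOP_COUNTER=8
if [ $((LOOP_COUNTER % 4)) -eq 0 ]; then gate=acceptance; else gate=skip; fi

# 2) Sentinel scan restricted to the output section of the log.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
=== TESTER PROMPT ===
If all tasks are complete, output: <VERIFIED_ALL_DONE>
=== OUTPUT ===
All tests passed.
<VERIFIED_ALL_DONE>
EOF

# sed prints from the first marker line to end-of-file; grep then anchors the
# sentinel to a whole line, so the mention inside the prompt section (which is
# not a bare line, and is excluded by sed anyway) cannot cause a false match.
if sed -n '/=== OUTPUT ===/,$p' "$logfile" | grep -q "^<VERIFIED_ALL_DONE>$"; then
    result=done
else
    result=continue
fi
echo "gate=$gate result=$result"
# prints: gate=acceptance result=done
```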
@@ -386,5 +487,31 @@ Be thorough but concise in your verification."
 echo -e "\033[32mNormal iteration timing - continuing...\033[0m"
 fi
 
+# Check for PAUSE file - allows human intervention
+if [ -f "PAUSE" ]; then
+    echo -e "\033[33m==================================================================\033[0m"
+    echo -e "\033[33m== PAUSED - Human intervention requested\033[0m"
+    echo -e "\033[33m==================================================================\033[0m"
+    echo -e "\033[33mDevelopment is paused. To continue:\033[0m"
+    echo -e "\033[33m - Remove the PAUSE file: rm PAUSE\033[0m"
+    echo -e "\033[33m - Add feedback in $WORKING_DIR/FEEDBACK.md (optional)\033[0m"
+    echo -e "\033[33mWaiting...\033[0m"
+    while [ -f "PAUSE" ]; do
+        sleep 5
+    done
+    echo -e "\033[32mResuming development...\033[0m"
+
+    # Check if feedback was added
+    if [ -f "$WORKING_DIR/FEEDBACK.md" ]; then
+        echo -e "\033[36mFEEDBACK.md detected - will be processed in next iteration\033[0m"
+    fi
+fi
+
+# Archive FEEDBACK.md after it's been processed (move to logs)
+if [ -n "$feedback_file" ] && [ -f "$feedback_file" ]; then
+    mv "$feedback_file" "logs/FEEDBACK-processed-$(date +%Y%m%d_%H%M%S).md"
+    echo -e "\033[36mFEEDBACK.md archived after processing\033[0m"
+fi
+
 sleep 1
 done