@atlashub/smartstack-cli 3.0.0 → 3.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -6,6 +6,11 @@ next_step: steps/step-04-check.md
 
 # Step 3: Commit Changes
 
+ > **CONTEXT OPTIMIZATION:** This file is read only ONCE (on the first iteration).
+ > After the first full iteration, ALL subsequent iterations use the COMPACT LOOP
+ > in step-04-check.md section 5, which includes inline commit logic.
+ > **DO NOT re-read this file for iterations > 1.**
+
 ## YOUR TASK:
 
 Commit the changes from the executed task, finalize task tracking in prd.json, and append to history.
@@ -14,6 +19,83 @@ Commit the changes from the executed task, finalize task tracking in prd.json, a
 
 ## EXECUTION SEQUENCE:
 
+ ### 0. PRE-COMMIT VALIDATION (BLOCKING)
+
+ **CRITICAL: Before ANY commit, verify all tests pass.**
+
+ ```bash
+ # Check if test project exists
+ PROJECT_NAME=$(basename $(pwd))
+ TEST_PROJECT="tests/${PROJECT_NAME}.Tests.Unit"
+
+ if [ -d "$TEST_PROJECT" ]; then
+   echo "Running full test suite before commit..."
+
+   # Run ALL tests (not just current task's tests)
+   dotnet test "$TEST_PROJECT" --no-build --verbosity minimal
+
+   TEST_EXIT_CODE=$?
+
+   if [ $TEST_EXIT_CODE -ne 0 ]; then
+     echo "╔════════════════════════════════════════════════════════════╗"
+     echo "║ ❌ COMMIT BLOCKED: TESTS FAILED ║"
+     echo "╠════════════════════════════════════════════════════════════╣"
+     echo "║ Cannot commit code when tests are failing. ║"
+     echo "║ This prevents broken code from entering the repository. ║"
+     echo "╠════════════════════════════════════════════════════════════╣"
+     echo "║ ACTION REQUIRED: ║"
+     echo "║ 1. Review test failures above ║"
+     echo "║ 2. Fix the failing code ║"
+     echo "║ 3. Re-run: dotnet test ║"
+     echo "║ 4. Once tests pass, Ralph will commit automatically ║"
+     echo "╚════════════════════════════════════════════════════════════╝"
+
+     # Mark task as failed (do not complete)
+     # Update prd.json
+     const prd = readJSON('.ralph/prd.json');
+     const task = prd.tasks.find(t => t.id === {current_task_id});
+     task.status = 'failed';
+     task.error = 'Pre-commit test suite failed. Cannot commit broken code.';
+     writeJSON('.ralph/prd.json', prd);
+
+     # STOP - return to step-02 to fix
+     exit 1
+   fi
+
+   echo "✅ All tests passed. Proceeding with commit..."
+ fi
+ ```
+
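The `const ... readJSON(...)` lines inside the bash block above are agent pseudocode rather than executable shell. If this gate were ever run as a real script, the same prd.json update could be done with jq; a minimal sketch, assuming jq is installed and that a `CURRENT_TASK_ID` variable (not defined in the package) holds the numeric id of the task being committed:

```bash
# Sketch: mark the current task as failed in .ralph/prd.json (assumes jq + CURRENT_TASK_ID).
jq --argjson id "$CURRENT_TASK_ID" '
  (.tasks[] | select(.id == $id)) |= (
      .status = "failed"
    | .error  = "Pre-commit test suite failed. Cannot commit broken code."
  )
' .ralph/prd.json > .ralph/prd.json.tmp && mv .ralph/prd.json.tmp .ralph/prd.json
```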
+ **Additional validation for specific categories:**
+
+ ```javascript
+ const task = prd.tasks.find(t => t.id === {current_task_id});
+
+ // For backend changes: ensure build succeeds
+ if (['domain', 'application', 'infrastructure', 'api'].includes(task.category)) {
+   dotnet build --no-restore --verbosity quiet
+   if ($? !== 0) {
+     echo "❌ Build failed. Cannot commit broken code.";
+     task.status = 'failed';
+     task.error = 'Build failed';
+     exit 1;
+   }
+ }
+
+ // For frontend changes: ensure typecheck passes
+ if (task.category === 'frontend') {
+   npm run typecheck
+   if ($? !== 0) {
+     echo "❌ TypeScript errors. Cannot commit broken code.";
+     task.status = 'failed';
+     task.error = 'TypeScript compilation failed';
+     exit 1;
+   }
+ }
+ ```
+
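The JavaScript block above likewise mixes shell commands (`dotnet build`, `npm run typecheck`, `$?`) into the pseudocode. A shell-only sketch of the same category gate, assuming the current task's category has already been read into a `CATEGORY` variable (for example with jq, as in the previous sketch):

```bash
# Sketch: category-specific pre-commit gate in plain shell (CATEGORY is assumed to be set).
case "$CATEGORY" in
  domain|application|infrastructure|api)
    dotnet build --no-restore --verbosity quiet \
      || { echo "❌ Build failed. Cannot commit broken code."; exit 1; }
    ;;
  frontend)
    npm run typecheck \
      || { echo "❌ TypeScript errors. Cannot commit broken code."; exit 1; }
    ;;
esac
```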
+ **If ALL validations pass, proceed to staging.**
+
 ### 1. Stage Changes
 
 **Add modified files:**
@@ -34,6 +34,97 @@ const hasBlocked = tasksBlocked > 0;
 const hasPending = tasksPending > 0;
 ```
 
+ ### 1.5. REGRESSION CHECK (MANDATORY AFTER EACH ITERATION)
+
+ **CRITICAL: After EVERY task completion, run full test suite to detect regressions.**
+
+ ```bash
+ PROJECT_NAME=$(basename $(pwd))
+ TEST_PROJECT="tests/${PROJECT_NAME}.Tests.Unit"
+
+ if [ -d "$TEST_PROJECT" ]; then
+   echo "🔍 Running regression check (full test suite)..."
+
+   # Run ALL tests to ensure nothing broke
+   dotnet test "$TEST_PROJECT" --no-build --verbosity minimal --logger "console;verbosity=minimal"
+
+   REGRESSION_EXIT_CODE=$?
+
+   if [ $REGRESSION_EXIT_CODE -ne 0 ]; then
+     echo "╔════════════════════════════════════════════════════════════╗"
+     echo "║ ⚠️ REGRESSION DETECTED ║"
+     echo "╠════════════════════════════════════════════════════════════╣"
+     echo "║ A previously passing test is now failing. ║"
+     echo "║ This indicates the last change broke existing code. ║"
+     echo "╠════════════════════════════════════════════════════════════╣"
+     echo "║ CORRECTIVE ACTION: ║"
+     echo "║ 1. Identify which test(s) started failing ║"
+     echo "║ 2. Analyze what changed in last commit ║"
+     echo "║ 3. Fix the regression ║"
+     echo "║ 4. Commit the fix ║"
+     echo "║ 5. Ralph will continue automatically ║"
+     echo "╚════════════════════════════════════════════════════════════╝"
+
+     // Parse test output to identify which tests failed
+     const regressionTests = parseFailedTests(testOutput);
+
+     // Log regression details to progress.txt
+     const progressEntry = `
+     [REGRESSION DETECTED - Iteration ${prd.config.current_iteration}]
+     Failed tests: ${regressionTests.join(', ')}
+     Last completed task: ${prd.tasks.find(t => t.id === lastCompletedTaskId).description}
+     Commit: ${lastCommitHash}
+
+     ACTION REQUIRED: Fix regression before continuing.
+     `;
+     appendToFile('.ralph/progress.txt', progressEntry);
+
+     // Mark current state as having regression
+     prd.regression_detected = {
+       iteration: prd.config.current_iteration,
+       failed_tests: regressionTests,
+       last_task: lastCompletedTaskId,
+       commit_hash: lastCommitHash
+     };
+     writeJSON('.ralph/prd.json', prd);
+
+     // Create a new task to fix the regression
+     const fixTask = {
+       id: prd.tasks.length + 1,
+       description: `Fix regression: ${regressionTests.length} test(s) failing`,
+       status: 'pending',
+       category: 'validation',
+       dependencies: [],
+       acceptance_criteria: `All tests pass: ${regressionTests.join(', ')}`,
+       started_at: null,
+       completed_at: null,
+       iteration: null,
+       commit_hash: null,
+       files_changed: { created: [], modified: [] },
+       validation: null,
+       error: null
+     };
+     prd.tasks.push(fixTask);
+     prd.updated_at = new Date().toISOString();
+     writeJSON('.ralph/prd.json', prd);
+
+     // DO NOT STOP - continue to next task (which is the fix task)
+     echo "📋 Created task ${fixTask.id}: Fix regression";
+   } else {
+     echo "✅ Regression check passed. No tests broken.";
+
+     // Update test metrics in progress.txt
+     const testStats = parseTestStats(testOutput);
+     const metricsEntry = `
+     [Test Metrics - Iteration ${prd.config.current_iteration}]
+     Total: ${testStats.total} | Passed: ${testStats.passed} | Failed: ${testStats.failed} | Skipped: ${testStats.skipped}
+     Duration: ${testStats.duration}
+     `;
+     appendToFile('.ralph/progress.txt', metricsEntry);
+   }
+ fi
+ ```
+
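The block above relies on `parseFailedTests(testOutput)` and `parseTestStats(testOutput)`, which are never defined. A rough shell sketch of how those values could be pulled out of `dotnet test` console output; the exact format varies by SDK version and logger, so the patterns below (per-test `Failed <name>` lines and a final `Failed: N, Passed: N, Skipped: N, Total: N` summary) are assumptions:

```bash
# Sketch: extract failed test names and summary counts from dotnet test output (GNU grep -P assumed).
TEST_OUTPUT=$(dotnet test "$TEST_PROJECT" --no-build --verbosity minimal 2>&1)

# Failed test names, from lines such as "  Failed My.Tests.SomeTest [12 ms]"
FAILED_TESTS=$(echo "$TEST_OUTPUT" | grep -oP '^\s*Failed\s+\K[^\s\[]+' | sort -u)

# Summary counts, from the final "Failed!/Passed! - Failed: ..., Passed: ..." line
TOTAL=$(echo "$TEST_OUTPUT"   | grep -oP 'Total:\s*\K\d+'   | tail -1)
PASSED=$(echo "$TEST_OUTPUT"  | grep -oP 'Passed:\s*\K\d+'  | tail -1)
FAILED=$(echo "$TEST_OUTPUT"  | grep -oP 'Failed:\s*\K\d+'  | tail -1)
SKIPPED=$(echo "$TEST_OUTPUT" | grep -oP 'Skipped:\s*\K\d+' | tail -1)

echo "Failed tests: $FAILED_TESTS"
echo "Total: $TOTAL | Passed: $PASSED | Failed: $FAILED | Skipped: $SKIPPED"
```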
 ### 2. Check Iteration Limit
 
 **If `prd.config.current_iteration` > `prd.config.max_iterations`:**
@@ -191,20 +282,148 @@ writeJSON('.ralph/prd.json', prd);
 
 **Proceed to step-05-report.md**
 
- ### 5. More Tasks Remaining
+ ### 5. More Tasks Remaining — COMPACT LOOP (CRITICAL)
+
+ > **MANDATORY RULE: DO NOT STOP. DO NOT WAIT FOR USER INPUT. DO NOT RE-READ STEP FILES.**
+ > After the first full iteration (step-01 → step-02 → step-03 → step-04), all subsequent iterations
+ > MUST use this COMPACT LOOP inline. Re-reading step files wastes context and causes the loop to stall.
+ > **STOPPING THE LOOP = BUG.** Only completion, max iterations, or a dead-end stops the loop.
 
 **If hasPending AND iteration < max_iterations:**
 
+ Display compact progress (ONE line):
 ```
- Progress: {tasksCompleted} / {tasksTotal} tasks
- Failed: {tasksFailed} | Blocked: {tasksBlocked} | Pending: {tasksPending}
+ [{current_iteration}/{max_iterations}] {tasksCompleted}/{tasksTotal} done | Next: finding eligible task...
+ ```
+
+ **IMMEDIATELY execute the following inline loop — DO NOT re-read step files:**
 
- Next eligible task will be determined by dependency resolution.
+ #### 5a. Find Next Eligible Task (inline step-01)
 
- Continuing to next iteration...
+ ```javascript
+ const prd = readJSON('.ralph/prd.json');
+
+ // Block tasks whose dependencies failed
+ for (const task of prd.tasks) {
+   if (task.status !== 'pending') continue;
+   const depsBlocked = task.dependencies.some(depId => {
+     const dep = prd.tasks.find(t => t.id === depId);
+     return dep && (dep.status === 'failed' || dep.status === 'blocked');
+   });
+   if (depsBlocked) { task.status = 'blocked'; task.error = 'Blocked by failed dependency'; }
+ }
+
+ // Find ALL eligible tasks (dependencies met)
+ const eligible = prd.tasks.filter(task => {
+   if (task.status !== 'pending') return false;
+   return task.dependencies.every(depId => {
+     const dep = prd.tasks.find(t => t.id === depId);
+     return dep && dep.status === 'completed';
+   });
+ });
+
+ if (eligible.length === 0) {
+   // Dead-end or all done — re-run sections 2-4 above
+   goto CHECK_COMPLETION;
+ }
+
+ // BATCH MODE: group eligible tasks by category, take the first group
+ const firstCategory = eligible[0].category;
+ const batch = eligible.filter(t => t.category === firstCategory);
+ // Cap batch at 5 tasks max to keep atomic
+ const tasksToExecute = batch.slice(0, 5);
 ```
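For reference, the same eligibility rule (pending tasks whose dependencies are all completed) can be checked outside the agent; a sketch assuming jq and the prd.json task fields used above:

```bash
# Sketch: list eligible pending tasks from .ralph/prd.json.
jq '.tasks as $tasks
    | [ $tasks[]
        | select(.status == "pending")
        | select([ .dependencies[] as $d
                   | ($tasks[] | select(.id == $d) | .status) ]
                 | all(. == "completed"))
        | {id, category, description} ]' .ralph/prd.json
```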
 
- **Loop back to step-01-task.md**
+ Display:
+ ```
+ Batch: {tasksToExecute.length} task(s) [{firstCategory}]
+ {tasksToExecute.map(t => `[${t.id}] ${t.description}`).join('\n ')}
+ ```
+
+ #### 5b. Execute Batch (inline step-02)
+
+ **For EACH task in tasksToExecute:**
+
+ 1. Mark `task.status = 'in_progress'`, `task.started_at = now`
+ 2. ULTRA THINK: implement the task following SmartStack conventions
+    - Track files_created and files_modified per task
+ 3. Verify acceptance criteria
+ 4. If failed: set `task.status = 'failed'`, `task.error = reason`, continue to the next task in the batch
+
+ **CATEGORY-SPECIFIC EXECUTION RULES:**
+
+ **IF category = "frontend":** Follow MCP-FIRST protocol (MANDATORY):
+ 1. Call `mcp__smartstack__scaffold_api_client` → generates API client + types
+ 2. Call `mcp__smartstack__scaffold_routes` → updates routes inside Layout wrapper
+ 3. Create pages using SmartStack components (SmartTable, EntityCard, SmartForm, SmartFilter)
+ 4. CSS variables ONLY (NO `bg-blue-600`, use `bg-[var(--color-accent-600)]`)
+ 5. `EntityCard` for grids, `SmartTable` for lists (NO HTML `<table>` or custom `<div>` cards)
+ 6. All pages MUST have loading/error/empty states
+ 7. API client uses `@/services/api/apiClient` (NOT axios)
+ 8. Generate 4-language i18n (fr, en, it, de)
+ 9. `npm run typecheck` MUST pass
+
+ **IF category = "infrastructure":** Seed data in `Infrastructure/Persistence/Seeding/Data/{Module}/`
+
+ **IF category = "api":** Controllers in `Api/Controllers/{Context}/{App}/{Entity}Controller.cs`
+
+ After ALL tasks in the batch are executed:
+ - Run `mcp__smartstack__validate_conventions` ONCE for the whole batch
+ - Quick build check: `dotnet build --no-restore` (backend) or `npm run typecheck` (frontend), as sketched below
+
+ #### 5c. Commit Batch (inline step-03)
375
+
376
+ ```bash
377
+ # Stage all changed files
378
+ git add {all files from batch}
379
+ git add .ralph/prd.json
380
+
381
+ # Single commit for the batch
382
+ git commit -m "$(cat <<'EOF'
383
+ feat({scope}): [{firstCategory}] {tasksToExecute.length} tasks — {short summary}
384
+
385
+ Tasks: {tasksToExecute.map(t => t.id).join(', ')} / {tasks_total}
386
+ Iteration: {current_iteration}
387
+ {current_module ? "Module: " + current_module : ""}
388
+
389
+ Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
390
+ EOF
391
+ )"
392
+
393
+ COMMIT_HASH=$(git rev-parse --short HEAD)
394
+ ```
395
+
396
+ **Finalize each task in prd.json:**
397
+ ```javascript
398
+ for (const task of tasksToExecute) {
399
+ if (task.status !== 'failed') task.status = 'completed';
400
+ task.completed_at = now;
401
+ task.iteration = prd.config.current_iteration;
402
+ task.commit_hash = COMMIT_HASH;
403
+ }
404
+ prd.history.push({
405
+ iteration: prd.config.current_iteration,
406
+ task_ids: tasksToExecute.map(t => t.id),
407
+ action: 'batch-completed',
408
+ timestamp: now,
409
+ commit_hash: COMMIT_HASH,
410
+ notes: "{What was accomplished}"
411
+ });
412
+ prd.config.current_iteration++;
413
+ prd.updated_at = now;
414
+ writeJSON('.ralph/prd.json', prd);
415
+ ```
416
+
417
+ ```bash
418
+ git add .ralph/prd.json .ralph/progress.txt
419
+ [ -f .ralph/modules-queue.json ] && git add .ralph/modules-queue.json
420
+ git commit -m "chore(ralph): progress — iteration {current_iteration}"
421
+ ```
422
+
423
+ #### 5d. Re-check Completion (loop back to section 1)
424
+
425
+ **IMMEDIATELY go back to section 1 (Read Current State) of THIS step.**
426
+ **DO NOT stop. DO NOT wait for user. DO NOT re-read step files.**
208
427
 
209
428
  ---
210
429
 
@@ -300,6 +519,16 @@ Completion Check:
300
519
 
301
520
  **Always update `prd.status` and `prd.updated_at` before proceeding.**
302
521
 
522
+ **LOOP CONTINUATION IS MANDATORY:**
523
+ - After the first full iteration (step-01→02→03→04), ALL subsequent iterations use the COMPACT LOOP in section 5.
524
+ - DO NOT re-read step-01, step-02, step-03 files. You already know the instructions.
525
+ - DO NOT stop and wait for user input between iterations.
526
+ - DO NOT output a summary and pause. The loop is AUTONOMOUS.
527
+ - The ONLY reasons to stop: completion, max iterations, dead-end, or user interruption.
528
+ - Stopping for any other reason is a **BUG** that wastes user time and context.
529
+ - **BATCH tasks of the same category** to reduce iterations (max 5 per batch).
530
+ - Prefer compact output (1-2 lines per task) over verbose output during the loop.
531
+
303
532
  ---
304
533
 
305
534
  ## NEXT STEP:
@@ -123,6 +123,86 @@ if (hasQueue) {
 }
 ```
 
+ ### 1c. Extract Test Metrics (from progress.txt and test execution)
+
+ **CRITICAL: Test metrics MUST be included in the final report.**
+
+ ```bash
+ PROJECT_NAME=$(basename $(pwd))
+ TEST_PROJECT="tests/${PROJECT_NAME}.Tests.Unit"
+
+ testMetrics={
+   projectExists: false,
+   testsExecuted: false,
+   lastRunStatus: "unknown",
+   totalTests: 0,
+   passed: 0,
+   failed: 0,
+   skipped: 0,
+   coverage: 0,
+   duration: 0
+ };
+
+ if [ -d "$TEST_PROJECT" ]; then
+   testMetrics.projectExists = true;
+
+   # Extract latest test metrics from progress.txt
+   if [ -f ".ralph/progress.txt" ]; then
+     # Parse last "Test Metrics" entry
+     LAST_METRICS=$(grep -A 3 "\[Test Metrics" .ralph/progress.txt | tail -4)
+
+     # Extract values using regex
+     TOTAL=$(echo "$LAST_METRICS" | grep -oP "Total: \K\d+")
+     PASSED=$(echo "$LAST_METRICS" | grep -oP "Passed: \K\d+")
+     FAILED=$(echo "$LAST_METRICS" | grep -oP "Failed: \K\d+")
+     SKIPPED=$(echo "$LAST_METRICS" | grep -oP "Skipped: \K\d+")
+     DURATION=$(echo "$LAST_METRICS" | grep -oP "Duration: \K[\d\.]+")
+
+     if [ -n "$TOTAL" ]; then
+       testMetrics.testsExecuted = true;
+       testMetrics.totalTests = $TOTAL;
+       testMetrics.passed = $PASSED;
+       testMetrics.failed = $FAILED;
+       testMetrics.skipped = $SKIPPED;
+       testMetrics.duration = $DURATION;
+       testMetrics.lastRunStatus = [ $FAILED -eq 0 ] ? "passed" : "failed";
+     fi
+   fi
+
+   # If no metrics in progress.txt, run tests now to get final stats
+   if [ "$testMetrics.testsExecuted" = false ]; then
+     echo "Running final test suite to collect metrics...";
+     TEST_OUTPUT=$(dotnet test "$TEST_PROJECT" --no-build --verbosity minimal 2>&1);
+     TEST_EXIT_CODE=$?;
+
+     testMetrics.testsExecuted = true;
+     testMetrics.lastRunStatus = [ $TEST_EXIT_CODE -eq 0 ] ? "passed" : "failed";
+
+     # Parse test output
+     testMetrics.totalTests = parseTestCount(TEST_OUTPUT);
+     testMetrics.passed = parsePassedCount(TEST_OUTPUT);
+     testMetrics.failed = parseFailedCount(TEST_OUTPUT);
+     testMetrics.skipped = parseSkippedCount(TEST_OUTPUT);
+     testMetrics.duration = parseDuration(TEST_OUTPUT);
+   fi
+
+   # Get coverage using MCP
+   const coverageResult = mcp__smartstack__analyze_test_coverage({
+     project_path: process.cwd()
+   });
+
+   if (coverageResult && coverageResult.percentage) {
+     testMetrics.coverage = coverageResult.percentage;
+   }
+ fi
+ ```
+
+ **Store metrics for report generation:**
+
+ ```javascript
+ stats.tests = testMetrics;
+ ```
+
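As in the earlier blocks, the `testMetrics.x = ...` assignments are agent pseudocode inside bash. A scripted run could instead collect the same numbers into a JSON file; a sketch assuming jq, the `TOTAL`/`PASSED`/`FAILED`/`SKIPPED`/`DURATION` variables extracted above, and a hypothetical `.ralph/test-metrics.json` output path:

```bash
# Sketch: persist the collected metrics as JSON for the report step.
jq -n --argjson total "${TOTAL:-0}" --argjson passed "${PASSED:-0}" \
      --argjson failed "${FAILED:-0}" --argjson skipped "${SKIPPED:-0}" \
      --arg duration "${DURATION:-0}" '
  { projectExists: true,
    testsExecuted: ($total > 0),
    lastRunStatus: (if $failed == 0 then "passed" else "failed" end),
    totalTests: $total,
    passed: $passed,
    failed: $failed,
    skipped: $skipped,
    duration: ($duration | tonumber) }
' > .ralph/test-metrics.json
```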
 ### 2. Collect MCP Usage (from logs if available)
 
 **Parse from verbose logs:**
@@ -187,6 +267,41 @@ const validationStats = {
 **Modules: {moduleStats.completedModules}/{moduleStats.totalModules} completed**
 {end if}
 
+ ## Test Metrics
+
+ {if testMetrics.projectExists:}
+ | Metric | Value |
+ |--------|-------|
+ | **Test Project** | ✅ `{TEST_PROJECT}` |
+ | **Tests Executed** | {testMetrics.testsExecuted ? '✅ Yes' : '❌ No'} |
+ | **Last Run Status** | {testMetrics.lastRunStatus === 'passed' ? '✅ PASSED' : testMetrics.lastRunStatus === 'failed' ? '❌ FAILED' : '⚠️ UNKNOWN'} |
+ | **Total Tests** | {testMetrics.totalTests} |
+ | **Passed** | ✅ {testMetrics.passed} |
+ | **Failed** | ❌ {testMetrics.failed} |
+ | **Skipped** | ⏭️ {testMetrics.skipped} |
+ | **Coverage** | {testMetrics.coverage}% {testMetrics.coverage >= 80 ? '✅' : testMetrics.coverage >= 60 ? '⚠️' : '❌'} |
+ | **Duration** | {testMetrics.duration}s |
+
+ {if testMetrics.failed > 0:}
+ ⚠️ **WARNING:** Some tests are failing. The module is NOT production-ready.
+ {end if}
+
+ {if testMetrics.coverage < 80:}
+ ⚠️ **WARNING:** Test coverage is below the 80% minimum. Consider adding more tests.
+ {end if}
+
+ {else:}
+ ❌ **No test project found.** Tests were not created for this module.
+
+ **RECOMMENDATION:** Create a test project and add comprehensive tests before deploying to production.
+
+ Suggested command:
+ ```bash
+ dotnet new xunit -n {PROJECT_NAME}.Tests.Unit -o tests/{PROJECT_NAME}.Tests.Unit
+ dotnet sln add tests/{PROJECT_NAME}.Tests.Unit
+ ```
+ {end if}
+
 ## Failed Tasks
 
 {if any failed tasks:}