speccrew 0.5.19 → 0.6.0

@@ -0,0 +1,243 @@
1
+ ---
2
+ name: speccrew-deploy-smoke-test
3
+ description: Performs lightweight smoke testing against a running application. Verifies core API endpoint reachability based on API Contract documents. Does NOT test business logic — only HTTP status code verification.
4
+ tools: Read, Bash, Glob
5
+ ---
6
+
7
+ # Trigger Scenarios
8
+
9
+ - User requests to verify application is running correctly
10
+ - Deploy Agent needs to validate deployment success
11
+ - Post-deployment verification required
12
+
13
+ # Input Parameters
14
+
15
+ | Parameter | Required | Type | Description |
16
+ |-----------|----------|------|-------------|
17
+ | `platform_id` | Yes | string | Platform identifier |
18
+ | `base_url` | No | string | Application base URL (e.g., `http://localhost:8080`). Required for `http` mode. |
19
+ | `api_contract_paths` | No | string | Comma-separated paths to API Contract documents. Required for `http` mode. |
20
+ | `iteration_path` | Yes | string | Current iteration directory path |
21
+ | `test_mode` | No | string | Test method: `http` (default, for server apps), `process` (for client apps - verify process runs and exits cleanly), `log` (verify expected log output). Default: `http` |
22
+ | `expected_exit_code` | No | string | Expected process exit code for `process` mode (default: `0`) |
23
+ | `log_file` | No | string | Log file path for `log` mode verification |
24
+ | `expected_log_patterns` | No | string | Comma-separated regex patterns expected in log for `log` mode (e.g., `"DB initialized,UI loaded"`) |
25
+ | `process_name` | No | string | Process name or pattern to check (for `process` mode, e.g., `myapp.exe` or `MyApp`) |
26
+
27
+ # Workflow
28
+
29
+ ### Test Strategy
30
+
31
+ Based on `test_mode` parameter:
32
+
33
+ **Mode: `http`** (default — for server/web applications)
34
+ - Parse API Contract documents
35
+ - curl each endpoint and verify HTTP status codes
36
+ - Pass criteria: GET returns 200; POST/PUT/DELETE returns non-404
37
+
38
+ **Mode: `process`** (for client/desktop applications)
39
+ - Verify the application process is still running (not crashed)
40
+ - If the app supports a CLI health check command, run it
41
+ - Pass criteria: Process alive AND exit code matches `expected_exit_code` (if process has exited)
42
+ - For GUI apps: Check that the process exists and has been running for at least 10 seconds without crashing
43
+
44
+ **Mode: `log`** (for log-based verification)
45
+ - Read `log_file` content
46
+ - Check for each pattern in `expected_log_patterns`
47
+ - Pass criteria: ALL expected patterns found in log
48
+ - Fail: Any pattern missing → report which patterns were not found
49
+ - Example patterns for client with local DB:
50
+ - "Database migration completed" (DB initialized)
51
+ - "Application ready" (app started)
52
+ - "UI rendered" (UI loaded, for GUI apps)
53
+
54
+ ## Step 1: Parse API Contracts (HTTP Mode)
55
+
56
+ Extract endpoints from API Contract documents:
57
+
58
+ 1. **Read each API Contract document**
59
+ - Split `api_contract_paths` by comma
60
+ - Read each document using Read tool
61
+
62
+ 2. **Extract endpoint list**
63
+ - Parse endpoints from contract:
64
+ - Method: GET, POST, PUT, DELETE, etc.
65
+ - Path: endpoint path (e.g., `/api/users`)
66
+ - Expected status code: from contract definition
67
+
68
+ 3. **Filter for core endpoints**
69
+ - Focus on core endpoints only
70
+ - For GET endpoints: test directly with full validation
71
+ - For POST/PUT/DELETE: only verify the endpoint exists (400/401/405 responses are acceptable; 404 is not)
72
+
73
+ ## Step 2: Execute Smoke Tests (HTTP Mode)
74
+
75
+ Test each endpoint:
76
+
77
+ 1. **For each endpoint, execute curl command**
78
+ ```bash
79
+ curl -s -o /dev/null -w "%{http_code}" -X {METHOD} {base_url}{path}
80
+ ```
81
+
82
+ 2. **Record test results**
83
+ - Endpoint path
84
+ - HTTP method
85
+ - Expected status (or status range)
86
+ - Actual status code
87
+ - Pass/fail status
88
+
89
+ 3. **Handle connection errors**
90
+ - If curl fails to connect → mark as FAILED
91
+ - Report: "Cannot connect to {base_url}"
92
+
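The pass/fail rules used in this step can be sketched as a small shell helper. This is an illustrative sketch only — the function name `classify_smoke_result` is not part of the skill; it simply encodes the criteria above (GET must return 200 or a redirect; write methods must return anything except 404; curl prints `000` when it cannot connect).

```shell
# Classify one smoke-test result from the HTTP method and the status code
# printed by: curl -s -o /dev/null -w "%{http_code}" -X {METHOD} {base_url}{path}
classify_smoke_result() {
  method="$1"; code="$2"
  case "$method" in
    GET)
      case "$code" in
        200|301|302) echo PASS ;;
        *)           echo FAIL ;;    # includes 404 and 000 (cannot connect)
      esac ;;
    POST|PUT|DELETE)
      case "$code" in
        404|000) echo FAIL ;;        # endpoint missing or unreachable
        *)       echo PASS ;;        # 400/401/405 etc. still prove the route exists
      esac ;;
    *) echo FAIL ;;                  # unknown method: treat conservatively
  esac
}
```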
93
+ ## Step 2 (Alternative): Process Mode Verification
94
+
95
+ For client/desktop applications:
96
+
97
+ 1. **Check process is still running**
98
+ - **Windows**:
99
+ ```powershell
100
+ tasklist /FI "IMAGENAME eq {process_name}" | findstr /I "{process_name}"
101
+ ```
102
+ - **Unix/Mac**:
103
+ ```bash
104
+ pgrep -f "{process_name}"
105
+ ```
106
+
107
+ 2. **Verify process stability**
108
+ - Record process start time
109
+ - Wait 10 seconds
110
+ - Check if process is still running (not crashed)
111
+ - If process has exited:
112
+ - Get exit code
113
+ - Compare with `expected_exit_code` (default: 0)
114
+
115
+ 3. **Record test results**
116
+ - Process name
117
+ - Status: RUNNING / EXITED
118
+ - Exit code (if exited)
119
+ - Uptime (if running)
120
+ - Pass/fail status
121
+
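The process-mode checks above can be sketched as two helpers (Unix/Mac variant; the function names are illustrative, not part of the skill):

```shell
# Is a process matching the pattern currently running?
check_process() {
  if pgrep -f "$1" >/dev/null 2>&1; then echo RUNNING; else echo EXITED; fi
}

# For a process that has exited: compare its exit code with expected_exit_code.
check_exit_code() {
  actual="$1"; expected="${2:-0}"    # default expected_exit_code is 0
  if [ "$actual" -eq "$expected" ]; then echo PASS; else echo FAIL; fi
}
```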
122
+ ## Step 2 (Alternative): Log Mode Verification
123
+
124
+ For log-based smoke testing:
125
+
126
+ 1. **Read log file content**
127
+ - Use Read tool to read `log_file`
128
+ - If file not found → FAILED
129
+
130
+ 2. **Parse expected patterns**
131
+ - Split `expected_log_patterns` by comma
132
+ - Trim whitespace from each pattern
133
+
134
+ 3. **Check each pattern**
135
+ - For each pattern in expected patterns:
136
+ - **Windows**:
137
+ ```powershell
138
+ Select-String -Path "{log_file}" -Pattern "{pattern}"
139
+ ```
140
+ - **Unix/Mac**:
141
+ ```bash
142
+ grep -E "{pattern}" "{log_file}"
143
+ ```
144
+ - Record whether pattern was found
145
+
146
+ 4. **Record test results**
147
+ - Pattern name
148
+ - Found: YES/NO
149
+ - Pass/fail status
150
+
151
+ 5. **Report missing patterns**
152
+ - List all patterns not found
153
+ - Include context from log if available
154
+
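The pattern-checking steps above can be sketched as one POSIX-sh function (an assumption-laden sketch; `check_log_patterns` is our name, not the skill's):

```shell
# Check every comma-separated pattern against the log file.
# Prints ALL_FOUND, or MISSING: followed by the patterns not matched.
check_log_patterns() {
  log="$1"; missing=""
  old_ifs="$IFS"; IFS=','
  for pattern in $2; do
    # trim surrounding whitespace from each pattern
    pattern="$(printf '%s' "$pattern" | sed 's/^ *//; s/ *$//')"
    grep -E -q "$pattern" "$log" || missing="$missing $pattern"
  done
  IFS="$old_ifs"
  if [ -z "$missing" ]; then echo "ALL_FOUND"; else echo "MISSING:$missing"; fi
}
```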
155
+ ## Step 3: Evaluate Results
156
+
157
+ Apply pass criteria:
158
+
159
+ 1. **GET endpoints**
160
+ - Pass: HTTP 200 or 301/302 (redirect)
161
+ - Fail: 404 or connection error
162
+
163
+ 2. **POST/PUT/DELETE endpoints**
164
+ - Pass: NOT 404 (any other status is acceptable for smoke test)
165
+ - Acceptable: 400 (bad request), 401 (unauthorized), 405 (method not allowed)
166
+ - Fail: 404 (not found)
167
+
168
+ 3. **Health endpoint**
169
+ - Pass: HTTP 200
170
+ - Fail: Any other status
171
+
172
+ 4. **Calculate pass rate**
173
+ - pass_rate = (passed_count / total_count) * 100
174
+
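The pass-rate arithmetic and the 80% threshold can be sketched with integer shell math (function names are illustrative):

```shell
# pass_rate PASSED TOTAL -> integer percentage (0 if nothing was tested)
pass_rate() {
  passed="$1"; total="$2"
  if [ "$total" -eq 0 ]; then echo 0; else echo $(( passed * 100 / total )); fi
}

# SUCCESS iff pass rate >= 80%
smoke_verdict() {
  if [ "$(pass_rate "$1" "$2")" -ge 80 ]; then echo SUCCESS; else echo FAILED; fi
}
```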
175
+ ## Step 4: Report Smoke Test Results
176
+
177
+ Compile test results table:
178
+
179
+ | Endpoint | Method | Expected | Actual | Status |
180
+ |----------|--------|----------|--------|--------|
181
+ | /api/users | GET | 200 | 200 | PASS |
182
+ | /api/auth/login | POST | !404 | 401 | PASS |
183
+ | /api/orders | GET | 200 | 404 | FAIL |
184
+
185
+ # Task Completion Report
186
+
187
+ ## Success Report
188
+
189
+ ```
190
+ ## Task Completion Report
191
+ - **Status**: SUCCESS
192
+ - **Platform**: {platform_id}
193
+ - **Base URL**: {base_url}
194
+ - **Endpoints Tested**: {total_count}
195
+ - **Pass Rate**: {pass_rate}%
196
+ - **Passed**: {passed_count}
197
+ - **Failed**: {failed_count}
198
+ - **Test Results**:
199
+
200
+ | Endpoint | Method | Expected | Actual | Status |
201
+ |----------|--------|----------|--------|--------|
202
+ | {endpoint_1} | {method} | {expected} | {actual} | PASS |
203
+ | {endpoint_2} | {method} | {expected} | {actual} | PASS |
204
+ | {endpoint_3} | {method} | {expected} | {actual} | FAIL |
205
+
206
+ - **Summary**: Smoke test passed with {pass_rate}% success rate
207
+ ```
208
+
209
+ ## Failure Report
210
+
211
+ ```
212
+ ## Task Completion Report
213
+ - **Status**: FAILED
214
+ - **Platform**: {platform_id}
215
+ - **Base URL**: {base_url}
216
+ - **Endpoints Tested**: {total_count}
217
+ - **Pass Rate**: {pass_rate}%
218
+ - **Failed Endpoints**:
219
+ - {endpoint_1}: expected {expected}, got {actual}
220
+ - {endpoint_2}: expected {expected}, got {actual}
221
+ - **Error Category**: {VALIDATION_ERROR | RUNTIME_ERROR}
222
+ - **Error**: {detailed error description}
223
+ - **Recovery Hint**: Verify application is running and API contracts are up to date
224
+ ```
225
+
226
+ # Important Notes
227
+
228
+ - **Smoke test is NOT integration testing** — it only verifies service availability and endpoint reachability
229
+ - **Pass rate threshold** — If pass rate < 80% → FAILED
230
+ - **Critical GET endpoints** — If any critical GET endpoint returns 404 → FAILED
231
+ - **No business logic testing** — Smoke test does not validate request/response bodies or business rules
232
+ - **Lightweight verification** — Designed for quick post-deployment validation
233
+
234
+ # Key Rules
235
+
236
+ | Rule | Description |
237
+ |------|-------------|
238
+ | **HTTP Status Only** | Only verify HTTP status codes, not response bodies |
239
+ | **GET Endpoints** | Must return 200 or redirect (301/302) |
240
+ | **Write Endpoints** | Any status except 404 is acceptable |
241
+ | **80% Pass Rate** | Overall pass rate must be >= 80% |
242
+ | **Critical GET 404** | Any critical GET endpoint returning 404 causes failure |
243
+ | **Contract-Based** | Extract endpoints from API Contract documents |
@@ -0,0 +1,218 @@
1
+ ---
2
+ name: speccrew-deploy-startup
3
+ description: Starts the application in local/development environment and performs health check verification. Reports service URL and health status.
4
+ tools: Read, Bash
5
+ ---
6
+
7
+ # Trigger Scenarios
8
+
9
+ - User requests to start the application for testing
10
+ - Deploy Agent needs to verify application startup
11
+ - Smoke test requires a running application instance
12
+
13
+ # Input Parameters
14
+
15
+ | Parameter | Required | Type | Description |
16
+ |-----------|----------|------|-------------|
17
+ | `platform_id` | Yes | string | Platform identifier |
18
+ | `start_cmd` | Yes | string | Application start command from conventions-data (e.g., `java -jar target/app.jar`) |
19
+ | `health_url` | No | string | Health check URL (e.g., `http://localhost:8080/actuator/health`). Required for `http` mode. |
20
+ | `health_timeout` | No | string | Health check timeout, default 60s |
21
+ | `project_root` | Yes | string | Absolute path to the project root directory |
22
+ | `iteration_path` | Yes | string | Current iteration directory path |
23
+ | `verification_mode` | No | string | Verification method: `http` (default, for server apps), `process` (for client/desktop apps), `log` (for apps with log output). Default: `http` |
24
+ | `process_name` | No | string | Process name or pattern to check (for `process` mode, e.g., `myapp.exe` or `MyApp`) |
25
+ | `log_file` | No | string | Path to application log file (for `log` mode) |
26
+ | `success_pattern` | No | string | Regex pattern in log that indicates successful startup (for `log` mode, e.g., `"Application started"` or `"Ready"`) |
27
+
28
+ # Workflow
29
+
30
+ ## Step 1: Start Application
31
+
32
+ Launch the application in background:
33
+
34
+ 1. **Execute start_cmd in background via Bash**
35
+ - Working directory: `project_root`
36
+ - Command: `start_cmd`
37
+ - Use background execution (e.g., `nohup {start_cmd} > app.log 2>&1 &` on Unix)
38
+ - Capture the process PID
39
+
40
+ 2. **Record PID for cleanup**
41
+ - Store PID for later reference
42
+ - Log: "Application started with PID {pid}"
43
+
44
+ 3. **Wait for initial startup**
45
+ - Wait 5 seconds for application initialization
46
+ - Log: "Waiting for application to initialize..."
47
+
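Step 1 can be sketched as a single shell function (Unix variant; `start_app` is our name, and the init wait is parameterized here where the skill uses a fixed 5 seconds):

```shell
# Launch the start command in the background, capture the PID, wait for
# initialization, and confirm the process survived startup.
start_app() {
  cmd="$1"; log="${2:-app.log}"; init_wait="${3:-5}"
  nohup sh -c "$cmd" > "$log" 2>&1 &
  pid=$!
  echo "Application started with PID $pid" >&2
  sleep "$init_wait"
  kill -0 "$pid" 2>/dev/null || return 1   # died during initialization
  echo "$pid"                              # caller records this for cleanup
}
```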
48
+ ## Step 2: Verification (Based on Mode)
49
+
50
+ ### Verification Strategy
51
+
52
+ Based on `verification_mode` parameter:
53
+
54
+ **Mode: `http`** (default — for server/web applications)
55
+ - Poll `health_url` every 5 seconds using curl
56
+ - Success: HTTP 200 response
57
+ - Timeout: `health_timeout` seconds
58
+
59
+ **Mode: `process`** (for desktop/mobile/client applications)
60
+ - Check if process `process_name` is running
61
+ - On Windows: `tasklist /FI "IMAGENAME eq {process_name}" | findstr /I "{process_name}"`
62
+ - On Unix/Mac: `pgrep -f "{process_name}"`
63
+ - Success: Process found and running
64
+ - Timeout: `health_timeout` seconds (default 30s)
65
+
66
+ **Mode: `log`** (for applications with log-based startup confirmation)
67
+ - Monitor `log_file` for `success_pattern`
68
+ - On Windows: `Select-String -Path "{log_file}" -Pattern "{success_pattern}"`
69
+ - On Unix/Mac: `grep -m 1 "{success_pattern}" "{log_file}"`
70
+ - Poll every 3 seconds
71
+ - Success: Pattern found in log
72
+ - Timeout: `health_timeout` seconds (default 60s)
73
+
74
+ ### Execution by Mode
75
+
76
+ #### HTTP Mode (default)
77
+
78
+ Poll health endpoint until success or timeout:
79
+
80
+ 1. **Calculate max attempts**
81
+ - timeout_seconds = parseInt(health_timeout) || 60
82
+ - max_attempts = timeout_seconds / 5
83
+ - attempt = 0
84
+
85
+ 2. **Poll health_url**
86
+ ```bash
87
+ curl -s -o /dev/null -w "%{http_code}" {health_url}
88
+ ```
89
+
90
+ 3. **Check response**
91
+ - HTTP 200 → Health check passed, continue to Step 3
92
+ - Other codes → Increment attempt, wait 5 seconds, retry
93
+
94
+ 4. **Timeout handling**
95
+ - If attempt >= max_attempts → FAILED with Error Category: RUNTIME_ERROR
96
+ - Report: "Health check timed out after {timeout_seconds}s"
97
+
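The polling loop above can be sketched as follows (a sketch, not the skill's implementation; it prints UP or DOWN and returns a matching exit status):

```shell
# Poll health_url every interval seconds until HTTP 200 or timeout.
wait_for_health() {
  url="$1"; timeout="${2:-60}"; interval="${3:-5}"
  max_attempts=$(( timeout / interval )); attempt=0
  while [ "$attempt" -lt "$max_attempts" ]; do
    code="$(curl -s -o /dev/null -w '%{http_code}' "$url")"
    if [ "$code" = "200" ]; then echo UP; return 0; fi
    attempt=$(( attempt + 1 ))
    sleep "$interval"
  done
  echo "Health check timed out after ${timeout}s" >&2
  echo DOWN; return 1
}
```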
98
+ #### Process Mode (for client apps)
99
+
100
+ Verify process is running:
101
+
102
+ 1. **Calculate max attempts**
103
+ - timeout_seconds = parseInt(health_timeout) || 30
104
+ - max_attempts = timeout_seconds / 3
105
+ - attempt = 0
106
+
107
+ 2. **Check process existence**
108
+ - **Windows**:
109
+ ```powershell
110
+ tasklist /FI "IMAGENAME eq {process_name}" | findstr /I "{process_name}"
111
+ ```
112
+ - **Unix/Mac**:
113
+ ```bash
114
+ pgrep -f "{process_name}"
115
+ ```
116
+
117
+ 3. **Check result**
118
+ - Process found → Verification passed, continue to Step 3
119
+ - Process not found → Increment attempt, wait 3 seconds, retry
120
+
121
+ 4. **Timeout handling**
122
+ - If attempt >= max_attempts → FAILED with Error Category: RUNTIME_ERROR
123
+ - Report: "Process verification timed out after {timeout_seconds}s - process {process_name} not found"
124
+
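All three verification modes share the same retry-until-timeout shape, which can be factored into one generic helper (an illustrative sketch; `retry_until` is our name):

```shell
# Run the given check command every interval seconds until it succeeds
# or the timeout elapses. Usage for process mode:
#   retry_until 30 3 pgrep -f "$process_name"
retry_until() {
  timeout="$1"; interval="$2"; shift 2
  max_attempts=$(( timeout / interval )); attempt=0
  while [ "$attempt" -lt "$max_attempts" ]; do
    "$@" >/dev/null 2>&1 && return 0     # check succeeded
    attempt=$(( attempt + 1 ))
    sleep "$interval"
  done
  return 1                               # timed out: report RUNTIME_ERROR
}
```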
125
+ #### Log Mode (for log-based verification)
126
+
127
+ Monitor log file for success pattern:
128
+
129
+ 1. **Calculate max attempts**
130
+ - timeout_seconds = parseInt(health_timeout) || 60
131
+ - max_attempts = timeout_seconds / 3
132
+ - attempt = 0
133
+
134
+ 2. **Check log file for success_pattern**
135
+ - **Windows**:
136
+ ```powershell
137
+ Select-String -Path "{log_file}" -Pattern "{success_pattern}"
138
+ ```
139
+ - **Unix/Mac**:
140
+ ```bash
141
+ grep -m 1 "{success_pattern}" "{log_file}"
142
+ ```
143
+
144
+ 3. **Check result**
145
+ - Pattern found → Verification passed, continue to Step 3
146
+ - Pattern not found → Increment attempt, wait 3 seconds, retry
147
+
148
+ 4. **Timeout handling**
149
+ - If attempt >= max_attempts → FAILED with Error Category: RUNTIME_ERROR
150
+ - Report: "Log verification timed out after {timeout_seconds}s - pattern '{success_pattern}' not found in {log_file}"
151
+
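Log-mode polling can be sketched the same way (Unix variant; the log file may not exist yet on early attempts, hence the redirected stderr):

```shell
# Wait for success_pattern to appear in log_file, polling every interval seconds.
wait_for_log_pattern() {
  log="$1"; pattern="$2"; timeout="${3:-60}"; interval="${4:-3}"
  max_attempts=$(( timeout / interval )); attempt=0
  while [ "$attempt" -lt "$max_attempts" ]; do
    # -m 1 stops at the first match
    grep -m 1 -E "$pattern" "$log" >/dev/null 2>&1 && return 0
    attempt=$(( attempt + 1 ))
    sleep "$interval"
  done
  return 1
}
```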
152
+ ## Step 3: Report Startup Status
153
+
154
+ Compile and report startup results:
155
+
156
+ 1. **Record service information**
157
+ - Service URL: derived from `health_url` (remove `/actuator/health` if present)
158
+ - Health status: UP / DOWN
159
+ - Startup duration: time from start to health success
160
+ - PID: process ID for cleanup
161
+
162
+ 2. **Verify application is still running**
163
+ - Check if process with recorded PID exists
164
+ - If not running → FAILED with Error Category: RUNTIME_ERROR
165
+
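The Step 3 bookkeeping can be sketched as two helpers. The `/actuator/health` suffix handling mirrors the note above; other health paths pass through unchanged (function names are illustrative):

```shell
# Derive the service URL by stripping a trailing /actuator/health suffix.
service_url_from_health() {
  printf '%s\n' "$1" | sed 's|/actuator/health$||'
}

# Is the recorded PID still alive?
process_status() {
  if kill -0 "$1" 2>/dev/null; then echo UP; else echo DOWN; fi
}
```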
166
+ # Task Completion Report
167
+
168
+ ## Success Report
169
+
170
+ ```
171
+ ## Task Completion Report
172
+ - **Status**: SUCCESS
173
+ - **Platform**: {platform_id}
174
+ - **Project Root**: {project_root}
175
+ - **Service URL**: {service_url}
176
+ - **Health URL**: {health_url}
177
+ - **Health Status**: UP
178
+ - **Startup Duration**: {duration_seconds}s
179
+ - **Process ID (PID)**: {pid}
180
+ - **Start Command**: {start_cmd}
181
+ - **Summary**: Application started successfully and passed health check
182
+ ```
183
+
184
+ ## Failure Report
185
+
186
+ ```
187
+ ## Task Completion Report
188
+ - **Status**: FAILED
189
+ - **Platform**: {platform_id}
190
+ - **Project Root**: {project_root}
191
+ - **Start Command**: {start_cmd}
192
+ - **Health URL**: {health_url}
193
+ - **Error Category**: {DEPENDENCY_MISSING | RUNTIME_ERROR}
194
+ - **Error**: {detailed error description}
195
+ - **Startup Log** (last 30 lines):
196
+     {last_30_lines_of_application_log}
199
+ - **Recovery Hint**: {suggestion for resolving the issue}
200
+ ```
201
+
202
+ # Important Notes
203
+
204
+ - **Application runs in BACKGROUND** — it must remain running for the subsequent smoke test
205
+ - **PID must be reported** — the Deploy Agent uses this for cleanup later
206
+ - **Health check uses curl** — cross-platform availability may vary; ensure curl is available
207
+ - **Default timeout is 60s** — adjust health_timeout parameter for slower-starting applications
208
+ - **Process monitoring** — verify the application process is still running after health check
209
+
210
+ # Key Rules
211
+
212
+ | Rule | Description |
213
+ |------|-------------|
214
+ | **Background Execution** | Application must start in background and continue running |
215
+ | **PID Recording** | Always record and report the process PID |
216
+ | **Health Polling** | Poll health endpoint every 5 seconds until success or timeout |
217
+ | **Timeout Configurable** | health_timeout parameter controls maximum wait time |
218
+ | **Process Verification** | Verify application process is still running after health check |
@@ -117,7 +117,7 @@ Execute tasks in dependency order.
117
117
 
118
118
  1. **Mark task as 🔄 In Progress**
119
119
  2. **Implement the code** following design specification
120
- 3. **Run local checks** (Step 5)
120
+ 3. **Run local checks** (Step 6)
121
121
  4. **Update status to ✅ Complete** if checks pass
122
122
  5. **Record deviations** if implementation differs from design
123
123
 
@@ -127,7 +127,33 @@ Execute tasks in dependency order.
127
127
  - Describe issue clearly to user
128
128
  - Wait for user decision: return to design phase OR proceed with documented deviation
129
129
 
130
- ## Step 5: Local Checks
130
+ ## Step 5: Database Migration Verification
131
+
132
+ > This step applies ONLY when the task checklist contains Database Migration tasks.
133
+ > If no migration tasks exist, skip to Step 6.
134
+
135
+ ### 5.1 Verify Migration Scripts
136
+
137
+ After all migration-related tasks in Step 4 are complete:
138
+
139
+ 1. **Check script existence**: Verify all migration scripts listed in the design document's "Migration Requirements" table have been created at the specified paths
140
+ 2. **Check naming convention**: Verify script names follow the pattern defined in conventions-data.md Migration Configuration
141
+ 3. **Check script content**: Each script must contain valid SQL/DDL (or tool-specific syntax) that matches the Table Schema defined in the design document
142
+
143
+ ### 5.2 Verify Migration Order
144
+
145
+ 1. **Dependency check**: Migration scripts with table dependencies must be ordered correctly (e.g., referenced table created before foreign key table)
146
+ 2. **Version sequence**: Migration version numbers must be sequential with no gaps
147
+
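The version-sequence check can be sketched in shell, assuming Flyway-style `V{version}__{description}.sql` names with zero-padded versions (the actual convention comes from conventions-data.md, so this is only an illustration):

```shell
# Verify migration versions in a directory are sequential starting at 1.
# Relies on zero-padded versions so the shell glob sorts them in order.
check_version_sequence() {
  dir="$1"; expected=1
  for f in "$dir"/V*__*.sql; do
    [ -e "$f" ] || { echo NO_SCRIPTS; return 1; }
    v="$(basename "$f" | sed 's/^V0*\([0-9][0-9]*\)__.*/\1/')"
    if [ "$v" -ne "$expected" ]; then echo "GAP_AT_V$expected"; return 1; fi
    expected=$(( expected + 1 ))
  done
  echo SEQUENTIAL
}
```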
148
+ ### 5.3 Report Migration Summary
149
+
150
+ Add to the task record:
151
+
152
+ | Script Name | Path | Type | Tables Affected | Status |
153
+ |-------------|------|------|----------------|--------|
154
+ | {name} | {path} | CREATE/ALTER | {tables} | Created/Verified |
155
+
156
+ ## Step 6: Local Checks
131
157
 
132
158
  After completing each task, run quality checks:
133
159
 
@@ -155,7 +181,7 @@ When task is blocked (compile fail, test fail, env issue):
155
181
  3. **Check environment**: `.env` variables, database connectivity
156
182
  4. **Record diagnosis**: symptom → investigation steps → root cause → resolution
157
183
 
158
- ## Step 6: Record Deviations
184
+ ## Step 7: Record Deviations
159
185
 
160
186
  If implementation differs from design, record in task file "Deviation Log" section:
161
187
 
@@ -167,7 +193,7 @@ If implementation differs from design, record in task file "Deviation Log" secti
167
193
  | BE-003 | Use JWT library A | Used JWT library B | Library A has security vulnerability |
168
194
  ```
169
195
 
170
- ## Step 7: Handle Technical Debt
196
+ ## Step 8: Handle Technical Debt
171
197
 
172
198
  If accepting suboptimal solutions, write to tech-debt directory:
173
199
 
@@ -175,7 +201,7 @@ If accepting suboptimal solutions, write to tech-debt directory:
175
201
 
176
202
  Use the unified tech_debt document template defined in the workspace document templates configuration.
177
203
 
178
- ## Step 8: Completion Notification
204
+ ## Step 9: Completion Notification
179
205
 
180
206
  When all tasks complete, update task record and notify user:
181
207
 
@@ -205,7 +231,7 @@ Ready for testing phase.
205
231
 
206
232
  ## Task Completion Report
207
233
 
208
- At the end of Step 8 (or if the skill fails at any point), output a structured Task Completion Report:
234
+ At the end of Step 9 (or if the skill fails at any point), output a structured Task Completion Report:
209
235
 
210
236
  ### Success Report
211
237
 
@@ -219,6 +245,9 @@ At the end of Step 8 (or if the skill fails at any point), output a structured T
219
245
  - {file_path_1}
220
246
  - {file_path_2}
221
247
  - ...
248
+ - **Migration Scripts**: {count} scripts at {migration_dir}
249
+ - {script_1_name}: {type} ({tables})
250
+ - {script_2_name}: {type} ({tables})
222
251
  - **Summary**: Backend module {module_name} implemented with {X} tasks completed
223
252
  ```
224
253
 
@@ -111,6 +111,24 @@ erDiagram
111
111
 
112
112
  ### Migration Patterns
113
113
 
114
+ #### Migration Configuration
115
+
116
+ <!-- AI-TAG: MIGRATION_CONFIG -->
117
+ <!-- Backend only. Extract from project's actual migration setup. -->
118
+
119
+ | Config Item | Value |
120
+ |-------------|-------|
121
+ | Migration Tool | migration_tool (e.g., Flyway / Liquibase / Alembic / Prisma Migrate / TypeORM migrations / Knex migrations) |
122
+ | Script Language | script_language (e.g., SQL / XML / YAML / TypeScript / Python) |
123
+ | Script Directory | migration_script_dir (e.g., `src/main/resources/db/migration/`) |
124
+ | Naming Convention | migration_naming (e.g., `V{version}__{description}.sql` for Flyway) |
125
+ | Seed Data Directory | seed_data_dir (e.g., `src/main/resources/db/seed/`, or "N/A" if not used) |
126
+ | Seed Data Format | seed_data_format (e.g., SQL INSERT / JSON fixtures / CSV) |
127
+ | Execution Command | migration_run_cmd (e.g., `mvn flyway:migrate` / `npx prisma migrate dev`) |
128
+ | Validation Command | migration_validate_cmd (e.g., `mvn flyway:validate` / `npx prisma migrate diff`) |
129
+
130
+ #### Migration Workflow
131
+
114
132
  <!-- AI-TAG: MIGRATION_PATTERNS -->
115
133
  <!-- Backend only. If this platform is frontend or mobile, write 'Not applicable - database operations are handled at the backend layer.' -->
116
134
 
@@ -128,6 +146,23 @@ TestMigration --> ApplyMigration["Apply to Production"]
128
146
  - [{{name}}](file://{{path}}#L{{start}}-L{{end}})
129
147
  {{/each}}
130
148
 
149
+ #### Deployment Configuration
150
+
151
+ <!-- AI-TAG: DEPLOYMENT_CONFIG -->
152
+ <!-- Backend only. Extract from project's actual deployment/startup setup. -->
153
+
154
+ | Config Item | Value |
155
+ |-------------|-------|
156
+ | Build Command | build_cmd (e.g., `mvn package -DskipTests` / `npm run build`) |
157
+ | Start Command | start_cmd (e.g., `java -jar target/app.jar` / `npm start`) |
158
+ | Health Check URL | health_url (e.g., `http://localhost:8080/actuator/health`) |
159
+ | Health Check Timeout | health_timeout (e.g., 30s) |
160
+ | Stop Command | stop_cmd (e.g., `kill $PID` / Ctrl+C) |
161
+ | Verification Mode | verification_mode (e.g., `http` for server, `process` for desktop/mobile, `log` for log-based) |
162
+ | Process Name | process_name (e.g., `MyApp.exe` / `com.example.myapp`, for process mode. "N/A" for server) |
163
+ | Log File Path | log_file_path (e.g., `logs/app.log` / `~/Library/Logs/MyApp.log`, for log mode. "N/A" if not applicable) |
164
+ | Success Log Pattern | success_log_pattern (e.g., `"Application started"` / `"Ready"`, for log mode. "N/A" if not applicable) |
165
+
131
166
  ### Query Optimization
132
167
 
133
168
  <!-- AI-TAG: QUERY_OPTIMIZATION -->
@@ -132,9 +132,12 @@ flowchart TD
132
132
 
133
133
  ### 4.4 Migration Requirements
134
134
 
135
- | Migration | Type | Description |
136
- |-----------|------|-------------|
137
- | {migration-name} | CREATE TABLE/ALTER TABLE/ADD INDEX | {what changes} |
135
+ <!-- AI-NOTE: File Path and Script Name MUST follow the migration naming convention
136
+ and script directory defined in conventions-data.md Migration Configuration -->
137
+
138
+ | Migration | Type | Script Name | File Path | Description |
139
+ |-----------|------|-------------|-----------|-------------|
140
+ | {migration-name} | CREATE TABLE/ALTER TABLE/ADD INDEX | {e.g., V001__create_user.sql} | {e.g., src/main/resources/db/migration/} | {what changes} |
138
141
 
139
142
  ## 5. Transaction Design
140
143
 
@@ -260,7 +260,7 @@ For any uncovered acceptance criteria:
260
260
 
261
261
  **Test Case Design Document:**
262
262
  ```
263
- speccrew-workspace/iterations/{number}-{type}-{name}/05.tests/cases/[feature-name]-test-case-design.md
263
+ speccrew-workspace/iterations/{number}-{type}-{name}/06.system-test/cases/[feature-name]-test-case-design.md
264
264
  ```
265
265
 
266
266
  ### 7.2 Read Template
@@ -334,7 +334,7 @@ Upon completion (success or failure), output the following report format:
334
334
  - **Platform**: <platform_id, e.g., "web-vue">
335
335
  - **Phase**: test_case_design
336
336
  - **Output Files**:
337
- - `speccrew-workspace/iterations/{iteration}/05.system-test/cases/{platform_id}/[feature]-test-cases.md`
337
+ - `speccrew-workspace/iterations/{iteration}/06.system-test/cases/{platform_id}/[feature]-test-cases.md`
338
338
  - **Summary**: Test case design completed with {count} test cases covering {dimensions} dimensions
339
339
  ```
340
340
 
@@ -307,7 +307,7 @@ Output the code plan document for traceability:
307
307
  1. **Read the template**: `templates/TEST-CODE-PLAN-TEMPLATE.md`
308
308
  2. **Replace top-level placeholders** (feature name, platform, date, etc.)
309
309
  3. **Create the document** using `create_file`:
310
- - Target path: `speccrew-workspace/iterations/{number}-{type}-{name}/05.system-test/code/{platform_id}/[feature]-test-code-plan.md`
310
+ - Target path: `speccrew-workspace/iterations/{number}-{type}-{name}/06.system-test/code/{platform_id}/[feature]-test-code-plan.md`
311
311
  - Content: Template with top-level placeholders replaced
312
312
  4. **Verify**: Document has complete section structure
313
313
 
@@ -376,7 +376,7 @@ Upon completion (success or failure), output the following report format:
376
376
  - **Platform**: <platform_id, e.g., "web-vue">
377
377
  - **Phase**: test_code_gen
378
378
  - **Output Files**:
379
- - `speccrew-workspace/iterations/{iteration}/05.system-test/code/{platform_id}/[feature]-test-code-plan.md`
379
+ - `speccrew-workspace/iterations/{iteration}/06.system-test/code/{platform_id}/[feature]-test-code-plan.md`
380
380
  - <list of generated test source files>
381
381
  - **Summary**: Test code generation completed with {file_count} files covering {case_count} test cases
382
382
  ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "speccrew",
3
- "version": "0.5.19",
3
+ "version": "0.6.0",
4
4
  "description": "Spec-Driven Development toolkit for AI-powered IDEs",
5
5
  "author": "charlesmu99",
6
6
  "repository": {
@@ -180,8 +180,8 @@ Backend Designer Agent:
180
180
 
181
181
  | Deliverable | Path | Format | Description |
182
182
  |-------------|------|--------|-------------|
183
- | Test Case Document | `iterations/iXXX/05.tests/cases/[feature-name]-test-cases.md` | Per template | Includes acceptance and unit tests |
184
- | Test Report | `iterations/iXXX/05.tests/reports/[feature-name]-test-report.md` | Structured report | Includes pass rate, failure details |
183
+ | Test Case Document | `iterations/iXXX/06.system-test/cases/[feature-name]-test-cases.md` | Per template | Includes acceptance and unit tests |
184
+ | Test Report | `iterations/iXXX/06.system-test/reports/[feature-name]-test-report.md` | Structured report | Includes pass rate, failure details |
185
185
 
186
186
  ---
187
187
 
@@ -204,7 +204,8 @@ knowledge/ iterations/iXXX/
204
204
  │ ├── 04.development/
205
205
  │ │ ├── {platform_id}/ ←── Dev Agent output (frontend/backend/mobile/desktop)
206
206
  │ Dev Agent │
207
- └── 05.tests/
207
+ ├── 05.deployment/
208
+ │ └── 06.system-test/
208
209
  └── techs/conventions/testing.md ├── cases/ ←── Test Agent output
209
210
  Test Agent ────────────────── └── reports/
210
211
  ```