prizmkit 1.1.39 → 1.1.41

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. package/bundled/VERSION.json +3 -3
  2. package/bundled/dev-pipeline/SCHEMA_ANALYSIS.md +1 -1
  3. package/bundled/dev-pipeline/run-bugfix.sh +74 -0
  4. package/bundled/dev-pipeline/run-feature.sh +74 -0
  5. package/bundled/dev-pipeline/run-refactor.sh +74 -0
  6. package/bundled/dev-pipeline/scripts/generate-bootstrap-prompt.py +0 -6
  7. package/bundled/dev-pipeline/scripts/generate-bugfix-prompt.py +118 -1
  8. package/bundled/dev-pipeline/scripts/generate-refactor-prompt.py +123 -8
  9. package/bundled/dev-pipeline/templates/bootstrap-tier1.md +0 -23
  10. package/bundled/dev-pipeline/templates/bootstrap-tier2.md +0 -23
  11. package/bundled/dev-pipeline/templates/bootstrap-tier3.md +0 -23
  12. package/bundled/dev-pipeline/templates/bug-fix-list-schema.json +22 -3
  13. package/bundled/dev-pipeline/templates/bugfix-bootstrap-prompt.md +56 -0
  14. package/bundled/dev-pipeline/templates/refactor-bootstrap-prompt.md +64 -4
  15. package/bundled/dev-pipeline/templates/refactor-list-schema.json +22 -3
  16. package/bundled/dev-pipeline/tests/test-deploy-safety.sh +223 -0
  17. package/bundled/skills/_metadata.json +3 -3
  18. package/bundled/skills/app-planner/SKILL.md +0 -3
  19. package/bundled/skills/bugfix-pipeline-launcher/SKILL.md +34 -6
  20. package/bundled/skills/feature-pipeline-launcher/SKILL.md +42 -18
  21. package/bundled/skills/prizmkit-committer/SKILL.md +0 -1
  22. package/bundled/skills/prizmkit-deploy/SKILL.md +491 -209
  23. package/bundled/skills/prizmkit-deploy/references/cloud-platform-deploy.md +93 -0
  24. package/bundled/skills/prizmkit-deploy/references/deploy-config-schema.md +147 -0
  25. package/bundled/skills/prizmkit-deploy/references/deploy-history-schema.md +62 -0
  26. package/bundled/skills/prizmkit-deploy/references/docker-deploy.md +31 -0
  27. package/bundled/skills/prizmkit-deploy/references/nginx-blue-green.md +59 -0
  28. package/bundled/skills/prizmkit-init/SKILL.md +0 -2
  29. package/bundled/skills/prizmkit-plan/SKILL.md +0 -3
  30. package/bundled/skills/recovery-workflow/SKILL.md +96 -7
  31. package/bundled/skills/refactor-pipeline-launcher/SKILL.md +40 -9
  32. package/package.json +1 -1
  33. package/bundled/dev-pipeline/templates/sections/phase-deploy-verification.md +0 -31
  34. package/bundled/skills/prizmkit-deploy/assets/deploy-template.md +0 -187
@@ -342,29 +342,6 @@ Append results to `context-snapshot.md`:
  If verification fails, log the failure details but continue to commit. Failures do NOT block the commit, but you MUST attempt verification and MUST clean up the dev server.
  {{END_IF_BROWSER_INTERACTION}}
 
- ### Phase 3.8: Local Deploy Verification
-
- You just implemented this feature — you know the project's tech stack and build tools.
-
- 1. **Build**: Run the project's build/compile commands. If a required tool is missing, install it first.
- 2. **Fix**: If build fails with code errors (type errors, missing imports, config issues), fix them (max 2 rounds), then re-verify.
- 3. **Assess and record** — append to context-snapshot.md:
- - **ALL builds pass** → `## Deploy Verification: PASS` — proceed to commit
- - **Some builds fail with fixable errors** → fix and re-verify (already handled in step 2)
- - **Cannot build locally** (missing system-level deps you cannot install) → Record: `## Deploy Verification: PARTIAL — missing system deps (see below)`
-
- Deploy verification does NOT block the commit, but you MUST attempt it.
-
- **Step 4 — Smoke test** (only if build passed and project can be started):
- 1. Start the project locally (e.g., `make dev`, `npm start`, `go run .`, etc.)
- 2. Verify basic functionality: hit key endpoints, check health routes, confirm the UI loads
- 3. Stop the server process you started — do NOT leave it running
- 4. Record smoke test results in `## Deploy Verification` section
-
- If the project cannot be started locally (e.g., requires external services, databases, credentials), skip the smoke test and note why.
-
- **Deploy documentation update** — Run `/prizmkit-deploy` ONLY if this feature introduced new infrastructure or deployment-affecting changes (new database, cache, message queue, new env vars, new build steps, changed ports/protocols). If none apply, skip `/prizmkit-deploy`.
-
  ### Phase 4: Architecture Sync & Commit (SINGLE COMMIT)
 
  **4a.** Run `/prizmkit-retrospective` — maintains `.prizm-docs/` (architecture index):
@@ -435,29 +435,6 @@ Append results to `context-snapshot.md`:
  If verification fails, log the failure details but continue to commit. Failures do NOT block the commit, but you MUST attempt verification and MUST clean up the dev server.
  {{END_IF_BROWSER_INTERACTION}}
 
- ### Phase 5.8: Local Deploy Verification
-
- You just implemented this feature — you know the project's tech stack and build tools.
-
- 1. **Build**: Run the project's build/compile commands. If a required tool is missing, install it first.
- 2. **Fix**: If build fails with code errors (type errors, missing imports, config issues), fix them (max 2 rounds), then re-verify.
- 3. **Assess and record** — append to context-snapshot.md:
- - **ALL builds pass** → `## Deploy Verification: PASS` — proceed to commit
- - **Some builds fail with fixable errors** → fix and re-verify (already handled in step 2)
- - **Cannot build locally** (missing system-level deps you cannot install) → Record: `## Deploy Verification: PARTIAL — missing system deps (see below)`
-
- Deploy verification does NOT block the commit, but you MUST attempt it.
-
- **Step 4 — Smoke test** (only if build passed and project can be started):
- 1. Start the project locally (e.g., `make dev`, `npm start`, `go run .`, etc.)
- 2. Verify basic functionality: hit key endpoints, check health routes, confirm the UI loads
- 3. Stop the server process you started — do NOT leave it running
- 4. Record smoke test results in `## Deploy Verification` section
-
- If the project cannot be started locally (e.g., requires external services, databases, credentials), skip the smoke test and note why.
-
- **Deploy documentation update** — Run `/prizmkit-deploy` ONLY if this feature introduced new infrastructure or deployment-affecting changes (new database, cache, message queue, new env vars, new build steps, changed ports/protocols). If none apply, skip `/prizmkit-deploy`.
-
  ### Phase 6: Architecture Sync & Commit (SINGLE COMMIT)
 
  **6a.** Run `/prizmkit-retrospective` — maintains `.prizm-docs/` (architecture index):
@@ -500,29 +500,6 @@ Append results to `context-snapshot.md`:
  If verification fails, log the failure details but continue to commit. Failures do NOT block the commit, but you MUST attempt verification and MUST clean up the dev server.
  {{END_IF_BROWSER_INTERACTION}}
 
- ### Phase 5.8: Local Deploy Verification
-
- You just implemented this feature — you know the project's tech stack and build tools.
-
- 1. **Build**: Run the project's build/compile commands. If a required tool is missing, install it first.
- 2. **Fix**: If build fails with code errors (type errors, missing imports, config issues), fix them (max 2 rounds), then re-verify.
- 3. **Assess and record** — append to context-snapshot.md:
- - **ALL builds pass** → `## Deploy Verification: PASS` — proceed to commit
- - **Some builds fail with fixable errors** → fix and re-verify (already handled in step 2)
- - **Cannot build locally** (missing system-level deps you cannot install) → Record: `## Deploy Verification: PARTIAL — missing system deps (see below)`
-
- Deploy verification does NOT block the commit, but you MUST attempt it.
-
- **Step 4 — Smoke test** (only if build passed and project can be started):
- 1. Start the project locally (e.g., `make dev`, `npm start`, `go run .`, etc.)
- 2. Verify basic functionality: hit key endpoints, check health routes, confirm the UI loads
- 3. Stop the server process you started — do NOT leave it running
- 4. Record smoke test results in `## Deploy Verification` section
-
- If the project cannot be started locally (e.g., requires external services, databases, credentials), skip the smoke test and note why.
-
- **Deploy documentation update** — Run `/prizmkit-deploy` ONLY if this feature introduced new infrastructure or deployment-affecting changes (new database, cache, message queue, new env vars, new build steps, changed ports/protocols). If none apply, skip `/prizmkit-deploy`.
-
  ### Phase 6: Retrospective & Commit (SINGLE COMMIT) — DO NOT SKIP
 
  **Bug Fix Documentation Policy**:
@@ -1,7 +1,7 @@
  {
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Dev-Pipeline Bug Fix List",
- "description": "Schema for .prizmkit/plans/bug-fix-list.json \u2014 standardized input for the Bug Fix Pipeline",
+ "description": "Schema for .prizmkit/plans/bug-fix-list.json standardized input for the Bug Fix Pipeline",
  "type": "object",
  "required": [
  "$schema",
@@ -51,7 +51,7 @@
  "title": {
  "type": "string",
  "minLength": 1,
- "description": "Bug title \u2014 brief description of the symptom"
+ "description": "Bug title brief description of the symptom"
  },
  "description": {
  "type": "string",
@@ -194,6 +194,25 @@
  "model": {
  "type": "string",
  "description": "AI model ID for this bug fix. Overrides $MODEL env var."
+ },
+ "browser_interaction": {
+ "type": "object",
+ "description": "Browser verification config for bugs reproducible via UI. Supports playwright-cli and opencli. AI auto-detects dev server command, URL, and port from project config at runtime.",
+ "properties": {
+ "tool": {
+ "type": "string",
+ "enum": ["playwright-cli", "opencli", "auto"],
+ "default": "auto",
+ "description": "Browser tool to use. 'auto' (default) = AI chooses at runtime. 'playwright-cli' = local dev server verification in isolated browser. 'opencli' = reuses Chrome logged-in session, ideal for verifying bugs related to third-party integrations or OAuth flows."
+ },
+ "verify_steps": {
+ "type": "array",
+ "description": "Verification goals describing HOW to reproduce and verify the bug fix (e.g., 'Click login button and verify error message is gone', 'Open dashboard and confirm performance metrics display'). AI decides concrete browser tool actions at runtime. If omitted, AI auto-reproduces from error description.",
+ "items": {
+ "type": "string"
+ }
+ }
+ }
  }
  }
  }
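The new `browser_interaction` field can be illustrated with a hypothetical bug entry. The ID, title, and verify step below are invented for illustration; only the field names and the `tool` enum come from the schema hunk above:

```python
import json

# Hypothetical bug entry exercising the browser_interaction field added above.
# "B-001" and the step text are illustrative, not from the package.
bug = {
    "id": "B-001",
    "title": "Login button shows stale error message",
    "browser_interaction": {
        "tool": "playwright-cli",  # must be one of the schema's enum values
        "verify_steps": [
            "Click login button and verify error message is gone",
        ],
    },
}

# Mirror the schema's enum constraint; "auto" is the declared default.
ALLOWED_TOOLS = {"playwright-cli", "opencli", "auto"}
assert bug["browser_interaction"]["tool"] in ALLOWED_TOOLS
print(json.dumps(bug["browser_interaction"], indent=2))
```

Per the schema, `verify_steps` may be omitted entirely, in which case the AI auto-reproduces the bug from the error description.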
@@ -241,4 +260,4 @@
  }
  }
  }
- }
+ }
@@ -82,6 +82,34 @@ You are the **bug fix session agent**. Fix Bug {{BUG_ID}}: "{{BUG_TITLE}}".
  mkdir -p .prizmkit/bugfix/{{BUG_ID}}
  ```
 
+ {{IF_BROWSER_INTERACTION}}
+
+ #### Browser Verification Setup
+
+ The bug may be reproducible via the UI using browser tools:
+
+ {{IF_BROWSER_TOOL_AUTO}}
+ - **Browser Tool**: Will be auto-selected based on error type and dev server configuration
+ {{END_IF_BROWSER_TOOL_AUTO}}
+
+ {{IF_BROWSER_TOOL_PLAYWRIGHT}}
+ - **Browser Tool**: playwright-cli (local isolated browser against dev server)
+ {{END_IF_BROWSER_TOOL_PLAYWRIGHT}}
+
+ {{IF_BROWSER_TOOL_OPENCLI}}
+ - **Browser Tool**: opencli (Chrome session with existing login context — ideal for OAuth/third-party integrations)
+ {{END_IF_BROWSER_TOOL_OPENCLI}}
+
+ **Browser Verification Goals**:
+ {{BROWSER_VERIFY_STEPS}}
+
+ If the bug is related to UI/frontend, you may use these tools to:
+ 1. Reproduce the bug in a running dev server
+ 2. Verify the fix after implementation
+ 3. Smoke-test related UI flows for regression
+
+ {{END_IF_BROWSER_INTERACTION}}
+
  ### Phase 1: Diagnose & Plan
 
  **Goal**: Identify root cause, build project context, produce spec.md + plan.md.
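The `{{IF_…}}`/`{{END_IF_…}}` markers in this template are conditional blocks that the `generate-*-prompt.py` scripts expand when rendering the session prompt. One plausible sketch of that expansion, assuming a simple regex-based substitution (the function name and regex are our assumptions, not the package's actual implementation):

```python
import re

def render_conditional(template: str, flag: str, enabled: bool) -> str:
    """Keep or drop a {{IF_<flag>}}...{{END_IF_<flag>}} block.

    Hypothetical helper; the real generate-*-prompt.py scripts may differ.
    """
    pattern = re.compile(
        r"\{\{IF_" + re.escape(flag) + r"\}\}(.*?)\{\{END_IF_" + re.escape(flag) + r"\}\}",
        re.DOTALL,
    )
    # When enabled, keep the inner content; otherwise remove the whole block.
    return pattern.sub(lambda m: m.group(1) if enabled else "", template)

tpl = "before {{IF_BROWSER_INTERACTION}}browser steps{{END_IF_BROWSER_INTERACTION}} after"
print(render_conditional(tpl, "BROWSER_INTERACTION", True))   # keeps the inner text
print(render_conditional(tpl, "BROWSER_INTERACTION", False))  # drops the block
```

`re.DOTALL` matters because these blocks span multiple lines in the template.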
@@ -112,6 +140,10 @@ Run `/prizmkit-plan` with `artifact_dir=.prizmkit/bugfix/{{BUG_ID}}/`:
  - Subsequent tasks implement the minimal fix to make the test pass (GREEN state)
  - Resolve any `[NEEDS CLARIFICATION]` markers autonomously — do NOT pause
 
+ {{IF_BROWSER_INTERACTION}}
+ - **Browser Verification**: If the bug is UI-reproducible, plan.md should include browser-based reproduction as an optional verification step
+ {{END_IF_BROWSER_INTERACTION}}
+
  **DECISION GATE — Fast Path Check**:
  - If plan.md has ≤ 2 tasks AND root cause is obvious → mark `FAST_PATH=true`, skip Phase 3 (Review) later
@@ -137,6 +169,16 @@ Run `/prizmkit-implement` with `artifact_dir=.prizmkit/bugfix/{{BUG_ID}}/`:
  - Runs test suite after each task
  - Uses convergence-based test failure recovery (keep fixing while progress is being made)
 
+ {{IF_BROWSER_INTERACTION}}
+
+ **Browser Verification During Implementation**:
+ - After each code fix, you may optionally use browser tools to verify the behavior
+ - Reproduce the original bug steps and confirm they no longer occur
+ - Test related UI flows to ensure no regression
+ - Document any manual verification steps in the implementation notes
+
+ {{END_IF_BROWSER_INTERACTION}}
+
  After implement completes, verify:
  1. All tasks in plan.md are `[x]`
  2. Reproduction test passes (GREEN)
@@ -158,6 +200,15 @@ Run `/prizmkit-code-review` with `artifact_dir=.prizmkit/bugfix/{{BUG_ID}}/`:
  - If PASS: proceed
  - If NEEDS_FIXES: the skill exhausted its max rounds; log remaining findings and proceed
 
+ {{IF_BROWSER_INTERACTION}}
+
+ **Code Review — Browser Verification Check**:
+ - Verify that browser-based reproduction steps (if applicable) are clearly documented
+ - Confirm that the fix maintains the expected UI behavior for all affected flows
+ - Validate that any manual verification steps have been completed successfully
+
+ {{END_IF_BROWSER_INTERACTION}}
+
  **CP-3**: Code review complete, all tests green.
 
  **Checkpoint update**: Set step `prizmkit-code-review` to `"completed"`.
@@ -206,6 +257,11 @@ The fix-report.md MUST contain:
  - **Bug Resolution Summary**: ID, title, status, phases completed
  - **What Was Fixed**: changes made, diff summary, commit hash
  - **Verification Results**: reproduction test before/after, regression tests, review findings
+
+ {{IF_BROWSER_INTERACTION}}
+ - **Browser Verification Results**: UI flows tested, browser tool used (if any), manual verification steps completed
+ {{END_IF_BROWSER_INTERACTION}}
+
  - **Knowledge Captured**: TRAPS updated (if any), prevention recommendation
  - **Acceptance Criteria Verification**: checklist with pass/fail for each criterion
 
@@ -123,6 +123,37 @@ Run `/prizmkit-plan` with `artifact_dir=.prizmkit/refactor/{{REFACTOR_ID}}/`:
 
  Resolve any `[NEEDS CLARIFICATION]` markers using the refactor description — do NOT pause for interactive input.
 
+ {{IF_BROWSER_INTERACTION}}
+
+ #### Browser Verification Strategy
+
+ The refactor may affect UI behavior. Browser verification can validate:
+ - **UI Render**: UI components render identically after refactoring
+ - **User Interactions**: Navigation, form submissions, and workflows function as before
+ - **Feature Coverage**: Refactored features work end-to-end in real browser environment
+
+ {{IF_BROWSER_TOOL_AUTO}}
+ Browser tool will be auto-selected at runtime based on dev server and feature complexity.
+ {{END_IF_BROWSER_TOOL_AUTO}}
+
+ {{IF_BROWSER_TOOL_PLAYWRIGHT}}
+ **Tool: playwright-cli** — Local isolated browser instance for dev server verification
+ {{END_IF_BROWSER_TOOL_PLAYWRIGHT}}
+
+ {{IF_BROWSER_TOOL_OPENCLI}}
+ **Tool: opencli** — Chrome session with existing login for testing OAuth/third-party integrations
+ {{END_IF_BROWSER_TOOL_OPENCLI}}
+
+ **Verification Goals**:
+ {{BROWSER_VERIFY_STEPS}}
+
+ Include browser verification approach in plan.md:
+ - Which UI flows should be smoke-tested after refactoring?
+ - Any specific user journeys affected by the structural changes?
+ - Should verification be part of review phase or implementation phase?
+
+ {{END_IF_BROWSER_INTERACTION}}
+
  - **CP-RF-1**: Both `spec.md` and `plan.md` exist in `.prizmkit/refactor/{{REFACTOR_ID}}/`
  - **Checkpoint update**: set step `prizmkit-plan` to `"completed"` in `{{CHECKPOINT_PATH}}`
 
@@ -144,8 +175,19 @@ Resolve any `[NEEDS CLARIFICATION]` markers using the refactor description — d
  - If tests fail: revert the task, analyze why, try alternative approach
  - Writes '## Implementation Log' to context-snapshot.md (or equivalent)
  5. Do NOT change behavior — only improve structure
- 6. If the refactor involves multiple files: run `/compact` after completing half the tasks to free context budget. If `/compact` is unavailable, continue without it.
- 7. After all tasks complete, run the full test suite one final time
+
+ {{IF_BROWSER_INTERACTION}}
+
+ 6. **Browser Smoke Tests** (after every major refactor task):
+ - Use browser tools to verify UI still renders correctly
+ - Test the primary user flows affected by the refactoring
+ - Confirm no visual or interactive regressions
+ - Document any manual browser verification steps in implementation notes
+
+ {{END_IF_BROWSER_INTERACTION}}
+
+ 7. If the refactor involves multiple files: run `/compact` after completing half the tasks to free context budget. If `/compact` is unavailable, continue without it.
+ 8. After all tasks complete, run the full test suite one final time
  "
  - **Wait for Dev to return**
  - If Dev reports test failures that cannot be resolved after 3 attempts: escalate, write status="failed"
@@ -164,8 +206,19 @@ Resolve any `[NEEDS CLARIFICATION]` markers using the refactor description — d
  2. Read `.prizmkit/refactor/{{REFACTOR_ID}}/plan.md` for architecture decisions and completed tasks
  3. Run `/prizmkit-code-review` with artifact_dir=.prizmkit/refactor/{{REFACTOR_ID}}/. The skill runs an internal review-fix loop (Reviewer → filter → Dev fix, max 3 rounds) and writes review-report.md.
  4. Run full test suite and verify ALL tests pass
- 5. review-report.md will be written to .prizmkit/refactor/{{REFACTOR_ID}}/ by prizmkit-code-review
- 6. Report: verdict (PASS/NEEDS_FIXES), number of rounds, findings fixed/rejected
+
+ {{IF_BROWSER_INTERACTION}}
+
+ 5. **Browser Verification Review**:
+ - Verify that critical user workflows still function end-to-end in browser
+ - Confirm UI renders consistently after structural changes
+ - Validate any behavior-sensitive components behave identically
+ - Document browser verification findings in review-report.md
+
+ {{END_IF_BROWSER_INTERACTION}}
+
+ 6. review-report.md will be written to .prizmkit/refactor/{{REFACTOR_ID}}/ by prizmkit-code-review
+ 7. Report: verdict (PASS/NEEDS_FIXES), number of rounds, findings fixed/rejected
  "
  - **Wait for Reviewer to return**
  - Read `review-report.md` — if PASS proceed, if NEEDS_FIXES log remaining findings and proceed.
@@ -193,6 +246,13 @@ Resolve any `[NEEDS CLARIFICATION]` markers using the refactor description — d
  - Refactor Summary (ID, title, type, status, phases completed)
  - What Changed (files modified, structural changes made, diff summary)
  - Behavior Verification (test suite results before/after, specific tests exercised)
+
+ {{IF_BROWSER_INTERACTION}}
+
+ - Browser Verification (UI flows tested, tools used, any manual verification performed)
+
+ {{END_IF_BROWSER_INTERACTION}}
+
  - Code Quality Metrics (if measurable: files consolidated, duplication reduced, etc.)
  - Acceptance Criteria Verification (checklist with pass/fail for each criterion)
 
@@ -1,7 +1,7 @@
  {
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Dev-Pipeline Refactor List",
- "description": "Schema for .prizmkit/plans/refactor-list.json \u2014 standardized input for the Refactor Pipeline",
+ "description": "Schema for .prizmkit/plans/refactor-list.json standardized input for the Refactor Pipeline",
  "type": "object",
  "required": [
  "$schema",
@@ -54,7 +54,7 @@
  "title": {
  "type": "string",
  "minLength": 1,
- "description": "Refactor title \u2014 brief description of the change"
+ "description": "Refactor title brief description of the change"
  },
  "description": {
  "type": "string",
@@ -201,6 +201,25 @@
  1,
  3
  ]
+ },
+ "browser_interaction": {
+ "type": "object",
+ "description": "Browser verification config for refactors affecting UI behavior. Supports playwright-cli and opencli. AI auto-detects dev server command, URL, and port from project config at runtime.",
+ "properties": {
+ "tool": {
+ "type": "string",
+ "enum": ["playwright-cli", "opencli", "auto"],
+ "default": "auto",
+ "description": "Browser tool to use. 'auto' (default) = AI chooses at runtime. 'playwright-cli' = local dev server verification in isolated browser. 'opencli' = reuses Chrome logged-in session, ideal for verifying refactors involving third-party integrations or OAuth flows."
+ },
+ "verify_steps": {
+ "type": "array",
+ "description": "Verification goals describing WHAT to verify after refactoring (e.g., 'Navigation still works', 'Form submission succeeds', 'UI renders identically'). AI decides concrete browser tool actions at runtime. If omitted, AI explores the app and verifies no visual or behavioral regressions.",
+ "items": {
+ "type": "string"
+ }
+ }
+ }
  }
  }
  }
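Unlike the bug-fix example, a refactor entry may leave `browser_interaction` almost empty. A small sketch of how a consumer might apply the schema's declared default for `tool` (the entry itself is hypothetical):

```python
# Hypothetical refactor entry: verify_steps omitted, so per the schema the AI
# explores the app itself; tool falls back to the declared default "auto".
refactor = {
    "id": "R-001",
    "title": "Extract shared form validation into a hook",
    "browser_interaction": {},
}

config = refactor["browser_interaction"]
tool = config.get("tool", "auto")       # schema default when not specified
steps = config.get("verify_steps", [])  # empty -> AI decides actions at runtime
print(tool, len(steps))
```

JSON Schema `default` values are informational; consumers must apply them explicitly as shown here.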
@@ -248,4 +267,4 @@
  }
  }
  }
- }
+ }
@@ -0,0 +1,223 @@
+ #!/usr/bin/env bash
+ # Test deploy safety check logic across different task status scenarios
+ # Run: bash dev-pipeline/tests/test-deploy-safety.sh
+ set -euo pipefail
+
+ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+ REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
+
+ RED='\033[0;31m'
+ GREEN='\033[0;32m'
+ YELLOW='\033[1;33m'
+ NC='\033[0m'
+
+ PASS=0
+ FAIL=0
+ TOTAL=0
+
+ pass() { echo -e " ${GREEN}PASS${NC} $1"; PASS=$((PASS + 1)); TOTAL=$((TOTAL + 1)); }
+ fail() { echo -e " ${RED}FAIL${NC} $1 — $2"; FAIL=$((FAIL + 1)); TOTAL=$((TOTAL + 1)); }
+
+ echo "================================="
+ echo " Deploy Safety Check Tests"
+ echo "================================="
+ echo ""
+
+ # --- Test 1: All tasks completed -> deploy should proceed ---
+ echo "[Test 1] All 'completed': incomplete_count should be 0"
+ cat > /tmp/test-deploy-all-completed.json << 'JSON'
+ {"features": [{"id": "F-A", "status": "completed"}, {"id": "F-B", "status": "completed"}]}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-all-completed.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "0" ]]; then
+ pass "incomplete_count=0, deploy proceeds"
+ else
+ fail "incomplete_count=$count, expected 0"
+ fi
+
+ # --- Test 2: One failed, one completed -> deploy blocked ---
+ echo "[Test 2] Mixed 'completed' + 'failed': incomplete_count should be 1"
+ cat > /tmp/test-deploy-mixed.json << 'JSON'
+ {"features": [{"id": "F-A", "status": "completed"}, {"id": "F-B", "status": "failed"}]}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-mixed.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "1" ]]; then
+ pass "incomplete_count=1, deploy blocked"
+ else
+ fail "incomplete_count=$count, expected 1"
+ fi
+
+ # --- Test 3: All skipped -> deploy proceeds (skipped is non-blocking) ---
+ echo "[Test 3] All 'skipped': incomplete_count should be 0"
+ cat > /tmp/test-deploy-all-skipped.json << 'JSON'
+ {"features": [{"id": "F-A", "status": "skipped"}, {"id": "F-B", "status": "skipped"}]}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-all-skipped.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "0" ]]; then
+ pass "incomplete_count=0, deploy proceeds (skipped is non-blocking)"
+ else
+ fail "incomplete_count=$count, expected 0"
+ fi
+
+ # --- Test 4: Timed-out status -> deploy blocked ---
+ echo "[Test 4] 'timed_out' status: incomplete_count should be 1"
+ cat > /tmp/test-deploy-timeout.json << 'JSON'
+ {"features": [{"id": "F-A", "status": "completed"}, {"id": "F-B", "status": "timed_out"}]}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-timeout.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "1" ]]; then
+ pass "incomplete_count=1, deploy blocked"
+ else
+ fail "incomplete_count=$count, expected 1"
+ fi
+
+ # --- Test 5: tee /dev/stderr | tail -1 pattern extracts only count ---
+ echo "[Test 5] Multi-line output extraction: only last line (integer) captured"
+ cat > /tmp/test-deploy-tee.json << 'JSON'
+ {"features": [{"id": "F-001", "status": "failed", "title": "Broken feature"}, {"id": "F-002", "status": "crashed", "title": "Also broken"}]}
+ JSON
+ result=$({ python3 -c "
+ import json
+ with open('/tmp/test-deploy-tee.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ for f in bad:
+ print(f\" {f['id']}: {f.get('status', 'unknown')} — {f.get('title', '')}\")
+ print(len(bad))
+ " /tmp/test-deploy-tee.json 2>/dev/null || echo "0"; } | tail -1)
+ if [[ "$result" =~ ^[0-9]+$ ]]; then
+ pass "extracted count is integer: $result"
+ else
+ fail "extracted count is not a clean integer: '$result'"
+ fi
+
+ # --- Test 6: Empty feature list -> deploy proceeds ---
+ echo "[Test 6] Empty list: incomplete_count should be 0"
+ cat > /tmp/test-deploy-empty.json << 'JSON'
+ {"features": []}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-empty.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "0" ]]; then
+ pass "incomplete_count=0, deploy proceeds"
+ else
+ fail "incomplete_count=$count, expected 0"
+ fi
+
+ # --- Test 7: Bugfix variant with needs_info (non-blocking) ---
+ echo "[Test 7] Bugfix: 'needs_info' should NOT block deploy"
+ cat > /tmp/test-deploy-bugfix.json << 'JSON'
+ {"bugs": [{"id": "B-001", "status": "completed"}, {"id": "B-002", "status": "needs_info"}]}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-bugfix.json') as f:
+ data = json.load(f)
+ bad = [b for b in data.get('bugs', [])
+ if b.get('status') not in ('completed', 'skipped', 'needs_info')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "0" ]]; then
+ pass "incomplete_count=0, deploy proceeds (needs_info non-blocking)"
+ else
+ fail "incomplete_count=$count, expected 0"
+ fi
+
+ # --- Test 8: Refactor variant ---
+ echo "[Test 8] Refactor: completed + skipped -> deploy proceeds"
+ cat > /tmp/test-deploy-refactor.json << 'JSON'
+ {"refactors": [{"id": "R-001", "status": "completed"}, {"id": "R-002", "status": "skipped"}]}
+ JSON
+ count=$(python3 -c "
+ import json
+ with open('/tmp/test-deploy-refactor.json') as f:
+ data = json.load(f)
+ bad = [r for r in data.get('refactors', [])
+ if r.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "0" ]]; then
+ pass "incomplete_count=0, deploy proceeds"
+ else
+ fail "incomplete_count=$count, expected 0"
+ fi
+
+ # --- Test 9: Real feature-list.json ---
+ echo "[Test 9] Real feature-list.json: both completed -> deploy proceeds"
+ if [[ -f "$REPO_ROOT/.prizmkit/plans/feature-list.json" ]]; then
+ count=$(python3 -c "
+ import json
+ with open('$REPO_ROOT/.prizmkit/plans/feature-list.json') as f:
+ data = json.load(f)
+ bad = [f for f in data.get('features', [])
+ if f.get('status') not in ('completed', 'skipped')]
+ print(len(bad))
+ ")
+ if [[ "$count" == "0" ]]; then
+ pass "real list: incomplete_count=0"
+ else
+ fail "real list: incomplete_count=$count, expected 0"
+ fi
+ else
+ echo " ${YELLOW}SKIP${NC} No real feature-list.json found"
+ fi
+
+ # --- Test 10: ENABLE_DEPLOY=0 skips the entire block ---
+ echo "[Test 10] ENABLE_DEPLOY=0: deploy block should be skipped entirely"
+ ENABLE_DEPLOY=0
+ executed=false
+ if [[ "$ENABLE_DEPLOY" == "1" ]]; then
+ executed=true
+ fi
+ if [[ "$executed" == "false" ]]; then
+ pass "ENABLE_DEPLOY=0: deploy block skipped"
+ else
+ fail "ENABLE_DEPLOY=0: deploy block should NOT execute"
+ fi
+
+ # --- Cleanup ---
+ rm -f /tmp/test-deploy-*.json
+
+ echo ""
+ echo "================================="
+ echo " Results: $PASS passed, $FAIL failed ($TOTAL total)"
+ echo "================================="
+
+ if [[ $FAIL -gt 0 ]]; then
+ exit 1
+ fi
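All ten tests above exercise the same inline Python filter. Distilled into a standalone sketch (the function name `incomplete_count` is ours, not the package's):

```python
import json
import tempfile

def incomplete_count(path, key="features", non_blocking=("completed", "skipped")):
    """Count entries whose status would block a deploy.

    Mirrors the filter embedded in the test script above; bugfix lists
    pass key="bugs" and add "needs_info" to non_blocking.
    """
    with open(path) as f:
        data = json.load(f)
    return len([item for item in data.get(key, [])
                if item.get("status") not in non_blocking])

# Example: one failed feature blocks the deploy (count > 0).
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"features": [{"id": "F-A", "status": "completed"},
                            {"id": "F-B", "status": "failed"}]}, tmp)
print(incomplete_count(tmp.name))  # 1 -> deploy blocked
```

Treating unknown statuses (`timed_out`, `crashed`, anything unexpected) as blocking makes the check fail closed, which is what Tests 4 and 5 verify.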
@@ -1,5 +1,5 @@
  {
- "version": "1.1.39",
+ "version": "1.1.41",
  "skills": {
  "prizm-kit": {
  "description": "Full-lifecycle dev toolkit. Covers spec-driven development, Prizm context docs, code quality, debugging, deployment, and knowledge management.",
@@ -58,10 +58,10 @@
  "hasScripts": false
  },
  "prizmkit-deploy": {
- "description": "Generate or update deployment documentation (.prizmkit/deploy.md) by scanning project state and deploy strategy. On-demand skill.",
+ "description": "Universal deployment gateway: auto-discovers project type and target, routes to SSH automation (PM2 + Nginx + blue/green), guided cloud/Docker deployment, or documentation fallback. Also operates existing deployments (status/logs/restart/rollback).",
  "tier": "1",
  "category": "prizmkit-skill",
- "hasAssets": true,
+ "hasAssets": false,
  "hasScripts": false
  },
  "feature-workflow": {
@@ -37,7 +37,6 @@ If you believe the task is better suited for a different workflow, you MUST:
  1. `.prizmkit/plans/project-brief.md` (`.prizmkit/plans/` — accumulated project context brief)
  2. Project conventions and architecture decisions appended to `CLAUDE.md` / `CODEBUDDY.md` (with user consent)
  3. Infrastructure configuration (database conventions + deployment config) appended to `CLAUDE.md` / `CODEBUDDY.md` `### Infrastructure` section
- 4. `.prizmkit/deploy.md` — deployment documentation (created or updated with infrastructure config)
 
  **After planning is complete**, you MUST:
  1. Present the summary of captured project-level context (vision, conventions, architecture decisions, project brief)
@@ -199,7 +198,6 @@ Do NOT use this skill when:
  #### Deployment Credentials Reference
  - [platform]: [token/auth method description]
  ```
- - Update `.prizmkit/deploy.md` if it exists — append deployment details to relevant sections (Prerequisites, Production Deployment, Environment Variables). If it does not exist, create it from the `prizmkit-deploy` template with known information filled in.
  - Items still marked "Skip — decide later" remain as `<!-- [topic]: deferred -->` in CLAUDE.md for `prizmkit-deploy` to pick up later.
 
  4. **Project brief accumulation** — throughout all interactive phases:
@@ -443,7 +441,6 @@ After all checkpoints pass, present a summary and end the session:
  - Infrastructure config → `CLAUDE.md` / `CODEBUDDY.md` `### Infrastructure` (database conventions + deployment config)
  - Tech stack → `.prizmkit/config.json`
  - Architecture decisions (if any) → `CLAUDE.md` / `CODEBUDDY.md` `### Architecture Decisions`
- - Deployment docs → `.prizmkit/deploy.md` (if created/updated)
  - Project brief → `.prizmkit/plans/project-brief.md`
 
  2. **Suggest possible next steps** (as text, NOT auto-invoked):