agentic-sdlc-wizard 1.20.0 → 1.21.0

package/CHANGELOG.md CHANGED
@@ -4,6 +4,24 @@ All notable changes to the SDLC Wizard.
4
4
 
5
5
  > **Note:** This changelog is for humans to read. Don't manually apply these changes - just run the wizard ("Check for SDLC wizard updates") and it handles everything automatically.
6
6
 
7
+ ## [1.21.0] - 2026-03-31
8
+
9
+ ### Added
10
+ - Confidence-driven setup wizard — kills the fixed 18 questions. Scans repo, builds confidence per data point, only asks what it can't infer. Dynamic question count (0-2 for well-configured projects, 10+ for bare repos). 95% aggregate confidence threshold (#52)
11
+ - CI Shepherd opt-in question in setup wizard (#48 partial)
12
+ - Cross-model release review recommendation — releases/publishes as explicit trigger, Release Review Checklist with v1.20.0 evidence (#49)
13
+ - Prove It Gate enforcement in SDLC skill — prevents unvalidated additions with quality test requirements (#50)
14
+ - 6 confidence-driven setup tests, 10 prove-it-gate tests, 6 release review tests
15
+
16
+ ### Removed
17
+ - ci-analyzer skill — violated Prove It philosophy (existence-only tests, no quality validation, overlap with `/claude-automation-recommender`) (#50)
18
+ - ci-self-heal.yml deprecated — local shepherd is the primary CI fix mechanism
19
+
20
+ ### Changed
21
+ - Wizard doc: Q-numbered questions → data point descriptions with detection hints
22
+ - Setup skill: 12 steps (was 11) with new "Build Confidence Map" step
23
+ - CLI distributes 8 template files (was 9, removed ci-analyzer)
24
+
7
25
  ## [1.20.0] - 2026-03-31
8
26
 
9
27
  ### Added
@@ -37,7 +37,7 @@ As Claude Code improves, the wizard absorbs those improvements and removes its o
37
37
  **But here's the key:** This isn't a one-size-fits-all answer. It's a starting point that helps you find YOUR answer. Every project is different. The self-evaluating loop (plan → build → test → review → improve) needs to be tuned to your codebase, your team, your standards. The wizard gives you the framework — you shape it into something bespoke.
38
38
 
39
39
  **The living system:**
40
- - CI self-heal captures friction signals as GitHub issues for pattern analysis
40
+ - The local shepherd captures friction signals during active sessions
41
41
  - You approve changes to the process
42
42
  - Both sides learn over time
43
43
  - The system improves the system (recursive improvement)
@@ -356,6 +356,14 @@ This applies to everything: native Claude Code commands vs custom skills, framew
356
356
 
357
357
  **For the wizard's CI/CD:** When the weekly-update workflow detects a new Claude Code feature that overlaps with a wizard feature, the CI should automatically run E2E with both versions and recommend KEEP CUSTOM / SWITCH TO NATIVE / TIE.
358
358
 
359
+ **This applies to YOUR OWN additions too — not just native vs custom:**
360
+ - Adding a new skill? Prove it fills a gap nothing else covers. Write quality tests.
361
+ - Adding a new hook? Prove it improves scores or catches real issues.
362
+ - Adding a new workflow? Prove the automation ROI exceeds maintenance cost.
363
+ - Existence tests ("file exists", "has frontmatter") are NOT proof. They prove the file was created, not that it works.
364
+
365
+ **Evidence:** ci-analyzer skill was added in v1.20.0 with 4 existence-only tests, zero quality validation, and overlap with the third-party `/claude-automation-recommender`. Deleted in next release. This gap led to the Prove It Gate enforcement in the SDLC skill.
366
+
359
367
  ---
360
368
 
361
369
  ## What You're Setting Up
@@ -954,7 +962,7 @@ After SDLC setup is complete, run `/claude-automation-recommender` for stack-spe
954
962
  | Category | Wizard Ships | Recommender Suggests |
955
963
  |----------|-------------|---------------------|
956
964
  | SDLC process (TDD, planning, review) | Enforced via hooks + skills | Not covered |
957
- | CI workflows (self-heal, PR review) | Templates + docs | Not covered |
965
+ | CI workflows (PR review) | Templates + docs | Not covered |
958
966
  | MCP servers (context7, Playwright, DB) | Not covered | Per-stack suggestions |
959
967
  | Auto-formatting hooks (Prettier, ESLint) | Not covered | Per-stack suggestions |
960
968
  | Type-checking hooks (tsc, mypy) | Not covered | Per-stack suggestions |
@@ -1026,39 +1034,44 @@ Feature branches still recommended for solo devs (keeps main clean, easy rollbac
1026
1034
 
1027
1035
  **Back-and-forth:** User questions live in PR comments. Bot's response is always the latest sticky comment. Clean and organized.
1028
1036
 
1029
- **CI monitoring question:**
1030
- > "Should Claude monitor CI checks after pushing and auto-diagnose failures? (y/n)"
1037
+ **CI shepherd opt-in (only if CI detected during auto-scan):**
1038
+ > "Enable CI shepherd role? Claude will actively watch CI, auto-fix failures, and iterate on review feedback. (y/n)"
1031
1039
 
1032
- - **Yes** → Enable CI feedback loop in SDLC skill, add `gh` CLI to allowedTools
1033
- - **No** → Skip CI monitoring steps (Claude still runs local tests, just doesn't watch CI)
1040
+ - **Yes** → Enable full shepherd loop: CI fix loop + review feedback loop. Ask detail questions below
1041
+ - **No** → Skip CI shepherd entirely (Claude still runs local tests, just doesn't interact with CI after pushing)
1034
1042
 
1035
- **What this does:**
1036
- 1. After pushing, Claude runs `gh pr checks` to watch CI status
1037
- 2. If checks fail, Claude reads logs via `gh run view --log-failed`
1038
- 3. Claude diagnoses the failure and proposes a fix
1039
- 4. Max 2 fix attempts, then asks user
1040
- 5. Job isn't done until CI is green
1043
+ **What the CI shepherd does:**
1044
+ 1. **CI fix loop:** After pushing, Claude watches CI via `gh pr checks`, reads failure logs, diagnoses and fixes, pushes again (max 2 attempts)
1045
+ 2. **Review feedback loop:** After CI passes, Claude reads automated review comments, implements valid suggestions, pushes and re-reviews (max 3 iterations)
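The two loops can be sketched as plain control flow. This is a minimal sketch, not the skill's implementation: the callbacks stand in for real `gh pr checks` / `gh run view --log-failed` invocations, and the attempt caps mirror the limits above.

```python
def ci_fix_loop(ci_green, fix_and_push, max_attempts=2):
    """Loop 1: watch CI, fix failures, push again (max 2 attempts)."""
    attempts = 0
    while not ci_green():            # stand-in for `gh pr checks`
        if attempts == max_attempts:
            return "ask-user"        # attempts exhausted: escalate to the user
        fix_and_push()               # read failure logs, diagnose, push a fix
        attempts += 1
    return "ci-green"

def review_loop(get_findings, apply_and_push, max_iterations=3):
    """Loop 2: read review comments, implement, re-review (max 3 iterations)."""
    for _ in range(max_iterations):
        findings = get_findings()
        if not findings:
            return "ready-to-merge"
        apply_and_push(findings)     # implement valid suggestions, push
    return "hand-remaining-to-user"

# Simulate a session: CI fails twice, then goes green.
ci_results = iter([False, False, True])
fixes = []
state = ci_fix_loop(lambda: next(ci_results), lambda: fixes.append("fix"))
print(state, len(fixes))  # ci-green 2
```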
1041
1046
 
1042
- **Recommendation:** Yes if you have CI configured. This closes the loop between
1043
- "local tests pass" and "PR is actually ready to merge."
1047
+ **Recommendation:** Yes if you have CI configured. The shepherd closes the loop between "local tests pass" and "PR is actually ready to merge."
1044
1048
 
1045
1049
  **Requirements:**
1046
1050
  - `gh` CLI installed and authenticated
1047
1051
  - CI/CD configured (GitHub Actions, etc.)
1048
1052
  - If no CI yet: skip, add later when you set up CI
1049
1053
 
1054
+ **Stored in SDLC.md metadata as:**
1055
+ ```
1056
+ <!-- CI Shepherd: enabled -->
1057
+ ```
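A hypothetical sketch of reading such metadata comments back out. The comment format comes from the doc; the helper name and regex are assumptions, not the wizard's actual parser:

```python
import re

def read_sdlc_metadata(text):
    """Parse `<!-- Key: value -->` comments into a dict (hypothetical helper)."""
    return dict(re.findall(r"<!--\s*([^:>]+?)\s*:\s*(.+?)\s*-->", text))

sdlc_md = """<!-- SDLC Wizard Version: 1.21.0 -->
<!-- CI Shepherd: enabled -->"""
meta = read_sdlc_metadata(sdlc_md)
print(meta["CI Shepherd"])  # enabled
```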
1058
+
1059
+ **Detail questions (only if CI shepherd is enabled):**
1060
+
1061
+ **CI monitoring detail:**
1062
+ > "Should Claude monitor CI checks after pushing and auto-diagnose failures? (y/n)"
1063
+
1064
+ - **Yes** → Enable CI feedback loop in SDLC skill, add `gh` CLI to allowedTools
1065
+ - **No** → Skip CI monitoring steps (Claude still runs local tests, just doesn't watch CI)
1066
+
1050
1067
  **CI review feedback question (only if CI monitoring is enabled):**
1051
1068
  > "What level of automated review response do you want?"
1052
1069
 
1053
- | Level | Name | What autofix handles | Est. API cost |
1054
- |-------|------|---------------------|---------------|
1055
- | **L1** | `ci-only` | CI failures only (broken tests, lint) | ~$0.50/fix |
1056
- | **L2** | `criticals` (default) | + Critical review findings (must-fix) | ~$1/fix |
1057
- | **L3** | `all-findings` | + Every suggestion the reviewer flags | ~$2/fix |
1058
-
1059
- > **Cost note:** Higher levels mean more autofix iterations (each ~$0.50).
1060
- > L3 typically adds 1-2 extra iterations per PR but produces cleaner code.
1061
- > You can change this anytime by editing `AUTOFIX_LEVEL` in your ci-autofix workflow.
1070
+ | Level | Name | What the shepherd handles |
1071
+ |-------|------|--------------------------|
1072
+ | **L1** | `ci-only` | CI failures only (broken tests, lint) |
1073
+ | **L2** | `criticals` (default) | + Critical review findings (must-fix) |
1074
+ | **L3** | `all-findings` | + Every suggestion the reviewer flags |
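One hedged way a shepherd could filter findings by level. The level names come from the table; the finding shape and function name are invented for illustration:

```python
LEVELS = {"ci-only": 1, "criticals": 2, "all-findings": 3}

def actionable(findings, level="criticals"):
    """Keep only the findings this level authorizes the shepherd to fix."""
    rank = LEVELS[level]
    keep = []
    for f in findings:
        if f["kind"] == "ci-failure":
            keep.append(f)                       # L1 and above
        elif f["kind"] == "critical" and rank >= 2:
            keep.append(f)                       # L2 and above
        elif f["kind"] == "suggestion" and rank >= 3:
            keep.append(f)                       # L3 only
    return keep

findings = [
    {"kind": "ci-failure", "msg": "lint error"},
    {"kind": "critical", "msg": "unhandled auth edge case"},
    {"kind": "suggestion", "msg": "rename variable"},
]
print([f["kind"] for f in actionable(findings, "criticals")])
# ['ci-failure', 'critical']
```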
1062
1075
 
1063
1076
  **What this does:**
1064
1077
  1. After CI passes, Claude reads the automated code review comments
@@ -1233,9 +1246,11 @@ Recommendation: Your current tests rely heavily on mocks.
1233
1246
 
1234
1247
  ---
1235
1248
 
1236
- ## Step 1: Confirm or Customize
1249
+ ## Step 1: Build Confidence Map and Fill Gaps
1250
+
1251
+ Claude assigns a state to each configuration data point based on scan results. **RESOLVED (detected)** items are presented for bulk confirmation. **RESOLVED (inferred)** items are presented with inferred values for the user to verify. **UNRESOLVED** items become questions. **The number of questions is dynamic — it depends on how much the scan resolves.** Stop asking when ALL data points are resolved (detected, inferred+confirmed, or answered by user).
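The triage above can be sketched as a single pass over scanned data points. The per-point thresholds and names below are illustrative assumptions (the wizard only specifies a 95% aggregate confidence target):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataPoint:
    name: str
    confidence: float            # 0.0-1.0, from the repo scan
    value: Optional[str] = None

DETECTED = 0.95   # assumed per-point cutoff for RESOLVED (detected)
INFERRED = 0.70   # assumed per-point cutoff for RESOLVED (inferred)

def triage(points):
    """Split scan results into bulk-confirm, verify, and ask buckets."""
    detected, inferred, unresolved = [], [], []
    for p in points:
        if p.confidence >= DETECTED:
            detected.append(p)      # RESOLVED (detected): bulk confirmation
        elif p.confidence >= INFERRED:
            inferred.append(p)      # RESOLVED (inferred): user verifies value
        else:
            unresolved.append(p)    # UNRESOLVED: becomes a question
    return detected, inferred, unresolved

scan = [
    DataPoint("test framework", 0.99, "vitest"),
    DataPoint("lint command", 0.80, "pnpm lint"),
    DataPoint("mocking philosophy", 0.20),
]
detected, inferred, questions = triage(scan)
print(len(questions))  # 1 question for this scan; a bare repo yields many more
```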
1237
1252
 
1238
- Claude presents what it found. You confirm or override:
1253
+ Claude presents what it found, organized by resolution state:
1239
1254
 
1240
1255
  ### Project Structure (Auto-Detected)
1241
1256
 
@@ -1244,13 +1259,13 @@ Claude presents what it found. You confirm or override:
1244
1259
  Override? (leave blank to accept): _______________
1245
1260
  ```
1246
1261
 
1247
- **Q2: Where do your tests live?**
1262
+ **Test directory** (detect from tests/, __tests__/, spec/, test file patterns)
1248
1263
  ```
1249
1264
  Examples: tests/, __tests__/, src/**/*.test.ts, spec/
1250
1265
  Your answer: _______________
1251
1266
  ```
1252
1267
 
1253
- **Q3: What's your test framework?**
1268
+ **Test framework** (detect from jest.config, vitest.config, pytest.ini, etc.)
1254
1269
  ```
1255
1270
  Options: Jest, Vitest, Playwright, Cypress, pytest, Go testing, other
1256
1271
  Your answer: _______________
@@ -1258,31 +1273,31 @@ Your answer: _______________
1258
1273
 
1259
1274
  ### Commands
1260
1275
 
1261
- **Q4: What runs your linter?**
1276
+ **Lint command** (detect from package.json scripts, Makefile, config files)
1262
1277
  ```
1263
1278
  Examples: npm run lint, pnpm lint, eslint ., biome check
1264
1279
  Your answer: _______________
1265
1280
  ```
1266
1281
 
1267
- **Q5: What runs type checking?**
1282
+ **Type-check command** (detect from tsconfig.json, mypy.ini, etc.)
1268
1283
  ```
1269
1284
  Examples: npm run typecheck, tsc --noEmit, mypy, none
1270
1285
  Your answer: _______________
1271
1286
  ```
1272
1287
 
1273
- **Q6: What runs all tests?**
1288
+ **Run all tests command** (detect from package.json "test" script, Makefile)
1274
1289
  ```
1275
1290
  Examples: npm run test, pnpm test, pytest, go test ./...
1276
1291
  Your answer: _______________
1277
1292
  ```
1278
1293
 
1279
- **Q7: What runs a specific test file?**
1294
+ **Run single test file command** (infer from framework: jest → jest path, pytest → pytest path)
1280
1295
  ```
1281
1296
  Examples: npm run test -- path/to/test.ts, pytest path/to/test.py
1282
1297
  Your answer: _______________
1283
1298
  ```
1284
1299
 
1285
- **Q8: What builds for production?**
1300
+ **Production build command** (detect from package.json "build" script, Makefile)
1286
1301
  ```
1287
1302
  Examples: npm run build, pnpm build, go build, cargo build
1288
1303
  Your answer: _______________
@@ -1290,7 +1305,7 @@ Your answer: _______________
1290
1305
 
1291
1306
  ### Deployment
1292
1307
 
1293
- **Q8.5: How do you deploy? (auto-detected, confirm or override)**
1308
+ **Deployment setup** (auto-detected from Dockerfile, vercel.json, fly.toml, deploy scripts)
1294
1309
  ```
1295
1310
  Detected: [e.g., Vercel, GitHub Actions, Docker, none]
1296
1311
 
@@ -1313,19 +1328,19 @@ Your answer: _______________
1313
1328
 
1314
1329
  ### Infrastructure
1315
1330
 
1316
- **Q9: What database(s) do you use?**
1331
+ **Database(s)** (detect from prisma/, .env DB vars, docker-compose services)
1317
1332
  ```
1318
1333
  Examples: PostgreSQL, MySQL, SQLite, MongoDB, none
1319
1334
  Your answer: _______________
1320
1335
  ```
1321
1336
 
1322
- **Q10: Do you use caching (Redis, etc.)?**
1337
+ **Caching layer** (detect from .env REDIS vars, docker-compose redis service)
1323
1338
  ```
1324
1339
  Examples: Redis, Memcached, none
1325
1340
  Your answer: _______________
1326
1341
  ```
1327
1342
 
1328
- **Q11: How long do your tests take?**
1343
+ **Test duration** (estimate from test file count, CI run times if available)
1329
1344
  ```
1330
1345
  Examples: <1 minute, 1-5 minutes, 5+ minutes
1331
1346
  Your answer: _______________
@@ -1333,7 +1348,7 @@ Your answer: _______________
1333
1348
 
1334
1349
  ### Output Preferences
1335
1350
 
1336
- **Q12: How much detail in Claude's responses?**
1351
+ **Response detail level** (cannot detect; always ask if no preference found)
1337
1352
  ```
1338
1353
  Options:
1339
1354
  - Small - Minimal output, just essentials (experienced users)
@@ -1351,7 +1366,7 @@ Stored in `.claude/settings.json` as `"verbosity": "small|medium|large"`.
1351
1366
 
1352
1367
  ### Testing Philosophy
1353
1368
 
1354
- **Q13: What's your testing approach?**
1369
+ **Testing approach** (infer from existing test patterns — test-first files, coverage config)
1355
1370
  ```
1356
1371
  Options:
1357
1372
  - Strict TDD (test first always)
@@ -1362,7 +1377,7 @@ Options:
1362
1377
  Your answer: _______________
1363
1378
  ```
1364
1379
 
1365
- **Q14: What types of tests do you want?**
1380
+ **Test types** (detect from existing test file patterns: *.test.*, *.spec.*, e2e/, integration/)
1366
1381
  ```
1367
1382
  (Check all that apply)
1368
1383
  [ ] Unit tests (pure logic, isolated)
@@ -1372,7 +1387,7 @@ Your answer: _______________
1372
1387
  [ ] Other: _______________
1373
1388
  ```
1374
1389
 
1375
- **Q15: Your mocking philosophy?**
1390
+ **Mocking philosophy** (detect from jest.mock, unittest.mock usage patterns)
1376
1391
  ```
1377
1392
  Options:
1378
1393
  - Minimal mocking (real DB, mock external APIs only)
@@ -1387,7 +1402,7 @@ Your answer: _______________
1387
1402
  **If test framework detected (Jest, pytest, Go, etc.):**
1388
1403
 
1389
1404
  ```
1390
- Q16: Code Coverage (Optional)
1405
+ Code Coverage (Optional)
1391
1406
 
1392
1407
  Detected: [test framework] with coverage configuration
1393
1408
 
@@ -1408,7 +1423,7 @@ Your answer: _______________
1408
1423
  **If no test framework detected (docs/AI-heavy project):**
1409
1424
 
1410
1425
  ```
1411
- Q16: Code Coverage (Optional)
1426
+ Code Coverage (Optional)
1412
1427
 
1413
1428
  No test framework detected (documentation/AI-heavy project).
1414
1429
 
@@ -1428,19 +1443,19 @@ Your answer: _______________
1428
1443
 
1429
1444
  ---
1430
1445
 
1431
- ### Using Your Answers
1446
+ ### How Configuration Data Points Map to Files
1432
1447
 
1433
- Your answers map to these files:
1448
+ Each resolved data point (whether detected or confirmed by the user) maps to generated files:
1434
1449
 
1435
- | Question | Used In |
1436
- |----------|---------|
1437
- | Q1 (source dir) | `tdd-pretool-check.sh` - pattern match |
1438
- | Q2 (test dir) | `TESTING.md` - documentation |
1439
- | Q3 (test framework) | `TESTING.md` - documentation |
1440
- | Q4-Q8 (commands) | `CLAUDE.md` - Commands section |
1441
- | Q9-Q10 (infra) | `CLAUDE.md` - Architecture section, `TESTING.md` - mock decisions |
1442
- | Q11 (test duration) | `SDLC skill` - wait time note |
1443
- | Q12 (E2E) | `TESTING.md` - testing diamond top |
1450
+ | Data Point | Used In |
1451
+ |-----------|---------|
1452
+ | Source directory | `tdd-pretool-check.sh` - pattern match |
1453
+ | Test directory | `TESTING.md` - documentation |
1454
+ | Test framework | `TESTING.md` - documentation |
1455
+ | Commands (lint, typecheck, test, build) | `CLAUDE.md` - Commands section |
1456
+ | Infrastructure (DB, cache) | `CLAUDE.md` - Architecture section, `TESTING.md` - mock decisions |
1457
+ | Test duration | `SDLC skill` - wait time note |
1458
+ | Test types (E2E) | `TESTING.md` - testing diamond top |
1444
1459
 
1445
1460
  ---
1446
1461
 
@@ -1689,6 +1704,7 @@ TodoWrite([
1689
1704
  { content: "Find and read relevant documentation", status: "in_progress", activeForm: "Reading docs" },
1690
1705
  { content: "Assess doc health - flag issues (ask before cleaning)", status: "pending", activeForm: "Checking doc health" },
1691
1706
  { content: "DRY scan: What patterns exist to reuse?", status: "pending", activeForm: "Scanning for reusable patterns" },
1707
+ { content: "Prove It Gate: adding new component? Research alternatives, prove quality with tests", status: "pending", activeForm: "Checking prove-it gate" },
1692
1708
  { content: "Blast radius: What depends on code I'm changing?", status: "pending", activeForm: "Checking dependencies" },
1693
1709
  { content: "Restate task in own words - verify understanding", status: "pending", activeForm: "Verifying understanding" },
1694
1710
  { content: "Scrutinize test design - right things tested? Follow TESTING.md?", status: "pending", activeForm: "Reviewing test approach" },
@@ -1730,6 +1746,22 @@ TodoWrite([
1730
1746
  - Does test approach follow TESTING.md philosophies?
1731
1747
  - If introducing new test patterns, same scrutiny as code patterns
1732
1748
 
1749
+ ## Prove It Gate (REQUIRED for New Additions)
1750
+
1751
+ **Adding a new skill, hook, workflow, or component? PROVE IT FIRST:**
1752
+
1753
+ 1. **Research:** Does something equivalent already exist (native CC, third-party plugin, existing skill)?
1754
+ 2. **If YES:** Why is yours better? Show evidence (A/B test, quality comparison, gap analysis)
1755
+ 3. **If NO:** What gap does this fill? Is the gap real or theoretical?
1756
+ 4. **Quality tests:** New additions MUST have tests that prove OUTPUT QUALITY, not just existence
1757
+ 5. **Less is more:** Every addition is maintenance burden. Default answer is NO unless proven YES
1758
+
1759
+ **Existence tests are NOT quality tests:**
1760
+ - BAD: "ci-analyzer skill file exists" — proves nothing about quality
1761
+ - GOOD: "ci-analyzer recommends lint-first when test-before-lint detected" — proves behavior
1762
+
1763
+ **If you can't write a quality test for it, you can't prove it works, so don't add it.**
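The contrast can be made concrete with a toy example. `recommend` is a hypothetical stand-in for invoking a skill; only the assertion style is the point:

```python
def recommend(job_order):
    """Toy stand-in for a skill: lint is cheap, so it should run before tests."""
    if job_order.index("lint") > job_order.index("test"):
        return "run lint before tests (fail fast on cheap checks)"
    return "ordering looks fine"

# Existence-style check: proves the function was defined, nothing more.
assert callable(recommend)

# Quality check: proves the behavior the addition claims to have.
assert "lint before tests" in recommend(["test", "lint"])
assert recommend(["lint", "test"]) == "ordering looks fine"
```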
1764
+
1733
1765
  ## Plan Mode Integration
1734
1766
 
1735
1767
  **Use plan mode for:** Multi-file changes, new features, LOW confidence, bugs needing investigation.
@@ -1779,7 +1811,7 @@ PLANNING → DOCS → TDD RED → TDD GREEN → Tests Pass → Self-Review
1779
1811
 
1780
1812
  ## Cross-Model Review (If Configured)
1781
1813
 
1782
- **When to run:** High-stakes changes (auth, payments, data handling), complex refactors, research-heavy work.
1814
+ **When to run:** High-stakes changes (auth, payments, data handling), releases/publishes (version bumps, CHANGELOG, npm publish), complex refactors, research-heavy work.
1783
1815
  **When to skip:** Trivial changes (typo fixes, config tweaks), time-sensitive hotfixes, risk < review cost.
1784
1816
 
1785
1817
  **Prerequisites:** Codex CLI installed (`npm i -g @openai/codex`), OpenAI API key set.
@@ -1884,6 +1916,17 @@ Self-review passes → handoff.json (round 1, PENDING_REVIEW)
1884
1916
 
1885
1917
  **Full protocol:** See the "Cross-Model Review Loop (Optional)" section below for key flags and reasoning effort guidance.
1886
1918
 
1919
+ ### Release Review Focus
1920
+
1921
+ Before any release/publish, add these to `review_instructions`:
1922
+ - **CHANGELOG consistency** — all sections present, no lost entries during consolidation
1923
+ - **Version parity** — package.json, SDLC.md, CHANGELOG, wizard metadata all match
1924
+ - **Stale examples** — hardcoded version strings in docs match current release
1925
+ - **Docs accuracy** — README, ARCHITECTURE.md reflect current feature set
1926
+ - **CLI-distributed file parity** — live skills, hooks, settings match CLI templates
1927
+
1928
+ Evidence: v1.20.0 cross-model review caught CHANGELOG section loss and stale wizard version examples that passed all tests and self-review.
1929
+
1887
1930
  ## Test Review (Harder Than Implementation)
1888
1931
 
1889
1932
  During self-review, critique tests HARDER than app code:
@@ -1963,7 +2006,7 @@ Sometimes the flakiness is genuinely in CI infrastructure (runner environment, G
1963
2006
 
1964
2007
  ## CI Feedback Loop — Local Shepherd (After Commit)
1965
2008
 
1966
- **This is the "local shepherd" — the primary CI fix mechanism.** It runs in your active session with full context. The optional CI Auto-Fix bot (`.github/workflows/ci-autofix.yml`) is a fallback for unattended PRs only. When both are active, the bot detects your local pushes via SHA comparison and skips automatically.
2009
+ **This is the "local shepherd" — your CI fix mechanism.** It runs in your active session with full context.
1967
2010
 
1968
2011
  **The SDLC doesn't end at local tests.** CI must pass too.
1969
2012
 
@@ -2041,25 +2084,6 @@ CI passes -> Read review suggestions
2041
2084
  - **Ask first**: Present suggestions to user, let them decide which to implement
2042
2085
  - **Skip review feedback**: Ignore CI review suggestions, only fix CI failures
2043
2086
 
2044
- ## Shepherd vs. Bot: Two-Tier CI Fix Model
2045
-
2046
- | Aspect | Local Shepherd | CI Auto-Fix Bot |
2047
- |--------|---------------|-----------------|
2048
- | **When** | Active session (you're working) | Unattended (pushed and walked away) |
2049
- | **Context** | Full: codebase, conversation, intent | Minimal: `--bare`, 200-line truncated logs |
2050
- | **Cost** | Session tokens (marginal cost ~$0) | Separate API calls ($0.50-$2.00 per fix) |
2051
- | **Noise** | 0 extra commits | 1+ `[autofix N/M]` commits per attempt |
2052
- | **Quality** | High: full diagnosis, targeted fix | Lower: stateless, may repeat same approach |
2053
- | **Speed** | Immediate: fix locally, push once | Delayed: workflow_run trigger + runner queue |
2054
- | **Deconfliction** | N/A (is the primary) | SHA check: skips if branch advanced since failure |
2055
-
2056
- **The shepherd is the default.** It runs as part of the SDLC checklist above whenever you push from an active session. The bot is optional and only adds value for:
2057
- - Dependabot/Renovate PRs (no human session)
2058
- - PRs where you push and walk away
2059
- - Overnight CI runs
2060
-
2061
- If you set up the bot, the SHA-based suppression ensures they never conflict.
2062
-
2063
2087
  ## DRY Principle
2064
2088
 
2065
2089
  **Before coding:** "What patterns exist I can reuse?"
@@ -2158,7 +2182,7 @@ Create `CLAUDE.md` in your project root. This is your project-specific configura
2158
2182
 
2159
2183
  ## Commands
2160
2184
 
2161
- <!-- CUSTOMIZE: Replace with your actual commands from Q4-Q8 -->
2185
+ <!-- CUSTOMIZE: Replace with your actual detected/confirmed commands -->
2162
2186
 
2163
2187
  - Build: `[your build command]`
2164
2188
  - Run dev: `[your dev command]`
@@ -2245,7 +2269,7 @@ These are your full reference docs. Start with stubs and expand over time:
2245
2269
 
2246
2270
  ## Environments
2247
2271
 
2248
- <!-- Claude auto-populates this from Q8.5 deployment detection -->
2272
+ <!-- Claude auto-populates this from deployment detection -->
2249
2273
 
2250
2274
  | Environment | URL | Deploy Command | Trigger |
2251
2275
  |-------------|-----|----------------|---------|
@@ -2292,7 +2316,7 @@ If deployment fails or post-deploy verification catches issues:
2292
2316
 
2293
2317
  | Environment | Rollback Command | Notes |
2294
2318
  |-------------|------------------|-------|
2295
- | Preview | [auto-expires or redeploy] | Usually self-heals |
2319
+ | Preview | [auto-expires or redeploy] | Ephemeral; redeploy to fix |
2296
2320
  | Staging | `[your rollback command]` | [notes] |
2297
2321
  | Production | `[your rollback command]` | [critical - document clearly] |
2298
2322
 
@@ -2322,7 +2346,7 @@ If deployment fails or post-deploy verification catches issues:
2322
2346
 
2323
2347
  **SDLC.md:**
2324
2348
  ```markdown
2325
- <!-- SDLC Wizard Version: 1.20.0 -->
2349
+ <!-- SDLC Wizard Version: 1.21.0 -->
2326
2350
  <!-- Setup Date: [DATE] -->
2327
2351
  <!-- Completed Steps: step-0.1, step-0.2, step-0.4, step-1, step-2, step-3, step-4, step-5, step-6, step-7, step-8, step-9 -->
2328
2352
  <!-- Git Workflow: [PRs or Solo] -->
@@ -2889,87 +2913,6 @@ Claude: [fetches via gh api, discusses with you interactively]
2889
2913
 
2890
2914
  This is optional - skip if you prefer fresh reviews only.
2891
2915
 
2892
- ### CI Auto-Fix Loop (Optional — Bot Fallback)
2893
-
2894
- > **Two-tier model:** The SDLC skill's CI loops (above) are the "local shepherd" — they handle CI fixes during active sessions. This bot is the second tier: an unattended fallback for when no one is watching. The bot includes SHA-based suppression — if you push a fix locally before the bot runs, it skips automatically.
2895
-
2896
- Automatically fix CI failures and PR review findings. Claude reads the error context, fixes the code, commits, and re-triggers CI. Loops until CI passes AND review has no findings at your chosen level, or max retries hit.
2897
-
2898
- **The Loop:**
2899
- ```
2900
- Push to PR
2901
- |
2902
- v
2903
- CI runs ──► FAIL ──► ci-autofix: Claude reads logs, fixes, commits [autofix 1/3] ──► re-trigger
2904
- |
2905
- └── PASS ──► PR Review ──► has findings at your level? ──► ci-autofix: fixes all ──► re-trigger
2906
- |
2907
- └── APPROVE, no findings ──► DONE
2908
- ```
2909
-
2910
- **Safety measures:**
2911
- - Never runs on main branch
2912
- - Max retries (default 3, configurable via `MAX_AUTOFIX_RETRIES`)
2913
- - `AUTOFIX_LEVEL` controls what findings to act on (`ci-only`, `criticals`, `all-findings`)
2914
- - Restricted Claude tools (no git, no npm)
2915
- - Self-modification ban (can't edit its own workflow file)
2916
- - `[autofix N/M]` commit tags for audit trail
2917
- - Sticky PR comments show status
2918
-
2919
- **Setup:**
2920
- 1. Create `.github/workflows/ci-autofix.yml`:
2921
-
2922
- ```yaml
2923
- name: CI Auto-Fix
2924
-
2925
- on:
2926
- workflow_run:
2927
- workflows: ["CI", "PR Code Review"]
2928
- types: [completed]
2929
-
2930
- permissions:
2931
- contents: write
2932
- pull-requests: write
2933
-
2934
- env:
2935
- MAX_AUTOFIX_RETRIES: 3
2936
- AUTOFIX_LEVEL: criticals # ci-only | criticals | all-findings
2937
-
2938
- jobs:
2939
- autofix:
2940
- runs-on: ubuntu-latest
2941
- if: |
2942
- github.event.workflow_run.head_branch != 'main' &&
2943
- github.event.workflow_run.event == 'pull_request' &&
2944
- (
2945
- (github.event.workflow_run.name == 'CI' && github.event.workflow_run.conclusion == 'failure') ||
2946
- (github.event.workflow_run.name == 'PR Code Review' && github.event.workflow_run.conclusion == 'success')
2947
- )
2948
- steps:
2949
- # Count previous [autofix] commits to enforce max retries
2950
- # Download CI failure logs or fetch review comment
2951
- # Check findings at your AUTOFIX_LEVEL (criticals + suggestions)
2952
- # Run Claude to fix ALL findings with restricted tools
2953
- # Commit [autofix N/M], push, re-trigger CI
2954
- # Post sticky PR comment with status
2955
- ```
2956
-
2957
- 2. Add `workflow_dispatch:` trigger to your CI workflow (so autofix can re-trigger it)
2958
- 3. Optionally configure a GitHub App for token generation (avoids `workflow_run` default-branch constraint)
2959
-
2960
- **Token approaches:**
2961
-
2962
- | Approach | When | Pros |
2963
- |----------|------|------|
2964
- | GITHUB_TOKEN + `gh workflow run` | Default | No extra setup |
2965
- | GitHub App token | `CI_AUTOFIX_APP_ID` secret exists | Push triggers `synchronize` naturally |
2966
-
2967
- **Note:** `workflow_run` only fires for workflows on the default branch. The ci-autofix workflow is dormant until first merged to main.
2968
-
2969
- > **Template vs. this repo:** The template above uses `ci-autofix.yml` with `criticals` as a safe default for new projects. The wizard's own repo has evolved this into `ci-self-heal.yml` with `all-findings` — a more aggressive configuration we dogfood internally. Both naming conventions work; the behavior is identical.
2970
-
2971
- ---
2972
-
2973
2916
  ### Cross-Model Review Loop (Optional)
2974
2917
 
2975
2918
  Use an independent AI model from a different company as a code reviewer. The author can't grade their own homework — a model with different training data and different biases catches blind spots the authoring model misses.
@@ -3108,6 +3051,7 @@ Claude writes code → self-review passes → handoff.json (round 1)
3108
3051
 
3109
3052
  **When to use this:**
3110
3053
  - High-stakes changes (auth, payments, data handling)
3054
+ - **Releases and publishes** (version bumps, CHANGELOG, npm publish) — see Release Review Checklist below
3111
3055
  - Research-heavy work where accuracy matters more than speed
3112
3056
  - Complex refactors touching many files
3113
3057
  - Any time you want higher confidence before merging
@@ -3117,6 +3061,30 @@ Claude writes code → self-review passes → handoff.json (round 1)
3117
3061
  - Time-sensitive hotfixes
3118
3062
  - Changes where the review cost exceeds the risk
3119
3063
 
3064
+ #### Release Review Checklist
3065
+
3066
+ Before any release or npm publish, add these focus areas to the cross-model `review_instructions`:
3067
+
3068
+ **Why:** Self-review and automated tests regularly miss release-specific inconsistencies. Evidence: v1.20.0 cross-model review caught 2 real issues (CHANGELOG section lost during consolidation, stale hardcoded version examples) that passed all tests and self-review.
3069
+
3070
+ | Check | What to Look For | Example Failure |
3071
+ |-------|-------------------|-----------------|
3072
+ | CHANGELOG consistency | All sections present, no lost entries during consolidation | v1.19.0 section dropped when merging into v1.20.0 |
3073
+ | Version parity | package.json, SDLC.md, CHANGELOG, wizard metadata all match | SDLC.md says 1.19.0 but package.json says 1.20.0 |
3074
+ | Stale examples | Hardcoded version strings in docs/wizard match current release | Wizard examples showing v1.15.0 when publishing v1.20.0 |
3075
+ | Docs accuracy | README, ARCHITECTURE.md reflect current feature set | "8 workflows" when there are actually 7 |
3076
+ | CLI-distributed file parity | Live skills, hooks, settings match CLI templates | SKILL.md edited but cli/templates/ not updated |
3077
+
3078
+ **Example `review_instructions` for releases:**
3079
+ ```
3080
+ Review for release consistency: CHANGELOG completeness (no lost sections),
3081
+ version parity across package.json/SDLC.md/CHANGELOG/wizard metadata,
3082
+ stale hardcoded versions in examples, docs accuracy vs actual features,
3083
+ CLI-distributed file parity (skills, hooks, settings).
3084
+ ```
3085
+
3086
+ **This complements automated tests, not replaces them.** Tests catch exact version mismatches (e.g., `test_package_version_matches_changelog`). Cross-model review catches semantic issues tests cannot — a section silently dropped, examples using outdated but syntactically valid versions, docs describing features that no longer exist.
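The exact-match side of that split can look like this minimal sketch. The helper names are assumptions; only the JSON parsing and the Keep-a-Changelog-style heading regex are meant literally:

```python
import json
import re

def package_version(pkg_json_text):
    """Read the version field from package.json contents."""
    return json.loads(pkg_json_text)["version"]

def latest_changelog_version(changelog_text):
    """Return the first `## [x.y.z]` heading, i.e. the newest release."""
    m = re.search(r"^## \[(\d+\.\d+\.\d+)\]", changelog_text, re.M)
    return m.group(1) if m else None

pkg = '{"name": "agentic-sdlc-wizard", "version": "1.21.0"}'
log = "# Changelog\n\n## [1.21.0] - 2026-03-31\n\n## [1.20.0] - 2026-03-31\n"
assert package_version(pkg) == latest_changelog_version(log)
print("version parity ok")
```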
3087
+
3120
3088
  ---
3121
3089
 
3122
3090
  ## User Understanding and Periodic Feedback
@@ -3249,7 +3217,7 @@ Walk through updates? (y/n)
  Store wizard state in `SDLC.md` as metadata comments (invisible to readers, parseable by Claude):

  ```markdown
- <!-- SDLC Wizard Version: 1.20.0 -->
+ <!-- SDLC Wizard Version: 1.21.0 -->
  <!-- Setup Date: 2026-01-24 -->
  <!-- Completed Steps: step-0.1, step-0.2, step-1, step-2, step-3, step-4, step-5, step-6, step-7, step-8, step-9 -->
  <!-- Git Workflow: PRs -->
package/README.md CHANGED
@@ -83,7 +83,7 @@ Layer 1: PHILOSOPHY
  | **SDP normalization** | Separates "the model had a bad day" from "our SDLC broke" by cross-referencing external benchmarks |
  | **CUSUM drift detection** | Catches gradual quality decay over time — borrowed from manufacturing quality control |
  | **Pre-tool TDD hooks** | Before source edits, a hook reminds Claude to write tests first. CI scoring checks whether it actually followed TDD |
- | **Self-evolving loop** | Weekly/monthly external research + CI friction signals from self-heal — you approve, the system gets better |
+ | **Self-evolving loop** | Weekly/monthly external research + local CI shepherd loop — you approve, the system gets better |

  ## How It Works

@@ -186,14 +186,14 @@ This isn't the only Claude Code SDLC tool. Here's an honest comparison:
  |--------|------------|----------------------|-------------|
  | **Focus** | SDLC enforcement + measurement | Agent performance optimization | Plugin marketplace |
  | **Hooks** | 3 (SDLC, TDD, instructions) | 12+ (dev blocker, prettier, etc.) | Webhook watcher |
- | **Skills** | 2 (/sdlc, /setup) | 80+ domain-specific | 13 slash commands |
+ | **Skills** | 3 (/sdlc, /setup, /update) | 80+ domain-specific | 13 slash commands |
  | **Evaluation** | 95% CI, CUSUM, SDP, Tier 1/2 | Configuration testing | skilltest framework |
- | **Self-healing** | CI auto-fix + re-trigger | No | No |
+ | **CI Shepherd** | Local CI fix loop | No | No |
  | **Auto-updates** | Weekly CC + community scan | No | No |
  | **Install** | `npx agentic-sdlc-wizard init` | npm install | npm install |
  | **Philosophy** | Lightweight, prove-it-or-delete | Scale and optimization | Documentation-first |

- **Our unique strengths:** Statistical rigor (CUSUM + 95% CI), SDP scoring (model quality vs SDLC compliance), self-healing CI, Prove-It A/B pipeline, comprehensive automated test suite, dogfooding enforcement.
+ **Our unique strengths:** Statistical rigor (CUSUM + 95% CI), SDP scoring (model quality vs SDLC compliance), CI shepherd loop, Prove-It A/B pipeline, comprehensive automated test suite, dogfooding enforcement.

  **Where others are stronger:** everything-claude-code has broader language/framework coverage. claude-sdlc has webhook-driven automation. Both have npm distribution.

@@ -204,7 +204,7 @@ This isn't the only Claude Code SDLC tool. Here's an honest comparison:
  | Document | What It Covers |
  |----------|---------------|
  | [ARCHITECTURE.md](ARCHITECTURE.md) | System design, 5-layer diagram, data flows, file structure |
- | [CI_CD.md](CI_CD.md) | All 5 workflows, E2E scoring, tier system, SDP, integrity checks |
+ | [CI_CD.md](CI_CD.md) | All 4 workflows, E2E scoring, tier system, SDP, integrity checks |
  | [SDLC.md](SDLC.md) | Version tracking, enforcement rules, SDLC configuration |
  | [TESTING.md](TESTING.md) | Testing philosophy, test diamond, TDD approach |
  | [CHANGELOG.md](CHANGELOG.md) | Version history, what changed and when |
@@ -19,6 +19,7 @@ TodoWrite([
  { content: "Find and read relevant documentation", status: "in_progress", activeForm: "Reading docs" },
  { content: "Assess doc health - flag issues (ask before cleaning)", status: "pending", activeForm: "Checking doc health" },
  { content: "DRY scan: What patterns exist to reuse? New pattern = get approval", status: "pending", activeForm: "Scanning for reusable patterns" },
+ { content: "Prove It Gate: adding new component? Research alternatives, prove quality with tests", status: "pending", activeForm: "Checking prove-it gate" },
  { content: "Blast radius: What depends on code I'm changing?", status: "pending", activeForm: "Checking dependencies" },
  { content: "Design system check (if UI change)", status: "pending", activeForm: "Checking design system" },
  { content: "Restate task in own words - verify understanding", status: "pending", activeForm: "Verifying understanding" },
@@ -84,6 +85,22 @@ Critical miss on `tdd_red` or `self_review` = process failure regardless of tota
  - Does test approach follow TESTING.md philosophies?
  - If introducing new test patterns, same scrutiny as code patterns

+ ## Prove It Gate (REQUIRED for New Additions)
+
+ **Adding a new skill, hook, workflow, or component? PROVE IT FIRST:**
+
+ 1. **Research:** Does something equivalent already exist (native CC, third-party plugin, existing skill)?
+ 2. **If YES:** Why is yours better? Show evidence (A/B test, quality comparison, gap analysis)
+ 3. **If NO:** What gap does this fill? Is the gap real or theoretical?
+ 4. **Quality tests:** New additions MUST have tests that prove OUTPUT QUALITY, not just existence
+ 5. **Less is more:** Every addition is maintenance burden. Default answer is NO unless proven YES
+
+ **Existence tests are NOT quality tests:**
+ - BAD: "ci-analyzer skill file exists" — proves nothing about quality
+ - GOOD: "ci-analyzer recommends lint-first when test-before-lint detected" — proves behavior
+
+ **If you can't write a quality test for it, you can't prove it works, so don't add it.**
+
  ## Plan Mode Integration

  **Use plan mode for:** Multi-file changes, new features, LOW confidence, bugs needing investigation.
@@ -131,7 +148,7 @@ PLANNING -> DOCS -> TDD RED -> TDD GREEN -> Tests Pass -> Self-Review

  ## Cross-Model Review (If Configured)

- **When to run:** High-stakes changes (auth, payments, data handling), complex refactors, research-heavy work.
+ **When to run:** High-stakes changes (auth, payments, data handling), releases/publishes (version bumps, CHANGELOG, npm publish), complex refactors, research-heavy work.
  **When to skip:** Trivial changes (typo fixes, config tweaks), time-sensitive hotfixes, risk < review cost.

  **Prerequisites:** Codex CLI installed (`npm i -g @openai/codex`), OpenAI API key set.
@@ -236,6 +253,17 @@ Self-review passes → handoff.json (round 1, PENDING_REVIEW)

  **Full protocol:** See the wizard's "Cross-Model Review Loop (Optional)" section for key flags and reasoning effort guidance.

+ ### Release Review Focus
+
+ Before any release/publish, add these to `review_instructions`:
+ - **CHANGELOG consistency** — all sections present, no lost entries during consolidation
+ - **Version parity** — package.json, SDLC.md, CHANGELOG, wizard metadata all match
+ - **Stale examples** — hardcoded version strings in docs match current release
+ - **Docs accuracy** — README, ARCHITECTURE.md reflect current feature set
+ - **CLI-distributed file parity** — live skills, hooks, settings match CLI templates
+
+ Evidence: v1.20.0 cross-model review caught CHANGELOG section loss and stale wizard version examples that passed all tests and self-review. Tests catch version mismatches; cross-model review catches semantic issues tests cannot.
+
  ## Test Review (Harder Than Implementation)

  During self-review, critique tests HARDER than app code:
@@ -337,7 +365,7 @@ Debug it. Find root cause. Fix it properly. Tests ARE code.

  ## CI Feedback Loop — Local Shepherd (After Commit)

- **This is the "local shepherd" — the primary CI fix mechanism.** It runs in your active session with full context. The optional CI Auto-Fix bot (`.github/workflows/ci-autofix.yml`) is a fallback for unattended PRs only. When both are active, the bot detects your local pushes via SHA comparison and skips automatically.
+ **This is the "local shepherd" — the CI fix mechanism.** It runs in your active session with full context.

  **The SDLC doesn't end at local tests.** CI must pass too.

@@ -1,17 +1,19 @@
  ---
  name: setup-wizard
- description: Setup wizard — scans codebase, asks 16 config questions, generates SDLC files (CLAUDE.md, SDLC.md, TESTING.md, ARCHITECTURE.md), verifies installation. Use for first-time setup or re-running setup.
+ description: Setup wizard — scans codebase, builds confidence per data point, only asks what it can't figure out, generates SDLC files. Use for first-time setup or re-running setup.
  argument-hint: [optional: regenerate | verify-only]
  effort: high
  ---
- # Setup Wizard - Interactive Project Configuration
+ # Setup Wizard - Confidence-Driven Project Configuration

  ## Task
  $ARGUMENTS

  ## Purpose

- You are an interactive setup wizard. Your job is to scan the project, ask the user ALL configuration questions, and generate the SDLC files. DO NOT skip questions. DO NOT make assumptions. The user's answers drive the output.
+ You are a confidence-driven setup wizard. Your job is to scan the project, infer as much as possible, and only ask the user about what you can't figure out. The number of questions is DYNAMIC: it depends on how much you can detect. Stop asking when all configuration data points are resolved (detected, confirmed, or answered).
+
+ **DO NOT ask a fixed list of questions. DO NOT ask what you already know.**

  ## MANDATORY FIRST ACTION: Read the Wizard Doc

@@ -36,56 +38,70 @@ Scan the project root for:
  - Deployment: Dockerfile, vercel.json, fly.toml, netlify.toml, Procfile, k8s/
  - Design system: tailwind.config.*, .storybook/, theme files, CSS custom properties
  - Existing docs: README.md, CLAUDE.md, ARCHITECTURE.md
+ - Scripts in package.json (lint, test, build, typecheck, etc.)
+ - Database config files (prisma/, drizzle.config.*, knexfile.*, .env with DB_*)
+ - Cache config (redis.conf, .env with REDIS_*)
+
+ ### Step 2: Build Confidence Map
+
+ For each configuration data point, assign a confidence level based on scan results:

- Present findings to the user in a clear summary with detected values.
+ **Configuration Data Points:**

- ### Step 2: Ask ALL 17 Questions
+ | Category | Data Point | How to Detect |
+ |----------|-----------|---------------|
+ | Structure | Source directory | Look for src/, app/, lib/, etc. |
+ | Structure | Test directory | Look for tests/, __tests__/, spec/ |
+ | Structure | Test framework | Config files (jest.config, vitest.config, pytest.ini) |
+ | Commands | Lint command | package.json scripts, Makefile, config files |
+ | Commands | Type-check command | tsconfig.json → tsc, mypy.ini → mypy |
+ | Commands | Run all tests | package.json "test" script, Makefile |
+ | Commands | Run single test file | Infer from framework (jest → jest path, pytest → pytest path) |
+ | Commands | Production build | package.json "build" script, Makefile |
+ | Commands | Deployment setup | Dockerfile, vercel.json, fly.toml, deploy scripts |
+ | Infra | Database(s) | prisma/, .env DB vars, docker-compose services |
+ | Infra | Caching layer | .env REDIS vars, docker-compose redis service |
+ | Infra | Test duration | Count test files, check CI run times if available |
+ | Preferences | Response detail level | Cannot detect — ALWAYS ASK |
+ | Preferences | Testing approach | Cannot detect intent from existing code — ALWAYS ASK |
+ | Preferences | Mocking philosophy | Cannot detect intent from existing code — ALWAYS ASK |
+ | Testing | Test types | What test files exist (*.test.*, *.spec.*, e2e/, integration/) |
+ | Coverage | Coverage config | nyc, c8, coverage.py config, CI coverage steps |
+ | CI | CI shepherd opt-in | Cannot detect preference — ALWAYS ASK (only when CI is detected) |

- Ask every question. Pre-fill detected values but let the user confirm or override.
+ **Each data point has one of three states:**
+ - **RESOLVED (detected):** Found concrete evidence — config file, script, directory exists. No question needed, just confirm.
+ - **RESOLVED (inferred):** Found indirect evidence — naming patterns, related config. Present inference, let user confirm or correct.
+ - **UNRESOLVED:** No evidence found — must ask user directly.

- **Project Structure:**
- 1. Source directory (detected or ask)
- 2. Test directory (detected or ask)
- 3. Test framework (detected or ask)
+ **Preference data points** (response detail, testing approach, mocking philosophy, CI shepherd) are ALWAYS UNRESOLVED regardless of what code patterns exist. Current code patterns show what IS, not what the user WANTS going forward.

- **Commands:**
- 4. Lint command
- 5. Type-check command
- 6. Run all tests command
- 7. Run single test file command
- 8. Production build command
- 9. Deployment setup (detected environments, confirm or customize)
+ ### Step 3: Present Findings and Fill Gaps

- **Infrastructure:**
- 10. Database(s) used
- 11. Caching layer (Redis, etc.)
- 12. Test duration (<1 min, 1-5 min, 5+ min)
+ Present ALL detected values organized by state to the user.

- **Output Preferences:**
- 13. Response detail level (small/medium/large)
+ **For RESOLVED (detected) items:** Show what was found, let user bulk-confirm with a single "Looks good" or override specific items.

- **Testing Philosophy:**
- 14. Testing approach (strict TDD, test-after, mixed, minimal, none yet)
- 15. Test types wanted (unit, integration, E2E, API)
- 16. Mocking philosophy (minimal, heavy, no mocking)
+ **For RESOLVED (inferred) items:** Show what was inferred with reasoning, ask user to confirm or correct.

- **Coverage:**
- 17. Code coverage preferences (enforce threshold, report only, AI suggestions, skip)
+ **For UNRESOLVED items:** Ask the user directly — these are your questions.

- DO NOT proceed to file generation until ALL 17 questions have answers.
+ **The ready rule:** You are ready to generate files when ALL data points are resolved (detected, inferred+confirmed, or answered by user). The number of questions you ask depends entirely on how many data points remain unresolved after scanning. A well-configured project might need 3-4 questions (just preferences). A bare repo might need 10+. There is no fixed count.

- ### Step 3: Generate CLAUDE.md
+ DO NOT proceed to file generation until all data points are resolved.

- Using the user's answers, generate `CLAUDE.md` with:
+ ### Step 4: Generate CLAUDE.md
+
+ Using detected + confirmed values, generate `CLAUDE.md` with:
  - Project overview (from scan results)
- - Commands table (Q4-Q8 answers)
+ - Commands table (detected/confirmed commands)
  - Code style section (from detected linters/formatters)
  - Architecture summary (from scan)
- - Special notes (from Q9-Q11)
+ - Special notes (infra, deployment)

  Reference: See "Step 3" in `CLAUDE_CODE_SDLC_WIZARD.md` for the full template.

- ### Step 4: Generate SDLC.md
+ ### Step 5: Generate SDLC.md

  Generate `SDLC.md` with the full SDLC checklist customized to the project:
  - Plan mode guidance
@@ -98,35 +114,35 @@ Include metadata comments:
  ```
  <!-- SDLC Wizard Version: [version from CLAUDE_CODE_SDLC_WIZARD.md] -->
  <!-- Setup Date: [today's date] -->
- <!-- Completed Steps: 0.4, 1-10 -->
+ <!-- Completed Steps: step-0.1, step-0.2, step-1, step-2, step-3, step-4, step-5, step-6, step-7, step-8, step-9 -->
  ```

  Reference: See "Step 4" in `CLAUDE_CODE_SDLC_WIZARD.md` for the full template.

- ### Step 5: Generate TESTING.md
+ ### Step 6: Generate TESTING.md

- Generate `TESTING.md` based on Q13-Q16 answers:
+ Generate `TESTING.md` based on detected/confirmed testing data:
  - Testing Diamond visualization
  - Test types and their purposes
- - Mocking rules (from Q15)
- - Test file organization (from Q2, Q3)
- - Coverage config (from Q16)
+ - Mocking rules (from detected patterns or user input)
+ - Test file organization (from detected structure)
+ - Coverage config (from detected config or user input)
  - Framework-specific patterns

  Reference: See "Step 5" in `CLAUDE_CODE_SDLC_WIZARD.md` for the full template.

- ### Step 6: Generate ARCHITECTURE.md
+ ### Step 7: Generate ARCHITECTURE.md

  Generate `ARCHITECTURE.md` with:
  - System overview diagram (from scan)
  - Component descriptions
- - Environments table (from Q8.5)
+ - Environments table (from detected deployment config)
  - Deployment checklist
  - Key technical decisions

  Reference: See "Step 6" in `CLAUDE_CODE_SDLC_WIZARD.md` for the full template.

- ### Step 7: Generate DESIGN_SYSTEM.md (If UI Detected)
+ ### Step 8: Generate DESIGN_SYSTEM.md (If UI Detected)

  Only if design system artifacts were found in Step 1:
  - Extract colors, fonts, spacing from config
@@ -135,7 +151,7 @@ Only if design system artifacts were found in Step 1:

  Skip this step if no UI/design system detected.

- ### Step 8: Configure Tool Permissions
+ ### Step 9: Configure Tool Permissions

  Based on detected stack, suggest `allowedTools` entries for `.claude/settings.json`:
  - Package manager commands (npm, pnpm, yarn, cargo, go, pip, etc.)
@@ -144,11 +160,11 @@ Based on detected stack, suggest `allowedTools` entries for `.claude/settings.js

  Present suggestions and let the user confirm.

- ### Step 9: Customize Hooks
+ ### Step 10: Customize Hooks

- Update `tdd-pretool-check.sh` with the actual source directory from Q1 (replace generic `/src/` pattern).
+ Update `tdd-pretool-check.sh` with the actual source directory (replace generic `/src/` pattern).

- ### Step 10: Verify Setup
+ ### Step 11: Verify Setup

  Run verification checks:
  1. All generated files exist and are non-empty
@@ -159,20 +175,24 @@ Run verification checks:

  Report any issues found.

- ### Step 11: Instruct Restart and Next Steps
+ ### Step 12: Instruct Restart and Next Steps

  Tell the user:
  > Setup complete. Hooks and settings load at session start.
  > **Exit Claude Code and restart it** for the new configuration to take effect.
  > On restart, the SDLC hook will fire and you'll see the checklist in every response.
  >
- > **Optional next step:** Run `/claude-automation-recommender` for stack-specific tooling suggestions (MCP servers, formatting hooks, type-checking hooks, plugins). These are complementary to the SDLC wizard — they add per-stack tooling, not process enforcement.
+ > **Optional next step:**
+ > - Run `/claude-automation-recommender` for stack-specific tooling suggestions (MCP servers, formatting hooks, type-checking hooks, plugins)
+ >
+ > The recommender is complementary to the SDLC wizard — it adds tooling recommendations, not process enforcement.

  ## Rules

- - NEVER skip a question. If the user says "I don't know", record that and move on.
- - NEVER assume answers. If auto-scan can't detect something, ASK.
- - ALWAYS show detected values and let the user confirm or override.
+ - NEVER ask what you already know from scanning. If you found it, confirm it — don't ask it.
+ - NEVER use a fixed question count. The number of questions is dynamic based on scan results.
+ - ALWAYS show detected values organized by resolution state and let the user confirm or override.
  - ALWAYS generate metadata comments in SDLC.md (version, date, steps).
+ - If most data points are resolved after scanning, present findings for bulk confirmation — don't force individual questions.
  - If the user passes `regenerate` as an argument, skip Q&A and regenerate files from existing SDLC.md metadata.
- - If the user passes `verify-only` as an argument, skip to Step 10 (verify) only.
+ - If the user passes `verify-only` as an argument, skip to Step 11 (verify) only.
@@ -45,13 +45,13 @@ Extract the latest version from the first `## [X.X.X]` line.
  Parse all CHANGELOG entries between the user's installed version and the latest. Present a clear summary:

  ```
- Installed: 1.15.0
- Latest: 1.20.0
+ Installed: 1.19.0
+ Latest: 1.21.0

  What changed:
+ - [1.21.0] Confidence-driven setup, prove-it gate, cross-model release review, ...
  - [1.20.0] Version-pinned CC update gate, Tier 1 flakiness fix, flaky test guidance, ...
  - [1.19.0] CI shepherd model, token efficiency, feature doc enforcement, ...
  ```

  **If versions match:** Say "You're up to date! (version X.X.X)" and stop.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "agentic-sdlc-wizard",
- "version": "1.20.0",
+ "version": "1.21.0",
  "description": "SDLC enforcement for Claude Code — hooks, skills, and wizard setup in one command",
  "bin": {
  "sdlc-wizard": "./cli/bin/sdlc-wizard.js"