agentic-sdlc-wizard 1.70.0 → 1.71.0

@@ -13,7 +13,7 @@
  "name": "sdlc-wizard",
  "source": ".",
  "description": "SDLC enforcement for AI agents — TDD, planning, self-review, CI shepherd",
- "version": "1.70.0",
+ "version": "1.71.0",
  "author": {
  "name": "Stefan Ayala"
  },
@@ -1,6 +1,6 @@
  {
  "name": "sdlc-wizard",
- "version": "1.70.0",
+ "version": "1.71.0",
  "description": "SDLC enforcement for AI agents — TDD, planning, self-review, CI shepherd",
  "author": {
  "name": "Stefan Ayala",
package/CHANGELOG.md CHANGED
@@ -4,6 +4,47 @@ All notable changes to the SDLC Wizard.
 
  > **Note:** This changelog is for humans to read. Don't manually apply these changes - just run the wizard ("Check for SDLC wizard updates") and it handles everything automatically.
 
+ ## [1.71.0] - 2026-05-05
+
+ ### Token-bloat fix: SDLC skill Cross-Model Review section trimmed
+
+ `skills/sdlc/SKILL.md` Cross-Model Review section condensed from ~70 lines to ~20 lines. Saves ~427 tokens per SDLC skill auto-invoke (4995 → 4568 tokens). The skill auto-loads on virtually every productive `implement/fix/refactor` task, so this is a real per-session cost.
+
+ ### What stayed in SKILL.md
+
+ - Decision-making: when to run / skip / prerequisites / flagship-tier reviewer rule (#233)
+ - 4-step protocol summary (preflight → handoff → reviewer → dialogue loop)
+ - Required handoff JSON keys + `pr_number` self-heal opt-in note (#209)
+ - Convergence rule (2 rounds sweet spot, 3 max)
+ - Release-review verification-checklist additions
+ - Sandbox flag for Codex from CC
+
+ ### What moved to canonical wizard doc only
+
+ Full JSON example, full codex command example, anti-patterns, multi-reviewer (Claude+Codex+human) workflow, non-code-domain variants. All these live in `CLAUDE_CODE_SDLC_WIZARD.md` → "Cross-Model Review Loop" (194 lines, full canonical protocol). The trimmed SKILL.md ends with an explicit pointer to that section.
+
+ ### Audit method
+
+ ROADMAP #236 phase 3. `scripts/audit-session-load.sh` ranked SKILL.md files at the top of the size table:
+ - `skills/sdlc/SKILL.md`: 4995 tokens (sat right at the 5K threshold)
+ - `skills/update/SKILL.md`: 4931 tokens
+ - `skills/setup/SKILL.md`: 4490 tokens
+
+ SDLC skill auto-invokes most often (every implement/fix/refactor task), so it earned the cut. Verified 8 test suites that grep for SKILL.md content (mocking table, TDD prove, Memory Audit Protocol heading, opus[1m], autocompact compound, Deployment Tasks, plus `tests/test-self-update.sh`, which asserts cross-model-review-specific content: `### Release Review Focus` heading, `Version parity` focus area, `"mission"`/`"success"`/`"failure"` JSON-quoted schema keys, "verification checklist" pattern, "preflight" mention). Codex round 1 caught 3 missed assertions in `test-self-update.sh`; round 2 fixes restored those constraints in tighter prose without re-bloating.
+
+ ### Files
+
+ - `skills/sdlc/SKILL.md` — Cross-Model Review section trimmed
+ - `CHANGELOG.md`, `SDLC.md`, `skills/update/SKILL.md` (Latest:), `package.json`, `.claude-plugin/plugin.json` + `marketplace.json`, `CLAUDE_CODE_SDLC_WIZARD.md` (1.70.0 → 1.71.0)
+
+ ### Combined savings (ROADMAP #236 phases 1-3)
+
+ - v1.69.0: ~12K tokens/session (BASELINE block fires once)
+ - v1.70.0: ~0.5-1.5K tokens/session (TDD CHECK fires once)
+ - v1.71.0: ~573 tokens/session (SDLC skill leaner on auto-invoke)
+
+ Total on a 50-prompt + 20-Edit + 1 SDLC-skill-invoke session: **~14K tokens/session**.
+
  ## [1.70.0] - 2026-05-05
 
  ### Token-bloat fix: TDD CHECK nudge fires once per CC session
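The "fires once per CC session" behavior these changelog entries describe rests on an atomic sentinel claim. A hypothetical sketch of that pattern — not the real hook code; the sentinel path and nudge text here are stand-ins, and the actual hooks key the sentinel on the CC `session_id`:

```shell
# Hypothetical sketch of the atomic-noclobber claim pattern (illustrative names).
SENTINEL="/tmp/sdlc-demo-sentinel.$$"   # real hooks key this on the CC session_id
COUNT=0

fire_once() {
  # set -C (noclobber) makes '>' fail when the file already exists, so the
  # first caller in the session wins the claim atomically; later calls stay silent.
  if ( set -C; : > "$SENTINEL" ) 2>/dev/null; then
    COUNT=$((COUNT + 1))
    echo "TDD CHECK nudge emitted"
  fi
}

fire_once   # first Write/Edit of the session: nudge fires
fire_once   # every later edit: sentinel exists, nothing fires
rm -f "$SENTINEL"
```

The subshell around `set -C` keeps noclobber from leaking into the caller's shell options.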
@@ -2976,7 +2976,7 @@ If deployment fails or post-deploy verification catches issues:
 
  **SDLC.md:**
  ```markdown
- <!-- SDLC Wizard Version: 1.70.0 -->
+ <!-- SDLC Wizard Version: 1.71.0 -->
  <!-- Setup Date: [DATE] -->
  <!-- Completed Steps: step-0.1, step-0.2, step-0.4, step-1, step-2, step-3, step-4, step-5, step-6, step-7, step-8, step-9 -->
  <!-- Git Workflow: [PRs or Solo] -->
@@ -3923,6 +3923,21 @@ CLI-distributed file parity (skills, hooks, settings).
 
  **This complements automated tests, not replaces them.** Tests catch exact version mismatches (e.g., `test_package_version_matches_changelog`). Cross-model review catches semantic issues tests cannot — a section silently dropped, examples using outdated but syntactically valid versions, docs describing features that no longer exist.
 
+ #### Anti-patterns
+
+ - **"Find at least N problems"** — incentivizes false positives. The reviewer will manufacture findings to hit the count.
+ - **"Review this"** — too vague. Always pair with `verification_checklist` items that name file:line evidence to verify.
+ - **1-10 score with no criteria** — every reviewer scores differently. Either define what 1, 5, 10 mean for *this* review, or drop the score and just produce CERTIFIED / NOT CERTIFIED with findings.
+ - **Author reasoning visible to reviewer** — anchoring bias. The reviewer should see code + handoff, not the author's self-assessment of why it's correct.
+
+ #### Multiple reviewers (Claude review + Codex + human)
+
+ Run them in parallel; collect feedback via `gh api repos/OWNER/REPO/pulls/PR/comments` (single source of truth). Respond per-reviewer (different blind spots — don't merge feedback). On conflicts, pick the stronger argument with reasoning, not the louder voice. Cap iterations at 3 per reviewer to avoid infinite loops.
+
+ #### Non-code domains (research, persuasion, medical content)
+
+ Same handoff format. Adapt `review_instructions` (e.g. "verify each cited claim links to a primary source") and `verification_checklist` (specific claim → specific source). Add `"audience"` and `"stakes"` keys to the JSON so the reviewer knows what reading level / risk profile to apply.
+
  ---
 
  ## User Understanding and Periodic Feedback
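Concretely, a non-code handoff following the subsections above might look like this — a hypothetical sketch; the field values are illustrative, and only the last two keys differ from the code-review handoff format:

```jsonc
{
  "review_id": "research-claims-001",
  "status": "PENDING_REVIEW",
  "round": 1,
  "mission": "Audit the draft's cited claims for source quality",
  "success": "Every cited claim traced to a primary source",
  "failure": "A secondary-source citation passes as primary",
  "verification_checklist": [
    "(a) Verify the claim at draft.md:12 cites a primary source"
  ],
  "review_instructions": "Verify each cited claim links to a primary source",
  "audience": "clinicians",      // reading level the reviewer should apply
  "stakes": "medical guidance"   // risk profile for severity grading
}
```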
@@ -4055,7 +4070,7 @@ Walk through updates? (y/n)
  Store wizard state in `SDLC.md` as metadata comments (invisible to readers, parseable by Claude):
 
  ```markdown
- <!-- SDLC Wizard Version: 1.70.0 -->
+ <!-- SDLC Wizard Version: 1.71.0 -->
  <!-- Setup Date: 2026-01-24 -->
  <!-- Completed Steps: step-0.1, step-0.2, step-1, step-2, step-3, step-4, step-5, step-6, step-7, step-8, step-9 -->
  <!-- Git Workflow: PRs -->
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "agentic-sdlc-wizard",
- "version": "1.70.0",
+ "version": "1.71.0",
  "description": "SDLC enforcement for Claude Code — hooks, skills, and wizard setup in one command",
  "bin": {
  "sdlc-wizard": "cli/bin/sdlc-wizard.js"
@@ -120,76 +120,24 @@ The loop goes back to PLANNING, not TDD RED. Run `/code-review`; issues at confi
 
  ## Cross-Model Review (If Configured)
 
- **When to run:** high-stakes changes (auth, payments, data), releases/publishes, complex refactors.
- **When to skip:** trivial changes, time-sensitive hotfixes, risk < review cost.
- **Prerequisites:** Codex CLI (`npm i -g @openai/codex`) + OpenAI API key.
-
- The PROTOCOL is universal across domains; only `review_instructions` and `verification_checklist` change. **Reviewer always at flagship tier (#233):** if the project pins `model: "sonnet[1m]"` (mixed-mode), the reviewer still runs `gpt-5.5` or Opus 4.7 max — adversarial diversity is the point.
-
- ### Step 0: Preflight Self-Review
-
- At `.reviews/preflight-{review_id}.md`, document what you already checked: `/code-review` passed, all tests passing, specific concerns checked, what you verified manually, known limitations. Reduces reviewer findings to 0-1 per round.
-
- ### Step 1: Mission-First Handoff
-
- Write `.reviews/handoff.json`:
- ```jsonc
- {
- "review_id": "feature-xyz-001",
- "status": "PENDING_REVIEW",
- "round": 1,
- "mission": "What changed and why — 2-3 sentences",
- "success": "What 'correctly reviewed' looks like",
- "failure": "What gets missed if reviewer is superficial",
- "files_changed": ["src/auth.ts", "tests/auth.test.ts"],
- "fixes_applied": [],
- "previous_score": null,
- "verification_checklist": [
- "(a) Verify input validation at auth.ts:45 handles empty strings",
- "(b) Verify test covers null-token edge case"
- ],
- "review_instructions": "Focus on security and edge cases. Be strict — assume bugs may be present.",
- "preflight_path": ".reviews/preflight-feature-xyz-001.md",
- "pr_number": 205
- }
- ```
-
- `mission/success/failure` give context (without them: generic "looks good"). `verification_checklist` is specific (file:line), not "review for correctness." `pr_number` (optional) is the **PreCompact self-heal opt-in (ROADMAP #209)**: when set, `precompact-seam-check.sh` checks `gh pr view N --json state` on `/compact` and, if MERGED, treats handoff as implicit CERTIFIED. Without it, a forgotten PENDING handoff blocks every manual compact until you flip status or hit `SDLC_HANDOFF_STALE_DAYS` (default 14).
-
- ### Step 2: Run the Reviewer
-
- ```bash
- codex exec -c 'model_reasoning_effort="xhigh"' -s danger-full-access \
- -o .reviews/latest-review.md \
- "Independent code reviewer. Read .reviews/handoff.json for context. \
- Verify each checklist item with evidence (file:line, grep, test output). \
- Each finding: ID, severity (P0/P1/P2), evidence, certify condition. \
- End with: score (1-10), CERTIFIED or NOT CERTIFIED."
- ```
-
- Always `xhigh` — lower settings miss subtle errors. **Progress (#259):** xhigh runs take 1-5 min; for a heartbeat use `scripts/codex-review-with-progress.sh` (`SDLC_CODEX_HEARTBEAT_INTERVAL` tunes). **Sandbox:** Codex's Rust binary needs `SCDynamicStore`; CC's sandbox blocks this. From CC, use `dangerouslyDisableSandbox: true` — Codex has its own sandbox via `-s danger-full-access`. Known issue: [codex#15640](https://github.com/openai/codex/issues/15640).
-
- CERTIFIED → CI. NOT CERTIFIED → dialogue loop.
-
- ### Step 3: Dialogue Loop
-
- Per-finding response in `.reviews/response.json`: `{"finding": "1", "action": "FIXED|DISPUTED|ACCEPTED", "summary": "..."}`. Update `handoff.json`: increment `round`, status `PENDING_RECHECK`, add `fixes_applied` (numbered, file:line refs).
-
- Recheck prompt: "TARGETED RECHECK. For each finding: FIXED → verify certify condition. DISPUTED → ACCEPT if sound, REJECT with reasoning. ACCEPTED → verify applied. Do NOT raise new findings unless P0. End with score, CERTIFIED or NOT CERTIFIED."
+ **When to run:** high-stakes changes (auth, payments, data), releases/publishes, complex refactors. **When to skip:** trivial changes, time-sensitive hotfixes, risk < review cost. **Prerequisites:** Codex CLI (`npm i -g @openai/codex`) + OpenAI API key. **Reviewer at flagship tier (#233):** even when project pins `sonnet[1m]`, reviewer runs `gpt-5.5` / Opus 4.7 max — adversarial diversity is the point.
 
- **Convergence:** 2 rounds is the sweet spot, 3 max (research: 14 repos + 7 papers). After 3 still NOT CERTIFIED → escalate to user.
+ PROTOCOL is universal across domains; only `review_instructions` and `verification_checklist` change.
 
- **Anti-patterns:** "find at least N problems," "review this," 1-10 without criteria, letting reviewer see author's reasoning (anchoring).
+ 1. **Preflight** (`.reviews/preflight-{review_id}.md`): document what you already checked: `/code-review` passed, tests passing, manual verifications, known limits. Reduces reviewer findings to 0-1/round.
+ 2. **Mission-first handoff** (`.reviews/handoff.json`) — required JSON keys: `"review_id"`, `"status": "PENDING_REVIEW"`, `"round": 1`, `"mission"` / `"success"` / `"failure"` (2-3 sentences each — without them you get "looks good"), `"files_changed"`, `"verification_checklist"` — the **verification checklist** is specific items with file:line refs (NOT a generic "review this"), `"review_instructions"`, `"preflight_path"`. Optional `"pr_number"` opts into the PreCompact self-heal (#209): if PR is MERGED, `/compact` treats handoff as implicit CERTIFIED.
+ 3. **Run reviewer:** `codex exec -c 'model_reasoning_effort="xhigh"' -s danger-full-access -o .reviews/latest-review.md "<prompt>"`. Always `xhigh`. CC sandbox blocks Codex's Rust binary (`SCDynamicStore`) — use `dangerouslyDisableSandbox: true` on Bash; Codex has its own sandbox. xhigh runs take 1-5 min; for a heartbeat use `scripts/codex-review-with-progress.sh`.
+ 4. **Dialogue loop:** per-finding response (`{"finding": "1", "action": "FIXED|DISPUTED|ACCEPTED", "summary": "..."}` in `.reviews/response.json`). Bump round, set status `PENDING_RECHECK`, add `fixes_applied` (numbered, file:line). Recheck prompt: "TARGETED RECHECK. FIXED → verify certify condition. DISPUTED → ACCEPT if sound, REJECT with reasoning. ACCEPTED → verify applied. No new findings unless P0."
 
- **Multiple reviewers** (Claude review + Codex + human): `gh api repos/OWNER/REPO/pulls/PR/comments` for all feedback, respond to each reviewer independently (different blind spots), pick stronger argument on conflicts, max 3 iterations per reviewer.
+ **Convergence:** 2 rounds sweet spot, 3 max (research: 14 repos + 7 papers). After 3 still NOT CERTIFIED → escalate.
 
- **Non-code domains** (research, persuasion, medical): same handoff format, adapt `review_instructions` + `verification_checklist`, add `audience` + `stakes`.
+ **Multi-reviewer / non-code domains:** when running multiple reviewers in parallel (e.g. Claude review + Codex + human), respond per-reviewer (different blind spots, no shared anchoring). For non-code domains (research, persuasion, medical), keep the same handoff format and add `"audience"` + `"stakes"` keys.
 
  ### Release Review Focus
 
  Before any release/publish, add to `verification_checklist`: **CHANGELOG consistency** (sections present, no lost entries), **Version parity** (package.json + SDLC.md + CHANGELOG + wizard metadata), **Stale examples** (hardcoded version strings), **Docs accuracy** (README + ARCHITECTURE reflect current features), **CLI-distributed file parity** (live skills/hooks match CLI templates).
 
- (Full protocol with rationale and convergence diagrams: `CLAUDE_CODE_SDLC_WIZARD.md` → Cross-Model Review.)
+ **Full protocol** (rationale, full JSON example, anti-patterns like "find at least N", convergence diagrams): `CLAUDE_CODE_SDLC_WIZARD.md` → "Cross-Model Review Loop".
 
  ## Documentation Sync (REQUIRED — During Planning)
@@ -93,11 +93,12 @@ Parse CHANGELOG entries between the user's installed version and latest. Present
 
  ```
  Installed: 1.42.0
- Latest: 1.70.0
+ Latest: 1.71.0
 
  What changed:
- - [1.70.0] token-bloat fix phase 2 — `hooks/tdd-pretool-check.sh` TDD CHECK JSON nudge (the per-`Write/Edit` "Are you writing IMPLEMENTATION before a FAILING TEST?" reminder) now fires once per CC `session_id` instead of every src/ edit. Saves ~0.5-1.5K tokens/session (10-30 src Edits × ~50 tok). Same atomic-noclobber claim pattern as v1.69.0 BASELINE gate. Non-src/ edits don't consume the sentinel slot.
- - [1.69.0] token-bloat fix phase 1 — `hooks/sdlc-prompt-check.sh` BASELINE block (the ~250-token "TodoWrite FIRST / STATE CONFIDENCE / AUTO-INVOKE" reminder) now fires once per CC `session_id`. Saves ~12K tokens/session. SETUP-not-complete + EFFORT-bump warnings still fire every prompt (dynamic state).
+ - [1.71.0] token-bloat fix phase 3 — `skills/sdlc/SKILL.md` Cross-Model Review section trimmed from ~70 lines to ~20 (4995 → 4568 tokens). Decision-making + 4-step protocol summary + convergence rule kept; full JSON examples / codex commands moved to `CLAUDE_CODE_SDLC_WIZARD.md` "Cross-Model Review Loop" canonical section (which also gained Anti-patterns + Multi-reviewer + Non-code-domain subsections). Saves ~427 tokens per SDLC skill auto-invoke. Codex round 1 caught 3 test assertions broken by the initial trim; round 2 fixes restored the constraints in tighter prose.
+ - [1.70.0] token-bloat fix phase 2 — `hooks/tdd-pretool-check.sh` TDD CHECK JSON nudge fires once per CC `session_id` instead of every src/ edit. Saves ~0.5-1.5K tokens/session.
+ - [1.69.0] token-bloat fix phase 1 — `hooks/sdlc-prompt-check.sh` BASELINE block fires once per CC `session_id`. Saves ~12K tokens/session.
  - [1.68.0–1.65.0] roadmap hygiene — five paperwork closes: #97 Anthropic Policy NO-GO + AAR-paper validating parallel; #99 AutoGPT NO-GO; #95 Nous NO-GO; #243 token-history liveness verified; #210 Node-24 false-green; #235 Thoughtworks AI Evals NO-GO. **6/6 external-product audits NO-GO** (continues #76, #77). Research write-ups in `.reviews/research-*.md`.
  - [1.64.0] XDLC ecosystem cross-references — README, wizard doc, and ROADMAP now cross-reference all three sibling packages (`agentic-sdlc-wizard`, `codex-sdlc-wizard`, `claude-gdlc-wizard`). New "Ecosystem (Sibling Projects)" section in README. 3 new doc-consistency tests prevent drift.
  - [1.63.0] cache-cost observability closeout (#204 absorbed by #220) — `tests/test-token-spike.sh` gains explicit cache-miss regression test + negative-control test. SDLC skill + wizard doc gain "Cache-Cost Surprises" sections covering 10-20× silent cost blowups (mid-session CLAUDE.md edits, idle pruning, upstream cache bugs) and detection via `hooks/token-spike-check.sh`'s `costly_tokens` metric.
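The per-file token figures quoted in these entries come from an audit script. The ranking idea can be approximated with the common ~4-characters-per-token heuristic — a rough sketch only; the real `scripts/audit-session-load.sh` may count with an actual tokenizer:

```python
# Rough token estimate; real tokenizers differ, but chars/4 is close
# enough to rank files by session-load weight.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def rank_by_weight(contents: dict) -> list:
    """Rank {filename: body} pairs by estimated token weight, heaviest first."""
    return sorted(
        ((name, estimate_tokens(body)) for name, body in contents.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
```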