claude-sprint-gate 2.1.0 → 2.2.0

@@ -11,42 +11,72 @@ You are entering an autonomous sprint cycle.
 
  1. Read `mission.md` — this defines who you are, what you're building, and your constraints. Adopt the persona and objectives defined there. If no mission file exists, your objective is to complete the current sprint.
  2. Read `CLAUDE.md` if it exists — these are rules that carry forward to every sprint.
- 3. Check `goals-open/` for the active goal file (most recent `goal-*.md`).
+ 3. Read `stop-condition.md` if it exists — these are the quality gates every sprint must pass.
+ 4. Check `goals-open/` for the active goal file (most recent `goal-*.md`).
 
- ## Step 2 — Execute
+ ## Step 2 — Execute or Plan
 
- - **If a goal file exists**: read it, execute every unchecked item, check them off (`- [x]`) as you complete and verify each one.
- - **If no goal file exists**: read `goals-completed/` to understand what has been delivered so far, then create the next sprint file based on your mission objectives. Define concrete, verifiable deliverables as checkboxes. Then execute it.
+ **If a goal file exists**: read it, execute every unchecked item, check them off (`- [x]`) as you complete and verify each one.
 
- ## Step 3 — The cycle
+ **If no goal file exists**: you need to plan the next sprint. Follow this process:
 
- When you try to stop, the stop hook will:
- - **Block** if unchecked items remain — re-prompting you with the full goal
- - **Generate a verification sprint** when all items are checked — you must prove everything works end-to-end
- - **Prompt the next sprint** after verification if a mission is active — you plan it based on your mission, then execute it
- - **Allow stop** only when the mission's sprint cap is reached or no mission is active
+ ### Sprint Planning — The 7-Step Method
+
+ Before writing a single checkbox, answer these questions in order:
+
+ 1. **What is the business value?** Not the feature — the outcome. Strip away technical jargon. What problem disappears for the customer?
+
+ 2. **Does this align with the mission?** Re-read `mission.md`. Does this sprint make the product's core thesis more true, more visible, more demonstrable? Four outcomes: reinforces mission (ship it), neutral but needed (table stakes), contradicts mission (drop or re-scope), already delivered architecturally (just prove it).
+
+ 3. **First-principles implementation.** Don't copy competitors. Start from your architecture. What is the minimum mechanism that delivers the business value within your constraints?
+
+ 4. **Who benefits?** Every sprint should deliver value to at least 3 stakeholders — investors, customers, end users, operators, developers. If fewer than 3 benefit, the scope is wrong.
+
+ 5. **What is the Minimum Demonstrable Increment?** The smallest slice that (a) advances the mission, (b) is visible to an end user, and (c) has a screenshot a salesperson can show in a demo. If you can't describe the screenshot that proves it works, the scope is wrong.
 
- This cycle repeats autonomously: build, verify, plan, build, verify, plan.
+ 6. **Acceptance criteria with evidence.** Each deliverable gets checkbox criteria with verifiable evidence — build output, screenshots reviewed by you, API responses, end-to-end user journeys.
 
- ## Sprint planning guidance
+ 7. **Four-Gate Filter** before committing to the sprint:
+   - **Thesis gate**: Does this make the product's core value proposition stronger?
+   - **Customer gate**: Would the first 10 customers refuse to buy without this?
+   - **Architecture gate**: Does this fit naturally, or does it bend the architecture?
+   - **Demo gate**: Can someone show this in a 5-minute demo and close a deal?
 
- When creating a new sprint, decide what the product needs most right now:
+ If a deliverable fails any gate, drop it or re-scope it.
+
+ ### Write the sprint file
+
+ Create `goals-open/goal-{name}-v{N}.md` with:
+ - Sprint title and context (what was delivered before, what this sprint advances)
+ - Concrete, verifiable deliverables as checkboxes
+ - Not "improve the UI" — instead "Add error states to all forms, loading indicators to async actions, test each page at mobile width"
  - Early sprints: foundation, architecture, core data model
- - Middle sprints: features that users interact with
+ - Middle sprints: features customers interact with
  - Late sprints: polish, error handling, edge cases, documentation
- - Every sprint should leave the product in a state you would demonstrate
- - Write concrete deliverables, not vague goals — "Add error states to all forms" not "improve UX"
+
+ Then start executing immediately.
+
+ ## Step 3 — The cycle
+
+ When you try to stop, the stop hook will:
+ - **Block** if unchecked items remain — re-prompting you with the full goal
+ - **Generate a verification sprint** when all items are checked — you must prove everything works end-to-end, with screenshots and evidence
+ - **Prompt the next sprint** after verification if a mission is active — you plan it using the 7-step method above, then execute it
+ - **Allow stop** only when the mission's sprint cap is reached or no mission is active
+
+ This cycle repeats autonomously: plan, build, verify, plan, build, verify.
 
  ## Rules
 
  - Execute continuously. Do not pause to ask for confirmation.
  - Do not skip items. Work through them in order.
  - Test each deliverable the way its end user would use it.
- - Constraints from `CLAUDE.md` and `mission.md` carry forward to every sprint.
+ - Constraints from `CLAUDE.md`, `mission.md`, and `stop-condition.md` carry forward to every sprint.
  - Never trade correctness for speed.
  - No sprint should leave the product in a state you would not demonstrate.
  - If stuck on an item, try a different approach. Do not abandon it.
+ - Every sprint must make the product's core thesis more true, more visible, and more demonstrable. If it doesn't, it's the wrong sprint.
 
  ## Begin
 
- Read `mission.md`, then `CLAUDE.md`, then the active goal file. Start executing. Go.
+ Read `mission.md`, then `CLAUDE.md`, then `stop-condition.md`, then the active goal file. Start executing. Go.
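
The "Block" behavior described above hinges on detecting unchecked checkboxes in the newest goal file. A minimal shell sketch of that check — the file name, directory layout, and messages are illustrative, not the package's actual hook code:

```shell
set -eu

# Create a sample goal file with one checked and two unchecked items
mkdir -p goals-open
cat > goals-open/goal-demo-v1.md <<'MD'
# Sprint demo
- [x] Done item
- [ ] Open item one
- [ ] Open item two
MD

# Most recent goal file by name, as the prompt describes
GOAL=$(ls goals-open/goal-*.md | sort | tail -n 1)

# Count unchecked checkbox lines; grep -c prints 0 and exits 1 on no match
OPEN=$(grep -c '^- \[ \]' "$GOAL" || true)

if [ "$OPEN" -gt 0 ]; then
  echo "BLOCK: $OPEN unchecked item(s) remain in $GOAL"
else
  echo "ALLOW: all items checked"
fi
# prints: BLOCK: 2 unchecked item(s) remain in goals-open/goal-demo-v1.md
```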
@@ -276,6 +276,12 @@ while IFS= read -r item; do
  - [ ] VERIFY: ${item}"
  done <<< "$DELIVERED"
 
+ # Check if the project has a custom stop-condition.md with gates
+ STOP_CONDITION_CONTENT=""
+ if [ -f "$ROOT/stop-condition.md" ]; then
+   STOP_CONDITION_CONTENT=$(sed -n '/^## /,$p' "$ROOT/stop-condition.md")
+ fi
+
  # Write the verification sprint
  cat > "$VERIFY_FILE" <<SPRINT
  # Verification — ${SPRINT_TITLE}
@@ -288,45 +294,50 @@ Your job now is to prove — through actual testing — that everything
  delivered in the previous sprint is real, working, and ready for a
  real user to depend on.
 
- ## Work Ethic
+ ## What "tested" means
 
- Compilation is not verification. Test each thing the way its end user would use it:
+ - **UI feature** — open it in a browser (Playwright with real clicks, not just \`page.goto\`). Click the button. Fill the form. Submit it. Verify the result on screen. Follow every path a user would take. If you can't show a screenshot of the feature working, it's not tested.
+ - **API endpoint** — call it with real payloads the way its consumer would. Check the response body, status code, and side effects. Not just "returns 200."
+ - **CLI tool** — run the commands with real arguments, read the real output.
+ - **Pipeline/service** — connect with the actual client. Verify data flows end-to-end.
 
- - **A UI** — open it in a browser, click every button, fill every form, walk every path
- - **An API** — call every endpoint with real requests, check the actual responses
- - **A CLI** — run the commands with real arguments, read the real output
- - **A pipeline** — feed it real data, verify what comes out the other end
+ ## What "tested" does NOT mean
 
- Whatever it is, use it the way a customer would — not the way the developer who
- wrote it imagines it works. If you cannot show real output proving it works
- end-to-end, it is not done. Never trade correctness for speed. One properly
- verified feature is worth more than ten that "should work."
+ - \`tsc --noEmit\` passes (that's compilation, not testing)
+ - \`curl\` returns 200 (that's a health check, not a user journey)
+ - "No errors in the log" (absence of failure is not presence of success)
+ - Checking off items you haven't personally verified
 
- ## Build Verification
+ ## Build Gate
 
  - [ ] Delete all cached/compiled artifacts and rebuild from a clean state
  - [ ] All dependencies resolve cleanly — no warnings, no version conflicts
  - [ ] Build completes without warnings that could indicate runtime issues
 
- ## Full Test Suite
+ ## Test Suite Gate
 
  - [ ] Run the COMPLETE test suite — every test file, not just the new ones
  - [ ] Zero failures, zero skipped tests without documented justification
  - [ ] If there are integration tests, run those too — they catch what unit tests miss
 
- ## End-to-End Feature Verification
+ ## Consumer Gate — End-to-End Feature Verification
 
  Walk through every feature delivered in the sprint as a real user would.
  Do not just call functions — use the actual UI/CLI/API entry points.
  ${VERIFY_ITEMS}
 
- ## Regression Testing
+ ## Regression Gate
 
  - [ ] Features from prior sprints still work — spot-check at least 3
  - [ ] No existing functionality broken by the new changes
  - [ ] Trigger error cases intentionally and verify they are handled gracefully
  - [ ] Check edge cases: empty inputs, missing data, concurrent access, large payloads
 
+ ## Evidence Gate
+
+ - [ ] Each checked item above has real output proving it works — command output, screenshot path, API response, or browser verification
+ - [ ] No item was checked without personally verifying the evidence
+
  ## Deployment Readiness
 
  - [ ] No hardcoded development/debug values in production code
@@ -336,6 +347,19 @@ ${VERIFY_ITEMS}
  - [ ] Dependencies are locked — no floating versions that could break tomorrow
  - [ ] No new security vulnerabilities introduced (review dependency changes)
 
+ ## Visual Verification Gate
+
+ If this sprint changed anything a user or system interacts with:
+
+ - [ ] Screenshots captured for every feature changed (in demo-screenshots/ or pasted below)
+ - [ ] Each screenshot reviewed — correct data, no errors, layout not broken
+ - [ ] Findings listed: "Screenshot XX: [description]. PASS/FAIL"
+ - [ ] Any FAIL items fixed, re-screenshotted, and re-verified
+
+ **If you cannot show a screenshot of it working, it is not done.
+ If you showed a screenshot but didn't read it yourself, it is not verified.
+ If you read it and saw a problem but didn't fix it, it is not shipped.**
+
  ## Final Verdict
 
  - [ ] The product works. Not "should work." Works. You tested it yourself and saw it work.
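
The new `STOP_CONDITION_CONTENT` step uses `sed -n '/^## /,$p'` to print everything from the first `## ` heading to the end of the file, so any preamble above the gates is dropped. A small sketch of that extraction with sample file content (the gate names here are illustrative):

```shell
# Sample stop-condition.md: preamble, then two gate sections
cat > stop-condition.md <<'MD'
Preamble text the hook should skip.

## Build Gate
- [ ] Clean rebuild passes

## Test Gate
- [ ] Full suite green
MD

# -n suppresses default output; the /^## /,$ range prints from the
# first "## " heading through end-of-file
GATES=$(sed -n '/^## /,$p' stop-condition.md)
echo "$GATES"
# first line printed: ## Build Gate
```

Note the range starts at the *first* match, so a stray `## ` heading in the preamble would pull the preamble in too — the template presumably keeps everything above the gates heading-free.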
package/bin/ccsg.js CHANGED
@@ -176,14 +176,25 @@ First sprint. Define what needs to be built and verify it works.
    info("Created starter goal: goals-open/goal-sprint-v1.md");
  }
 
+ // Mission
  const missionDest = path.join(root, "mission.md");
  if (!fs.existsSync(missionDest)) {
    const missionContent = await download("mission-template.md");
    fs.writeFileSync(missionDest, missionContent);
-   info("Created mission.md — edit to set your product vision");
+   info("Created mission.md");
  } else {
    warn("mission.md already exists, skipping");
  }
+
+ // Stop condition (quality gates)
+ const stopDest = path.join(root, "stop-condition.md");
+ if (!fs.existsSync(stopDest)) {
+   const stopContent = await download("stop-condition-template.md");
+   fs.writeFileSync(stopDest, stopContent);
+   info("Created stop-condition.md");
+ } else {
+   warn("stop-condition.md already exists, skipping");
+ }
  }
 
  // 5. Summary
@@ -197,12 +208,20 @@ First sprint. Define what needs to be built and verify it works.
  console.log("  Commands: .claude/commands/");
  console.log("  Settings: .claude/settings.json");
  console.log("  Goals:    goals-open/ \u2192 goals-completed/");
- console.log("  Mission:  mission.md (edit to set your product vision)");
+ console.log("  Mission:  mission.md");
+ console.log("  Gates:    stop-condition.md");
  }
  console.log("\n  Commands: /sprint, /sprint-status, /sprint-cancel");
- console.log(
-   "\n  Edit goals-open/goal-sprint-v1.md, then: claude\n  Type /sprint to begin.\n"
- );
+ console.log(`
+ \x1b[1mBefore you start, review and customize:\x1b[0m
+
+   mission.md         \x1b[2mYour product vision, persona, and constraints\x1b[0m
+   stop-condition.md  \x1b[2mQuality gates — what "tested" means for your project\x1b[0m
+   goals-open/*.md    \x1b[2mYour first sprint deliverables\x1b[0m
+
+ These files work out of the box but are most powerful when
+ tailored to your product. Then: claude && /sprint
+ `);
  }
 
  install().catch((err) => {
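
The new summary block styles its output with raw ANSI escapes: `\x1b[1m` turns on bold, `\x1b[2m` dim, and `\x1b[0m` resets. A shell equivalent of the same formatting, with placeholder text:

```shell
# \033 is the ESC byte (same as \x1b in the JS template literal)
BOLD=$(printf '\033[1m')
DIM=$(printf '\033[2m')
RESET=$(printf '\033[0m')

printf '%sBefore you start, review and customize:%s\n' "$BOLD" "$RESET"
printf '  mission.md         %sYour product vision%s\n' "$DIM" "$RESET"
printf '  stop-condition.md  %sYour quality gates%s\n'  "$DIM" "$RESET"
```

On terminals that don't interpret ANSI codes (or when output is piped), the escapes print as-is; the deploy.sh version sidesteps this by using `echo -e` with the same `${BOLD}`/`${DIM}`/`${RESET}` variables.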
package/deploy.sh CHANGED
@@ -163,7 +163,7 @@ STARTER
    info "Created starter goal: goals-open/goal-sprint-v1.md"
  fi
 
- # Mission template (always included)
+ # Mission
  MISSION_DEST="$PROJECT_ROOT/mission.md"
  if [ ! -f "$MISSION_DEST" ]; then
    if command -v curl &>/dev/null; then
@@ -171,10 +171,23 @@ STARTER
    else
      wget -qO "$MISSION_DEST" "$REPO_RAW/mission-template.md"
    fi
-   info "Created mission.md — edit to set your product vision"
+   info "Created mission.md"
  else
    warn "mission.md already exists, skipping"
  fi
+
+ # Stop condition (quality gates)
+ STOP_DEST="$PROJECT_ROOT/stop-condition.md"
+ if [ ! -f "$STOP_DEST" ]; then
+   if command -v curl &>/dev/null; then
+     curl -fsSL "$REPO_RAW/stop-condition-template.md" -o "$STOP_DEST"
+   else
+     wget -qO "$STOP_DEST" "$REPO_RAW/stop-condition-template.md"
+   fi
+   info "Created stop-condition.md"
+ else
+   warn "stop-condition.md already exists, skipping"
+ fi
  fi
 
  # Step 5: Summary
@@ -192,16 +205,21 @@ else
  echo -e "  ${DIM}Commands:${RESET} .claude/commands/"
  echo -e "  ${DIM}Settings:${RESET} .claude/settings.json"
  echo -e "  ${DIM}Goals:${RESET}    goals-open/ → goals-completed/"
- echo -e "  ${DIM}Mission:${RESET}  mission.md (edit to set your product vision)"
+ echo -e "  ${DIM}Mission:${RESET}  mission.md"
+ echo -e "  ${DIM}Gates:${RESET}    stop-condition.md"
  fi
 
  echo ""
  echo -e "  ${DIM}Commands:${RESET} /sprint, /sprint-status, /sprint-cancel"
  echo ""
- echo -e "  ${DIM}Next:${RESET}"
- echo "    1. Edit mission.md with your product vision"
- echo "    2. Edit goals-open/goal-sprint-v1.md with your sprint items"
- echo "    3. Start Claude Code, type /sprint to begin"
+ echo -e "  ${BOLD}Before you start, review and customize:${RESET}"
+ echo ""
+ echo -e "    mission.md         ${DIM}Your product vision, persona, and constraints${RESET}"
+ echo -e "    stop-condition.md  ${DIM}Quality gates — what 'tested' means for your project${RESET}"
+ echo -e "    goals-open/*.md    ${DIM}Your first sprint deliverables${RESET}"
+ echo ""
+ echo "  These files work out of the box but are most powerful when"
+ echo "  tailored to your product. Then: claude && /sprint"
  echo ""
  echo -e "  ${DIM}Repo:${RESET} https://github.com/panbergco/claude-code-sprint-gate"
  echo ""
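
Both installers (`bin/ccsg.js` and `deploy.sh`) use the same idempotent create-if-missing pattern for `mission.md` and `stop-condition.md`: create the file only when absent, warn and skip otherwise, so re-running never clobbers user edits. A sketch with the network fetch replaced by a local stub — `fetch_template`, `install_file`, and the file name are stand-ins, not the real script:

```shell
fetch_template() {  # $1 = destination path
  # Stand-in for the real download, which is roughly:
  #   if command -v curl &>/dev/null; then curl -fsSL "$URL" -o "$1"
  #   else wget -qO "$1" "$URL"; fi
  printf '# Template placeholder\n' > "$1"
}

install_file() {    # $1 = destination path
  if [ ! -f "$1" ]; then
    fetch_template "$1"
    echo "Created $1"
  else
    echo "$1 already exists, skipping"
  fi
}

install_file demo-stop-condition.md   # first run: Created demo-stop-condition.md
install_file demo-stop-condition.md   # second run: demo-stop-condition.md already exists, skipping
```

The `[ ! -f "$1" ]` guard is what makes the installer safe to re-run on an existing project.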
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "claude-sprint-gate",
-   "version": "2.1.0",
+   "version": "2.2.0",
    "description": "Claude Code Sprint Gate — sprint lifecycle manager with verification gates",
    "bin": {
      "ccsg": "bin/ccsg.js"