claude-sprint-gate 2.1.0 → 2.3.0

@@ -5,48 +5,87 @@ allowed-tools: ["Read", "Edit", "Write", "Bash", "Glob", "Grep"]
 
  # Sprint — Autonomous Product Development
 
- You are entering an autonomous sprint cycle.
+ You are entering an autonomous sprint cycle. This cycle will iterate the product toward maturity until the mission is fulfilled.
+
+ ## What is maturity?
+
+ If `mission.md` defines a north star, that is the target. If not, maturity means: **benchmark against the best product in this category and exceed expectations.** The product is mature when a customer would choose it over the incumbent — not because it's cheaper, but because it's better.
+
+ Always develop from first principles. Do not copy competitors. Apply design thinking — understand the user's real problem, prototype the simplest solution, test it, iterate. The result should be the best there is, not the best you could manage.
 
  ## Step 1 — Load your context
 
  1. Read `mission.md` — this defines who you are, what you're building, and your constraints. Adopt the persona and objectives defined there. If no mission file exists, your objective is to complete the current sprint.
  2. Read `CLAUDE.md` if it exists — these are rules that carry forward to every sprint.
- 3. Check `goals-open/` for the active goal file (most recent `goal-*.md`).
+ 3. Read `stop-condition.md` if it exists — these are the quality gates every sprint must pass.
+ 4. Check `goals-open/` for the active goal file (most recent `goal-*.md`).
+
+ ## Step 2 — Execute or Plan
+
+ **If a goal file exists**: read it, execute every unchecked item, check them off (`- [x]`) as you complete and verify each one.
+
+ **If no goal file exists**: you need to plan the next sprint. Follow this process:
+
+ ### Sprint Planning — The 7-Step Method
+
+ Before writing a single checkbox, answer these questions in order:
+
+ 1. **What is the business value?** Not the feature — the outcome. Strip away technical jargon. What problem disappears for the customer?
+
+ 2. **Does this align with the mission?** Re-read `mission.md`. Does this sprint make the product's core thesis more true, more visible, more demonstrable? Four outcomes: reinforces mission (ship it), neutral but needed (table stakes), contradicts mission (drop or re-scope), already delivered architecturally (just prove it).
+
+ 3. **First principles implementation.** Don't copy competitors. Start from your architecture. What is the minimum mechanism that delivers the business value within your constraints?
 
- ## Step 2 — Execute
+ 4. **Who benefits?** Every sprint should deliver value to at least 3 stakeholders: investors, customers, end users, operators, developers. If fewer than 3 benefit, the scope is wrong.
 
- - **If a goal file exists**: read it, execute every unchecked item, check them off (`- [x]`) as you complete and verify each one.
- - **If no goal file exists**: read `goals-completed/` to understand what has been delivered so far, then create the next sprint file based on your mission objectives. Define concrete, verifiable deliverables as checkboxes. Then execute it.
+ 5. **What is the Minimum Demonstrable Increment?** The smallest slice that (a) advances the mission, (b) is visible to an end user, and (c) has a screenshot a salesperson can show in a demo. If you can't describe the screenshot that proves it works, the scope is wrong.
+
+ 6. **Acceptance criteria with evidence.** Each deliverable gets checkbox criteria with verifiable evidence — build output, screenshots reviewed by you, API responses, end-to-end user journeys.
+
+ 7. **Four-Gate Filter** before committing to the sprint:
+    - **Thesis gate**: Does this make the product's core value proposition stronger?
+    - **Customer gate**: Would the first 10 customers refuse to buy without this?
+    - **Architecture gate**: Does this fit naturally, or does it bend the architecture?
+    - **Demo gate**: Can someone show this in a 5-minute demo and close a deal?
+
+ If a deliverable fails any gate, drop it or re-scope it.
+
+ ### Write the sprint file
+
+ Create `goals-open/goal-{name}-v{N}.md` with:
+ - Sprint title and context (what was delivered before, what this sprint advances)
+ - Concrete, verifiable deliverables as checkboxes
+ - Not "improve the UI" — instead "Add error states to all forms, loading indicators to async actions, test each page at mobile width"
+ - Early sprints: foundation, architecture, core data model
+ - Middle sprints: features customers interact with
+ - Late sprints: polish, error handling, edge cases, documentation
+
+ Then start executing immediately.
 
  ## Step 3 — The cycle
 
  When you try to stop, the stop hook will:
  - **Block** if unchecked items remain — re-prompting you with the full goal
- - **Generate a verification sprint** when all items are checked — you must prove everything works end-to-end
- - **Prompt the next sprint** after verification if a mission is active — you plan it based on your mission, then execute it
+ - **Generate a verification sprint** when all items are checked — you must prove everything works end-to-end, with screenshots and evidence
+ - **Prompt the next sprint** after verification if a mission is active — you plan it using the 7-step method above, then execute it
  - **Allow stop** only when the mission's sprint cap is reached or no mission is active
 
- This cycle repeats autonomously: build, verify, plan, build, verify, plan.
-
- ## Sprint planning guidance
+ This cycle repeats autonomously: plan, build, verify, plan, build, verify.
 
- When creating a new sprint, decide what the product needs most right now:
- - Early sprints: foundation, architecture, core data model
- - Middle sprints: features that users interact with
- - Late sprints: polish, error handling, edge cases, documentation
- - Every sprint should leave the product in a state you would demonstrate
- - Write concrete deliverables, not vague goals — "Add error states to all forms" not "improve UX"
+ Each sprint must iterate the product closer to maturity. If the product isn't better after a sprint than it was before, the sprint was wrong.
 
  ## Rules
 
  - Execute continuously. Do not pause to ask for confirmation.
  - Do not skip items. Work through them in order.
  - Test each deliverable the way its end user would use it.
- - Constraints from `CLAUDE.md` and `mission.md` carry forward to every sprint.
+ - Constraints from `CLAUDE.md`, `mission.md`, and `stop-condition.md` carry forward to every sprint.
  - Never trade correctness for speed.
  - No sprint should leave the product in a state you would not demonstrate.
  - If stuck on an item, try a different approach. Do not abandon it.
+ - Every sprint must make the product's core thesis more true, more visible, and more demonstrable. If it doesn't, it's the wrong sprint.
+ - First principles always. Design thinking always. The result must be the best there is.
 
  ## Begin
 
- Read `mission.md`, then `CLAUDE.md`, then the active goal file. Start executing. Go.
+ Read `mission.md`, then `CLAUDE.md`, then `stop-condition.md`, then the active goal file. Start executing. Go.
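Step 1's "most recent `goal-*.md`" lookup is easy to get wrong with plain lexical sorting (`v10` would sort before `v2`). A minimal sketch of one plausible reading, using a version-aware sort in a throwaway directory — the sort key is an assumption; the hook may resolve "most recent" by modification time instead:

```shell
#!/bin/sh
# Hypothetical sketch: pick the active goal file by highest version suffix.
dir=$(mktemp -d)
touch "$dir/goal-sprint-v1.md" "$dir/goal-sprint-v2.md" "$dir/goal-sprint-v10.md"

# sort -V (GNU coreutils / newer BSD) compares embedded numbers numerically,
# so v10 correctly sorts after v2.
active=$(printf '%s\n' "$dir"/goal-*.md | sort -V | tail -n 1)
echo "active goal: $(basename "$active")"   # active goal: goal-sprint-v10.md

rm -rf "$dir"
```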
@@ -276,6 +276,12 @@ while IFS= read -r item; do
  - [ ] VERIFY: ${item}"
  done <<< "$DELIVERED"
 
+ # Check if project has a custom stop-condition.md with gates
+ STOP_CONDITION_CONTENT=""
+ if [ -f "$ROOT/stop-condition.md" ]; then
+   STOP_CONDITION_CONTENT=$(sed -n '/^## /,$p' "$ROOT/stop-condition.md")
+ fi
+
  # Write the verification sprint
  cat > "$VERIFY_FILE" <<SPRINT
  # Verification — ${SPRINT_TITLE}
@@ -288,45 +294,50 @@ Your job now is to prove — through actual testing — that everything
  delivered in the previous sprint is real, working, and ready for a
  real user to depend on.
 
- ## Work Ethic
+ ## What "tested" means
 
- Compilation is not verification. Test each thing the way its end user would use it:
+ - **UI feature** — open it in a browser (Playwright with real clicks, not just \`page.goto\`). Click the button. Fill the form. Submit it. Verify the result on screen. Follow every path a user would take. If you can't show a screenshot of the feature working, it's not tested.
+ - **API endpoint** — call it with real payloads the way its consumer would. Check the response body, status code, and side effects. Not just "returns 200."
+ - **CLI tool** — run the commands with real arguments, read the real output.
+ - **Pipeline/service** — connect with the actual client. Verify data flows end-to-end.
 
- - **A UI** — open it in a browser, click every button, fill every form, walk every path
- - **An API** — call every endpoint with real requests, check the actual responses
- - **A CLI** — run the commands with real arguments, read the real output
- - **A pipeline** — feed it real data, verify what comes out the other end
+ ## What "tested" does NOT mean
 
- Whatever it is, use it the way a customer would — not the way the developer who
- wrote it imagines it works. If you cannot show real output proving it works
- end-to-end, it is not done. Never trade correctness for speed. One properly
- verified feature is worth more than ten that "should work."
+ - \`tsc --noEmit\` passes (that's compilation, not testing)
+ - \`curl\` returns 200 (that's a health check, not a user journey)
+ - "No errors in the log" (absence of failure is not presence of success)
+ - Checking off items you haven't personally verified
 
- ## Build Verification
+ ## Build Gate
 
  - [ ] Delete all cached/compiled artifacts and rebuild from a clean state
  - [ ] All dependencies resolve cleanly — no warnings, no version conflicts
  - [ ] Build completes without warnings that could indicate runtime issues
 
- ## Full Test Suite
+ ## Test Suite Gate
 
  - [ ] Run the COMPLETE test suite — every test file, not just the new ones
  - [ ] Zero failures, zero skipped tests without documented justification
  - [ ] If there are integration tests, run those too — they catch what unit tests miss
 
- ## End-to-End Feature Verification
+ ## Consumer Gate — End-to-End Feature Verification
 
  Walk through every feature delivered in the sprint as a real user would.
  Do not just call functions — use the actual UI/CLI/API entry points.
  ${VERIFY_ITEMS}
 
- ## Regression Testing
+ ## Regression Gate
 
  - [ ] Features from prior sprints still work — spot check at least 3
  - [ ] No existing functionality broken by the new changes
  - [ ] Trigger error cases intentionally and verify they are handled gracefully
  - [ ] Check edge cases: empty inputs, missing data, concurrent access, large payloads
 
+ ## Evidence Gate
+
+ - [ ] Each checked item above has real output proving it works — command output, screenshot path, API response, or browser verification
+ - [ ] No item was checked without personally verifying the evidence
+
  ## Deployment Readiness
 
  - [ ] No hardcoded development/debug values in production code
@@ -336,6 +347,19 @@ ${VERIFY_ITEMS}
  - [ ] Dependencies are locked — no floating versions that could break tomorrow
  - [ ] No new security vulnerabilities introduced (review dependency changes)
 
+ ## Visual Verification Gate
+
+ If this sprint changed anything a user or system interacts with:
+
+ - [ ] Screenshots captured for every feature changed (in demo-screenshots/ or pasted below)
+ - [ ] Each screenshot reviewed — correct data, no errors, layout not broken
+ - [ ] Findings listed: "Screenshot XX: [description]. PASS/FAIL"
+ - [ ] Any FAIL items fixed, re-screenshotted, and re-verified
+
+ **If you cannot show a screenshot of it working, it is not done.
+ If you showed a screenshot but didn't read it yourself, it is not verified.
+ If you read it and saw a problem but didn't fix it, it is not shipped.**
+
  ## Final Verdict
 
  - [ ] The product works. Not "should work." Works. You tested it yourself and saw it work.
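The `sed -n '/^## /,$p'` range added in this hook prints everything from the first `## ` heading through end of file, which drops any intro prose before the first gate in `stop-condition.md`. A quick illustration against a throwaway file:

```shell
#!/bin/sh
# Demonstrate the sed range used on stop-condition.md:
# -n suppresses default output; /^## /,$p prints from the first
# line matching "^## " through the last line of the file.
tmp=$(mktemp)
printf '%s\n' '# Stop Condition' 'Intro prose (dropped).' '## Build Gate' '- [ ] clean build' > "$tmp"

sed -n '/^## /,$p' "$tmp"
# Prints:
# ## Build Gate
# - [ ] clean build

rm -f "$tmp"
```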
package/bin/ccsg.js CHANGED
@@ -176,14 +176,25 @@ First sprint. Define what needs to be built and verify it works.
    info("Created starter goal: goals-open/goal-sprint-v1.md");
  }
 
+ // Mission
  const missionDest = path.join(root, "mission.md");
  if (!fs.existsSync(missionDest)) {
    const missionContent = await download("mission-template.md");
    fs.writeFileSync(missionDest, missionContent);
-   info("Created mission.md — edit to set your product vision");
+   info("Created mission.md");
  } else {
    warn("mission.md already exists, skipping");
  }
+
+ // Stop condition (quality gates)
+ const stopDest = path.join(root, "stop-condition.md");
+ if (!fs.existsSync(stopDest)) {
+   const stopContent = await download("stop-condition-template.md");
+   fs.writeFileSync(stopDest, stopContent);
+   info("Created stop-condition.md");
+ } else {
+   warn("stop-condition.md already exists, skipping");
+ }
  }
 
  // 5. Summary
@@ -197,12 +208,26 @@ First sprint. Define what needs to be built and verify it works.
    console.log(" Commands: .claude/commands/");
    console.log(" Settings: .claude/settings.json");
    console.log(" Goals: goals-open/ \u2192 goals-completed/");
-   console.log(" Mission: mission.md (edit to set your product vision)");
+   console.log(" Mission: mission.md");
+   console.log(" Gates: stop-condition.md");
  }
  console.log("\n Commands: /sprint, /sprint-status, /sprint-cancel");
- console.log(
-   "\n Edit goals-open/goal-sprint-v1.md, then: claude\n Type /sprint to begin.\n"
- );
+ console.log(`
+ \x1b[1mBefore you start, review and customize:\x1b[0m
+
+ mission.md \x1b[2mYour product vision, persona, and constraints\x1b[0m
+ stop-condition.md \x1b[2mQuality gates — what "tested" means for your project\x1b[0m
+ goals-open/*.md \x1b[2mYour first sprint deliverables\x1b[0m
+
+ These files work out of the box but are most powerful when
+ tailored to your product. Then start Claude Code and run:
+
+ /loop 15m /sprint
+
+ This starts a 15-minute heartbeat that keeps the agent working
+ through sprints until the product is mature. The stop hook
+ prevents escape, the loop prevents silence.
+ `);
  }
 
  install().catch((err) => {
package/deploy.sh CHANGED
@@ -163,7 +163,7 @@ STARTER
    info "Created starter goal: goals-open/goal-sprint-v1.md"
  fi
 
- # Mission template (always included)
+ # Mission
  MISSION_DEST="$PROJECT_ROOT/mission.md"
  if [ ! -f "$MISSION_DEST" ]; then
    if command -v curl &>/dev/null; then
@@ -171,10 +171,23 @@ STARTER
    else
      wget -qO "$MISSION_DEST" "$REPO_RAW/mission-template.md"
    fi
-   info "Created mission.md — edit to set your product vision"
+   info "Created mission.md"
  else
    warn "mission.md already exists, skipping"
  fi
+
+ # Stop condition (quality gates)
+ STOP_DEST="$PROJECT_ROOT/stop-condition.md"
+ if [ ! -f "$STOP_DEST" ]; then
+   if command -v curl &>/dev/null; then
+     curl -fsSL "$REPO_RAW/stop-condition-template.md" -o "$STOP_DEST"
+   else
+     wget -qO "$STOP_DEST" "$REPO_RAW/stop-condition-template.md"
+   fi
+   info "Created stop-condition.md"
+ else
+   warn "stop-condition.md already exists, skipping"
+ fi
  fi
 
  # Step 5: Summary
@@ -192,16 +205,26 @@ else
    echo -e " ${DIM}Commands:${RESET} .claude/commands/"
    echo -e " ${DIM}Settings:${RESET} .claude/settings.json"
    echo -e " ${DIM}Goals:${RESET} goals-open/ → goals-completed/"
-   echo -e " ${DIM}Mission:${RESET} mission.md (edit to set your product vision)"
+   echo -e " ${DIM}Mission:${RESET} mission.md"
+   echo -e " ${DIM}Gates:${RESET} stop-condition.md"
  fi
 
  echo ""
  echo -e " ${DIM}Commands:${RESET} /sprint, /sprint-status, /sprint-cancel"
  echo ""
- echo -e " ${DIM}Next:${RESET}"
- echo " 1. Edit mission.md with your product vision"
- echo " 2. Edit goals-open/goal-sprint-v1.md with your sprint items"
- echo " 3. Start Claude Code, type /sprint to begin"
+ echo -e " ${BOLD}Before you start, review and customize:${RESET}"
+ echo ""
+ echo -e " mission.md ${DIM}Your product vision, persona, and constraints${RESET}"
+ echo -e " stop-condition.md ${DIM}Quality gates — what 'tested' means for your project${RESET}"
+ echo -e " goals-open/*.md ${DIM}Your first sprint deliverables${RESET}"
+ echo ""
+ echo " These files work out of the box but are most powerful when"
+ echo " tailored to your product. Then start Claude Code and run:"
+ echo ""
+ echo " /loop 15m /sprint"
+ echo ""
+ echo " This starts a 15-minute heartbeat that keeps the agent working"
+ echo " through sprints until the product is mature."
  echo ""
  echo -e " ${DIM}Repo:${RESET} https://github.com/panbergco/claude-code-sprint-gate"
  echo ""
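deploy.sh downloads each template via `curl -fsSL`, falling back to `wget -qO` when curl is absent. The selection logic can be factored into one helper and inspected without any network access (`pick_downloader` is an illustrative name, not a function in the script):

```shell
#!/bin/sh
# Report which downloader deploy.sh's guard would choose on this machine.
pick_downloader() {
  if command -v curl > /dev/null 2>&1; then
    echo "curl -fsSL"        # fail on HTTP errors, silent, follow redirects
  elif command -v wget > /dev/null 2>&1; then
    echo "wget -qO"          # quiet, write to the named output file
  else
    echo "error: need curl or wget" >&2
    return 1
  fi
}

pick_downloader
```

Note the sketch uses the portable `> /dev/null 2>&1` redirection; the `&>/dev/null` form in deploy.sh is a bashism and would misbehave under a strict POSIX `sh`.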
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "claude-sprint-gate",
-   "version": "2.1.0",
+   "version": "2.3.0",
    "description": "Claude Code Sprint Gate — sprint lifecycle manager with verification gates",
    "bin": {
      "ccsg": "bin/ccsg.js"
+ Read `mission.md`, then `CLAUDE.md`, then `stop-condition.md`, then the active goal file. Start executing. Go.