joycraft 0.5.20 → 0.6.0

This diff compares the publicly available joycraft 0.5.20 and 0.6.0 packages as published to the public registry; it is provided for informational purposes only.
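The hunk below rewrites entries in the `SKILLS` map in `src/bundled-files.ts`, where each key is a skill filename and each value is that skill's complete markdown document (YAML frontmatter plus body) stored as a single escaped string. Below is a minimal sketch of that shape, with abbreviated values and a hypothetical `installSkills` helper and `OUTPUT_DIR` path that are assumptions for illustration, not taken from the package:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Shape evidenced by the hunk: skill filename -> full markdown content as one string.
// Values here are abbreviated; the real strings run to thousands of characters.
const SKILLS: Record<string, string> = {
  "joycraft-add-fact.md": "---\nname: joycraft-add-fact\n---\n\n# Add Fact\n...",
  "joycraft-bugfix.md": "---\nname: joycraft-bugfix\n---\n\n# Bug Fix Workflow\n...",
};

// Hypothetical unpacking step -- the real install location is not shown in this diff.
const OUTPUT_DIR = ".claude/skills"; // assumption, not taken from the package
function installSkills(skills: Record<string, string>, outDir: string): void {
  mkdirSync(outDir, { recursive: true });
  for (const [filename, contents] of Object.entries(skills)) {
    writeFileSync(join(outDir, filename), contents, "utf8");
  }
}

installSkills(SKILLS, OUTPUT_DIR);
```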
@@ -2,22 +2,24 @@
2
2
 
3
3
  // src/bundled-files.ts
4
4
  var SKILLS = {
5
- "joycraft-add-fact.md": '---\nname: joycraft-add-fact\ndescription: Capture a project fact and route it to the correct context document -- production map, dangerous assumptions, decision log, institutional knowledge, or troubleshooting\ninstructions: 38\n---\n\n# Add Fact\n\nThe user has a fact to capture. Your job is to classify it, route it to the correct context document, append it in the right format, and optionally add a CLAUDE.md boundary rule.\n\n## Step 1: Get the Fact\n\nIf the user already provided the fact (e.g., `/joycraft-add-fact the staging DB resets every Sunday`), use it directly.\n\nIf not, ask: "What fact do you want to capture?" -- then wait for their response.\n\nIf the user provides multiple facts at once, process each one separately through all the steps below, then give a combined confirmation at the end.\n\n## Step 2: Classify the Fact\n\nRoute the fact to one of these 5 context documents based on its content:\n\n### `docs/context/production-map.md`\nThe fact is about **infrastructure, services, environments, URLs, endpoints, credentials, or what is safe/unsafe to touch**.\n- Signal words: "production", "staging", "endpoint", "URL", "database", "service", "deployed", "hosted", "credentials", "secret", "environment"\n- Examples: "The staging DB is at postgres://staging.example.com", "We use Vercel for the frontend and Railway for the API"\n\n### `docs/context/dangerous-assumptions.md`\nThe fact is about **something an AI agent might get wrong -- a false assumption that leads to bad outcomes**.\n- Signal words: "assumes", "might think", "but actually", "looks like X but is Y", "not what it seems", "trap", "gotcha"\n- Examples: "The `users` table looks like a test table but it\'s production", "Deleting a workspace doesn\'t delete the billing subscription"\n\n### `docs/context/decision-log.md`\nThe fact is about **an architectural or tooling choice and why it was made**.\n- Signal words: "decided", "chose", "because", "instead of", "we went with", "the reason we use", "trade-off"\n- Examples: "We chose SQLite over Postgres because this runs on embedded devices", "We use pnpm instead of npm for workspace support"\n\n### `docs/context/institutional-knowledge.md`\nThe fact is about **team conventions, unwritten rules, organizational context, or who owns what**.\n- Signal words: "convention", "rule", "always", "never", "team", "process", "review", "approval", "owns", "responsible"\n- Examples: "The design team reviews all color changes", "We never deploy on Fridays", "PR titles must start with the ticket number"\n\n### `docs/context/troubleshooting.md`\nThe fact is about **diagnostic knowledge -- when X happens, do Y (or don\'t do Z)**.\n- Signal words: "when", "fails", "error", "if you see", "stuck", "broken", "fix", "workaround", "before trying", "reboot", "restart", "reset"\n- Examples: "If Wi-Fi disconnects during flash, wait and retry -- don\'t switch networks", "When tests fail with ECONNREFUSED, check if Docker is running"\n\n### Ambiguous Facts\n\nIf the fact fits multiple categories, pick the **best fit** based on the primary intent. You will mention the alternative in your confirmation message so the user can correct you.\n\n## Step 3: Ensure the Target Document Exists\n\n1. If `docs/context/` does not exist, create the directory.\n2. If the target document does not exist, create it from the template structure. Check `docs/templates/` for the matching template. 
If no template exists, use this minimal structure:\n\nFor **production-map.md**:\n```markdown\n# Production Map\n\n> What\'s real, what\'s staging, what\'s safe to touch.\n\n## Services\n\n| Service | Environment | URL/Endpoint | Impact if Corrupted |\n|---------|-------------|-------------|-------------------|\n```\n\nFor **dangerous-assumptions.md**:\n```markdown\n# Dangerous Assumptions\n\n> Things the AI agent might assume that are wrong in this project.\n\n## Assumptions\n\n| Agent Might Assume | But Actually | Impact If Wrong |\n|-------------------|-------------|----------------|\n```\n\nFor **decision-log.md**:\n```markdown\n# Decision Log\n\n> Why choices were made, not just what was chosen.\n\n## Decisions\n\n| Date | Decision | Why | Alternatives Rejected | Revisit When |\n|------|----------|-----|----------------------|-------------|\n```\n\nFor **institutional-knowledge.md**:\n```markdown\n# Institutional Knowledge\n\n> Unwritten rules, team conventions, and organizational context.\n\n## Team Conventions\n\n- (none yet)\n```\n\nFor **troubleshooting.md**:\n```markdown\n# Troubleshooting\n\n> What to do when things go wrong for non-code reasons.\n\n## Common Failures\n\n| When This Happens | Do This | Don\'t Do This |\n|-------------------|---------|---------------|\n```\n\n## Step 4: Read the Target Document\n\nRead the target document to understand its current structure. Note:\n- Which section to append to\n- Whether it uses tables or lists\n- The column format if it\'s a table\n\n## Step 5: Append the Fact\n\nAdd the fact to the appropriate section of the target document. Match the existing format exactly:\n\n- **Table-based documents** (production-map, dangerous-assumptions, decision-log, troubleshooting): Add a new table row in the correct columns. Use today\'s date where a date column exists.\n- **List-based documents** (institutional-knowledge): Add a new list item (`- `) to the most appropriate section.\n\nRemove any italic example rows (rows where all cells start with `_`) before appending, so the document transitions from template to real content. Only remove examples from the specific table you are appending to.\n\n**Append only. Never modify or remove existing real content.**\n\n## Step 6: Evaluate CLAUDE.md Boundary Rule\n\nDecide whether the fact also warrants a rule in CLAUDE.md\'s behavioral boundaries:\n\n**Add a CLAUDE.md rule if the fact:**\n- Describes something that should ALWAYS or NEVER be done\n- Could cause real damage if violated (data loss, broken deployments, security issues)\n- Is a hard constraint that applies across all work, not just a one-time note\n\n**Do NOT add a CLAUDE.md rule if the fact is:**\n- Purely informational (e.g., "staging DB is at this URL")\n- A one-time decision that\'s already captured\n- A diagnostic tip rather than a prohibition\n\nIf a rule is warranted, read CLAUDE.md, find the appropriate section (ALWAYS, ASK FIRST, or NEVER under Behavioral Boundaries), and append the rule. If no Behavioral Boundaries section exists, append one.\n\n## Step 7: Confirm\n\nReport what you did in this format:\n\n```\nAdded to [document name]:\n [summary of what was added]\n\n[If CLAUDE.md was also updated:]\nAdded CLAUDE.md rule:\n [ALWAYS/ASK FIRST/NEVER]: [rule text]\n\n[If the fact was ambiguous:]\nRouted to [chosen doc] -- move to [alternative doc] if this is more about [alternative category description].\n```\n',
6
- "joycraft-bugfix.md": "---\nname: joycraft-bugfix\ndescription: Structured bug fix workflow \u2014 triage, diagnose, discuss with user, write a focused spec, hand off for implementation\ninstructions: 32\n---\n\n# Bug Fix Workflow\n\nYou are fixing a bug. Follow this process in order. Do not skip steps.\n\n**Guard clause:** If this is clearly a new feature, redirect to `/joycraft-new-feature` and stop.\n\n---\n\n## Phase 1: Triage\n\nEstablish what's broken. Gather: symptom, steps to reproduce, expected vs actual behavior, when it started, relevant logs/errors. If an error message or stack trace is provided, read the referenced files immediately. Try to reproduce if steps are given.\n\n**Done when:** You can describe the symptom in one sentence.\n\n---\n\n## Phase 2: Diagnose\n\nFind the root cause. Start from the error site and trace backward. Read source files \u2014 don't guess. Identify the specific line(s) and logic error. Check git blame if it's a recent regression.\n\n**Done when:** You can explain what's wrong, why, and where in 2-3 sentences.\n\n---\n\n## Phase 3: Discuss\n\nPresent findings to the user BEFORE writing any code or spec:\n1. **Symptom** \u2014 confirm it matches what they see\n2. **Root cause** \u2014 specific file(s) and line(s)\n3. **Proposed fix** \u2014 what changes, where\n4. **Risk** \u2014 side effects? scope?\n\nAsk: \"Does this match? Comfortable with this approach?\" If large/risky, suggest decomposing into multiple specs.\n\n**Done when:** User agrees with the diagnosis and fix direction.\n\n---\n\n## Phase 4: Spec the Fix\n\nWrite a bug fix spec to `docs/specs/<feature-or-area>/bugfix-name.md`. Use the relevant feature name or area as the subdirectory (e.g., `auth`, `cli`, `parser`). Create the `docs/specs/<feature-or-area>/` directory if it doesn't exist.\n\n**Why:** Even bug fixes deserve a spec. It forces clarity on what \"fixed\" means, ensures test-first discipline, and creates a traceable record of the fix.\n\nUse this template:\n\n```markdown\n# Fix [Bug Description] \u2014 Bug Fix Spec\n\n> **Parent Brief:** none (bug fix)\n> **Issue/Error:** [error message, issue link, or symptom description]\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## Bug\n\nWhat is broken? Describe the symptom the user experiences.\n\n## Root Cause\n\nWhat is wrong in the code and why? Name the specific file(s) and line(s).\n\n## Fix\n\nWhat changes will fix this? Be specific \u2014 describe the code change, not just \"fix the bug.\"\n\n## Acceptance Criteria\n\n- [ ] [The bug no longer occurs \u2014 describe the correct behavior]\n- [ ] [No regressions in related functionality]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Bug no longer occurs] | [Test that reproduces the bug, then verifies the fix] | [unit/integration/e2e] |\n| [No regressions] | [Existing tests still pass, or new regression test] | [unit/integration] |\n\n**Execution order:**\n1. Write a test that reproduces the bug \u2014 it should FAIL (red)\n2. Run the test to confirm it fails\n3. Apply the fix\n4. Run the test to confirm it passes (green)\n5. Run the full test suite to check for regressions\n\n**Smoke test:** [The bug reproduction test \u2014 fastest way to verify the fix works]\n\n**Before implementing, verify your test harness:**\n1. Run the reproduction test \u2014 it must FAIL (if it passes, you're not testing the actual bug)\n2. 
The test must exercise your actual code \u2014 not a reimplementation or mock\n3. Identify your smoke test \u2014 it must run in seconds, not minutes\n\n## Constraints\n\n- MUST: [any hard requirements for the fix]\n- MUST NOT: [any prohibitions \u2014 e.g., don't change the public API]\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\n**For trivial bugs:** The spec will be short. That's fine \u2014 the structure is the point, not the length.\n\n**For large bugs that span multiple files/systems:** Consider whether this should be decomposed into multiple specs. If so, create a brief first using `/joycraft-new-feature`, then decompose. A bug fix spec should be implementable in a single session.\n\n---\n\n## Phase 5: Hand Off\n\nTell the user:\n\n```\nBug fix spec is ready: docs/specs/<feature-or-area>/bugfix-name.md\n\nSummary:\n- Bug: [one sentence]\n- Root cause: [one sentence]\n- Fix: [one sentence]\n- Estimated: 1 session\n\nTo execute: Start a fresh session and:\n1. Read the spec\n2. Write the reproduction test (must fail)\n3. Apply the fix (test must pass)\n4. Run full test suite\n5. Run /joycraft-session-end to capture discoveries\n6. Commit and PR\n\nReady to start?\n```\n\n**Why:** A fresh session for implementation produces better results. This diagnostic session has context noise from exploration \u2014 a clean session with just the spec is more focused.\n",
7
- "joycraft-decompose.md": '---\nname: joycraft-decompose\ndescription: Break a feature brief into atomic specs \u2014 small, testable, independently executable units\ninstructions: 32\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently \u2014 one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief in `docs/briefs/`. If one doesn\'t exist yet, tell the user:\n\n> No feature brief found. Run `/joycraft-new-feature` first to interview and create one, or describe the feature now and I\'ll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don\'t need a formal brief to decompose \u2014 but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. Bad boundaries create specs that can\'t be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) \u2014 always a separate spec\n- **Pure functions / business logic** \u2014 separate from I/O\n- **UI components** \u2014 separate from data fetching\n- **API endpoints / route handlers** \u2014 separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) \u2014 can be its own spec if substantial\n- **Configuration / environment** \u2014 separate from code changes\n\nAsk yourself: "Can this piece be committed and tested without the other pieces existing?" If yes, it\'s a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is `verb-object` format (e.g., `add-terminal-detection`, `extract-prompt-module`)\n- Each description is ONE sentence \u2014 if you need two, the spec is too big\n- Dependencies reference other spec numbers \u2014 keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it\'s too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. "Does this breakdown match how you think about this feature?"\n2. "Are there any specs that feel too big or too small?"\n3. "Should any of these run in parallel (separate worktrees)?"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create `docs/specs/<feature-name>/spec-name.md`. Derive the feature-name from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`). If no brief exists, use a user-provided or inferred feature name (slugified to kebab-case). Create the `docs/specs/<feature-name>/` directory if it doesn\'t exist.\n\n**Why:** Each spec must be self-contained \u2014 a fresh Claude session should be able to execute it without reading the Feature Brief. 
Copy relevant constraints and context into each spec.\n\nUse this structure:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/briefs/YYYY-MM-DD-feature-name.md` (or "standalone")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nFill in all sections \u2014 each spec must be self-contained (no "see the brief for context"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature. Every acceptance criterion must have at least one corresponding test in the Test Plan. If the user provided test strategy info from the interview, use it to choose test types and frameworks. Include the test harness verification rules in every Test Plan.\n\n## Step 6: Recommend Execution Strategy\n\nBased on the dependency graph:\n- **Independent specs** \u2014 "These can run in parallel worktrees"\n- **Sequential specs** \u2014 "Execute these in order: 1 -> 2 -> 4"\n- **Mixed** \u2014 "Start specs 1 and 3 in parallel. After 1 completes, start 2."\n\nUpdate the Feature Brief\'s Execution Strategy section with the plan (if a brief exists).\n\n## Step 7: Hand Off\n\nTell the user:\n```\nDecomposition complete:\n- [N] atomic specs created in docs/specs/\n- [N] can run in parallel, [N] are sequential\n- Estimated total: [N] sessions\n\nTo execute:\n- Sequential: Open a session, point Claude at each spec in order\n- Parallel: Use worktrees \u2014 one spec per worktree, merge when done\n- Each session should end with /joycraft-session-end to capture discoveries\n\nReady to start execution?\n```\n\n**Tip:** Run `/clear` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n',
8
- "joycraft-design.md": '---\nname: joycraft-design\ndescription: Design discussion before decomposition \u2014 produce a ~200-line design artifact for human review, catching wrong assumptions before they propagate into specs\n---\n\n# Design Discussion\n\nYou are producing a design discussion document for a feature. This sits between research and decomposition \u2014 it captures your understanding so the human can catch wrong assumptions before specs are written.\n\n**Guard clause:** If no brief path is provided and no brief exists in `docs/briefs/`, say:\n"No feature brief found. Run `/joycraft-new-feature` first to create one, or provide the path to your brief."\nThen stop.\n\n---\n\n## Step 1: Read Inputs\n\nRead the feature brief at the path the user provides. If the user also provides a research document path, read that too. Research is optional \u2014 if none exists, note that you\'ll explore the codebase directly.\n\n## Step 2: Explore the Codebase\n\nSpawn subagents to explore the codebase for patterns relevant to the brief. Focus on:\n\n- Files and functions that will be touched or extended\n- Existing patterns this feature should follow (naming, data flow, error handling)\n- Similar features already implemented that serve as models\n- Boundaries and interfaces the feature must integrate with\n\nGather file paths, function signatures, and code snippets. You need concrete evidence, not guesses.\n\n## Step 3: Write the Design Document\n\nCreate `docs/designs/` directory if it doesn\'t exist. Write the design document to `docs/designs/YYYY-MM-DD-feature-name.md`.\n\nThe document has exactly five sections:\n\n### Section 1: Current State\n\nWhat exists today in the codebase that is relevant to this feature. Include file paths, function signatures, and data flows. Be specific \u2014 reference actual code, not abstractions. If no research doc was provided, note that and describe what you found through direct exploration.\n\n### Section 2: Desired End State\n\nWhat the codebase should look like when this feature is complete. Describe the change at a high level \u2014 new files, modified interfaces, new data flows. Do NOT include implementation steps. This is the "what," not the "how."\n\n### Section 3: Patterns to Follow\n\nExisting patterns in the codebase that this feature should match. Include short code snippets and `file:line` references. Show the pattern, don\'t just name it.\n\nIf this is a greenfield project with no existing patterns, propose conventions and note that no precedent exists.\n\n### Section 4: Resolved Design Decisions\n\nDecisions you have already made, with brief rationale. Format each as:\n\n> **Decision:** [what you decided]\n> **Rationale:** [why, referencing existing code or constraints]\n> **Alternative rejected:** [what you considered and why you rejected it]\n\n### Section 5: Open Questions\n\nThings you don\'t know or where multiple valid approaches exist. Each question MUST present 2-3 concrete options with pros and cons. Format:\n\n> **Q: [question]**\n> - **Option A:** [description] \u2014 Pro: [benefit]. Con: [cost].\n> - **Option B:** [description] \u2014 Pro: [benefit]. Con: [cost].\n> - **Option C (if applicable):** [description] \u2014 Pro: [benefit]. Con: [cost].\n\nDo NOT ask vague questions like "what do you think?" Every question must have actionable options the human can choose from.\n\n## Step 4: Present and STOP\n\nPresent the design document to the user. 
Say:\n\n```\nDesign discussion written to docs/designs/YYYY-MM-DD-feature-name.md\n\nPlease review the document above. Specifically:\n1. Are the patterns in Section 3 the right ones to follow, or should I use different ones?\n2. Do you agree with the resolved decisions in Section 4?\n3. Pick an option for each open question in Section 5 (or propose your own).\n\nReply with your feedback. I will NOT proceed to decomposition until you have reviewed and approved this design.\n```\n\n**CRITICAL: Do NOT proceed to `/joycraft-decompose` or generate specs.** Wait for the human to review, answer open questions, and correct any wrong assumptions. The entire value of this skill is the pause \u2014 it forces a human checkpoint before mistakes propagate.\n\n## After Human Review\n\nOnce the human responds:\n- Update the design document with their corrections and chosen options\n- Move answered questions from "Open Questions" to "Resolved Design Decisions"\n- Present the updated document for final confirmation\n- Only after explicit approval, tell the user: "Design approved. Run `/joycraft-decompose` with this brief to generate atomic specs."\n',
5
+ "joycraft-add-fact.md": '---\nname: joycraft-add-fact\ndescription: Capture a project fact and route it to the correct context document -- production map, dangerous assumptions, decision log, institutional knowledge, or troubleshooting\ninstructions: 38\n---\n\n# Add Fact\n\nThe user has a fact to capture. Your job is to classify it, route it to the correct context document, append it in the right format, and optionally add a CLAUDE.md boundary rule.\n\n## Step 1: Get the Fact\n\nIf the user already provided the fact (e.g., `/joycraft-add-fact the staging DB resets every Sunday`), use it directly.\n\nIf not, ask: "What fact do you want to capture?" -- then wait for their response.\n\nIf the user provides multiple facts at once, process each one separately through all the steps below, then give a combined confirmation at the end.\n\n## Step 2: Classify the Fact\n\nRoute the fact to one of these 5 context documents based on its content:\n\n### `docs/context/production-map.md`\nThe fact is about **infrastructure, services, environments, URLs, endpoints, credentials, or what is safe/unsafe to touch**.\n- Signal words: "production", "staging", "endpoint", "URL", "database", "service", "deployed", "hosted", "credentials", "secret", "environment"\n- Examples: "The staging DB is at postgres://staging.example.com", "We use Vercel for the frontend and Railway for the API"\n\n### `docs/context/dangerous-assumptions.md`\nThe fact is about **something an AI agent might get wrong -- a false assumption that leads to bad outcomes**.\n- Signal words: "assumes", "might think", "but actually", "looks like X but is Y", "not what it seems", "trap", "gotcha"\n- Examples: "The `users` table looks like a test table but it\'s production", "Deleting a workspace doesn\'t delete the billing subscription"\n\n### `docs/context/decision-log.md`\nThe fact is about **an architectural or tooling choice and why it was made**.\n- Signal words: "decided", "chose", "because", "instead of", "we went with", "the reason we use", "trade-off"\n- Examples: "We chose SQLite over Postgres because this runs on embedded devices", "We use pnpm instead of npm for workspace support"\n\n### `docs/context/institutional-knowledge.md`\nThe fact is about **team conventions, unwritten rules, organizational context, or who owns what**.\n- Signal words: "convention", "rule", "always", "never", "team", "process", "review", "approval", "owns", "responsible"\n- Examples: "The design team reviews all color changes", "We never deploy on Fridays", "PR titles must start with the ticket number"\n\n### `docs/context/troubleshooting.md`\nThe fact is about **diagnostic knowledge -- when X happens, do Y (or don\'t do Z)**.\n- Signal words: "when", "fails", "error", "if you see", "stuck", "broken", "fix", "workaround", "before trying", "reboot", "restart", "reset"\n- Examples: "If Wi-Fi disconnects during flash, wait and retry -- don\'t switch networks", "When tests fail with ECONNREFUSED, check if Docker is running"\n\n### Ambiguous Facts\n\nIf the fact fits multiple categories, pick the **best fit** based on the primary intent. You will mention the alternative in your confirmation message so the user can correct you.\n\n## Step 3: Ensure the Target Document Exists\n\n1. If `docs/context/` does not exist, create the directory.\n2. If the target document does not exist, create it from the template structure. Check `docs/templates/` for the matching template. 
If no template exists, use this minimal structure:\n\nFor **production-map.md**:\n```markdown\n# Production Map\n\n> What\'s real, what\'s staging, what\'s safe to touch.\n\n## Services\n\n| Service | Environment | URL/Endpoint | Impact if Corrupted |\n|---------|-------------|-------------|-------------------|\n```\n\nFor **dangerous-assumptions.md**:\n```markdown\n# Dangerous Assumptions\n\n> Things the AI agent might assume that are wrong in this project.\n\n## Assumptions\n\n| Agent Might Assume | But Actually | Impact If Wrong |\n|-------------------|-------------|----------------|\n```\n\nFor **decision-log.md**:\n```markdown\n# Decision Log\n\n> Why choices were made, not just what was chosen.\n\n## Decisions\n\n| Date | Decision | Why | Alternatives Rejected | Revisit When |\n|------|----------|-----|----------------------|-------------|\n```\n\nFor **institutional-knowledge.md**:\n```markdown\n# Institutional Knowledge\n\n> Unwritten rules, team conventions, and organizational context.\n\n## Team Conventions\n\n- (none yet)\n```\n\nFor **troubleshooting.md**:\n```markdown\n# Troubleshooting\n\n> What to do when things go wrong for non-code reasons.\n\n## Common Failures\n\n| When This Happens | Do This | Don\'t Do This |\n|-------------------|---------|---------------|\n```\n\n## Step 4: Read the Target Document\n\nRead the target document to understand its current structure. Note:\n- Which section to append to\n- Whether it uses tables or lists\n- The column format if it\'s a table\n\n## Step 5: Append the Fact\n\nAdd the fact to the appropriate section of the target document. Match the existing format exactly:\n\n- **Table-based documents** (production-map, dangerous-assumptions, decision-log, troubleshooting): Add a new table row in the correct columns. Use today\'s date where a date column exists.\n- **List-based documents** (institutional-knowledge): Add a new list item (`- `) to the most appropriate section.\n\nRemove any italic example rows (rows where all cells start with `_`) before appending, so the document transitions from template to real content. Only remove examples from the specific table you are appending to.\n\n**Append only. Never modify or remove existing real content.**\n\n## Step 5b: Update Shared Frontmatter\n\nContext docs are *shared* artifacts (no single owner). After appending, update (or add) YAML frontmatter \u2014 the 2-field shared schema:\n\n```yaml\n---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <resolved name>\n---\n```\n\nIf the file already has a frontmatter block, update the `last_updated` and `last_updated_by` fields in place. 
If it doesn\'t, prepend a fresh block ABOVE the existing `# Heading`.\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist.\n\n## Step 6: Evaluate CLAUDE.md Boundary Rule\n\nDecide whether the fact also warrants a rule in CLAUDE.md\'s behavioral boundaries:\n\n**Add a CLAUDE.md rule if the fact:**\n- Describes something that should ALWAYS or NEVER be done\n- Could cause real damage if violated (data loss, broken deployments, security issues)\n- Is a hard constraint that applies across all work, not just a one-time note\n\n**Do NOT add a CLAUDE.md rule if the fact is:**\n- Purely informational (e.g., "staging DB is at this URL")\n- A one-time decision that\'s already captured\n- A diagnostic tip rather than a prohibition\n\nIf a rule is warranted, read CLAUDE.md, find the appropriate section (ALWAYS, ASK FIRST, or NEVER under Behavioral Boundaries), and append the rule. If no Behavioral Boundaries section exists, append one.\n\n## Step 7: Confirm and Hand Off\n\nReport what you did in this format:\n\n```\nAdded to [document name]:\n [summary of what was added]\n\n[If CLAUDE.md was also updated:]\nAdded CLAUDE.md rule:\n [ALWAYS/ASK FIRST/NEVER]: [rule text]\n\n[If the fact was ambiguous:]\nRouted to [chosen doc] -- move to [alternative doc] if this is more about [alternative category description].\n```\n\nEnd with the canonical Handoff block. For most facts, the next move is back to whatever the user was doing \u2014 the Handoff block degrades to just a slash command pointing them home.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-session-end\n```\nRun /clear first.\n',
6
+ "joycraft-bugfix.md": "---\nname: joycraft-bugfix\ndescription: Structured bug fix workflow \u2014 triage, diagnose, discuss with user, write a focused spec, hand off for implementation\ninstructions: 32\n---\n\n# Bug Fix Workflow\n\nYou are fixing a bug. Follow this process in order. Do not skip steps.\n\n**Guard clause:** If this is clearly a new feature, redirect to `/joycraft-new-feature` and stop.\n\n---\n\n## Phase 1: Triage\n\nEstablish what's broken. Gather: symptom, steps to reproduce, expected vs actual behavior, when it started, relevant logs/errors. If an error message or stack trace is provided, read the referenced files immediately. Try to reproduce if steps are given.\n\n**Done when:** You can describe the symptom in one sentence.\n\n---\n\n## Phase 2: Diagnose\n\nFind the root cause. Start from the error site and trace backward. Read source files \u2014 don't guess. Identify the specific line(s) and logic error. Check git blame if it's a recent regression.\n\n**Done when:** You can explain what's wrong, why, and where in 2-3 sentences.\n\n---\n\n## Phase 3: Discuss\n\nPresent findings to the user BEFORE writing any code or spec:\n1. **Symptom** \u2014 confirm it matches what they see\n2. **Root cause** \u2014 specific file(s) and line(s)\n3. **Proposed fix** \u2014 what changes, where\n4. **Risk** \u2014 side effects? scope?\n\nAsk: \"Does this match? Comfortable with this approach?\" If large/risky, suggest decomposing into multiple specs.\n\n**Done when:** User agrees with the diagnosis and fix direction.\n\n---\n\n## Phase 4: Spec the Fix\n\nWrite a bug fix spec to `docs/specs/<feature-or-area>/bugfix-name.md`. Use the relevant feature name or area as the subdirectory (e.g., `auth`, `cli`, `parser`). Lazy-create the `docs/specs/<feature-or-area>/` directory if it doesn't exist.\n\n(Bugfixes intentionally stay at `docs/specs/<area>/...`, not `docs/features/<slug>/specs/`. Bugfixes are area-level, not feature-tied \u2014 multiple unrelated bugs can share the same area folder over time.)\n\n**Why:** Even bug fixes deserve a spec. It forces clarity on what \"fixed\" means, ensures test-first discipline, and creates a traceable record of the fix.\n\nThe spec file MUST start with YAML frontmatter \u2014 the 4-field personal schema (the `feature:` field carries the area name, used informally to indicate \"what folder this lives under\"):\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <feature-or-area>\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist.\n\nUse this template for the body:\n\n```markdown\n# Fix [Bug Description] \u2014 Bug Fix Spec\n\n> **Parent Brief:** none (bug fix)\n> **Issue/Error:** [error message, issue link, or symptom description]\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## Bug\n\nWhat is broken? Describe the symptom the user experiences.\n\n## Root Cause\n\nWhat is wrong in the code and why? Name the specific file(s) and line(s).\n\n## Fix\n\nWhat changes will fix this? 
Be specific \u2014 describe the code change, not just \"fix the bug.\"\n\n## Acceptance Criteria\n\n- [ ] [The bug no longer occurs \u2014 describe the correct behavior]\n- [ ] [No regressions in related functionality]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Bug no longer occurs] | [Test that reproduces the bug, then verifies the fix] | [unit/integration/e2e] |\n| [No regressions] | [Existing tests still pass, or new regression test] | [unit/integration] |\n\n**Execution order:**\n1. Write a test that reproduces the bug \u2014 it should FAIL (red)\n2. Run the test to confirm it fails\n3. Apply the fix\n4. Run the test to confirm it passes (green)\n5. Run the full test suite to check for regressions\n\n**Smoke test:** [The bug reproduction test \u2014 fastest way to verify the fix works]\n\n**Before implementing, verify your test harness:**\n1. Run the reproduction test \u2014 it must FAIL (if it passes, you're not testing the actual bug)\n2. The test must exercise your actual code \u2014 not a reimplementation or mock\n3. Identify your smoke test \u2014 it must run in seconds, not minutes\n\n## Constraints\n\n- MUST: [any hard requirements for the fix]\n- MUST NOT: [any prohibitions \u2014 e.g., don't change the public API]\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\n**For trivial bugs:** The spec will be short. That's fine \u2014 the structure is the point, not the length.\n\n**For large bugs that span multiple files/systems:** Consider whether this should be decomposed into multiple specs. If so, create a brief first using `/joycraft-new-feature`, then decompose. A bug fix spec should be implementable in a single session.\n\n---\n\n## Phase 5: Hand Off\n\nTell the user a one-line summary, then emit the canonical Handoff block.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-implement docs/specs/<feature-or-area>/bugfix-name.md\n```\nRun /clear first.\n\n**Why:** A fresh session for implementation produces better results. This diagnostic session has context noise from exploration \u2014 a clean session with just the spec is more focused.\n",
7
+ "joycraft-collaborative-setup.md": '---\nname: joycraft-collaborative-setup\ndescription: Set up Joycraft for a team \u2014 scaffold per-area folders, owner conventions, and a team-facing CONTRIBUTING doc. Run once when adopting Joycraft on a multi-dev project.\n---\n\n# Collaborative Setup\n\nYou are setting up Joycraft for a team. Solo defaults stay solo; this skill adds the team-only ceremony \u2014 `docs/areas/` folders, area README/boundaries, and a thin team-facing CONTRIBUTING-joycraft doc.\n\nThis skill is **interactive** \u2014 ask the user, don\'t auto-detect.\n\n## When to run\n\nRun once when a team is adopting Joycraft on a multi-dev project. Solo users do **not** need this skill \u2014 solo defaults are fine without it.\n\n## Step 1: Confirm Team Context\n\nAsk the user:\n\n> "Setting up Joycraft for a team? (vs. solo work) If you\'re unsure, you can skip \u2014 solo defaults work fine and you can run this later."\n\nIf the user says "actually solo," bail before any writes:\n\n> "No problem. The solo workflow needs no extra setup. Run `/joycraft-new-feature` when you want to start a feature."\n\n## Step 2: Check for Flat Layout \u2014 Bail if Present\n\nBefore scaffolding team structure, check the project\'s docs/ for flat-layout artifacts. Look for any of:\n\n- `docs/briefs/*.md`\n- `docs/research/*.md`\n- `docs/designs/*.md`\n- `docs/specs/<feature>/` subdirectories whose names look like brief slugs\n\nIf any **flat layout** artifacts exist, tell the user:\n\n> "I see flat-layout artifacts in your docs/ (briefs/research/designs). Run `npx joycraft upgrade` first \u2014 it will migrate them into `docs/features/<slug>/` automatically. Then re-run this skill."\n\nThen stop. Skills don\'t reliably shell out, so the CLI does the migration.\n\n## Step 3: Gather Areas + Owners (Interactive)\n\nAsk the user:\n\n> "How many areas does your team work in? (e.g., `auth`, `api`, `frontend`, `infra`) \u2014 pick names that match how your team thinks about ownership. You can also skip and just create the team CONTRIBUTING doc."\n\nFor each area name the user provides:\n1. Confirm the name (kebab-case).\n2. Ask: "Who owns this area? (a name, an email, or a team handle \u2014 used in the area README\'s frontmatter)"\n3. Ask (optional): "Are there NEVER or ASK FIRST rules specific to this area? If yes, list them; if no, skip."\n\nIf the user provides duplicate names, ask them to pick a different one. Track the area list in your working memory before writing anything.\n\nIf the user provides 0 areas, skip Step 4 and go straight to Step 5 (CONTRIBUTING doc only). Useful path for "we just want the team doc, no areas yet."\n\n## Step 4: Scaffold Each Area\n\nFor each confirmed area, lazy-create `docs/areas/<area-name>/` and write a `README.md` with the **shared frontmatter schema** (areas are shared docs, not personal):\n\n```yaml\n---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <owner from step 3>\n---\n```\n\n**Owner resolution for `last_updated_by`:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist. Use the user-provided owner from Step 3 if they specified one for this area.\n\nBody of `README.md`:\n\n```markdown\n# <area-name>\n\n> **Owner:** <name from Step 3>\n> **Status:** active\n\n## What this area covers\n\n(Filled in by the area owner)\n\n## Conventions\n\n(Area-specific patterns or constraints)\n\n## Onboarding\n\nWhen a new dev joins this area, they should:\n1. 
Read this README\n2. Read `boundaries.md` (if present)\n3. Read the codebase under <area-relevant paths>\n```\n\nIf the user provided NEVER / ASK FIRST rules for the area, also write `docs/areas/<area-name>/boundaries.md` with the shared frontmatter and those rules. If they didn\'t, skip the boundaries file \u2014 the root CLAUDE.md boundaries already cover the project-wide cases.\n\n**Idempotency:** if `docs/areas/<area-name>/README.md` already exists, ASK before overwriting (default: skip + inform).\n\n## Step 5: Write the Team CONTRIBUTING Doc\n\nLazy-create `docs/CONTRIBUTING-joycraft.md` (NOT the project\'s main `CONTRIBUTING.md` \u2014 keep them separate so neither stomps on the other).\n\nIf `docs/templates/CONTRIBUTING-joycraft-template.md` exists in the project (it should \u2014 bundled by `npx joycraft init`), use it as the starting point. If not, fall back to the inline template below.\n\nThe doc starts with shared frontmatter:\n\n```yaml\n---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <resolved owner>\n---\n```\n\nBody (inline fallback template \u2014 short by design):\n\n```markdown\n# Joycraft on this project\n\nWe use [Joycraft](https://www.npmjs.com/package/joycraft) for AI-assisted development.\n\n## How our team uses it\n\n(Filled in during /joycraft-collaborative-setup \u2014 fill this in with your team\'s specific conventions.)\n\n## Conventions\n\n- Per-feature work goes under `docs/features/<slug>/`\n- Area-level work and ownership: see `docs/areas/`\n- For "what is Joycraft?", see the package README\n\n## Onboarding\n\nWhen a new dev joins:\n1. Run `npx joycraft init` (idempotent on already-set-up projects)\n2. Read `docs/areas/<your-area>/README.md` for context\n```\n\nIf `docs/CONTRIBUTING-joycraft.md` already exists, ASK before overwriting \u2014 offer overwrite / append / skip; default to skip.\n\n## Step 6: Trigger CLAUDE.md Update\n\nNow that `docs/areas/` exists, the next `npx joycraft upgrade` (or any future `npx joycraft init`) will pick it up and add the **Areas pointer** to CLAUDE.md automatically \u2014 that pointer tells Claude "when working on the X area, read docs/areas/X/README.md first."\n\nTell the user:\n\n> "Run `npx joycraft upgrade` to refresh CLAUDE.md with the Areas pointer (or `npx joycraft init` if you haven\'t initialized yet)."\n\nDon\'t try to shell out from inside the skill \u2014 let the user run the CLI deliberately.\n\n## Step 7: Hand Off\n\nSummarize what you wrote (paths to area READMEs, the CONTRIBUTING doc, any boundaries files), then emit the canonical Handoff block.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-new-feature\n```\nRun /clear first.\n\nInclude the path to `docs/CONTRIBUTING-joycraft.md` and any newly-created area READMEs in the summary above the Handoff block.\n\n## Notes\n\n- This skill does NOT migrate flat-layout artifacts on its own. That\'s `npx joycraft upgrade`\'s job \u2014 Step 2 directs the user to run it first.\n- Area names are user-provided. Don\'t auto-detect from `src/auth/`, `src/api/`, etc. \u2014 many projects have monorepo or non-conventional layouts and auto-detection produces noise.\n- If the user stops mid-way (Ctrl-C, abandons), whatever\'s been written stays. Re-running the skill is the recovery path; it\'s idempotent on existing area folders (asks before overwriting).\n',
8
+ "joycraft-decompose.md": '---\nname: joycraft-decompose\ndescription: Break a feature brief into atomic specs \u2014 small, testable, independently executable units\ninstructions: 32\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently \u2014 one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief at `docs/features/<slug>/brief.md`. If the user provided a brief path as an argument, use that. Otherwise, scan `docs/features/*/brief.md`.\n\n**Status filter when scanning neighbor briefs and specs:** read the YAML frontmatter at the top of each file. Treat each as `status: active` unless the frontmatter says otherwise. **Skip / ignore** any file whose `status:` is `shipped`, `deprecated`, or `superseded`. Also ignore anything under `docs/archive/` entirely.\n\nIf no brief exists, tell the user:\n\n> No feature brief found. Run `/joycraft-new-feature` first to interview and create one, or describe the feature now and I\'ll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don\'t need a formal brief to decompose \u2014 but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. Bad boundaries create specs that can\'t be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) \u2014 always a separate spec\n- **Pure functions / business logic** \u2014 separate from I/O\n- **UI components** \u2014 separate from data fetching\n- **API endpoints / route handlers** \u2014 separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) \u2014 can be its own spec if substantial\n- **Configuration / environment** \u2014 separate from code changes\n\nAsk yourself: "Can this piece be committed and tested without the other pieces existing?" If yes, it\'s a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is `verb-object` format (e.g., `add-terminal-detection`, `extract-prompt-module`)\n- Each description is ONE sentence \u2014 if you need two, the spec is too big\n- Dependencies reference other spec numbers \u2014 keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it\'s too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. "Does this breakdown match how you think about this feature?"\n2. "Are there any specs that feel too big or too small?"\n3. "Should any of these run in parallel (separate worktrees)?"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create `docs/features/<slug>/specs/<spec-name>.md`. The slug is the feature folder name (e.g., `2026-04-06-token-discipline`). Lazy-create `docs/features/<slug>/specs/` if it doesn\'t exist.\n\nIf no brief exists and the user described the feature inline, derive a kebab-case slug yourself: `YYYY-MM-DD-<short-name>`. 
Create the folder structure under `docs/features/<slug>/`.\n\n**Why:** Each spec must be self-contained \u2014 a fresh Claude session should be able to execute it without reading the Feature Brief. Copy relevant constraints and context into each spec.\n\nEach spec file MUST start with YAML frontmatter \u2014 the 4-field personal schema:\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug>\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist.\n\nUse this structure for the body:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/features/<slug>/brief.md` (or "standalone")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nFill in all sections \u2014 each spec must be self-contained (no "see the brief for context"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature. Every acceptance criterion must have at least one corresponding test in the Test Plan. If the user provided test strategy info from the interview, use it to choose test types and frameworks. Include the test harness verification rules in every Test Plan.\n\n## Step 6: Recommend Execution Strategy and Update Parent Brief\n\nBased on the dependency graph, group specs into execution waves:\n- **Independent specs** \u2014 "These can run in parallel worktrees"\n- **Sequential specs** \u2014 "Execute these in order: 1 -> 2 -> 4"\n- **Mixed** \u2014 "Start specs 1 and 3 in parallel. 
After 1 completes, start 2."\n\n**Update the parent brief\'s Execution Strategy section** at `docs/features/<slug>/brief.md` with this wave plan, so the brief stays a useful one-stop reference for feature reviewers.\n\n## Step 7: Write the Feature-Folder README.md (Single Source of Truth for Implementers)\n\nAfter generating per-spec files, ALSO write a `README.md` at the spec folder root: `docs/features/<slug>/specs/README.md` (for feature work). For legacy area-level specs (bugfixes), the path is `docs/specs/<feature-or-area>/README.md`.\n\nThe README is the single source of truth for *implementers*. It contains a **spec table** (one row per spec with dependencies) and the execution wave plan. Use this template:\n\n```markdown\n# <Feature Name> \u2014 Feature Specs\n\n> **Parent Brief:** `docs/features/<slug>/brief.md`\n> **Design:** `docs/features/<slug>/design.md` (when present)\n> **Research:** `docs/features/<slug>/research.md` (when present)\n> **Status:** Decomposed YYYY-MM-DD, ready for implementation\n\n## What this feature does\n\n<one paragraph summary, derived from the brief>\n\n## Specs\n\n| # | Spec | Depends On | Notes |\n|---|------|-----------|-------|\n| 1 | [spec-name.md](spec-name.md) | \u2014 | <one-line description> |\n| 2 | [other-spec.md](other-spec.md) | 1 | <one-line description> |\n\n## Execution waves\n\n- Wave 1 (parallel): specs ...\n- Wave 2 (after wave 1): specs ...\n\n## How to use this file\n\nIf you\'re running `/joycraft-implement <spec-path>`, the implement skill reads this README first so it understands the spec\'s position in the wave plan. Each spec is self-contained for the actual implementation; this README provides ordering context only.\n```\n\nThe brief and the README serve different audiences: the brief is for *feature reviewers* (vision, scope, decomposition decisions); the README is for *implementers* (what to run next, what depends on what).\n\n## Step 8: Hand Off\n\nTell the user a one-line summary, then emit the canonical Handoff block.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-implement docs/features/<slug>/specs/<first-spec>.md\n```\nRun /clear first.\n',
9
+ "joycraft-design.md": '---\nname: joycraft-design\ndescription: Design discussion before decomposition \u2014 produce a ~200-line design artifact for human review, catching wrong assumptions before they propagate into specs\n---\n\n# Design Discussion\n\nYou are producing a design discussion document for a feature. This sits between research and decomposition \u2014 it captures your understanding so the human can catch wrong assumptions before specs are written.\n\n**Guard clause:** If no brief path is provided and no brief exists at `docs/features/<slug>/brief.md`, say:\n"No feature brief found. Run `/joycraft-new-feature` first to create one, or provide the path to your brief."\nThen stop.\n\n---\n\n## Step 1: Read Inputs\n\nRead the feature brief at the path the user provides. If the user also provides a research document path, read that too. Research is optional \u2014 if none exists, note that you\'ll explore the codebase directly.\n\n## Step 2: Explore the Codebase\n\nSpawn subagents to explore the codebase for patterns relevant to the brief. Focus on:\n\n- Files and functions that will be touched or extended\n- Existing patterns this feature should follow (naming, data flow, error handling)\n- Similar features already implemented that serve as models\n- Boundaries and interfaces the feature must integrate with\n\nGather file paths, function signatures, and code snippets. You need concrete evidence, not guesses.\n\n## Step 3: Write the Design Document\n\nDerive the slug from the brief path (`docs/features/<slug>/brief.md`).\nLazy-create the folder `docs/features/<slug>/` if needed.\nWrite the design document to `docs/features/<slug>/design.md`.\n\nThe file MUST start with YAML frontmatter \u2014 the 4-field personal schema:\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug>\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist.\n\nThe document has exactly five sections:\n\n### Section 1: Current State\n\nWhat exists today in the codebase that is relevant to this feature. Include file paths, function signatures, and data flows. Be specific \u2014 reference actual code, not abstractions. If no research doc was provided, note that and describe what you found through direct exploration.\n\n### Section 2: Desired End State\n\nWhat the codebase should look like when this feature is complete. Describe the change at a high level \u2014 new files, modified interfaces, new data flows. Do NOT include implementation steps. This is the "what," not the "how."\n\n### Section 3: Patterns to Follow\n\nExisting patterns in the codebase that this feature should match. Include short code snippets and `file:line` references. Show the pattern, don\'t just name it.\n\nIf this is a greenfield project with no existing patterns, propose conventions and note that no precedent exists.\n\n### Section 4: Resolved Design Decisions\n\nDecisions you have already made, with brief rationale. Format each as:\n\n> **Decision:** [what you decided]\n> **Rationale:** [why, referencing existing code or constraints]\n> **Alternative rejected:** [what you considered and why you rejected it]\n\n### Section 5: Open Questions\n\nThings you don\'t know or where multiple valid approaches exist. Each question MUST present 2-3 concrete options with pros and cons. 
Format:\n\n> **Q: [question]**\n> - **Option A:** [description] \u2014 Pro: [benefit]. Con: [cost].\n> - **Option B:** [description] \u2014 Pro: [benefit]. Con: [cost].\n> - **Option C (if applicable):** [description] \u2014 Pro: [benefit]. Con: [cost].\n\nDo NOT ask vague questions like "what do you think?" Every question must have actionable options the human can choose from.\n\n## Step 4: Present and STOP \u2014 Pre-Approval Hold\n\nPresent the design document to the user. Say:\n\n```\nDesign discussion written to docs/features/<slug>/design.md\n\nPlease review the document above. Specifically:\n1. Are the patterns in Section 3 the right ones to follow, or should I use different ones?\n2. Do you agree with the resolved decisions in Section 4?\n3. Pick an option for each open question in Section 5 (or propose your own).\n\nReply with your feedback. I will NOT proceed to decomposition until you have reviewed and approved this design.\n```\n\n**CRITICAL: Do NOT emit the canonical Handoff block at this point.** The Handoff block emits ONLY after human approval (see "Step 5: Hand Off (Post-Approval Only)" below). The entire value of this skill is the pause \u2014 it forces a human checkpoint before mistakes propagate.\n\n## Offer to Capture Deferred Items to Backlog\n\nIf during the design discussion the user mentions deferred work \u2014 "let\'s not do X yet," "save Y for later" \u2014 ASK before writing:\n\n> "This looks like deferred work \u2014 want me to capture it to `docs/backlog/`?"\n\nOnly on user confirmation, write a backlog entry at `docs/backlog/YYYY-MM-DD-<short-name>.md` with backlog frontmatter:\n\n```yaml\n---\nstatus: backlog\nowner: <resolved name>\ncreated: YYYY-MM-DD\nsource: docs/features/<slug>/brief.md\n---\n```\n\n**Never auto-write to `docs/backlog/`.** Every backlog entry is user-confirmed.\n\n## Step 5: Hand Off (Post-Approval Only)\n\nOnce the human approves the design:\n- Update the design document with their corrections and chosen options\n- Move answered questions from "Open Questions" to "Resolved Design Decisions"\n- Present the updated document for final confirmation\n- Once the user gives explicit approval, AND ONLY THEN, emit the canonical Handoff block:\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-decompose docs/features/<slug>/brief.md\n```\nRun /clear first.\n\nInclude any backlog paths produced as a side effect.\n',
9
10
  "joycraft-implement-level5.md": "---\nname: joycraft-implement-level5\ndescription: Set up Level 5 autonomous development \u2014 autofix loop, holdout scenario testing, and scenario evolution from specs\ninstructions: 35\n---\n\n# Implement Level 5 \u2014 Autonomous Development Loop\n\nYou are guiding the user through setting up Level 5: the autonomous feedback loop where specs go in, validated software comes out. This is a one-time setup that installs workflows, creates a scenarios repo, and configures the autofix loop.\n\n## Before You Begin\n\nCheck prerequisites:\n\n1. **Project must be initialized.** Look for `.joycraft-version`. If missing, tell the user to run `npx joycraft init` first.\n2. **Project should be at Level 4.** Check `docs/joycraft-assessment.md` if it exists. If the project hasn't been assessed yet, suggest running `/joycraft-tune` first. But don't block \u2014 the user may know they're ready.\n3. **Git repo with GitHub remote.** This setup requires GitHub Actions. Check for `.git/` and a GitHub remote.\n\nIf prerequisites aren't met, explain what's needed and stop.\n\n## Step 1: Explain What Level 5 Means\n\nTell the user:\n\n> Level 5 is the autonomous loop. When you push specs, three things happen automatically:\n>\n> 1. **Scenario evolution** \u2014 A separate AI agent reads your specs and writes holdout tests in a private scenarios repo. These tests are invisible to your coding agent.\n> 2. **Autofix** \u2014 When CI fails on a PR, Claude Code automatically attempts a fix (up to 3 times).\n> 3. **Holdout validation** \u2014 When CI passes, your scenarios repo runs behavioral tests against the PR. Results post as PR comments.\n>\n> The key insight: your coding agent never sees the scenario tests. This prevents it from gaming the test suite \u2014 like a validation set in machine learning.\n\n## Step 2: Gather Configuration\n\nAsk these questions **one at a time**:\n\n### Question 1: Scenarios repo name\n\n> What should we call your scenarios repo? It'll be a private repo that holds your holdout tests.\n>\n> Default: `{current-repo-name}-scenarios`\n\nAccept the default or the user's choice.\n\n### Question 2: GitHub App\n\n> Level 5 needs a GitHub App to provide a separate identity for autofix pushes (this avoids GitHub's anti-recursion protection). Creating one takes about 2 minutes:\n>\n> 1. Go to https://github.com/settings/apps/new\n> 2. Give it a name (e.g., \"My Project Autofix\")\n> 3. Uncheck \"Webhook > Active\" (not needed)\n> 4. Under **Repository permissions**, set:\n> - **Contents**: Read & Write\n> - **Pull requests**: Read & Write\n> - **Actions**: Read & Write\n> 5. Click **Create GitHub App**\n> 6. Note the **App ID** from the settings page\n> 7. Scroll to **Private keys** > click **Generate a private key** > save the `.pem` file\n> 8. Click **Install App** in the left sidebar > install it on your repo\n>\n> What's your App ID?\n\n## Step 3: Run init-autofix\n\nRun the CLI command with the gathered configuration:\n\n```bash\nnpx joycraft init-autofix --scenarios-repo {name} --app-id {id}\n```\n\nReview the output with the user. 
Confirm files were created.\n\n## Step 4: Walk Through Secret Configuration\n\nGuide the user step by step:\n\n### 4a: Add Secrets to Main Repo\n\n> You should already have the `.pem` file from when you created the app in Step 2.\n\n> Go to your repo's Settings > Secrets and variables > Actions, and add:\n> - `JOYCRAFT_APP_PRIVATE_KEY` \u2014 paste the contents of your `.pem` file\n> - `ANTHROPIC_API_KEY` \u2014 your Anthropic API key\n\n### 4b: Create the Scenarios Repo\n\n> Create the private scenarios repo:\n> ```bash\n> gh repo create {scenarios-repo-name} --private\n> ```\n>\n> Then copy the scenario templates into it:\n> ```bash\n> cp -r docs/templates/scenarios/* ../{scenarios-repo-name}/\n> cd ../{scenarios-repo-name}\n> git add -A && git commit -m \"init: scaffold scenarios repo from Joycraft\"\n> git push\n> ```\n\n### 4c: Add Secrets to Scenarios Repo\n\n> The scenarios repo also needs the App private key:\n> - `JOYCRAFT_APP_PRIVATE_KEY` \u2014 same `.pem` file as the main repo\n> - `ANTHROPIC_API_KEY` \u2014 same key (needed for scenario generation)\n\n## Step 5: Verify Setup\n\nHelp the user verify everything is wired correctly:\n\n1. **Check workflow files exist:** `ls .github/workflows/autofix.yml .github/workflows/scenarios-dispatch.yml .github/workflows/spec-dispatch.yml .github/workflows/scenarios-rerun.yml`\n2. **Check scenario templates were copied:** Verify the scenarios repo has `example-scenario.test.ts`, `workflows/run.yml`, `workflows/generate.yml`, `prompts/scenario-agent.md`\n3. **Check the App ID is correct** in the workflow files (not still a placeholder)\n\n## Step 6: Update CLAUDE.md\n\nIf the project's CLAUDE.md doesn't already have an \"External Validation\" section, add one:\n\n> ## External Validation\n>\n> This project uses holdout scenario tests in a separate private repo.\n>\n> ### NEVER\n> - Access, read, or reference the scenarios repo\n> - Mention scenario test names or contents\n> - Modify the scenarios dispatch workflow to leak test information\n>\n> The scenarios repo is deliberately invisible to you. This is the holdout guarantee.\n\n## Step 7: First Test (Optional)\n\nIf the user wants to test the loop:\n\n> Want to do a quick test? Here's how:\n>\n> 1. Write a simple spec in `docs/specs/` and push to main \u2014 this triggers scenario generation\n> 2. Create a PR with a small change \u2014 when CI passes, scenarios will run\n> 3. Watch for the scenario test results as a PR comment\n>\n> Or deliberately break something in a PR to test the autofix loop.\n\n## Step 8: Summary\n\nPrint a summary of what was set up:\n\n> **Level 5 is live.** Here's what's running:\n>\n> | Trigger | What Happens |\n> |---------|-------------|\n> | Push specs to `docs/specs/` | Scenario agent writes holdout tests |\n> | PR fails CI | Claude autofix attempts (up to 3x) |\n> | PR passes CI | Holdout scenarios run against PR |\n> | Scenarios update | Open PRs re-tested with latest scenarios |\n>\n> Your scenarios repo: `{name}`\n> Your coding agent cannot see those tests. The holdout wall is intact.\n\n**Important:** Tell the user:\n\n> **Before you can test the loop**, you need to merge this PR to main first. GitHub's `workflow_run` triggers only activate for workflows that exist on the default branch. Once merged, create a new PR with any small change \u2014 that's when you'll see Autofix, Scenarios Dispatch, and Spec Dispatch fire for the first time.\n\nUpdate `docs/joycraft-assessment.md` if it exists \u2014 set the Level 5 score to reflect the new setup.\n",
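A hedged TypeScript sketch of the Step 5 verification described above: confirm the four workflow files exist and flag any that still carries a placeholder instead of the real App ID. The placeholder token checked for here is an assumption.

```ts
import { existsSync, readFileSync } from "node:fs";

const REQUIRED_WORKFLOWS = [
  ".github/workflows/autofix.yml",
  ".github/workflows/scenarios-dispatch.yml",
  ".github/workflows/spec-dispatch.yml",
  ".github/workflows/scenarios-rerun.yml",
];

// Returns human-readable problems; an empty array means the wiring looks right.
function verifyLevel5Setup(placeholder = "YOUR_APP_ID"): string[] {
  const problems: string[] = [];
  for (const path of REQUIRED_WORKFLOWS) {
    if (!existsSync(path)) {
      problems.push(`missing workflow: ${path}`);
      continue;
    }
    if (readFileSync(path, "utf8").includes(placeholder)) {
      problems.push(`App ID placeholder still present in ${path}`);
    }
  }
  return problems;
}
```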
10
- "joycraft-implement.md": "---\nname: joycraft-implement\ndescription: Execute atomic specs with TDD \u2014 read spec, write failing tests, implement until green, hand off to session-end\ninstructions: 28\n---\n\n# Implement Atomic Spec\n\nYou have one or more atomic spec paths to execute. Your job is to implement each spec using strict TDD \u2014 tests first, confirm they fail, then implement until green.\n\n## Step 1: Parse Arguments\n\nThe user should provide one or more spec paths (e.g., `docs/specs/my-feature/add-widget.md`).\n\nIf no spec path was provided, tell the user:\n\n> No spec path provided. Check `docs/specs/` for available specs, or provide a path like:\n> `/joycraft-implement docs/specs/feature-name/spec-name.md`\n\n## Step 2: Read and Understand the Spec\n\nFor each spec path:\n\n1. **Read the spec file.** The spec is your execution contract \u2014 the Acceptance Criteria and Test Plan define \"done.\"\n2. **Check the spec's Status field.** If it says \"Complete,\" warn the user and ask if they want to re-implement or skip.\n3. **Read the Acceptance Criteria** \u2014 these are your success conditions.\n4. **Read the Test Plan** \u2014 this tells you exactly what tests to write and in what order.\n5. **Read the Constraints** \u2014 these are hard boundaries you must not violate.\n\n### Finding Additional Context\n\nSpecs are designed to be self-contained, but if you need more context:\n\n- **Parent brief:** Linked in the spec's frontmatter (`> **Parent Brief:**` line). Read it for broader feature context.\n- **Related specs:** Live in the same directory. The spec directory convention is `docs/specs/<feature-name>/` where the feature name is derived from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`).\n- **Affected Files:** The spec's Affected Files table tells you which files to create or modify.\n\n## Step 3: Execute the TDD Cycle\n\n**This is not optional. Write tests FIRST.**\n\n### 3a. Write Tests (Red Phase)\n\nUsing the spec's Test Plan:\n\n1. Write ALL tests listed in the Test Plan. Each Acceptance Criterion must have at least one test.\n2. Tests should call the actual function/endpoint \u2014 not a reimplementation or mock of the underlying library.\n3. Run the tests. **They MUST fail.** If any test passes immediately:\n - Flag it \u2014 either the test isn't testing the right thing, or the code already exists.\n - Investigate before proceeding. A test that passes before implementation is a test that proves nothing.\n\n### 3b. Implement (Green Phase)\n\n1. Follow the spec's Approach section for implementation strategy.\n2. Implement the minimum code needed to make tests pass.\n3. Run tests after each meaningful change \u2014 use the spec's Smoke Test for fast feedback.\n4. Continue until ALL tests pass.\n\n### 3c. Verify Acceptance Criteria\n\nWalk through every Acceptance Criterion in the spec:\n\n- [ ] Is each one met?\n- [ ] Does the build pass?\n- [ ] Do all tests pass?\n\nIf any criterion is not met, keep implementing. Do not move on until all criteria are green.\n\n## Step 4: Handle Edge Cases\n\nCheck the spec's Edge Cases table. For each scenario:\n\n- Verify the expected behavior is handled.\n- If the spec says \"warn the user\" or \"prompt,\" make sure that path works.\n\n## Step 5: Multi-Spec Handling\n\nIf the user provided multiple specs:\n\n1. Execute specs in dependency order (check each spec's frontmatter for dependencies).\n2. 
After completing each spec, run the full test suite to ensure no regressions.\n3. **Between specs:** Tell the user:\n\n```\nSpec [name] complete. [N] specs remaining.\n```\n\n**Tip:** Run `/clear` before starting the next spec. Your artifacts are saved to files \u2014 this conversation context is disposable.\n\n## Step 6: Hand Off\n\nWhen all specs are implemented and passing:\n\n```\nImplementation complete:\n- Spec(s): [list spec names] \u2014 all Acceptance Criteria met\n- Tests: [N] written, all passing\n- Build: passing\n\nNext steps:\n- Run /joycraft-session-end to capture discoveries and wrap up\n```\n\n**Tip:** Run `/clear` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n",
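A small TypeScript sketch of the directory convention this (pre-0.6.0) skill describes, stripping the date prefix and `.md` from the brief filename; the helper name is illustrative only.

```ts
import { basename } from "node:path";

// "docs/briefs/2026-04-06-token-discipline.md" -> "token-discipline"
function featureNameFromBrief(briefPath: string): string {
  return basename(briefPath, ".md").replace(/^\d{4}-\d{2}-\d{2}-/, "");
}

// Spec directory for that brief: "docs/specs/token-discipline/"
const specDir = `docs/specs/${featureNameFromBrief("docs/briefs/2026-04-06-token-discipline.md")}/`;
```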
11
- "joycraft-interview.md": "---\nname: joycraft-interview\ndescription: Brainstorm freely about what you want to build \u2014 yap, explore ideas, and get a structured summary you can use later\ninstructions: 18\n---\n\n# Interview \u2014 Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation \u2014 not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n\"What are you thinking about building? Just talk \u2014 I'll listen and ask questions as we go.\"\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally \u2014 don't fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does \"done\" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget \u2014 what boxes are we in?\n- **What's NOT in scope?** What's tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What's the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n\"So if I'm hearing you right, you want to [summary]. The core problem is [X], and done looks like [Y]. Is that right?\"\n\nLet them correct and refine. Iterate until they say \"yes, that's it.\"\n\n### 4. Write a Draft Brief\n\nCreate a draft file at `docs/briefs/YYYY-MM-DD-topic-draft.md`. Create the `docs/briefs/` directory if it doesn't exist.\n\nUse this format:\n\n```markdown\n# [Topic] \u2014 Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Status:** DRAFT\n> **Origin:** /joycraft-interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described \u2014 their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What \"Done\" Looks Like\n[The user's description of success \u2014 observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren't resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n```\n\n### 5. Hand Off\n\nAfter writing the draft, tell the user:\n\n```\nDraft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md\n\nWhen you're ready to move forward, pick the path that fits the complexity:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n /joycraft-new-feature \u2192 /joycraft-research \u2192 /joycraft-design \u2192 /joycraft-decompose\n\nMEDIUM (clear scope but non-trivial):\n /joycraft-new-feature \u2192 /joycraft-design \u2192 /joycraft-decompose\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n /joycraft-new-feature \u2192 /joycraft-decompose\n\nNot sure yet? Just keep brainstorming \u2014 run /joycraft-interview again anytime.\n```\n\nIf the idea sounds complex \u2014 touches many files, involves architectural decisions, or the user is working in an unfamiliar area \u2014 nudge them toward research and design. But present it as a recommendation, not a gate.\n\n**Tip:** Run `/clear` before starting the next step. 
Your artifacts are saved to files \u2014 this conversation context is disposable.\n\n## Guidelines\n\n- **This is NOT /joycraft-new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture \u2014 not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. Each creates a new dated draft.\n",
11
+ "joycraft-implement.md": "---\nname: joycraft-implement\ndescription: Execute atomic specs with TDD \u2014 read spec, write failing tests, implement until green, hand off to session-end\ninstructions: 28\n---\n\n# Implement Atomic Spec\n\nYou have one or more atomic spec paths to execute. Your job is to implement each spec using strict TDD \u2014 tests first, confirm they fail, then implement until green.\n\n## Step 1: Parse Arguments\n\nThe user should provide one or more spec paths (e.g., `docs/specs/my-feature/add-widget.md`).\n\nIf no spec path was provided, tell the user:\n\n> No spec path provided. Check `docs/specs/` for available specs, or provide a path like:\n> `/joycraft-implement docs/specs/feature-name/spec-name.md`\n\n## Step 2: Read the Sibling README.md FIRST (if present)\n\nBefore reading the spec itself, check for a sibling `README.md` in the same folder as the spec \u2014 i.e., `<spec-path>/../README.md`. This file is the wave-plan + spec-table that `/joycraft-decompose` writes per feature.\n\n- **If present:** Read the README first. It tells you the spec's position in the wave plan, its dependencies, and which sibling specs (in the same folder) need to be done before this one.\n- **If absent:** That's fine \u2014 proceed normally. The convention is forward-only and many legacy spec folders pre-date it.\n\n### Warn on Unmet Dependencies\n\nIf the README shows that this spec depends on other specs in the same folder, check whether those dependencies are complete. A spec is complete when its frontmatter `status:` is `shipped` (or its body says `Status: Complete`).\n\nIf any dependency is **not** complete, tell the user:\n\n> \"This spec lists unmet dependencies in the sibling README.md: [list]. Proceed anyway, or stop?\"\n\nWait for confirmation before continuing. The user might be deliberately running out of order (a hotfix, an exploration, etc.) \u2014 your job is to surface the warning, not to gate.\n\n## Step 3: Read and Understand the Spec\n\nFor each spec path:\n\n1. **Read the spec file.** The spec is your execution contract \u2014 the Acceptance Criteria and Test Plan define \"done.\"\n2. **Check the spec's Status field.** If it says \"Complete,\" warn the user and ask if they want to re-implement or skip.\n3. **Read the Acceptance Criteria** \u2014 these are your success conditions.\n4. **Read the Test Plan** \u2014 this tells you exactly what tests to write and in what order.\n5. **Read the Constraints** \u2014 these are hard boundaries you must not violate.\n\n### Finding Additional Context\n\nSpecs are designed to be self-contained, but if you need more context:\n\n- **Parent brief:** Linked in the spec's body (`> **Parent Brief:**` line). The new convention is `docs/features/<slug>/brief.md`. Read it for broader feature context.\n- **Related specs:** Live in the same directory (typically `docs/features/<slug>/specs/`). The sibling `README.md` (read in Step 2 above) is the index.\n- **Affected Files:** The spec's Affected Files table tells you which files to create or modify.\n\n## Step 4: Execute the TDD Cycle\n\n**This is not optional. Write tests FIRST.**\n\n### 3a. Write Tests (Red Phase)\n\nUsing the spec's Test Plan:\n\n1. Write ALL tests listed in the Test Plan. Each Acceptance Criterion must have at least one test.\n2. Tests should call the actual function/endpoint \u2014 not a reimplementation or mock of the underlying library.\n3. Run the tests. 
**They MUST fail.** If any test passes immediately:\n - Flag it \u2014 either the test isn't testing the right thing, or the code already exists.\n - Investigate before proceeding. A test that passes before implementation is a test that proves nothing.\n\n### 3b. Implement (Green Phase)\n\n1. Follow the spec's Approach section for implementation strategy.\n2. Implement the minimum code needed to make tests pass.\n3. Run tests after each meaningful change \u2014 use the spec's Smoke Test for fast feedback.\n4. Continue until ALL tests pass.\n\n### 3c. Verify Acceptance Criteria\n\nWalk through every Acceptance Criterion in the spec:\n\n- [ ] Is each one met?\n- [ ] Does the build pass?\n- [ ] Do all tests pass?\n\nIf any criterion is not met, keep implementing. Do not move on until all criteria are green.\n\n## Step 5: Handle Edge Cases\n\nCheck the spec's Edge Cases table. For each scenario:\n\n- Verify the expected behavior is handled.\n- If the spec says \"warn the user\" or \"prompt,\" make sure that path works.\n\n## Step 6: Multi-Spec Handling\n\nIf the user provided multiple specs:\n\n1. Execute specs in dependency order (check each spec's frontmatter for dependencies).\n2. After completing each spec, run the full test suite to ensure no regressions.\n3. **Between specs:** Tell the user:\n\n```\nSpec [name] complete. [N] specs remaining.\n```\n\n**Tip:** Run `/clear` before starting the next spec. Your artifacts are saved to files \u2014 this conversation context is disposable.\n\n## Step 7: Hand Off\n\nWhen all specs are implemented and passing, end with the canonical Handoff block:\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-session-end\n```\nRun /clear first.\n",
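A minimal TypeScript sketch of the completeness check in Step 2 above, assuming the dependency spec paths have already been recovered from the sibling README.md (whose exact table format the skill does not pin down).

```ts
import { readFileSync } from "node:fs";

// A spec counts as complete when its frontmatter says `status: shipped`
// or its body carries `Status: Complete`.
function specIsComplete(specPath: string): boolean {
  const text = readFileSync(specPath, "utf8");
  const frontmatter = text.match(/^---\n([\s\S]*?)\n---/)?.[1] ?? "";
  if (/^status:\s*shipped\s*$/m.test(frontmatter)) return true;
  return /Status:\s*Complete/.test(text);
}

// Anything returned here is surfaced as a warning, not a hard stop.
function unmetDependencies(dependencySpecPaths: string[]): string[] {
  return dependencySpecPaths.filter((path) => !specIsComplete(path));
}
```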
12
+ "joycraft-interview.md": '---\nname: joycraft-interview\ndescription: Brainstorm freely about what you want to build \u2014 yap, explore ideas, and get a structured summary you can use later\ninstructions: 18\n---\n\n# Interview \u2014 Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation \u2014 not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n"What are you thinking about building? Just talk \u2014 I\'ll listen and ask questions as we go."\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally \u2014 don\'t fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does "done" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget \u2014 what boxes are we in?\n- **What\'s NOT in scope?** What\'s tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What\'s the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n"So if I\'m hearing you right, you want to [summary]. The core problem is [X], and done looks like [Y]. Is that right?"\n\nLet them correct and refine. Iterate until they say "yes, that\'s it."\n\n### 4. Write a Draft Brief\n\nDerive a slug `YYYY-MM-DD-<topic>` (today\'s date + kebab-case topic \u2014 no `-draft` suffix).\nCreate a draft file at `docs/features/<slug>/brief.md`. Lazy-create `docs/features/<slug>/` if it doesn\'t exist.\n\nThe file MUST start with YAML frontmatter \u2014 the 4-field personal schema with `status: draft`:\n\n```yaml\n---\nstatus: draft\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug>\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist. If you can\'t get a name, leave the field as `<resolved name>` and note it for the user.\n\nUse this format for the body:\n\n```markdown\n# [Topic] \u2014 Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Origin:** /joycraft-interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described \u2014 their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What "Done" Looks Like\n[The user\'s description of success \u2014 observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren\'t resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred \u2014 see also: deferred work goes to `docs/backlog/`]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n```\n\n### 5. 
Offer to Capture Deferred Items to Backlog\n\nIf during the conversation deferred work surfaces (a tangent, a "later" item, an "out-of-scope but tempting" idea), ASK the user:\n\n> "This looks like deferred work \u2014 want me to capture it to `docs/backlog/`?"\n\nOnly on user confirmation, write a backlog entry at `docs/backlog/YYYY-MM-DD-<short-name>.md` with backlog frontmatter:\n\n```yaml\n---\nstatus: backlog\nowner: <resolved name>\ncreated: YYYY-MM-DD\nsource: docs/features/<slug>/brief.md\n---\n```\n\n**Never auto-write to `docs/backlog/`.** Every backlog entry is user-confirmed.\n\n### 6. Hand Off\n\nAfter writing the draft (and any backlog entries), present the canonical Handoff block.\nInclude any backlog paths produced as a side effect.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-new-feature docs/features/<slug>/brief.md\n```\nRun /clear first.\n\nIf the idea sounds complex \u2014 touches many files, involves architectural decisions, or the user is working in an unfamiliar area \u2014 nudge them toward research and design (e.g., `/joycraft-research` then `/joycraft-design`). But present it as a recommendation, not a gate.\n\n## Guidelines\n\n- **This is NOT /joycraft-new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture \u2014 not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. Each creates a new dated draft.\n',
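For illustration, a TypeScript sketch of the slug and frontmatter this skill's Step 4 describes. The kebab-casing details beyond "lower-case, hyphen-separated" are assumptions.

```ts
// draftSlug("Token Discipline") on 2026-04-06 -> "2026-04-06-token-discipline"
function draftSlug(topic: string, date: Date = new Date()): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const kebab = topic
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
  return `${day}-${kebab}`;
}

// 4-field personal schema with status: draft, written to
// docs/features/<slug>/brief.md
function draftFrontmatter(slug: string, owner: string): string {
  return `---\nstatus: draft\nowner: ${owner}\ncreated: ${slug.slice(0, 10)}\nfeature: ${slug}\n---\n`;
}
```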
12
13
  "joycraft-lockdown.md": "---\nname: joycraft-lockdown\ndescription: Generate constrained execution boundaries for an implementation session -- NEVER rules and deny patterns to prevent agent overreach\ninstructions: 28\n---\n\n# Lockdown Mode\n\nThe user wants to constrain agent behavior for an implementation session. Your job is to interview them about what should be off-limits, then generate CLAUDE.md NEVER rules and `.claude/settings.json` deny patterns they can review and apply.\n\n## When Is Lockdown Useful?\n\nLockdown is most valuable for:\n- **Complex tech stacks** (hardware, firmware, multi-device) where agents can cause real damage\n- **Long-running autonomous sessions** where you won't be monitoring every action\n- **Production-adjacent work** where accidental network calls or package installs are risky\n\nFor simple feature work on a well-tested codebase, lockdown is usually overkill. Mention this context to the user so they can decide.\n\n## Step 1: Check for Tests\n\nBefore starting the interview, check if the project has test files or directories (look for `tests/`, `test/`, `__tests__/`, `spec/`, or files matching `*.test.*`, `*.spec.*`).\n\nIf no tests are found, tell the user:\n\n> Lockdown mode is most useful when you already have tests in place -- it prevents the agent from modifying them while constraining behavior to writing code and running tests. Consider running `/joycraft-new-feature` first to set up a test-driven workflow, then come back to lock it down.\n\nIf the user wants to proceed anyway, continue with the interview.\n\n## Step 2: Interview -- What to Lock Down\n\nAsk these three questions, one at a time. Wait for the user's response before proceeding to the next question.\n\n### Question 1: Read-Only Files\n\n> What test files or directories should be off-limits for editing? (e.g., `tests/`, `__tests__/`, `spec/`, specific test files)\n>\n> I'll generate NEVER rules to prevent editing these.\n\nIf the user isn't sure, suggest the test directories you found in Step 1.\n\n### Question 2: Allowed Commands\n\n> What commands should the agent be allowed to run? Defaults:\n> - Write and edit source code files\n> - Run the project's smoke test command\n> - Run the full test suite\n>\n> Any other commands to explicitly allow? Or should I restrict to just these?\n\n### Question 3: Denied Commands\n\n> What commands should be denied? Defaults:\n> - Package installs (`npm install`, `pip install`, `cargo add`, `go get`, etc.)\n> - Network tools (`curl`, `wget`, `ping`, `ssh`)\n> - Direct log file reading\n>\n> Any specific commands to add or remove from this list?\n\n**Edge case -- user wants to allow some network access:** If the user mentions API tests or specific endpoints that need network access, exclude those from the deny list and note the exception in the output.\n\n**Edge case -- user wants to lock down file writes:** If the user wants to prevent ALL file writes, warn them:\n\n> Denying all file writes would prevent the agent from doing any work. 
I recommend keeping source code writes allowed and only locking down test files, config files, or other sensitive directories.\n\n## Step 3: Generate Boundaries\n\nBased on the interview responses, generate output in this exact format:\n\n```\n## Lockdown boundaries generated\n\nReview these suggestions and add them to your project:\n\n### CLAUDE.md -- add to NEVER section:\n\n- Edit any file in `[user's test directories]`\n- Run `[denied package manager commands]`\n- Use `[denied network tools]`\n- Read log files directly -- interact with logs only through test assertions\n- [Any additional NEVER rules based on user responses]\n\n### .claude/settings.json -- suggested deny patterns:\n\nAdd these to the `permissions.deny` array:\n\n[\"[command1]\", \"[command2]\", \"[command3]\"]\n\n---\n\nCopy these into your project manually, or tell me to apply them now (I'll show you the exact changes for approval first).\n```\n\nAdjust the content based on the actual interview responses:\n- Only include deny patterns for commands the user confirmed should be denied\n- Only include NEVER rules for directories/files the user specified\n- If the user allowed certain network tools or package managers, exclude those\n\n## Recommended Permission Mode\n\nAfter generating the boundaries above, also recommend a Claude Code permission mode. Include this section in your output:\n\n```\n### Recommended Permission Mode\n\nYou don't need `--dangerously-skip-permissions`. Safer alternatives exist:\n\n| Your situation | Use | Why |\n|---|---|---|\n| Autonomous spec execution | `--permission-mode dontAsk` + allowlist above | Only pre-approved commands run |\n| Long session with some trust | `--permission-mode auto` | Safety classifier reviews each action |\n| Interactive development | `--permission-mode acceptEdits` | Auto-approves file edits, prompts for commands |\n\n**For lockdown mode, we recommend `--permission-mode dontAsk`** combined with the deny patterns above. This gives you full autonomy for allowed operations while blocking everything else -- no classifier overhead, no prompts, and no safety bypass.\n\n`--dangerously-skip-permissions` disables ALL safety checks. The modes above give you autonomy without removing the guardrails.\n```\n\n## Step 4: Offer to Apply\n\nIf the user asks you to apply the changes:\n\n1. **For CLAUDE.md:** Read the existing CLAUDE.md, find the Behavioral Boundaries section, and show the user the exact diff for the NEVER section. Ask for confirmation before writing.\n2. **For settings.json:** Read the existing `.claude/settings.json`, show the user what the `permissions.deny` array will look like after adding the new patterns. Ask for confirmation before writing.\n\n**Never auto-apply. Always show the exact changes and wait for explicit approval.**\n",
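A hedged TypeScript sketch of what "show the user what the `permissions.deny` array will look like" could mean in practice. The settings shape beyond a `permissions.deny` string array is an assumption, and nothing is written back.

```ts
import { readFileSync } from "node:fs";

type ClaudeSettings = { permissions?: { deny?: string[] } };

// Preview only: the skill never auto-applies; the user confirms first.
function previewDenyPatterns(
  newPatterns: string[],
  settingsPath = ".claude/settings.json",
): string[] {
  let settings: ClaudeSettings = {};
  try {
    settings = JSON.parse(readFileSync(settingsPath, "utf8"));
  } catch {
    // No settings file yet: start from an empty deny list.
  }
  const existing = settings.permissions?.deny ?? [];
  return [...new Set([...existing, ...newPatterns])];
}

// e.g. previewDenyPatterns(["npm install", "curl", "wget"])
```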
13
- "joycraft-new-feature.md": '---\nname: joycraft-new-feature\ndescription: Guided feature development \u2014 interview the user, produce a Feature Brief, then decompose into atomic specs\ninstructions: 35\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 0: Check for Existing Drafts\n\nBefore starting the interview, check if the user has already drafted a brief.\n\n**Skip this phase if:** the user provided a brief path as an argument (they already know what to work from).\n\n**Steps:**\n1. Check if `docs/briefs/` exists. If not, skip to Phase 1.\n2. Look for files matching `*-draft.md` in `docs/briefs/`.\n3. For any other `.md` files in `docs/briefs/`, read the first 10 lines and check for `Status: DRAFT`.\n4. If draft(s) found, present them:\n\n```\nI found draft brief(s) in docs/briefs/:\n- [path] (drafted YYYY-MM-DD)\n- [path] (drafted YYYY-MM-DD)\n\nWant me to:\n1. **Formalize** one of these into a full Feature Brief (skip interview, go to Phase 2)\n2. **Start a new interview** from scratch\n```\n\n5. If user chooses to formalize: read the full draft, extract the idea/problem/constraints, and jump to Phase 2 with that context pre-filled.\n6. If user chooses to start fresh, or no drafts found: proceed to Phase 1.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk \u2014 your job is to listen, then sharpen.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does "done" look like?\n- Hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this)\n- Edge cases or error conditions?\n- What existing code/patterns should this follow?\n- Testing: existing setup? framework? smoke test budget? lockdown mode desired?\n\n**Interview technique:**\n- Let the user "yap" \u2014 don\'t interrupt their flow\n- Play back your understanding: "So if I\'m hearing you right..."\n- Push toward testable statements: "How would we verify that works?"\n\nKeep asking until you can fill out a Feature Brief.\n\n## Phase 2: Feature Brief\n\nWrite a Feature Brief to `docs/briefs/YYYY-MM-DD-feature-name.md`. Create the `docs/briefs/` directory if it doesn\'t exist.\n\n**Why:** The brief is the single source of truth for what we\'re building. It prevents scope creep and gives every spec a shared reference point.\n\nUse this structure:\n\n```markdown\n# [Feature Name] \u2014 Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Test Strategy\n- **Existing setup:** [framework and tools, or "none yet"]\n- **User expertise:** [comfortable / learning / needs guidance]\n- **Test types:** [smoke, unit, integration, e2e, etc.]\n- **Smoke test budget:** [target time for fast-feedback tests]\n- **Lockdown mode:** [yes/no \u2014 constrain agent to code + tests only]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. 
Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel worktrees (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n```\n\nIf `docs/templates/FEATURE_BRIEF_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- "Does the decomposition match how you think about this?"\n- "Is anything in scope that shouldn\'t be?"\n- "Are the specs small enough? Can each be described in one sentence?"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at `docs/specs/<feature-name>/spec-name.md`. Derive the feature-name from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`). Create the `docs/specs/<feature-name>/` directory if it doesn\'t exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the "Curse of Instructions" \u2014 no spec should require holding the entire feature in context. Copy relevant context into each spec.\n\nUse this structure for each spec:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/briefs/YYYY-MM-DD-feature-name.md`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\n## Phase 4: Hand Off for Execution\n\nBefore jumping to execution, consider whether research or design would catch wrong assumptions early:\n\n```\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] \u2014 [one sentence] [S/M/L]\n2. 
[spec-name] \u2014 [one sentence] [S/M/L]\n...\n\nBefore executing, consider the complexity of this feature:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n \u2192 /joycraft-research \u2014 gather codebase facts before committing to a design\n \u2192 /joycraft-design \u2014 make architectural decisions explicit\n \u2192 Then execute specs\n\nMEDIUM (clear scope but non-trivial):\n \u2192 /joycraft-design \u2014 make key decisions explicit before building\n \u2192 Then execute specs\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n \u2192 Skip to execution\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. Read the spec\n2. Implement\n3. Run /joycraft-session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n```\n\n**Why:** A fresh session for execution produces better results. The interview session has too much context noise \u2014 a clean session with just the spec is more focused. Research and design catch wrong assumptions before they propagate into specs \u2014 but skip them if the scope is clear and well-understood.\n\nYou can also use `/joycraft-decompose` to re-decompose a brief if the breakdown needs adjustment, or run `/joycraft-interview` first for a lighter brainstorm before committing to the full workflow.\n\n**Tip:** Run `/clear` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n',
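A short TypeScript sketch of the draft-detection heuristic in this (pre-0.6.0) Phase 0: a `*-draft.md` filename, or `Status: DRAFT` within the first 10 lines. The helper name is illustrative.

```ts
import { readFileSync } from "node:fs";

// Old docs/briefs/ layout: drafts end in -draft.md, or declare
// "Status: DRAFT" near the top of the file.
function looksLikeDraft(briefPath: string): boolean {
  if (briefPath.endsWith("-draft.md")) return true;
  const firstLines = readFileSync(briefPath, "utf8").split("\n").slice(0, 10);
  return firstLines.some((line) => line.includes("Status: DRAFT"));
}
```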
14
+ "joycraft-new-feature.md": '---\nname: joycraft-new-feature\ndescription: Guided feature development \u2014 interview the user, produce a Feature Brief, then decompose into atomic specs\ninstructions: 35\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 0: Check for Existing Drafts and In-Flight Features\n\nBefore starting the interview, scan `docs/features/` for existing artifacts the user may want to continue from.\n\n**Skip this phase if:** the user provided a brief path as an argument (they already know what to work from).\n\n**Steps:**\n1. Check if `docs/features/` exists. If not, skip to Phase 1.\n2. List subdirectories. For each `docs/features/<slug>/brief.md`, read the YAML frontmatter at the top.\n3. **Filter by status:** treat each brief as `status: active` unless its frontmatter says otherwise. **Skip** any brief whose `status:` is `shipped`, `deprecated`, or `superseded`. Also skip anything under `docs/archive/` \u2014 those are out-of-scope for new feature work.\n4. Group what you find:\n - **Drafts** (frontmatter `status: draft`) \u2014 likely from `/joycraft-interview`.\n - **Active in-flight** (frontmatter `status: active`) \u2014 work the user already started.\n\n5. Present them:\n\n```\nI found existing artifacts in docs/features/:\n\nDrafts:\n- docs/features/<slug>/brief.md (drafted YYYY-MM-DD)\n\nActive features:\n- docs/features/<slug>/brief.md (started YYYY-MM-DD)\n\nWant me to:\n1. **Formalize** a draft into a full Feature Brief\n2. **Continue** an active feature\n3. **Start a new interview** from scratch\n```\n\n6. If user picks formalize/continue: read the full brief, extract context, and jump to Phase 2 with that context pre-filled.\n7. If user picks start fresh, or nothing found: proceed to Phase 1.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk \u2014 your job is to listen, then sharpen.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does "done" look like?\n- Hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this)\n- Edge cases or error conditions?\n- What existing code/patterns should this follow?\n- Testing: existing setup? framework? smoke test budget? lockdown mode desired?\n\n**Interview technique:**\n- Let the user "yap" \u2014 don\'t interrupt their flow\n- Play back your understanding: "So if I\'m hearing you right..."\n- Push toward testable statements: "How would we verify that works?"\n\nKeep asking until you can fill out a Feature Brief.\n\n## Phase 2: Feature Brief\n\nDerive a slug `YYYY-MM-DD-<feature-name>` (today\'s date + kebab-case feature name).\nWrite the Feature Brief to `docs/features/<slug>/brief.md`. Lazy-create the folder if needed.\n\n**Slug derivation:** today\'s date in `YYYY-MM-DD` format, then `-`, then the feature name lower-cased and hyphen-separated. Example: a feature about "Token Discipline" started on 2026-04-06 \u2192 slug `2026-04-06-token-discipline` \u2192 folder `docs/features/2026-04-06-token-discipline/`.\n\n**Why:** The brief is the single source of truth for what we\'re building. 
It prevents scope creep and gives every spec a shared reference point.\n\nThe brief MUST start with YAML frontmatter \u2014 the 4-field personal schema:\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug>\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist. If you can\'t get a name, leave the field as `<resolved name>` and note it for the user.\n\nIf the brief was formalized from an existing draft, parse the existing draft\'s frontmatter and update `status:` from `draft` to `active`. Never silently overwrite \u2014 if the draft already has body content, preserve it and append/refine rather than replacing.\n\nUse this structure for the body:\n\n```markdown\n# [Feature Name] \u2014 Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Test Strategy\n- **Existing setup:** [framework and tools, or "none yet"]\n- **User expertise:** [comfortable / learning / needs guidance]\n- **Test types:** [smoke, unit, integration, e2e, etc.]\n- **Smoke test budget:** [target time for fast-feedback tests]\n- **Lockdown mode:** [yes/no \u2014 constrain agent to code + tests only]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel worktrees (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n```\n\nIf `docs/templates/FEATURE_BRIEF_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- "Does the decomposition match how you think about this?"\n- "Is anything in scope that shouldn\'t be?"\n- "Are the specs small enough? Can each be described in one sentence?"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at `docs/features/<slug>/specs/<spec-name>.md`. Lazy-create the `specs/` subfolder if it doesn\'t exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the "Curse of Instructions" \u2014 no spec should require holding the entire feature in context. Copy relevant context into each spec.\n\nEach spec file MUST start with YAML frontmatter \u2014 the 4-field personal schema:\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug>\n---\n```\n\nWhen listing existing in-flight features in Phase 0, ignore briefs whose `status:` is `shipped`, `deprecated`, or `superseded`. 
Also ignore anything under `docs/archive/`.\n\nIf `docs/backlog/` items surface during the interview as "deferred work" candidates, ask the user before writing \u2014 never auto-write to `docs/backlog/`.\n\nUse this structure for each spec body:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/features/<slug>/brief.md`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\n## Phase 3.5: Offer to Capture Deferred Items to Backlog\n\nIf during the interview deferred work surfaces (out-of-scope items, "later" features, tangents), ASK the user:\n\n> "This looks like deferred work \u2014 want me to capture it to `docs/backlog/`?"\n\nOnly on user confirmation, write a backlog entry at `docs/backlog/YYYY-MM-DD-<short-name>.md` with backlog frontmatter:\n\n```yaml\n---\nstatus: backlog\nowner: <resolved name>\ncreated: YYYY-MM-DD\nsource: docs/features/<slug>/brief.md\n---\n```\n\n**Never auto-write to `docs/backlog/`.** Every backlog entry is user-confirmed.\n\n## Phase 4: Hand Off for Execution\n\nBefore jumping to execution, consider whether research or design would catch wrong assumptions early:\n\n```\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] \u2014 [one sentence] [S/M/L]\n2. [spec-name] \u2014 [one sentence] [S/M/L]\n...\n\nBefore executing, consider the complexity of this feature:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n \u2192 /joycraft-research \u2014 gather codebase facts before committing to a design\n \u2192 /joycraft-design \u2014 make architectural decisions explicit\n \u2192 Then execute specs\n\nMEDIUM (clear scope but non-trivial):\n \u2192 /joycraft-design \u2014 make key decisions explicit before building\n \u2192 Then execute specs\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n \u2192 Skip to execution\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. 
Read the spec\n2. Implement\n3. Run /joycraft-session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n```\n\nEnd with the canonical Handoff block. Include any backlog paths produced as a side effect.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-decompose docs/features/<slug>/brief.md\n```\nRun /clear first.\n\n**Why:** A fresh session for execution produces better results. The interview session has too much context noise \u2014 a clean session with just the spec is more focused. Research and design catch wrong assumptions before they propagate into specs \u2014 but skip them if the scope is clear and well-understood.\n\nYou can also use `/joycraft-decompose` to re-decompose a brief if the breakdown needs adjustment, or run `/joycraft-interview` first for a lighter brainstorm before committing to the full workflow.\n',
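A minimal TypeScript sketch of the Phase 0 scan in the 0.6.0 layout: walk `docs/features/`, read each brief's frontmatter `status:` (defaulting to active), skip retired briefs, and group drafts versus active work. The loose frontmatter read is a simplification.

```ts
import { existsSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

type BriefGroups = { drafts: string[]; active: string[] };

function scanFeatureBriefs(root = "docs/features"): BriefGroups {
  const groups: BriefGroups = { drafts: [], active: [] };
  if (!existsSync(root)) return groups;
  for (const entry of readdirSync(root, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const brief = join(root, entry.name, "brief.md");
    if (!existsSync(brief)) continue;
    // Missing status means active; shipped/deprecated/superseded are skipped.
    const status =
      readFileSync(brief, "utf8").match(/^status:\s*(\S+)/m)?.[1] ?? "active";
    if (["shipped", "deprecated", "superseded"].includes(status)) continue;
    (status === "draft" ? groups.drafts : groups.active).push(brief);
  }
  return groups;
}
```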
14
15
  "joycraft-optimize.md": '---\nname: joycraft-optimize\ndescription: Audit your Claude Code or Codex session overhead \u2014 harness file sizes, plugins, MCP servers, hooks \u2014 and report actionable recommendations\ninstructions: 20\n---\n\n# Optimize \u2014 Session Overhead Audit\n\nYou are auditing the user\'s AI development session for token overhead. Produce a conversational diagnostic report \u2014 no files created.\n\n## Step 1: Detect Platform\n\nCheck which platform is active:\n- **Claude Code:** Look for `.claude/` directory, `CLAUDE.md`\n- **Codex:** Look for `.agents/` directory, `AGENTS.md`\n\nIf both exist, run both checks. If neither, default to Claude Code checks and note the uncertainty.\n\n## Step 2: Audit Harness Files\n\n### Claude Code Path\n\n1. **CLAUDE.md** \u2014 count lines. Threshold: \u2264200 lines.\n2. **Skill files** \u2014 glob `.claude/skills/**/*.md`. Count lines per file. Threshold: \u2264200 lines each.\n\n### Codex Path\n\n1. **AGENTS.md** \u2014 count lines. Threshold: \u2264200 lines.\n2. **Skill files** \u2014 glob `.agents/skills/**/*.md`. Count lines per file. Threshold: \u2264200 lines each.\n\n## Step 3: Audit Plugins & MCP Servers\n\n### Claude Code Path\n\n1. **Installed plugins** \u2014 read `~/.claude/plugins/installed_plugins.json`. List plugin names and versions. If not found, report "no plugins file found."\n2. **Enabled plugins** \u2014 read `~/.claude/settings.json`, check `enabledPlugins` array. Show enabled vs installed count.\n3. **MCP servers** \u2014 read `~/.claude/settings.json`, count entries under `mcpServers`. List server names.\n\n### Codex Path\n\n1. **Plugin config** \u2014 read `~/.codex/config.toml`. List any plugin toggles. Note: Codex syncs its curated plugin marketplace at startup \u2014 this is a boot cost even if you don\'t use them.\n2. **MCP servers** \u2014 check `~/.codex/config.toml` for MCP server entries. List server names.\n\n## Step 4: Audit Hooks (Claude Code Only)\n\nRead `.claude/settings.json` in the project directory. List all hook definitions under the `hooks` key \u2014 show the event name and command for each.\n\nFor Codex: note "hook auditing not yet supported on Codex."\n\n## Step 5: Report\n\nOrganize findings by category. 
Use pass/warn indicators:\n\n```\n## Session Overhead Report\n\n### Harness Files\n- CLAUDE.md: [N] lines [PASS \u2264200 / WARN >200]\n- Skills: [N] files, [list any over 200 lines]\n\n### Plugins\n- Installed: [N] ([list names])\n- Enabled: [N] of [M] installed\n- [If 0: "No plugins \u2014 zero boot cost from plugins."]\n\n### MCP Servers\n- Count: [N] ([list names])\n- [If 0: "No MCP servers \u2014 zero boot cost from servers."]\n\n### Hooks\n- [N] hook definitions ([list event names])\n\n### Recommendations\n- [Specific, actionable items for anything over threshold]\n- [e.g., "CLAUDE.md is 312 lines \u2014 consider splitting reference sections into docs/"]\n- [e.g., "3 MCP servers load at boot \u2014 disable unused ones in settings.json"]\n```\n\n## Step 6: Further Resources\n\nEnd with:\n\n> For deeper token optimization, see:\n> - [Nate B Jones\'s token optimization techniques](https://www.youtube.com/watch?v=bDcgHzCBgmQ)\n> - [OB1 repo](https://github.com/nate-b-j/OB1) \u2014 Heavy File Ingestion skill and stupid button prompt kit\n> - [Joycraft\'s token discipline guide](docs/guides/token-discipline.md)\n\n## Edge Cases\n\n| Scenario | Behavior |\n|----------|----------|\n| Config files don\'t exist | Report "not found" for that check, don\'t error |\n| No plugins installed | Report 0 plugins \u2014 this is good, say so |\n| CLAUDE.md/AGENTS.md exactly 200 lines | PASS \u2014 threshold is \u2264200 |\n| `~/.claude/` or `~/.codex/` not accessible | Skip user-level checks, note limitation |\n| Both platforms detected | Run both audits, report separately |\n',
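A hedged TypeScript sketch of the Step 2 harness audit on the Claude Code path: count lines in CLAUDE.md and every skill file, warning past the 200-line threshold (exactly 200 still passes, per the edge-case table). The recursive walk stands in for the glob named in the skill.

```ts
import { existsSync, readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

const LINE_THRESHOLD = 200; // <= 200 passes, 201+ warns

function countLines(path: string): number {
  return readFileSync(path, "utf8").trimEnd().split("\n").length;
}

// Plain-fs stand-in for the glob .claude/skills/**/*.md
function markdownFiles(dir: string): string[] {
  if (!existsSync(dir)) return [];
  return readdirSync(dir).flatMap((name) => {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) return markdownFiles(full);
    return name.endsWith(".md") ? [full] : [];
  });
}

function auditHarnessFiles(): string[] {
  return ["CLAUDE.md", ...markdownFiles(".claude/skills")]
    .filter((file) => existsSync(file))
    .map((file) => {
      const lines = countLines(file);
      return `${file}: ${lines} lines ${lines <= LINE_THRESHOLD ? "PASS" : "WARN"}`;
    });
}
```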
15
- "joycraft-research.md": '---\nname: joycraft-research\ndescription: Produce objective codebase research by isolating question generation from fact-gathering \u2014 subagent sees only questions, never the brief\n---\n\n# Research Codebase for a Feature\n\nYou are producing objective codebase research to inform a future spec or implementation. The key insight: the researching agent must never see the brief or ticket \u2014 only research questions. This prevents opinions from contaminating the facts.\n\n**Guard clause:** If the user doesn\'t provide a brief path or inline description, ask:\n"What feature or change are you researching? Provide a brief path (e.g., `docs/briefs/2026-03-30-my-feature.md`) or describe it in a few sentences."\n\n---\n\n## Phase 1: Generate Research Questions\n\nRead the brief file (if a path was provided) or use the user\'s inline description.\n\nIdentify which zones of the codebase are relevant to this feature. Then generate 5-10 research questions that are:\n\n- **Objective and fact-seeking** \u2014 "How does X work?" not "How should we build X?"\n- **Specific to the codebase** \u2014 reference concrete systems, files, or flows\n- **Answerable by reading code** \u2014 no questions about business strategy or user preferences\n\nGood examples:\n- "How does endpoint registration work in the current router?"\n- "What patterns exist for input validation across existing handlers?"\n- "Trace the data flow from API request to database write for entity X."\n- "What test infrastructure exists? Where are fixtures, mocks, and helpers?"\n- "What dependencies does module Y import, and what does its public API look like?"\n\nBad examples (do NOT generate these):\n- "What\'s the best way to implement this feature?" (opinion)\n- "Should we use library X or Y?" (recommendation)\n- "What would a good architecture look like?" (design, not research)\n\nWrite the questions to a temporary file at `docs/research/.questions-tmp.md`. Create the `docs/research/` directory if it doesn\'t exist.\n\n**Do NOT include any content from the brief in this file \u2014 only the questions.**\n\n---\n\n## Phase 2: Spawn Research Subagent\n\nUse Claude Code\'s Agent tool to spawn a subagent. Pass ONLY the research questions \u2014 never the brief path, brief content, or feature description.\n\nBuild the subagent prompt by reading the questions file you just wrote, then use this template:\n\n```\nYou are researching a codebase to answer specific questions. You have NO context about why these questions are being asked \u2014 you are simply gathering facts.\n\nRULES \u2014 these are hard constraints:\n- Answer each question with FACTS ONLY: file paths, function signatures, data flows, patterns, dependencies\n- Do NOT recommend, suggest, or opine on anything\n- Do NOT speculate about what should be built or how\n- If a question cannot be answered (no relevant code exists), say "No existing code found for this"\n- Use the Read tool and Grep tool to explore the codebase thoroughly\n- Include code snippets only when they are essential evidence (e.g., a function signature, a config block)\n\nQUESTIONS:\n[INSERT_QUESTIONS_HERE]\n\nOUTPUT FORMAT \u2014 write your findings as a single markdown document using this structure:\n\n# Codebase Research\n\n**Date:** [today\'s date]\n**Questions answered:** [N/total]\n\n---\n\n## Q1: [question text]\n\n[Facts, file paths, function signatures, data flows. No opinions.]\n\n## Q2: [question text]\n\n[Facts, file paths, function signatures, data flows. 
No opinions.]\n\n[Continue for all questions]\n```\n\n## Phase 3: Write the Research Document\n\nTake the subagent\'s response and write it to `docs/research/YYYY-MM-DD-feature-name.md`. Derive the feature name from the brief filename or the user\'s description (lowercase, hyphenated).\n\nDelete the temporary questions file (`docs/research/.questions-tmp.md`).\n\nPresent the research document path to the user:\n\n```\nResearch complete: docs/research/YYYY-MM-DD-feature-name.md\n\nThis document contains objective facts about your codebase \u2014 no opinions or recommendations.\n\nRecommended next step:\n- /joycraft-design \u2014 translate research findings into architectural decisions before building\n\nIf the scope is simple (< 5 files, well-understood area, no architectural decisions):\n- /joycraft-decompose \u2014 skip design and break directly into atomic specs\n\nOther options:\n- /joycraft-new-feature \u2014 formalize into a full Feature Brief first\n- Read the research and add any corrections or missing context manually\n```\n\n## Edge Cases\n\n| Scenario | Behavior |\n|----------|----------|\n| No brief provided | Accept inline description, generate questions from that |\n| Codebase is empty or new | Research doc reports "no existing patterns found" per question |\n| User runs research twice for same feature | Overwrites previous research doc (same filename) |\n| Brief is very short (1-2 sentences) | Still generate questions \u2014 even simple features benefit from understanding existing patterns |\n| `docs/research/` doesn\'t exist | Create it |\n',
- "joycraft-session-end.md": '---\nname: joycraft-session-end\ndescription: Wrap up a session \u2014 capture discoveries, verify, prepare for PR or next session\ninstructions: 22\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises \u2014 things that weren\'t in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at `docs/discoveries/YYYY-MM-DD-topic.md`. Create the `docs/discoveries/` directory if it doesn\'t exist.\n\nOnly capture what\'s NOT obvious from the code or git diff:\n- "We thought X but found Y" \u2014 assumptions that were wrong\n- "This API/library behaves differently than documented" \u2014 external gotchas\n- "This edge case needs handling in a future spec" \u2014 deferred work with context\n- "The approach in the spec didn\'t work because..." \u2014 spec-vs-reality gaps\n- Key decisions made during implementation that aren\'t in the spec\n\n**Do NOT capture:**\n- Files changed (that\'s the diff)\n- What you set out to do (that\'s the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n```markdown\n# Discoveries \u2014 [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n```\n\nIf nothing surprising happened, skip the discovery file entirely. No discovery is a good sign \u2014 the spec was accurate.\n\n## 1b. Update Context Documents\n\nIf `docs/context/` exists, quickly check whether this session revealed anything about:\n\n- **Production risks** \u2014 did you interact with or learn about production vs staging systems? \u2192 Update `docs/context/production-map.md`\n- **Wrong assumptions** \u2014 did the agent (or you) assume something that turned out to be false? \u2192 Update `docs/context/dangerous-assumptions.md`\n- **Key decisions** \u2014 did you make an architectural or tooling choice? \u2192 Add a row to `docs/context/decision-log.md`\n- **Unwritten rules** \u2014 did you discover a convention or constraint not documented anywhere? \u2192 Update `docs/context/institutional-knowledge.md`\n\nSkip this if nothing applies. Don\'t force it \u2014 only update when there\'s genuine new context.\n\n## 2. Run Validation\n\nRun the project\'s validation commands. Check CLAUDE.md for project-specific commands. Common checks:\n\n- Type-check (e.g., `tsc --noEmit`, `mypy`, `cargo check`)\n- Tests (e.g., `npm test`, `pytest`, `cargo test`)\n- Lint (e.g., `eslint`, `ruff`, `clippy`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in `docs/specs/` (scan recursively \u2014 specs may be in subdirectories like `docs/specs/<feature-name>/`):\n- All acceptance criteria met \u2014 update status to `Complete`\n- Partially done \u2014 update status to `In Progress`, note what\'s left\n\nIf working from a Feature Brief in `docs/briefs/`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. 
Push and PR (if autonomous git is enabled)\n\n**Check CLAUDE.md for "Git Autonomy" in the Behavioral Boundaries section.** If it says "STRICTLY ENFORCED" or the ALWAYS section includes "Push to feature branches immediately after every commit":\n\n1. **Push immediately.** Run `git push origin <branch>` \u2014 do not ask, do not hesitate.\n2. **Open a PR if the feature is complete.** Check the parent Feature Brief\'s decomposition table \u2014 if all specs are done, run `gh pr create` with a summary of all completed specs. Do not ask first.\n3. **If not all specs are done,** still push. The PR comes when the last spec is complete.\n\nIf CLAUDE.md does NOT have autonomous git rules (or has "ASK FIRST" for pushing), ask the user before pushing.\n\n## 6. Report\n\n```\nSession complete.\n- Spec: [spec name] \u2014 [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Pushed: [yes / no \u2014 and why not]\n- PR: [opened #N / not yet \u2014 N specs remaining]\n- Next: [what the next session should tackle]\n```\n\n**Tip:** Run `/clear` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n',
+ "joycraft-research.md": '---\nname: joycraft-research\ndescription: Produce objective codebase research by isolating question generation from fact-gathering \u2014 subagent sees only questions, never the brief\n---\n\n# Research Codebase for a Feature\n\nYou are producing objective codebase research to inform a future spec or implementation. The key insight: the researching agent must never see the brief or ticket \u2014 only research questions. This prevents opinions from contaminating the facts.\n\n**Guard clause:** If the user doesn\'t provide a brief path or inline description, ask:\n"What feature or change are you researching? Provide a brief path (e.g., `docs/features/2026-03-30-my-feature/brief.md`) or describe it in a few sentences."\n\n## Scanning Prior Research (Status Filter)\n\nBefore generating fresh questions, scan `docs/features/*/research.md` for prior research on similar topics. Read the YAML frontmatter at the top of each file:\n\n- Treat each file as `status: active` unless its frontmatter explicitly says otherwise.\n- **Skip / ignore** any file whose `status:` is `shipped`, `deprecated`, or `superseded` \u2014 they are no longer load-bearing.\n- Also ignore anything under `docs/archive/` entirely \u2014 archived research is out-of-scope.\n\nFiles without frontmatter at all are treated as `status: active` (legacy artifacts).\n\n---\n\n## Phase 1: Generate Research Questions\n\nRead the brief file (if a path was provided) or use the user\'s inline description.\n\nIdentify which zones of the codebase are relevant to this feature. Then generate 5-10 research questions that are:\n\n- **Objective and fact-seeking** \u2014 "How does X work?" not "How should we build X?"\n- **Specific to the codebase** \u2014 reference concrete systems, files, or flows\n- **Answerable by reading code** \u2014 no questions about business strategy or user preferences\n\nGood examples:\n- "How does endpoint registration work in the current router?"\n- "What patterns exist for input validation across existing handlers?"\n- "Trace the data flow from API request to database write for entity X."\n- "What test infrastructure exists? Where are fixtures, mocks, and helpers?"\n- "What dependencies does module Y import, and what does its public API look like?"\n\nBad examples (do NOT generate these):\n- "What\'s the best way to implement this feature?" (opinion)\n- "Should we use library X or Y?" (recommendation)\n- "What would a good architecture look like?" (design, not research)\n\nDerive a slug `YYYY-MM-DD-<feature-name>`. Lazy-create the folder `docs/features/<slug>/`.\nWrite the questions to a temporary file at `docs/features/<slug>/.questions-tmp.md`.\n\n**Do NOT include any content from the brief in this file \u2014 only the questions.**\n\n---\n\n## Phase 2: Spawn Research Subagent\n\nUse Claude Code\'s Agent tool to spawn a subagent. Pass ONLY the research questions \u2014 never the brief path, brief content, or feature description.\n\nBuild the subagent prompt by reading the questions file you just wrote, then use this template:\n\n```\nYou are researching a codebase to answer specific questions. 
You have NO context about why these questions are being asked \u2014 you are simply gathering facts.\n\nRULES \u2014 these are hard constraints:\n- Answer each question with FACTS ONLY: file paths, function signatures, data flows, patterns, dependencies\n- Do NOT recommend, suggest, or opine on anything\n- Do NOT speculate about what should be built or how\n- If a question cannot be answered (no relevant code exists), say "No existing code found for this"\n- Use the Read tool and Grep tool to explore the codebase thoroughly\n- Include code snippets only when they are essential evidence (e.g., a function signature, a config block)\n\nQUESTIONS:\n[INSERT_QUESTIONS_HERE]\n\nOUTPUT FORMAT \u2014 write your findings as a single markdown document using this structure:\n\n# Codebase Research\n\n**Date:** [today\'s date]\n**Questions answered:** [N/total]\n\n---\n\n## Q1: [question text]\n\n[Facts, file paths, function signatures, data flows. No opinions.]\n\n## Q2: [question text]\n\n[Facts, file paths, function signatures, data flows. No opinions.]\n\n[Continue for all questions]\n```\n\n## Phase 3: Write the Research Document\n\nTake the subagent\'s response and write it to `docs/features/<slug>/research.md`. The file MUST start with YAML frontmatter \u2014 the 4-field personal schema:\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug>\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist.\n\nDelete the temporary questions file (`docs/features/<slug>/.questions-tmp.md`).\n\nEnd with the canonical Handoff block.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-design docs/features/<slug>/research.md\n```\nRun /clear first.\n\nIf the scope is simple (< 5 files, well-understood area, no architectural decisions), instead hand off to `/joycraft-decompose docs/features/<slug>/brief.md` to skip design and break directly into atomic specs.\n\n## Edge Cases\n\n| Scenario | Behavior |\n|----------|----------|\n| No brief provided | Accept inline description, generate questions from that |\n| Codebase is empty or new | Research doc reports "no existing patterns found" per question |\n| User runs research twice for same feature | Overwrites previous research doc (same filename) |\n| Brief is very short (1-2 sentences) | Still generate questions \u2014 even simple features benefit from understanding existing patterns |\n| `docs/features/<slug>/` doesn\'t exist | Lazy-create it |\n',
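The status filter in the new research skill reduces to a small frontmatter check: read the YAML block at the top of each `docs/features/*/research.md`, default to `active` when the field (or the whole block) is missing, and drop `shipped`/`deprecated`/`superseded` plus anything under `docs/archive/`. A rough sketch under those rules (the regex-based frontmatter parsing is an assumption, not something the skill prescribes):

```ts
import { readFileSync } from "node:fs";

const IGNORED = new Set(["shipped", "deprecated", "superseded"]);

// Returns true when a prior research doc should still be treated as load-bearing.
function isActiveResearch(path: string): boolean {
  if (path.includes("docs/archive/")) return false; // archived research is out of scope
  const text = readFileSync(path, "utf8");
  const fm = text.match(/^---\n([\s\S]*?)\n---/); // naive frontmatter grab
  if (!fm) return true; // no frontmatter at all => legacy artifact, treat as active
  const status = fm[1].match(/^status:\s*(\S+)/m)?.[1] ?? "active";
  return !IGNORED.has(status);
}

console.log(isActiveResearch("docs/features/2026-03-30-my-feature/research.md"));
```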
+ "joycraft-session-end.md": "---\nname: joycraft-session-end\ndescription: Wrap up a session \u2014 capture discoveries, verify, prepare for PR or next session\ninstructions: 22\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises \u2014 things that weren't in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at `docs/discoveries/YYYY-MM-DD-topic.md`. Lazy-create the `docs/discoveries/` directory if it doesn't exist.\n\n(Discoveries stay flat at `docs/discoveries/` rather than per-feature, since they often span features and are read serendipitously rather than via a known path.)\n\nThe discovery file MUST start with YAML frontmatter \u2014 the 4-field personal schema:\n\n```yaml\n---\nstatus: active\nowner: <resolved name>\ncreated: YYYY-MM-DD\nfeature: <slug-of-related-feature> # omit if not feature-tied\n---\n```\n\n**Owner resolution:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist.\n\nOnly capture what's NOT obvious from the code or git diff:\n- \"We thought X but found Y\" \u2014 assumptions that were wrong\n- \"This API/library behaves differently than documented\" \u2014 external gotchas\n- \"This edge case needs handling in a future spec\" \u2014 deferred work with context\n- \"The approach in the spec didn't work because...\" \u2014 spec-vs-reality gaps\n- Key decisions made during implementation that aren't in the spec\n\n**Do NOT capture:**\n- Files changed (that's the diff)\n- What you set out to do (that's the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n```markdown\n# Discoveries \u2014 [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n```\n\nIf nothing surprising happened, skip the discovery file entirely. No discovery is a good sign \u2014 the spec was accurate.\n\n## 1b. Update Context Documents\n\nIf `docs/context/` exists, quickly check whether this session revealed anything about:\n\n- **Production risks** \u2014 did you interact with or learn about production vs staging systems? \u2192 Update `docs/context/production-map.md`\n- **Wrong assumptions** \u2014 did the agent (or you) assume something that turned out to be false? \u2192 Update `docs/context/dangerous-assumptions.md`\n- **Key decisions** \u2014 did you make an architectural or tooling choice? \u2192 Add a row to `docs/context/decision-log.md`\n- **Unwritten rules** \u2014 did you discover a convention or constraint not documented anywhere? \u2192 Update `docs/context/institutional-knowledge.md`\n\nWhen you UPDATE a context doc, also bump (or add) its YAML frontmatter \u2014 the 2-field shared schema:\n\n```yaml\n---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <resolved name>\n---\n```\n\nIf the file already has the frontmatter, update the `last_updated` and `last_updated_by` fields in place. If it doesn't, prepend a fresh block. Context docs are *shared* artifacts (no single owner) \u2014 the shared schema reflects that.\n\nSkip this if nothing applies. 
Don't force it \u2014 only update when there's genuine new context.\n\n## 2. Run Validation\n\nRun the project's validation commands. Check CLAUDE.md for project-specific commands. Common checks:\n\n- Type-check (e.g., `tsc --noEmit`, `mypy`, `cargo check`)\n- Tests (e.g., `npm test`, `pytest`, `cargo test`)\n- Lint (e.g., `eslint`, `ruff`, `clippy`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in `docs/features/<slug>/specs/` (or the legacy `docs/specs/<area>/` for bugfixes \u2014 scan recursively):\n- All acceptance criteria met \u2014 update the spec's frontmatter `status:` to reflect completion (e.g., `shipped`) and the body's Status field to `Complete`\n- Partially done \u2014 leave `status: active` and update the body's Status field to `In Progress`, note what's left\n\nIf working from a Feature Brief at `docs/features/<slug>/brief.md`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. Push and PR (if autonomous git is enabled)\n\n**Check CLAUDE.md for \"Git Autonomy\" in the Behavioral Boundaries section.** If it says \"STRICTLY ENFORCED\" or the ALWAYS section includes \"Push to feature branches immediately after every commit\":\n\n1. **Push immediately.** Run `git push origin <branch>` \u2014 do not ask, do not hesitate.\n2. **Open a PR if the feature is complete.** Check the parent Feature Brief's decomposition table \u2014 if all specs are done, run `gh pr create` with a summary of all completed specs. Do not ask first.\n3. **If not all specs are done,** still push. The PR comes when the last spec is complete.\n\nIf CLAUDE.md does NOT have autonomous git rules (or has \"ASK FIRST\" for pushing), ask the user before pushing.\n\n## 6. Report and Hand Off\n\n```\nSession complete.\n- Spec: [spec name] \u2014 [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Pushed: [yes / no \u2014 and why not]\n- PR: [opened #N / not yet \u2014 N specs remaining]\n- Next: [what the next session should tackle]\n```\n\nEnd with the canonical Handoff block. Include any discovery and updated-context paths produced.\n\n## Recommended Next Steps\n\nNext:\n```bash\n/joycraft-implement docs/features/<slug>/specs/<next-spec>.md\n```\nRun /clear first.\n\nIf all specs in the feature are complete, hand off to a feature-level wrap-up instead (PR review, etc.) \u2014 the Handoff block is just the slash command for whatever the next move is.\n",
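The owner-resolution order this skill repeats (git config, then a remembered value, then ask once) is easy to express as a fallback chain. A sketch, assuming a plain-text memory file at a hypothetical `.claude/memory/joycraft-owner.txt` path; the skill only says "your auto-memory", so the location here is illustrative:

```ts
import { execSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

// Hypothetical location for the remembered owner; the skill leaves this to auto-memory.
const OWNER_MEMORY = ".claude/memory/joycraft-owner.txt";

function resolveOwner(askUser: () => string): string {
  // 1. git config user.name
  try {
    const name = execSync("git config user.name", { encoding: "utf8" }).trim();
    if (name) return name;
  } catch {
    /* git not configured -- fall through */
  }
  // 2. previously persisted value
  if (existsSync(OWNER_MEMORY)) {
    const remembered = readFileSync(OWNER_MEMORY, "utf8").trim();
    if (remembered) return remembered;
  }
  // 3. ask once (persisting the answer is left to the caller in this sketch)
  return askUser();
}
```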
  "joycraft-tune.md": "---\nname: joycraft-tune\ndescription: Assess and upgrade your project's AI development harness \u2014 score 7 dimensions, apply fixes, show path to Level 5\ninstructions: 15\n---\n\n# Tune \u2014 Project Harness Assessment & Upgrade\n\nYou are evaluating and upgrading this project's AI development harness.\n\n## Step 1: Detect Harness State\n\nCheck for: CLAUDE.md (with meaningful content), `docs/specs/`, `docs/briefs/`, `docs/discoveries/`, `.claude/skills/`, and test configuration.\n\n## Step 2: Route\n\n- **No harness** (no CLAUDE.md or just a README): Recommend `npx joycraft init` and stop.\n- **Harness exists**: Continue to assessment.\n\n## Step 3: Assess \u2014 Score 7 Dimensions (1-5 scale)\n\nRead CLAUDE.md and explore the project. Score each with specific evidence:\n\n| Dimension | What to Check |\n|-----------|--------------|\n| Spec Quality | `docs/specs/` (scan recursively) \u2014 structured? acceptance criteria? self-contained? |\n| Spec Granularity | Can each spec be done in one session? |\n| Behavioral Boundaries | ALWAYS/ASK FIRST/NEVER sections (or equivalent rules under any heading) |\n| Skills & Hooks | `.claude/skills/` files, hooks config |\n| Documentation | `docs/` structure, templates, referenced from CLAUDE.md |\n| Knowledge Capture | `docs/discoveries/`, `docs/context/*.md` \u2014 existence AND real content |\n| Testing & Validation | Test framework, CI pipeline, validation commands in CLAUDE.md |\n\nScore 1 = absent, 3 = partially there, 5 = comprehensive. Give credit for substance over format.\n\n## Step 4: Write Assessment\n\nWrite to `docs/joycraft-assessment.md` AND display it. Include: scores table, detailed findings (evidence + gap + recommendation per dimension), and an upgrade plan (up to 5 actions ordered by impact).\n\n## Step 5: Apply Upgrades\n\nApply using three tiers \u2014 do NOT ask per-item permission:\n\n**Tier 1 (silent):** Create missing dirs, install missing skills, copy missing templates, create AGENTS.md.\n\n**Before Tier 2, ask TWO things:**\n\n1. **Git autonomy:** Cautious (ask before push/PR) or Autonomous (push + PR without asking)?\n2. **Risk interview (3-5 questions, one at a time):** What could break? What services connect to prod? Unwritten rules? Off-limits files/commands? Skip if `docs/context/` already has content.\n\nFrom answers, generate: CLAUDE.md boundary rules, `.claude/settings.json` deny patterns, `docs/context/` documents. Also recommend a permission mode (`auto` for most; `dontAsk` + allowlist for high-risk).\n\n**Tier 2 (show diff):** Add missing CLAUDE.md sections (Boundaries, Workflow, Key Files). Draft from real codebase content. Append only \u2014 never reformat existing content.\n\n**Tier 3 (confirm first):** Rewriting existing sections, overwriting customized files, suggesting test framework installs.\n\nAfter applying, append to `docs/joycraft-history.md` and show a consolidated upgrade results table.\n\n## Step 6: Show Path to Level 5\n\nShow a tailored roadmap: Level 2-5 table, specific next steps based on actual gaps, and the Level 5 north star (spec queue, autofix, holdout scenarios, self-improving harness).\n\n**Tip:** Run `/joycraft-optimize` to audit your session's token overhead \u2014 plugins, MCP servers, and harness file sizes.\n\n## Edge Cases\n\n- **CLAUDE.md is just a README:** Treat as no harness.\n- **Non-Joycraft skills:** Acknowledge, don't replace.\n- **Rules under non-standard headings:** Give credit for substance.\n- **Previous assessment exists:** Read it first. 
If nothing to upgrade, say so.\n- **Non-Joycraft content in CLAUDE.md:** Preserve as-is. Only append.\n",
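Step 1's harness detection amounts to probing a handful of paths and treating a CLAUDE.md with no meaningful content like no harness at all. A sketch of that probe, where the "meaningful content" heuristic (more than a bare title line) is my assumption rather than the skill's:

```ts
import { existsSync, readFileSync } from "node:fs";

const HARNESS_PATHS = ["docs/specs", "docs/briefs", "docs/discoveries", ".claude/skills"];

function detectHarness(): "none" | "present" {
  const hasClaudeMd =
    existsSync("CLAUDE.md") &&
    // Heuristic (assumption): more than a bare title counts as "meaningful content".
    readFileSync("CLAUDE.md", "utf8").trim().split("\n").length > 1;
  const hasStructure = HARNESS_PATHS.some((p) => existsSync(p));
  return hasClaudeMd || hasStructure ? "present" : "none";
}

console.log(detectHarness()); // "none" => recommend `npx joycraft init` and stop
```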
  "joycraft-verify.md": '---\nname: joycraft-verify\ndescription: Spawn an independent verifier subagent to check an implementation against its spec -- read-only, no code edits, structured pass/fail verdict\ninstructions: 30\n---\n\n# Verify Implementation Against Spec\n\nThe user wants independent verification of an implementation. Your job is to find the relevant spec, extract its acceptance criteria and test plan, then spawn a separate verifier subagent that checks each criterion and produces a structured verdict.\n\n**Why a separate subagent?** Anthropic\'s research found that agents reliably skew positive when grading their own work. Separating the agent doing the work from the agent judging it consistently outperforms self-evaluation. The verifier gets a clean context window with no implementation bias.\n\n## Step 1: Find the Spec\n\nIf the user provided a spec path (e.g., `/joycraft-verify docs/specs/my-feature/add-widget.md`), use that path directly.\n\nIf no path was provided, scan `docs/specs/` recursively for spec files (they may be in subdirectories like `docs/specs/<feature-name>/`). Pick the most recently modified `.md` file. If `docs/specs/` doesn\'t exist or is empty, tell the user:\n\n> No specs found in `docs/specs/`. Please provide a spec path: `/joycraft-verify path/to/spec.md`\n\n## Step 2: Read and Parse the Spec\n\nRead the spec file and extract:\n\n1. **Spec name** -- from the H1 title\n2. **Acceptance Criteria** -- the checklist under the `## Acceptance Criteria` section\n3. **Test Plan** -- the table under the `## Test Plan` section, including any test commands\n4. **Constraints** -- the `## Constraints` section if present\n\nIf the spec has no Acceptance Criteria section, tell the user:\n\n> This spec doesn\'t have an Acceptance Criteria section. Verification needs criteria to check against. Add acceptance criteria to the spec and try again.\n\nIf the spec has no Test Plan section, note this but proceed -- the verifier can still check criteria by reading code and running any available project tests.\n\n## Step 3: Identify Test Commands\n\nLook for test commands in these locations (in priority order):\n\n1. The spec\'s Test Plan section (look for commands in backticks or "Type" column entries like "unit", "integration", "e2e", "build")\n2. The project\'s CLAUDE.md (look for test/build commands in the Development Workflow section)\n3. Common defaults based on the project type:\n - Node.js: `npm test` or `pnpm test --run`\n - Python: `pytest`\n - Rust: `cargo test`\n - Go: `go test ./...`\n\nBuild a list of specific commands the verifier should run.\n\n## Step 4: Spawn the Verifier Subagent\n\nUse Claude Code\'s Agent tool to spawn a subagent with the following prompt. Replace the placeholders with the actual content extracted in Steps 2-3.\n\n```\nYou are a QA verifier. Your job is to independently verify an implementation against its spec. 
You have NO context about how the implementation was done -- you are checking it fresh.\n\nRULES -- these are hard constraints, not suggestions:\n- You may READ any file using the Read tool or cat\n- You may RUN these specific test/build commands: [TEST_COMMANDS]\n- You may NOT edit, create, or delete any files\n- You may NOT run commands that modify state (no git commit, no npm install, no file writes)\n- You may NOT install packages or access the network\n- Report what you OBSERVE, not what you expect or hope\n\nSPEC NAME: [SPEC_NAME]\n\nACCEPTANCE CRITERIA:\n[ACCEPTANCE_CRITERIA]\n\nTEST PLAN:\n[TEST_PLAN]\n\nCONSTRAINTS:\n[CONSTRAINTS_OR_NONE]\n\nYOUR TASK:\nFor each acceptance criterion, determine if it PASSES or FAILS based on evidence:\n\n1. Run the test commands listed above. Record the output.\n2. For each acceptance criterion:\n a. Check if there is a corresponding test and whether it passes\n b. If no test exists, read the relevant source files to verify the criterion is met\n c. If the criterion cannot be verified by reading code or running tests, mark it MANUAL CHECK NEEDED\n3. For criteria about build/test passing, actually run the commands and report results.\n\nOUTPUT FORMAT -- you MUST use this exact format:\n\nVERIFICATION REPORT\n\n| # | Criterion | Verdict | Evidence |\n|---|-----------|---------|----------|\n| 1 | [criterion text] | PASS/FAIL/MANUAL CHECK NEEDED | [what you observed] |\n| 2 | [criterion text] | PASS/FAIL/MANUAL CHECK NEEDED | [what you observed] |\n[continue for all criteria]\n\nSUMMARY: X/Y criteria passed. [Z failures need attention. / All criteria verified.]\n\nIf any test commands fail to run (missing dependencies, wrong command, etc.), report the error as evidence for a FAIL verdict on the relevant criterion.\n```\n\n## Step 5: Format and Present the Verdict\n\nTake the subagent\'s response and present it to the user in this format:\n\n```\n## Verification Report -- [Spec Name]\n\n| # | Criterion | Verdict | Evidence |\n|---|-----------|---------|----------|\n| 1 | ... | PASS | ... |\n| 2 | ... | FAIL | ... |\n\n**Overall: X/Y criteria passed.**\n\n[If all passed:]\nAll criteria verified. Ready to commit and open a PR.\n\n[If any failed:]\nN failures need attention. Review the evidence above and fix before proceeding.\n\n[If any MANUAL CHECK NEEDED:]\nN criteria need manual verification -- they can\'t be checked by reading code or running tests alone.\n```\n\n## Step 6: Suggest Next Steps\n\nBased on the verdict:\n\n- **All PASS:** Suggest committing and opening a PR, or running `/joycraft-session-end` to capture discoveries.\n- **Some FAIL:** List the failed criteria and suggest the user fix them, then run `/joycraft-verify` again.\n- **MANUAL CHECK NEEDED items:** Explain what needs human eyes and why automation couldn\'t verify it.\n\n**Do NOT offer to fix failures yourself.** The verifier reports; the human (or implementation agent in a separate turn) decides what to do. 
This separation is the whole point.\n\n## Edge Cases\n\n| Scenario | Behavior |\n|----------|----------|\n| Spec has no Test Plan | Warn that verification is weaker without a test plan, but proceed by checking criteria through code reading and any available project-level tests |\n| All tests pass but a criterion is not testable | Mark as MANUAL CHECK NEEDED with explanation |\n| Subagent can\'t run tests (missing deps) | Report the error as FAIL evidence |\n| No specs found and no path given | Tell user to provide a spec path or create a spec first |\n| Spec status is "Complete" | Still run verification -- "Complete" means the implementer thinks it\'s done, verification confirms |\n'
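Step 2's parsing boils down to pulling the checklist out of the `## Acceptance Criteria` section before anything else happens. A sketch of that extraction (the section-splitting regex is illustrative, not something the skill mandates):

```ts
import { readFileSync } from "node:fs";

// Returns the checklist items under "## Acceptance Criteria", or [] if the section is missing.
function extractAcceptanceCriteria(specPath: string): string[] {
  const spec = readFileSync(specPath, "utf8");
  const section = spec.split(/^## /m).find((s) => s.startsWith("Acceptance Criteria"));
  if (!section) return []; // caller should tell the user to add criteria and stop
  return section
    .split("\n")
    .filter((line) => line.trim().startsWith("- [ ]") || line.trim().startsWith("- [x]"))
    .map((line) => line.replace(/- \[[ x]\]\s*/, "").trim());
}
```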
  };
  var TEMPLATES = {
+ "CONTRIBUTING-joycraft-template.md": '---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <owner>\n---\n\n# Joycraft on this project\n\nWe use [Joycraft](https://www.npmjs.com/package/joycraft) for AI-assisted development. This doc is the team-specific bit; Joycraft itself is documented in its package README.\n\n## How our team uses it\n\n(Filled in during `/joycraft-collaborative-setup`. Examples: which skills are part of our normal flow, when we use `/joycraft-research` vs. skipping it, who reviews what.)\n\n## Conventions\n\n- Per-feature work goes under `docs/features/<slug>/{brief.md, research.md, design.md, specs/}`\n- Area-level work and ownership: see `docs/areas/`\n- Bug-fix specs stay under `docs/specs/<area-name>/`\n- Deferred work goes to `docs/backlog/`\n- For "what is Joycraft?", see the package README\n\n## Onboarding\n\nWhen a new dev joins:\n1. Run `npx joycraft init` (idempotent on already-set-up projects)\n2. Read `docs/areas/<your-area>/README.md` for context\n3. Read this file for team conventions\n4. Skim a few recent `docs/features/*/brief.md` files to see how we frame work\n\n## Skills we lean on\n\n| Skill | When |\n|-------|------|\n| `/joycraft-new-feature` | Starting any non-trivial feature |\n| `/joycraft-decompose` | Once the brief is approved |\n| `/joycraft-implement` | Executing an atomic spec with TDD |\n| `/joycraft-bugfix` | Bugs that need diagnosis-then-fix discipline |\n| `/joycraft-session-end` | Wrapping up any session |\n\n(Add or remove rows to match how your team actually works.)\n',
  "context/dangerous-assumptions.md": "# Dangerous Assumptions\n\n> Things the AI agent might assume that are wrong in this project.\n> Generated by Joycraft risk interview. Update when you discover new gotchas.\n\n## Assumptions\n\n| Agent Might Assume | But Actually | Impact If Wrong |\n|-------------------|-------------|----------------|\n| _Example: All databases are dev/test_ | _The default connection is production_ | _Data loss_ |\n| _Example: Deleting and recreating is safe_ | _Some resources have manual config not in code_ | _Hours of manual recovery_ |\n\n## Historical Incidents\n\n| Date | What Happened | Lesson | Rule Added |\n|------|-------------|--------|------------|\n| _Example: 2026-03-15_ | _Agent deleted staging infra thinking it was temp_ | _Always verify environment before destructive ops_ | _NEVER: Delete cloud resources without listing them first_ |\n",
  "context/decision-log.md": `# Decision Log
 
@@ -249,17 +251,18 @@ describe("CLI: init command (example \u2014 replace with your real scenarios)",
  };
  var CODEX_SKILLS = {
  "joycraft-add-fact.md": '---\nname: joycraft-add-fact\ndescription: Capture a project fact and route it to the correct context document -- production map, dangerous assumptions, decision log, institutional knowledge, or troubleshooting\n---\n\n# Add Fact\n\nThe user has a fact to capture. Your job is to classify it, route it to the correct context document, append it in the right format, and optionally add a boundary rule to CLAUDE.md or AGENTS.md.\n\n## Step 1: Get the Fact\n\nIf the user already provided the fact (e.g., `$joycraft-add-fact the staging DB resets every Sunday`), use it directly.\n\nIf not, ask: "What fact do you want to capture?" -- then wait for their response.\n\nIf the user provides multiple facts at once, process each one separately through all the steps below, then give a combined confirmation at the end.\n\n## Step 2: Classify the Fact\n\nRoute the fact to one of these 5 context documents based on its content:\n\n### `docs/context/production-map.md`\nThe fact is about **infrastructure, services, environments, URLs, endpoints, credentials, or what is safe/unsafe to touch**.\n- Signal words: "production", "staging", "endpoint", "URL", "database", "service", "deployed", "hosted", "credentials", "secret", "environment"\n- Examples: "The staging DB is at postgres://staging.example.com", "We use Vercel for the frontend and Railway for the API"\n\n### `docs/context/dangerous-assumptions.md`\nThe fact is about **something an AI agent might get wrong -- a false assumption that leads to bad outcomes**.\n- Signal words: "assumes", "might think", "but actually", "looks like X but is Y", "not what it seems", "trap", "gotcha"\n- Examples: "The `users` table looks like a test table but it\'s production", "Deleting a workspace doesn\'t delete the billing subscription"\n\n### `docs/context/decision-log.md`\nThe fact is about **an architectural or tooling choice and why it was made**.\n- Signal words: "decided", "chose", "because", "instead of", "we went with", "the reason we use", "trade-off"\n- Examples: "We chose SQLite over Postgres because this runs on embedded devices", "We use pnpm instead of npm for workspace support"\n\n### `docs/context/institutional-knowledge.md`\nThe fact is about **team conventions, unwritten rules, organizational context, or who owns what**.\n- Signal words: "convention", "rule", "always", "never", "team", "process", "review", "approval", "owns", "responsible"\n- Examples: "The design team reviews all color changes", "We never deploy on Fridays", "PR titles must start with the ticket number"\n\n### `docs/context/troubleshooting.md`\nThe fact is about **diagnostic knowledge -- when X happens, do Y (or don\'t do Z)**.\n- Signal words: "when", "fails", "error", "if you see", "stuck", "broken", "fix", "workaround", "before trying", "reboot", "restart", "reset"\n- Examples: "If Wi-Fi disconnects during flash, wait and retry -- don\'t switch networks", "When tests fail with ECONNREFUSED, check if Docker is running"\n\n### Ambiguous Facts\n\nIf the fact fits multiple categories, pick the **best fit** based on the primary intent. You will mention the alternative in your confirmation message so the user can correct you.\n\n## Step 3: Ensure the Target Document Exists\n\n1. If `docs/context/` does not exist, create the directory.\n2. If the target document does not exist, create it from the template structure. Check `docs/templates/` for the matching template. 
If no template exists, use this minimal structure:\n\nFor **production-map.md**:\n```markdown\n# Production Map\n\n> What\'s real, what\'s staging, what\'s safe to touch.\n\n## Services\n\n| Service | Environment | URL/Endpoint | Impact if Corrupted |\n|---------|-------------|-------------|-------------------|\n```\n\nFor **dangerous-assumptions.md**:\n```markdown\n# Dangerous Assumptions\n\n> Things the AI agent might assume that are wrong in this project.\n\n## Assumptions\n\n| Agent Might Assume | But Actually | Impact If Wrong |\n|-------------------|-------------|----------------|\n```\n\nFor **decision-log.md**:\n```markdown\n# Decision Log\n\n> Why choices were made, not just what was chosen.\n\n## Decisions\n\n| Date | Decision | Why | Alternatives Rejected | Revisit When |\n|------|----------|-----|----------------------|-------------|\n```\n\nFor **institutional-knowledge.md**:\n```markdown\n# Institutional Knowledge\n\n> Unwritten rules, team conventions, and organizational context.\n\n## Team Conventions\n\n- (none yet)\n```\n\nFor **troubleshooting.md**:\n```markdown\n# Troubleshooting\n\n> What to do when things go wrong for non-code reasons.\n\n## Common Failures\n\n| When This Happens | Do This | Don\'t Do This |\n|-------------------|---------|---------------|\n```\n\n## Step 4: Read the Target Document\n\nRead the target document to understand its current structure. Note:\n- Which section to append to\n- Whether it uses tables or lists\n- The column format if it\'s a table\n\n## Step 5: Append the Fact\n\nAdd the fact to the appropriate section of the target document. Match the existing format exactly:\n\n- **Table-based documents** (production-map, dangerous-assumptions, decision-log, troubleshooting): Add a new table row in the correct columns. Use today\'s date where a date column exists.\n- **List-based documents** (institutional-knowledge): Add a new list item (`- `) to the most appropriate section.\n\nRemove any italic example rows (rows where all cells start with `_`) before appending, so the document transitions from template to real content. Only remove examples from the specific table you are appending to.\n\n**Append only. Never modify or remove existing real content.**\n\n## Step 6: Evaluate Boundary Rule\n\nDecide whether the fact also warrants a rule in the project\'s boundary configuration (CLAUDE.md and/or AGENTS.md -- check which files the project uses and update accordingly):\n\n**Add a boundary rule if the fact:**\n- Describes something that should ALWAYS or NEVER be done\n- Could cause real damage if violated (data loss, broken deployments, security issues)\n- Is a hard constraint that applies across all work, not just a one-time note\n\n**Do NOT add a boundary rule if the fact is:**\n- Purely informational (e.g., "staging DB is at this URL")\n- A one-time decision that\'s already captured\n- A diagnostic tip rather than a prohibition\n\nIf a rule is warranted, read the project\'s boundary file(s) -- CLAUDE.md and/or AGENTS.md -- find the appropriate section (ALWAYS, ASK FIRST, or NEVER under Behavioral Boundaries), and append the rule. If no Behavioral Boundaries section exists, append one. 
Update whichever boundary files the project uses (some projects have CLAUDE.md, some have AGENTS.md, some have both).\n\n## Step 7: Confirm\n\nReport what you did in this format:\n\n```\nAdded to [document name]:\n [summary of what was added]\n\n[If boundary file(s) were also updated:]\nAdded boundary rule to [CLAUDE.md / AGENTS.md / both]:\n [ALWAYS/ASK FIRST/NEVER]: [rule text]\n\n[If the fact was ambiguous:]\nRouted to [chosen doc] -- move to [alternative doc] if this is more about [alternative category description].\n```\n',
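The routing in Step 2 is driven by the signal words listed per document, with ambiguity surfaced in the confirmation message. A toy sketch of one way to score that routing, using an abbreviated subset of the signal words above; the skill itself calls for judgment about "best fit", so keyword counting is only an approximation:

```ts
// Abbreviated signal-word lists taken from the skill text (not exhaustive).
const SIGNALS: Record<string, string[]> = {
  "docs/context/production-map.md": ["production", "staging", "endpoint", "database", "deployed"],
  "docs/context/dangerous-assumptions.md": ["assumes", "but actually", "trap", "gotcha"],
  "docs/context/decision-log.md": ["decided", "chose", "because", "instead of", "trade-off"],
  "docs/context/institutional-knowledge.md": ["convention", "always", "never", "team", "owns"],
  "docs/context/troubleshooting.md": ["fails", "error", "workaround", "restart", "if you see"],
};

// Picks the document with the most signal-word hits; a tie is the "ambiguous" case.
function routeFact(fact: string): { target: string; ambiguous: boolean } {
  const lower = fact.toLowerCase();
  const scored = Object.entries(SIGNALS)
    .map(([doc, words]) => [doc, words.filter((w) => lower.includes(w)).length] as const)
    .sort((a, b) => b[1] - a[1]);
  return { target: scored[0][0], ambiguous: scored[0][1] === scored[1][1] };
}

console.log(routeFact("We never deploy on Fridays"));
```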
- "joycraft-bugfix.md": "---\nname: joycraft-bugfix\ndescription: Structured bug fix workflow \u2014 triage, diagnose, discuss with user, write a focused spec, hand off for implementation\n---\n\n# Bug Fix Workflow\n\nYou are fixing a bug. Follow this process in order. Do not skip steps.\n\n**Guard clause:** If this is clearly a new feature, redirect to `$joycraft-new-feature` and stop.\n\n---\n\n## Phase 1: Triage\n\nEstablish what's broken. Gather: symptom, steps to reproduce, expected vs actual behavior, when it started, relevant logs/errors. If an error message or stack trace is provided, read the referenced files immediately. Try to reproduce if steps are given.\n\n**Done when:** You can describe the symptom in one sentence.\n\n---\n\n## Phase 2: Diagnose\n\nFind the root cause. Start from the error site and trace backward. Search the codebase and read files \u2014 don't guess. Identify the specific line(s) and logic error. Check git blame if it's a recent regression.\n\n**Done when:** You can explain what's wrong, why, and where in 2-3 sentences.\n\n---\n\n## Phase 3: Discuss\n\nPresent findings to the user BEFORE writing any code or spec:\n1. **Symptom** \u2014 confirm it matches what they see\n2. **Root cause** \u2014 specific file(s) and line(s)\n3. **Proposed fix** \u2014 what changes, where\n4. **Risk** \u2014 side effects? scope?\n\nAsk: \"Does this match? Comfortable with this approach?\" If large/risky, suggest decomposing into multiple specs.\n\n**Done when:** User agrees with the diagnosis and fix direction.\n\n---\n\n## Phase 4: Spec the Fix\n\nWrite a bug fix spec to `docs/specs/<feature-or-area>/bugfix-name.md`. Use the relevant feature name or area as the subdirectory (e.g., `auth`, `cli`, `parser`). Create the `docs/specs/<feature-or-area>/` directory if it doesn't exist.\n\n**Why:** Even bug fixes deserve a spec. It forces clarity on what \"fixed\" means, ensures test-first discipline, and creates a traceable record of the fix.\n\nUse this structure:\n\n```markdown\n# [Bug Name] \u2014 Bug Fix Spec\n\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## Bug\nOne sentence \u2014 what's broken?\n\n## Root Cause\nWhat's actually wrong, in which file(s) and line(s)?\n\n## Fix\nWhat changes, where?\n\n## Acceptance Criteria\n- [ ] [Observable behavior that proves the fix works]\n- [ ] No regressions \u2014 existing tests still pass\n- [ ] Build passes\n\n## Test Plan\n1. Write a reproduction test that fails before the fix\n2. Apply the fix\n3. Reproduction test passes\n4. Full test suite passes\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\n**For large bugs that span multiple files/systems:** Consider whether this should be decomposed into multiple specs. If so, create a brief first using `$joycraft-new-feature`, then decompose.\n\n---\n\n## Phase 5: Hand Off\n\n```\nBug fix spec is ready: docs/specs/<feature-or-area>/bugfix-name.md\n\nSummary:\n- Bug: [one sentence]\n- Root cause: [one sentence]\n- Fix: [one sentence]\n- Estimated: 1 session\n\nTo execute: Start a fresh session and:\n1. Read the spec\n2. Write the reproduction test (must fail)\n3. Apply the fix (test must pass)\n4. Run full test suite\n5. Run $joycraft-session-end to capture discoveries\n6. Commit and PR\n\nReady to start?\n```\n",
- "joycraft-decompose.md": '---\nname: joycraft-decompose\ndescription: Break a feature brief into atomic specs \u2014 small, testable, independently executable units\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently \u2014 one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief in `docs/briefs/`. If one doesn\'t exist yet, tell the user:\n\n> No feature brief found. Run `$joycraft-new-feature` first to interview and create one, or describe the feature now and I\'ll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don\'t need a formal brief to decompose \u2014 but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. Bad boundaries create specs that can\'t be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) \u2014 always a separate spec\n- **Pure functions / business logic** \u2014 separate from I/O\n- **UI components** \u2014 separate from data fetching\n- **API endpoints / route handlers** \u2014 separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) \u2014 can be its own spec if substantial\n- **Configuration / environment** \u2014 separate from code changes\n\nAsk yourself: "Can this piece be committed and tested without the other pieces existing?" If yes, it\'s a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is `verb-object` format (e.g., `add-terminal-detection`, `extract-prompt-module`)\n- Each description is ONE sentence \u2014 if you need two, the spec is too big\n- Dependencies reference other spec numbers \u2014 keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it\'s too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. "Does this breakdown match how you think about this feature?"\n2. "Are there any specs that feel too big or too small?"\n3. "Should any of these run in parallel (separate branches)?"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create `docs/specs/<feature-name>/spec-name.md`. Derive the feature-name from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`). If no brief exists, use a user-provided or inferred feature name (slugified to kebab-case). Create the `docs/specs/<feature-name>/` directory if it doesn\'t exist.\n\n**Why:** Each spec must be self-contained \u2014 a fresh session should be able to execute it without reading the Feature Brief. 
Copy relevant constraints and context into each spec.\n\nUse this structure:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/briefs/YYYY-MM-DD-feature-name.md` (or "standalone")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nFill in all sections \u2014 each spec must be self-contained (no "see the brief for context"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature. Every acceptance criterion must have at least one corresponding test in the Test Plan. If the user provided test strategy info from the interview, use it to choose test types and frameworks. Include the test harness verification rules in every Test Plan.\n\n## Step 6: Recommend Execution Strategy\n\nBased on the dependency graph:\n- **Independent specs** \u2014 "These can run in parallel branches"\n- **Sequential specs** \u2014 "Execute these in order: 1 -> 2 -> 4"\n- **Mixed** \u2014 "Start specs 1 and 3 in parallel. After 1 completes, start 2."\n\nUpdate the Feature Brief\'s Execution Strategy section with the plan (if a brief exists).\n\n## Step 7: Hand Off\n\nTell the user:\n```\nDecomposition complete:\n- [N] atomic specs created in docs/specs/\n- [N] can run in parallel, [N] are sequential\n- Estimated total: [N] sessions\n\nTo execute:\n- Sequential: Open a session, point at each spec in order\n- Parallel: One spec per branch, merge when done\n- Each session should end with $joycraft-session-end to capture discoveries\n\nReady to start execution?\n```\n\n**Tip:** Run `/new` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n',
+ "joycraft-bugfix.md": "---\nname: joycraft-bugfix\ndescription: Structured bug fix workflow \u2014 triage, diagnose, discuss with user, write a focused spec, hand off for implementation\n---\n\n# Bug Fix Workflow\n\nYou are fixing a bug. Follow this process in order. Do not skip steps.\n\n**Guard clause:** If this is clearly a new feature, redirect to `$joycraft-new-feature` and stop.\n\n---\n\n## Phase 1: Triage\n\nEstablish what's broken. Gather: symptom, steps to reproduce, expected vs actual behavior, when it started, relevant logs/errors. If an error message or stack trace is provided, read the referenced files immediately. Try to reproduce if steps are given.\n\n**Done when:** You can describe the symptom in one sentence.\n\n---\n\n## Phase 2: Diagnose\n\nFind the root cause. Start from the error site and trace backward. Search the codebase and read files \u2014 don't guess. Identify the specific line(s) and logic error. Check git blame if it's a recent regression.\n\n**Done when:** You can explain what's wrong, why, and where in 2-3 sentences.\n\n---\n\n## Phase 3: Discuss\n\nPresent findings to the user BEFORE writing any code or spec:\n1. **Symptom** \u2014 confirm it matches what they see\n2. **Root cause** \u2014 specific file(s) and line(s)\n3. **Proposed fix** \u2014 what changes, where\n4. **Risk** \u2014 side effects? scope?\n\nAsk: \"Does this match? Comfortable with this approach?\" If large/risky, suggest decomposing into multiple specs.\n\n**Done when:** User agrees with the diagnosis and fix direction.\n\n---\n\n## Phase 4: Spec the Fix\n\nWrite a bug fix spec to `docs/specs/<feature-or-area>/bugfix-name.md`. Use the relevant feature name or area as the subdirectory (e.g., `auth`, `cli`, `parser`). Create the `docs/specs/<feature-or-area>/` directory if it doesn't exist.\n\n**Why:** Even bug fixes deserve a spec. It forces clarity on what \"fixed\" means, ensures test-first discipline, and creates a traceable record of the fix.\n\nUse this structure:\n\n```markdown\n# [Bug Name] \u2014 Bug Fix Spec\n\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## Bug\nOne sentence \u2014 what's broken?\n\n## Root Cause\nWhat's actually wrong, in which file(s) and line(s)?\n\n## Fix\nWhat changes, where?\n\n## Acceptance Criteria\n- [ ] [Observable behavior that proves the fix works]\n- [ ] No regressions \u2014 existing tests still pass\n- [ ] Build passes\n\n## Test Plan\n1. Write a reproduction test that fails before the fix\n2. Apply the fix\n3. Reproduction test passes\n4. Full test suite passes\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\n**For large bugs that span multiple files/systems:** Consider whether this should be decomposed into multiple specs. If so, create a brief first using `$joycraft-new-feature`, then decompose.\n\n---\n\n## Phase 5: Hand Off\n\n```\nBug fix spec is ready: docs/specs/<feature-or-area>/bugfix-name.md\n\nSummary:\n- Bug: [one sentence]\n- Root cause: [one sentence]\n- Fix: [one sentence]\n- Estimated: 1 session\n\nTo execute: Start a fresh session and:\n1. Read the spec\n2. Write the reproduction test (must fail)\n3. Apply the fix (test must pass)\n4. Run full test suite\n5. Run $joycraft-session-end to capture discoveries\n6. 
Commit and PR\n\nReady to start?\n\nRun /clear before your next step \u2014 your artifacts are saved to files.\n```\n",
+ "joycraft-collaborative-setup.md": '---\nname: joycraft-collaborative-setup\ndescription: Set up Joycraft for a team \u2014 scaffold per-area folders, owner conventions, and a team-facing CONTRIBUTING doc. Run once when adopting Joycraft on a multi-dev project.\n---\n\n# Collaborative Setup\n\nYou are setting up Joycraft for a team. Solo defaults stay solo; this skill adds the team-only ceremony \u2014 `docs/areas/` folders, area README/boundaries, and a thin team-facing CONTRIBUTING-joycraft doc.\n\nThis skill is **interactive** \u2014 ask the user, don\'t auto-detect.\n\n## When to run\n\nRun once when a team is adopting Joycraft on a multi-dev project. Solo users do **not** need this skill \u2014 solo defaults are fine without it.\n\n## Step 1: Confirm Team Context\n\nAsk the user:\n\n> "Setting up Joycraft for a team? (vs. solo work) If you\'re unsure, you can skip \u2014 solo defaults work fine and you can run this later."\n\nIf the user says "actually solo," bail before any writes:\n\n> "No problem. The solo workflow needs no extra setup. Run `$joycraft-new-feature` when you want to start a feature."\n\n## Step 2: Check for Flat Layout \u2014 Bail if Present\n\nBefore scaffolding team structure, check the project\'s docs/ for flat-layout artifacts. Look for any of:\n\n- `docs/briefs/*.md`\n- `docs/research/*.md`\n- `docs/designs/*.md`\n- `docs/specs/<feature>/` subdirectories whose names look like brief slugs\n\nIf any **flat layout** artifacts exist, tell the user:\n\n> "I see flat-layout artifacts in your docs/ (briefs/research/designs). Run `npx joycraft upgrade` first \u2014 it will migrate them into `docs/features/<slug>/` automatically. Then re-run this skill."\n\nThen stop. Skills don\'t reliably shell out, so the CLI does the migration.\n\n## Step 3: Gather Areas + Owners (Interactive)\n\nAsk the user:\n\n> "How many areas does your team work in? (e.g., `auth`, `api`, `frontend`, `infra`) \u2014 pick names that match how your team thinks about ownership. You can also skip and just create the team CONTRIBUTING doc."\n\nFor each area name the user provides:\n1. Confirm the name (kebab-case).\n2. Ask: "Who owns this area? (a name, an email, or a team handle \u2014 used in the area README\'s frontmatter)"\n3. Ask (optional): "Are there NEVER or ASK FIRST rules specific to this area? If yes, list them; if no, skip."\n\nIf the user provides duplicate names, ask them to pick a different one. Track the area list in your working memory before writing anything.\n\nIf the user provides 0 areas, skip Step 4 and go straight to Step 5 (CONTRIBUTING doc only). Useful path for "we just want the team doc, no areas yet."\n\n## Step 4: Scaffold Each Area\n\nFor each confirmed area, lazy-create `docs/areas/<area-name>/` and write a `README.md` with the **shared frontmatter schema** (areas are shared docs, not personal):\n\n```yaml\n---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <owner from step 3>\n---\n```\n\n**Owner resolution for `last_updated_by`:** look up the owner name in this order \u2014 (1) `git config user.name`, (2) value in your auto-memory `joycraft-owner.txt` if present, (3) ask the user once and persist. Use the user-provided owner from Step 3 if they specified one for this area.\n\nBody of `README.md`:\n\n```markdown\n# <area-name>\n\n> **Owner:** <name from Step 3>\n> **Status:** active\n\n## What this area covers\n\n(Filled in by the area owner)\n\n## Conventions\n\n(Area-specific patterns or constraints)\n\n## Onboarding\n\nWhen a new dev joins this area, they should:\n1. 
Read this README\n2. Read `boundaries.md` (if present)\n3. Read the codebase under <area-relevant paths>\n```\n\nIf the user provided NEVER / ASK FIRST rules for the area, also write `docs/areas/<area-name>/boundaries.md` with the shared frontmatter and those rules. If they didn\'t, skip the boundaries file \u2014 the root CLAUDE.md boundaries already cover the project-wide cases.\n\n**Idempotency:** if `docs/areas/<area-name>/README.md` already exists, ASK before overwriting (default: skip + inform).\n\n## Step 5: Write the Team CONTRIBUTING Doc\n\nLazy-create `docs/CONTRIBUTING-joycraft.md` (NOT the project\'s main `CONTRIBUTING.md` \u2014 keep them separate so neither stomps on the other).\n\nIf `docs/templates/CONTRIBUTING-joycraft-template.md` exists in the project (it should \u2014 bundled by `npx joycraft init`), use it as the starting point. If not, fall back to the inline template below.\n\nThe doc starts with shared frontmatter:\n\n```yaml\n---\nlast_updated: YYYY-MM-DD\nlast_updated_by: <resolved owner>\n---\n```\n\nBody (inline fallback template \u2014 short by design):\n\n```markdown\n# Joycraft on this project\n\nWe use [Joycraft](https://www.npmjs.com/package/joycraft) for AI-assisted development.\n\n## How our team uses it\n\n(Filled in during $joycraft-collaborative-setup \u2014 fill this in with your team\'s specific conventions.)\n\n## Conventions\n\n- Per-feature work goes under `docs/features/<slug>/`\n- Area-level work and ownership: see `docs/areas/`\n- For "what is Joycraft?", see the package README\n\n## Onboarding\n\nWhen a new dev joins:\n1. Run `npx joycraft init` (idempotent on already-set-up projects)\n2. Read `docs/areas/<your-area>/README.md` for context\n```\n\nIf `docs/CONTRIBUTING-joycraft.md` already exists, ASK before overwriting \u2014 offer overwrite / append / skip; default to skip.\n\n## Step 6: Trigger CLAUDE.md Update\n\nNow that `docs/areas/` exists, the next `npx joycraft upgrade` (or any future `npx joycraft init`) will pick it up and add the **Areas pointer** to CLAUDE.md automatically \u2014 that pointer tells Claude "when working on the X area, read docs/areas/X/README.md first."\n\nTell the user:\n\n> "Run `npx joycraft upgrade` to refresh CLAUDE.md with the Areas pointer (or `npx joycraft init` if you haven\'t initialized yet)."\n\nDon\'t try to shell out from inside the skill \u2014 let the user run the CLI deliberately.\n\n## Step 7: Hand Off\n\nSummarize what you wrote (paths to area READMEs, the CONTRIBUTING doc, any boundaries files), then emit the canonical Handoff block.\n\n## Recommended Next Steps\n\nNext:\n```bash\n$joycraft-new-feature\n```\nRun /clear first.\n\nInclude the path to `docs/CONTRIBUTING-joycraft.md` and any newly-created area READMEs in the summary above the Handoff block.\n\n## Notes\n\n- This skill does NOT migrate flat-layout artifacts on its own. That\'s `npx joycraft upgrade`\'s job \u2014 Step 2 directs the user to run it first.\n- Area names are user-provided. Don\'t auto-detect from `src/auth/`, `src/api/`, etc. \u2014 many projects have monorepo or non-conventional layouts and auto-detection produces noise.\n- If the user stops mid-way (Ctrl-C, abandons), whatever\'s been written stays. Re-running the skill is the recovery path; it\'s idempotent on existing area folders (asks before overwriting).\n',
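Step 2's flat-layout check is a probe over the legacy locations the skill lists before any team scaffolding is written. A sketch of that probe in TypeScript; it covers the `docs/briefs`, `docs/research`, and `docs/designs` cases and omits the fuzzier check for brief-slug subdirectories under `docs/specs/`:

```ts
import { existsSync, readdirSync } from "node:fs";

// Legacy flat-layout locations named in Step 2.
const FLAT_DIRS = ["docs/briefs", "docs/research", "docs/designs"];

// True when any flat-layout artifact exists, meaning `npx joycraft upgrade` must run first.
function hasFlatLayout(): boolean {
  return FLAT_DIRS.some(
    (dir) => existsSync(dir) && readdirSync(dir).some((f) => f.endsWith(".md")),
  );
}

if (hasFlatLayout()) {
  console.log("Flat-layout artifacts found -- run `npx joycraft upgrade`, then re-run this skill.");
}
```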
256
+ "joycraft-decompose.md": '---\nname: joycraft-decompose\ndescription: Break a feature brief into atomic specs \u2014 small, testable, independently executable units\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently \u2014 one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief in `docs/briefs/`. If one doesn\'t exist yet, tell the user:\n\n> No feature brief found. Run `$joycraft-new-feature` first to interview and create one, or describe the feature now and I\'ll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don\'t need a formal brief to decompose \u2014 but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. Bad boundaries create specs that can\'t be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) \u2014 always a separate spec\n- **Pure functions / business logic** \u2014 separate from I/O\n- **UI components** \u2014 separate from data fetching\n- **API endpoints / route handlers** \u2014 separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) \u2014 can be its own spec if substantial\n- **Configuration / environment** \u2014 separate from code changes\n\nAsk yourself: "Can this piece be committed and tested without the other pieces existing?" If yes, it\'s a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is `verb-object` format (e.g., `add-terminal-detection`, `extract-prompt-module`)\n- Each description is ONE sentence \u2014 if you need two, the spec is too big\n- Dependencies reference other spec numbers \u2014 keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it\'s too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. "Does this breakdown match how you think about this feature?"\n2. "Are there any specs that feel too big or too small?"\n3. "Should any of these run in parallel (separate branches)?"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create `docs/specs/<feature-name>/spec-name.md`. Derive the feature-name from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`). If no brief exists, use a user-provided or inferred feature name (slugified to kebab-case). Create the `docs/specs/<feature-name>/` directory if it doesn\'t exist.\n\n**Why:** Each spec must be self-contained \u2014 a fresh session should be able to execute it without reading the Feature Brief. 
Copy relevant constraints and context into each spec.\n\nUse this structure:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/briefs/YYYY-MM-DD-feature-name.md` (or "standalone")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nFill in all sections \u2014 each spec must be self-contained (no "see the brief for context"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature. Every acceptance criterion must have at least one corresponding test in the Test Plan. If the user provided test strategy info from the interview, use it to choose test types and frameworks. Include the test harness verification rules in every Test Plan.\n\n## Step 6: Recommend Execution Strategy\n\nBased on the dependency graph:\n- **Independent specs** \u2014 "These can run in parallel branches"\n- **Sequential specs** \u2014 "Execute these in order: 1 -> 2 -> 4"\n- **Mixed** \u2014 "Start specs 1 and 3 in parallel. After 1 completes, start 2."\n\nUpdate the Feature Brief\'s Execution Strategy section with the plan (if a brief exists).\n\n## Step 7: Hand Off\n\nTell the user:\n```\nDecomposition complete:\n- [N] atomic specs created in docs/specs/\n- [N] can run in parallel, [N] are sequential\n- Estimated total: [N] sessions\n\nTo execute:\n- Sequential: Open a session, point at each spec in order\n- Parallel: One spec per branch, merge when done\n- Each session should end with $joycraft-session-end to capture discoveries\n\nReady to start execution?\n\nRun /clear before your next step \u2014 your artifacts are saved to files.\n```\n',
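The Step 5 naming rule in `joycraft-decompose.md` (strip the brief's date prefix and `.md`) reduces to a one-liner; a sketch using the skill's own example path and standard shell tools.

```bash
# Derive docs/specs/<feature-name>/ from the brief filename, per Step 5.
brief="docs/briefs/2026-04-06-token-discipline.md"
feature="$(basename "$brief" .md | sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2}-//')"
echo "$feature"                      # token-discipline
mkdir -p "docs/specs/$feature"       # created only if it doesn't already exist
```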
254
257
  "joycraft-design.md": '---\nname: joycraft-design\ndescription: Design discussion before decomposition \u2014 produce a ~200-line design artifact for human review, catching wrong assumptions before they propagate into specs\n---\n\n# Design Discussion\n\nYou are producing a design discussion document for a feature. This sits between research and decomposition \u2014 it captures your understanding so the human can catch wrong assumptions before specs are written.\n\n**Guard clause:** If no brief path is provided and no brief exists in `docs/briefs/`, say:\n"No feature brief found. Run `$joycraft-new-feature` first to create one, or provide the path to your brief."\nThen stop.\n\n---\n\n## Step 1: Read Inputs\n\nRead the feature brief at the path the user provides. If the user also provides a research document path, read that too.\n\n## Step 2: Explore the Codebase\n\nSpawn concurrent subagent threads to explore the codebase for patterns relevant to the brief. Focus on:\n\n- Files and functions that will be touched or extended\n- Existing patterns this feature should follow\n- Similar features already implemented that serve as models\n- Boundaries and interfaces the feature must integrate with\n\nEach subagent should search the codebase and read files to gather file paths, function signatures, and code snippets.\n\n## Step 3: Write the Design Document\n\nCreate `docs/designs/` directory if it doesn\'t exist. Write to `docs/designs/YYYY-MM-DD-feature-name.md`.\n\nThe document has exactly five sections:\n\n### Section 1: Current State\nWhat exists today in the codebase. Include file paths, function signatures, data flows. Be specific.\n\n### Section 2: Desired End State\nWhat the codebase should look like when this feature is complete.\n\n### Section 3: Patterns to Follow\nExisting patterns in the codebase that this feature should match. Include code snippets and `file:line` references.\n\n### Section 4: Resolved Design Decisions\nDecisions made with rationale. Format: Decision, Rationale, Alternative rejected.\n\n### Section 5: Open Questions\nThings where multiple valid approaches exist. Each question MUST present 2-3 concrete options with pros and cons.\n\n## Step 4: Present and STOP\n\nPresent the design document. Say:\n```\nDesign discussion written to docs/designs/YYYY-MM-DD-feature-name.md\n\nPlease review. Specifically:\n1. Are the patterns in Section 3 right?\n2. Do you agree with the resolved decisions?\n3. Pick an option for each open question.\n\nReply with your feedback. I will NOT proceed to decomposition until you have reviewed and approved.\n```\n\n**CRITICAL: Do NOT proceed to `$joycraft-decompose` or generate specs.** Wait for human review.\n\n## After Human Review\n\n- Update the design document with corrections\n- Move answered questions to Resolved Design Decisions\n- Present for final confirmation\n- Only after explicit approval: "Design approved. Run `$joycraft-decompose` with this brief to generate atomic specs."\n',
255
258
  "joycraft-implement-level5.md": "---\nname: joycraft-implement-level5\ndescription: Set up Level 5 autonomous development \u2014 autofix loop, holdout scenario testing, and scenario evolution from specs\n---\n\n# Implement Level 5 \u2014 Autonomous Development Loop\n\nYou are guiding the user through setting up Level 5: the autonomous feedback loop where specs go in, validated software comes out. This is a one-time setup that installs workflows, creates a scenarios repo, and configures the autofix loop.\n\n## Before You Begin\n\nCheck prerequisites:\n\n1. **Project must be initialized.** Search for `.joycraft-version`. If missing, tell the user to run `npx joycraft init` first.\n2. **Project should be at Level 4.** Read `docs/joycraft-assessment.md` if it exists. If the project hasn't been assessed yet, suggest running `$joycraft-tune` first. But don't block -- the user may know they're ready.\n3. **Git repo with GitHub remote.** This setup requires GitHub Actions. Check for `.git/` and a GitHub remote.\n\nIf prerequisites aren't met, explain what's needed and stop.\n\n## Step 1: Explain What Level 5 Means\n\nTell the user:\n\n> Level 5 is the autonomous loop. When you push specs, three things happen automatically:\n>\n> 1. **Scenario evolution** -- An AI agent reads your specs and writes holdout tests in a private scenarios repo. These tests are invisible to your coding agent.\n> 2. **Autofix** -- When CI fails on a PR, the agent automatically attempts a fix (up to 3 times).\n> 3. **Holdout validation** -- When CI passes, your scenarios repo runs behavioral tests against the PR. Results post as PR comments.\n>\n> The key insight: your coding agent never sees the scenario tests. This prevents it from gaming the test suite -- like a validation set in machine learning.\n\n## Step 2: Gather Configuration\n\nAsk these questions **one at a time**:\n\n### Question 1: Scenarios repo name\n\n> What should we call your scenarios repo? It'll be a private repo that holds your holdout tests.\n>\n> Default: `{current-repo-name}-scenarios`\n\nAccept the default or the user's choice.\n\n### Question 2: GitHub App\n\n> Level 5 needs a GitHub App to provide a separate identity for autofix pushes (this avoids GitHub's anti-recursion protection). Creating one takes about 2 minutes:\n>\n> 1. Go to https://github.com/settings/apps/new\n> 2. Give it a name (e.g., \"My Project Autofix\")\n> 3. Uncheck \"Webhook > Active\" (not needed)\n> 4. Under **Repository permissions**, set:\n> - **Contents**: Read & Write\n> - **Pull requests**: Read & Write\n> - **Actions**: Read & Write\n> 5. Click **Create GitHub App**\n> 6. Note the **App ID** from the settings page\n> 7. Scroll to **Private keys** > click **Generate a private key** > save the `.pem` file\n> 8. Click **Install App** in the left sidebar > install it on your repo\n>\n> What's your App ID?\n\n## Step 3: Run init-autofix\n\nRun the CLI command with the gathered configuration:\n\n```bash\nnpx joycraft init-autofix --scenarios-repo {name} --app-id {id}\n```\n\nReview the output with the user. 
Confirm files were created.\n\n## Step 4: Walk Through Secret Configuration\n\nGuide the user step by step:\n\n### 4a: Add Secrets to Main Repo\n\n> You should already have the `.pem` file from when you created the app in Step 2.\n\n> Go to your repo's Settings > Secrets and variables > Actions, and add:\n> - `JOYCRAFT_APP_PRIVATE_KEY` -- paste the contents of your `.pem` file\n> - `ANTHROPIC_API_KEY` -- your Anthropic API key (or the appropriate AI provider key for your setup)\n\n### 4b: Create the Scenarios Repo\n\n> Create the private scenarios repo:\n> ```bash\n> gh repo create {scenarios-repo-name} --private\n> ```\n>\n> Then copy the scenario templates into it:\n> ```bash\n> cp -r docs/templates/scenarios/* ../{scenarios-repo-name}/\n> cd ../{scenarios-repo-name}\n> git add -A && git commit -m \"init: scaffold scenarios repo from Joycraft\"\n> git push\n> ```\n\n### 4c: Add Secrets to Scenarios Repo\n\n> The scenarios repo also needs the App private key:\n> - `JOYCRAFT_APP_PRIVATE_KEY` -- same `.pem` file as the main repo\n> - `ANTHROPIC_API_KEY` -- same key (needed for scenario generation)\n\n## Step 5: Verify Setup\n\nHelp the user verify everything is wired correctly:\n\n1. **Check workflow files exist:** `ls .github/workflows/autofix.yml .github/workflows/scenarios-dispatch.yml .github/workflows/spec-dispatch.yml .github/workflows/scenarios-rerun.yml`\n2. **Check scenario templates were copied:** Verify the scenarios repo has `example-scenario.test.ts`, `workflows/run.yml`, `workflows/generate.yml`, `prompts/scenario-agent.md`\n3. **Check the App ID is correct** in the workflow files (not still a placeholder)\n\n## Step 6: Update AGENTS.md\n\nIf the project's AGENTS.md doesn't already have an \"External Validation\" section, add one:\n\n> ## External Validation\n>\n> This project uses holdout scenario tests in a separate private repo.\n>\n> ### NEVER\n> - Access, read, or reference the scenarios repo\n> - Mention scenario test names or contents\n> - Modify the scenarios dispatch workflow to leak test information\n>\n> The scenarios repo is deliberately invisible to you. This is the holdout guarantee.\n\n## Step 7: First Test (Optional)\n\nIf the user wants to test the loop:\n\n> Want to do a quick test? Here's how:\n>\n> 1. Write a simple spec in `docs/specs/` and push to main -- this triggers scenario generation\n> 2. Create a PR with a small change -- when CI passes, scenarios will run\n> 3. Watch for the scenario test results as a PR comment\n>\n> Or deliberately break something in a PR to test the autofix loop.\n\n## Step 8: Summary\n\nPrint a summary of what was set up:\n\n> **Level 5 is live.** Here's what's running:\n>\n> | Trigger | What Happens |\n> |---------|-------------|\n> | Push specs to `docs/specs/` | Scenario agent writes holdout tests |\n> | PR fails CI | Autofix agent attempts a fix (up to 3x) |\n> | PR passes CI | Holdout scenarios run against PR |\n> | Scenarios update | Open PRs re-tested with latest scenarios |\n>\n> Your scenarios repo: `{name}`\n> Your coding agent cannot see those tests. The holdout wall is intact.\n\n**Important:** Tell the user:\n\n> **Before you can test the loop**, you need to merge this PR to main first. GitHub's `workflow_run` triggers only activate for workflows that exist on the default branch. 
Once merged, create a new PR with any small change -- that's when you'll see Autofix, Scenarios Dispatch, and Spec Dispatch fire for the first time.\n\nUpdate `docs/joycraft-assessment.md` if it exists -- set the Level 5 score to reflect the new setup.\n",
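Step 4 of `joycraft-implement-level5.md` walks the secrets through the GitHub web UI; the same configuration can also be done with the GitHub CLI. A sketch only, with placeholder repo and key-file names.

```bash
# Main repo (run inside it). The .pem filename is whatever GitHub generated in Step 2.
gh secret set JOYCRAFT_APP_PRIVATE_KEY < my-project-autofix.private-key.pem
gh secret set ANTHROPIC_API_KEY                      # prompts for the value

# Scenarios repo (Steps 4b/4c), using a placeholder OWNER/name.
gh repo create my-project-scenarios --private
gh secret set JOYCRAFT_APP_PRIVATE_KEY --repo OWNER/my-project-scenarios < my-project-autofix.private-key.pem
gh secret set ANTHROPIC_API_KEY --repo OWNER/my-project-scenarios
```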
256
259
  "joycraft-implement.md": "---\nname: joycraft-implement\ndescription: Execute atomic specs with TDD \u2014 read spec, write failing tests, implement until green, hand off to session-end\n---\n\n# Implement Atomic Spec\n\nYou have one or more atomic spec paths to execute. Your job is to implement each spec using strict TDD \u2014 tests first, confirm they fail, then implement until green.\n\n## Step 1: Parse Arguments\n\nThe user should provide one or more spec paths (e.g., `docs/specs/my-feature/add-widget.md`).\n\nIf no spec path was provided, tell the user:\n\n> No spec path provided. Check `docs/specs/` for available specs, or provide a path like:\n> `$joycraft-implement docs/specs/feature-name/spec-name.md`\n\n## Step 2: Read and Understand the Spec\n\nFor each spec path:\n\n1. **Read the spec file.** The spec is your execution contract \u2014 the Acceptance Criteria and Test Plan define \"done.\"\n2. **Check the spec's Status field.** If it says \"Complete,\" warn the user and ask if they want to re-implement or skip.\n3. **Read the Acceptance Criteria** \u2014 these are your success conditions.\n4. **Read the Test Plan** \u2014 this tells you exactly what tests to write and in what order.\n5. **Read the Constraints** \u2014 these are hard boundaries you must not violate.\n\n### Finding Additional Context\n\nSpecs are designed to be self-contained, but if you need more context:\n\n- **Parent brief:** Linked in the spec's frontmatter (`> **Parent Brief:**` line). Read it for broader feature context.\n- **Related specs:** Live in the same directory. The spec directory convention is `docs/specs/<feature-name>/` where the feature name is derived from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`).\n- **Affected Files:** The spec's Affected Files table tells you which files to create or modify.\n\n## Step 3: Execute the TDD Cycle\n\n**This is not optional. Write tests FIRST.**\n\n### 3a. Write Tests (Red Phase)\n\nUsing the spec's Test Plan:\n\n1. Write ALL tests listed in the Test Plan. Each Acceptance Criterion must have at least one test.\n2. Tests should call the actual function/endpoint \u2014 not a reimplementation or mock of the underlying library.\n3. Run the tests. **They MUST fail.** If any test passes immediately:\n - Flag it \u2014 either the test isn't testing the right thing, or the code already exists.\n - Investigate before proceeding. A test that passes before implementation is a test that proves nothing.\n\n### 3b. Implement (Green Phase)\n\n1. Follow the spec's Approach section for implementation strategy.\n2. Implement the minimum code needed to make tests pass.\n3. Run tests after each meaningful change \u2014 use the spec's Smoke Test for fast feedback.\n4. Continue until ALL tests pass.\n\n### 3c. Verify Acceptance Criteria\n\nWalk through every Acceptance Criterion in the spec:\n\n- [ ] Is each one met?\n- [ ] Does the build pass?\n- [ ] Do all tests pass?\n\nIf any criterion is not met, keep implementing. Do not move on until all criteria are green.\n\n## Step 4: Handle Edge Cases\n\nCheck the spec's Edge Cases table. For each scenario:\n\n- Verify the expected behavior is handled.\n- If the spec says \"warn the user\" or \"prompt,\" make sure that path works.\n\n## Step 5: Multi-Spec Handling\n\nIf the user provided multiple specs:\n\n1. Execute specs in dependency order (check each spec's frontmatter for dependencies).\n2. 
After completing each spec, run the full test suite to ensure no regressions.\n3. **Between specs:** Tell the user:\n\n```\nSpec [name] complete. [N] specs remaining.\n```\n\n**Tip:** Run `/new` before starting the next spec. Your artifacts are saved to files \u2014 this conversation context is disposable.\n\n## Step 6: Hand Off\n\nWhen all specs are implemented and passing:\n\n```\nImplementation complete:\n- Spec(s): [list spec names] \u2014 all Acceptance Criteria met\n- Tests: [N] written, all passing\n- Build: passing\n\nNext steps:\n- Run $joycraft-session-end to capture discoveries and wrap up\n```\n\n**Tip:** Run `/new` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n",
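For a concrete feel of the Step 3 cycle in `joycraft-implement.md`, here is the command-line shape of the red, smoke, and full-suite runs, assuming a vitest project and a hypothetical `add-widget` spec; substitute the project's own test runner.

```bash
# 3a. Red: new tests must fail before any implementation exists.
npx vitest run tests/add-widget.test.ts && echo "WARNING: tests passed before implementation -- investigate"

# 3b. Green: implement, re-running the spec's smoke test for fast feedback.
npx vitest run tests/add-widget.test.ts

# 3c. Full suite before walking the Acceptance Criteria.
npx vitest run
```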
257
- "joycraft-interview.md": "---\nname: joycraft-interview\ndescription: Brainstorm freely about what you want to build \u2014 yap, explore ideas, and get a structured summary you can use later\n---\n\n# Interview \u2014 Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation \u2014 not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n\"What are you thinking about building? Just talk \u2014 I'll listen and ask questions as we go.\"\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally \u2014 don't fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does \"done\" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget \u2014 what boxes are we in?\n- **What's NOT in scope?** What's tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What's the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n\"So if I'm hearing you right, you want to [summary]. The core problem is [X], and done looks like [Y]. Is that right?\"\n\nLet them correct and refine. Iterate until they say \"yes, that's it.\"\n\n### 4. Write a Draft Brief\n\nCreate a draft file at `docs/briefs/YYYY-MM-DD-topic-draft.md`. Create the `docs/briefs/` directory if it doesn't exist.\n\nUse this format:\n\n```markdown\n# [Topic] \u2014 Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Status:** DRAFT\n> **Origin:** $joycraft-interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described \u2014 their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What \"Done\" Looks Like\n[The user's description of success \u2014 observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren't resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n```\n\n### 5. Hand Off\n\nAfter writing the draft, tell the user:\n\n```\nDraft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md\n\nWhen you're ready to move forward, pick the path that fits the complexity:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n $joycraft-new-feature \u2192 $joycraft-research \u2192 $joycraft-design \u2192 $joycraft-decompose\n\nMEDIUM (clear scope but non-trivial):\n $joycraft-new-feature \u2192 $joycraft-design \u2192 $joycraft-decompose\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n $joycraft-new-feature \u2192 $joycraft-decompose\n\nNot sure yet? Just keep brainstorming \u2014 run $joycraft-interview again anytime.\n```\n\nIf the idea sounds complex \u2014 touches many files, involves architectural decisions, or the user is working in an unfamiliar area \u2014 nudge them toward research and design. But present it as a recommendation, not a gate.\n\n**Tip:** Run `/new` before starting the next step. 
Your artifacts are saved to files \u2014 this conversation context is disposable.\n\n## Guidelines\n\n- **This is NOT $joycraft-new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture \u2014 not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. Each creates a new dated draft.\n",
260
+ "joycraft-interview.md": "---\nname: joycraft-interview\ndescription: Brainstorm freely about what you want to build \u2014 yap, explore ideas, and get a structured summary you can use later\n---\n\n# Interview \u2014 Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation \u2014 not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n\"What are you thinking about building? Just talk \u2014 I'll listen and ask questions as we go.\"\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally \u2014 don't fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does \"done\" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget \u2014 what boxes are we in?\n- **What's NOT in scope?** What's tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What's the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n\"So if I'm hearing you right, you want to [summary]. The core problem is [X], and done looks like [Y]. Is that right?\"\n\nLet them correct and refine. Iterate until they say \"yes, that's it.\"\n\n### 4. Write a Draft Brief\n\nCreate a draft file at `docs/briefs/YYYY-MM-DD-topic-draft.md`. Create the `docs/briefs/` directory if it doesn't exist.\n\nUse this format:\n\n```markdown\n# [Topic] \u2014 Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Status:** DRAFT\n> **Origin:** $joycraft-interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described \u2014 their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What \"Done\" Looks Like\n[The user's description of success \u2014 observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren't resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n```\n\n### 5. Hand Off\n\nAfter writing the draft, tell the user:\n\n```\nDraft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md\n\nWhen you're ready to move forward, pick the path that fits the complexity:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n $joycraft-new-feature \u2192 $joycraft-research \u2192 $joycraft-design \u2192 $joycraft-decompose\n\nMEDIUM (clear scope but non-trivial):\n $joycraft-new-feature \u2192 $joycraft-design \u2192 $joycraft-decompose\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n $joycraft-new-feature \u2192 $joycraft-decompose\n\nNot sure yet? Just keep brainstorming \u2014 run $joycraft-interview again anytime.\n\nRun /clear before your next step \u2014 your artifacts are saved to files.\n```\n\nIf the idea sounds complex \u2014 touches many files, involves architectural decisions, or the user is working in an unfamiliar area \u2014 nudge them toward research and design. 
But present it as a recommendation, not a gate.\n\n## Guidelines\n\n- **This is NOT $joycraft-new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture \u2014 not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. Each creates a new dated draft.\n",
258
261
  "joycraft-lockdown.md": "---\nname: joycraft-lockdown\ndescription: Generate constrained execution boundaries for an implementation session -- NEVER rules and deny patterns to prevent agent overreach\n---\n\n# Lockdown Mode\n\nThe user wants to constrain agent behavior for an implementation session. Your job is to interview them about what should be off-limits, then generate AGENTS.md NEVER rules and Codex configuration deny patterns they can review and apply.\n\n## When Is Lockdown Useful?\n\nLockdown is most valuable for:\n- **Complex tech stacks** (hardware, firmware, multi-device) where agents can cause real damage\n- **Long-running autonomous sessions** where you won't be monitoring every action\n- **Production-adjacent work** where accidental network calls or package installs are risky\n\nFor simple feature work on a well-tested codebase, lockdown is usually overkill. Mention this context to the user so they can decide.\n\n## Step 1: Check for Tests\n\nBefore starting the interview, search the codebase for test files or directories (look for `tests/`, `test/`, `__tests__/`, `spec/`, or files matching `*.test.*`, `*.spec.*`).\n\nIf no tests are found, tell the user:\n\n> Lockdown mode is most useful when you already have tests in place -- it prevents the agent from modifying them while constraining behavior to writing code and running tests. Consider running `$joycraft-new-feature` first to set up a test-driven workflow, then come back to lock it down.\n\nIf the user wants to proceed anyway, continue with the interview.\n\n## Step 2: Interview -- What to Lock Down\n\nAsk these three questions, one at a time. Wait for the user's response before proceeding to the next question.\n\n### Question 1: Read-Only Files\n\n> What test files or directories should be off-limits for editing? (e.g., `tests/`, `__tests__/`, `spec/`, specific test files)\n>\n> I'll generate NEVER rules to prevent editing these.\n\nIf the user isn't sure, suggest the test directories you found in Step 1.\n\n### Question 2: Allowed Commands\n\n> What commands should the agent be allowed to run? Defaults:\n> - Write and edit source code files\n> - Run the project's smoke test command\n> - Run the full test suite\n>\n> Any other commands to explicitly allow? Or should I restrict to just these?\n\n### Question 3: Denied Commands\n\n> What commands should be denied? Defaults:\n> - Package installs (`npm install`, `pip install`, `cargo add`, `go get`, etc.)\n> - Network tools (`curl`, `wget`, `ping`, `ssh`)\n> - Direct log file reading\n>\n> Any specific commands to add or remove from this list?\n\n**Edge case -- user wants to allow some network access:** If the user mentions API tests or specific endpoints that need network access, exclude those from the deny list and note the exception in the output.\n\n**Edge case -- user wants to lock down file writes:** If the user wants to prevent ALL file writes, warn them:\n\n> Denying all file writes would prevent the agent from doing any work. 
I recommend keeping source code writes allowed and only locking down test files, config files, or other sensitive directories.\n\n## Step 3: Generate Boundaries\n\nBased on the interview responses, generate output in this exact format:\n\n```\n## Lockdown boundaries generated\n\nReview these suggestions and add them to your project:\n\n### AGENTS.md -- add to NEVER section:\n\n- Edit any file in `[user's test directories]`\n- Run `[denied package manager commands]`\n- Use `[denied network tools]`\n- Read log files directly -- interact with logs only through test assertions\n- [Any additional NEVER rules based on user responses]\n\n### Codex configuration -- suggested deny patterns:\n\nAdd these to your Codex sandbox configuration to restrict command execution:\n\n[\"[command1]\", \"[command2]\", \"[command3]\"]\n\n---\n\nCopy these into your project manually, or tell me to apply them now (I'll show you the exact changes for approval first).\n```\n\nAdjust the content based on the actual interview responses:\n- Only include deny patterns for commands the user confirmed should be denied\n- Only include NEVER rules for directories/files the user specified\n- If the user allowed certain network tools or package managers, exclude those\n\n## Recommended Execution Model\n\nAfter generating the boundaries above, also recommend a Codex execution configuration. Include this section in your output:\n\n```\n### Recommended Execution Configuration\n\nCodex runs in a sandboxed environment by default. To maximize safety during lockdown:\n\n| Your situation | Configuration | Why |\n|---|---|---|\n| Autonomous spec execution | Sandbox with deny patterns above | Only pre-approved commands run |\n| Long session with some trust | Default sandbox | Network-disabled sandbox prevents external access |\n| Interactive development | Default with manual review | Review outputs before applying |\n\n**For lockdown mode, we recommend the default sandboxed execution** combined with the deny patterns above. Codex's sandbox already disables network access by default -- the deny patterns add file-level and command-level restrictions on top.\n\nIf you need network access for specific commands (e.g., API tests), configure explicit network allowances in your Codex setup rather than disabling the sandbox entirely.\n```\n\n## Step 4: Offer to Apply\n\nIf the user asks you to apply the changes:\n\n1. **For AGENTS.md:** Read the existing AGENTS.md, find the Behavioral Boundaries section, and show the user the exact diff for the NEVER section. Ask for confirmation before writing.\n2. **For Codex configuration:** Show the user what the deny patterns will look like after adding the new restrictions. Ask for confirmation before writing.\n\n**Never auto-apply. Always show the exact changes and wait for explicit approval.**\n",
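The Step 1 scan in `joycraft-lockdown.md` maps to two `find` invocations over the globs the skill lists; a sketch, with `node_modules` excluded as an assumption about the project layout.

```bash
# Test directories named in Step 1.
find . -type d \( -name tests -o -name test -o -name __tests__ -o -name spec \) -not -path '*/node_modules/*'
# Test files matching *.test.* or *.spec.*.
find . -type f \( -name '*.test.*' -o -name '*.spec.*' \) -not -path '*/node_modules/*'
```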
259
- "joycraft-new-feature.md": '---\nname: joycraft-new-feature\ndescription: Guided feature development \u2014 interview the user, produce a Feature Brief, then decompose into atomic specs\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 0: Check for Existing Drafts\n\nBefore starting the interview, check if the user has already drafted a brief.\n\n**Skip this phase if:** the user provided a brief path as an argument (they already know what to work from).\n\n**Steps:**\n1. Check if `docs/briefs/` exists. If not, skip to Phase 1.\n2. Look for files matching `*-draft.md` in `docs/briefs/`.\n3. For any other `.md` files in `docs/briefs/`, read the first 10 lines and check for `Status: DRAFT`.\n4. If draft(s) found, present them:\n\n```\nI found draft brief(s) in docs/briefs/:\n- [path] (drafted YYYY-MM-DD)\n- [path] (drafted YYYY-MM-DD)\n\nWant me to:\n1. **Formalize** one of these into a full Feature Brief (skip interview, go to Phase 2)\n2. **Start a new interview** from scratch\n```\n\n5. If user chooses to formalize: read the full draft, extract the idea/problem/constraints, and jump to Phase 2 with that context pre-filled.\n6. If user chooses to start fresh, or no drafts found: proceed to Phase 1.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk \u2014 your job is to listen, then sharpen.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does "done" look like?\n- Hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this)\n- Edge cases or error conditions?\n- What existing code/patterns should this follow?\n- Testing: existing setup? framework? smoke test budget? lockdown mode desired?\n\n**Interview technique:**\n- Let the user "yap" \u2014 don\'t interrupt their flow\n- Play back your understanding: "So if I\'m hearing you right..."\n- Push toward testable statements: "How would we verify that works?"\n\nKeep asking until you can fill out a Feature Brief.\n\n## Phase 2: Feature Brief\n\nWrite a Feature Brief to `docs/briefs/YYYY-MM-DD-feature-name.md`. Create the `docs/briefs/` directory if it doesn\'t exist.\n\n**Why:** The brief is the single source of truth for what we\'re building. It prevents scope creep and gives every spec a shared reference point.\n\nUse this structure:\n\n```markdown\n# [Feature Name] \u2014 Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Test Strategy\n- **Existing setup:** [framework and tools, or "none yet"]\n- **User expertise:** [comfortable / learning / needs guidance]\n- **Test types:** [smoke, unit, integration, e2e, etc.]\n- **Smoke test budget:** [target time for fast-feedback tests]\n- **Lockdown mode:** [yes/no \u2014 constrain agent to code + tests only]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. 
Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n```\n\nIf `docs/templates/FEATURE_BRIEF_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- "Does the decomposition match how you think about this?"\n- "Is anything in scope that shouldn\'t be?"\n- "Are the specs small enough? Can each be described in one sentence?"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at `docs/specs/<feature-name>/spec-name.md`. Derive the feature-name from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`). Create the `docs/specs/<feature-name>/` directory if it doesn\'t exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the "Curse of Instructions" \u2014 no spec should require holding the entire feature in context. Copy relevant context into each spec.\n\nUse this structure for each spec:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/briefs/YYYY-MM-DD-feature-name.md`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\n## Phase 4: Hand Off for Execution\n\nBefore jumping to execution, consider whether research or design would catch wrong assumptions early:\n\n```\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] \u2014 [one sentence] [S/M/L]\n2. 
[spec-name] \u2014 [one sentence] [S/M/L]\n...\n\nBefore executing, consider the complexity of this feature:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n \u2192 $joycraft-research \u2014 gather codebase facts before committing to a design\n \u2192 $joycraft-design \u2014 make architectural decisions explicit\n \u2192 Then execute specs\n\nMEDIUM (clear scope but non-trivial):\n \u2192 $joycraft-design \u2014 make key decisions explicit before building\n \u2192 Then execute specs\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n \u2192 Skip to execution\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. Read the spec\n2. Implement\n3. Run $joycraft-session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n```\n\n**Why:** A fresh session for execution produces better results. The interview session has too much context noise \u2014 a clean session with just the spec is more focused. Research and design catch wrong assumptions before they propagate into specs \u2014 but skip them if the scope is clear and well-understood.\n\nYou can also use `$joycraft-decompose` to re-decompose a brief if the breakdown needs adjustment, or run `$joycraft-interview` first for a lighter brainstorm before committing to the full workflow.\n\n**Tip:** Run `/new` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n',
262
+ "joycraft-new-feature.md": '---\nname: joycraft-new-feature\ndescription: Guided feature development \u2014 interview the user, produce a Feature Brief, then decompose into atomic specs\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 0: Check for Existing Drafts\n\nBefore starting the interview, check if the user has already drafted a brief.\n\n**Skip this phase if:** the user provided a brief path as an argument (they already know what to work from).\n\n**Steps:**\n1. Check if `docs/briefs/` exists. If not, skip to Phase 1.\n2. Look for files matching `*-draft.md` in `docs/briefs/`.\n3. For any other `.md` files in `docs/briefs/`, read the first 10 lines and check for `Status: DRAFT`.\n4. If draft(s) found, present them:\n\n```\nI found draft brief(s) in docs/briefs/:\n- [path] (drafted YYYY-MM-DD)\n- [path] (drafted YYYY-MM-DD)\n\nWant me to:\n1. **Formalize** one of these into a full Feature Brief (skip interview, go to Phase 2)\n2. **Start a new interview** from scratch\n```\n\n5. If user chooses to formalize: read the full draft, extract the idea/problem/constraints, and jump to Phase 2 with that context pre-filled.\n6. If user chooses to start fresh, or no drafts found: proceed to Phase 1.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk \u2014 your job is to listen, then sharpen.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does "done" look like?\n- Hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this)\n- Edge cases or error conditions?\n- What existing code/patterns should this follow?\n- Testing: existing setup? framework? smoke test budget? lockdown mode desired?\n\n**Interview technique:**\n- Let the user "yap" \u2014 don\'t interrupt their flow\n- Play back your understanding: "So if I\'m hearing you right..."\n- Push toward testable statements: "How would we verify that works?"\n\nKeep asking until you can fill out a Feature Brief.\n\n## Phase 2: Feature Brief\n\nWrite a Feature Brief to `docs/briefs/YYYY-MM-DD-feature-name.md`. Create the `docs/briefs/` directory if it doesn\'t exist.\n\n**Why:** The brief is the single source of truth for what we\'re building. It prevents scope creep and gives every spec a shared reference point.\n\nUse this structure:\n\n```markdown\n# [Feature Name] \u2014 Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Test Strategy\n- **Existing setup:** [framework and tools, or "none yet"]\n- **User expertise:** [comfortable / learning / needs guidance]\n- **Test types:** [smoke, unit, integration, e2e, etc.]\n- **Smoke test budget:** [target time for fast-feedback tests]\n- **Lockdown mode:** [yes/no \u2014 constrain agent to code + tests only]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. 
Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n```\n\nIf `docs/templates/FEATURE_BRIEF_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- "Does the decomposition match how you think about this?"\n- "Is anything in scope that shouldn\'t be?"\n- "Are the specs small enough? Can each be described in one sentence?"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at `docs/specs/<feature-name>/spec-name.md`. Derive the feature-name from the brief filename (strip the date prefix and `.md` \u2014 e.g., `2026-04-06-token-discipline.md` \u2192 `token-discipline`). Create the `docs/specs/<feature-name>/` directory if it doesn\'t exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the "Curse of Instructions" \u2014 no spec should require holding the entire feature in context. Copy relevant context into each spec.\n\nUse this structure for each spec:\n\n```markdown\n# [Verb + Object] \u2014 Atomic Spec\n\n> **Parent Brief:** `docs/briefs/YYYY-MM-DD-feature-name.md`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph \u2014 what changes when this spec is done?\n\n## Why\nOne sentence \u2014 what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Test Plan\n\n| Acceptance Criterion | Test | Type |\n|---------------------|------|------|\n| [Each AC above] | [What to call/assert] | [unit/integration/e2e] |\n\n**Execution order:**\n1. Write all tests above \u2014 they should fail against current/stubbed code\n2. Run tests to confirm they fail (red)\n3. Implement until all tests pass (green)\n\n**Smoke test:** [Identify the fastest test for iteration feedback]\n\n**Before implementing, verify your test harness:**\n1. Run all tests \u2014 they must FAIL (if they pass, you\'re testing the wrong thing)\n2. Each test calls your actual function/endpoint \u2014 not a reimplementation or the underlying library\n3. Identify your smoke test \u2014 it must run in seconds, not minutes, so you get fast feedback on each change\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n```\n\nIf `docs/templates/ATOMIC_SPEC_TEMPLATE.md` exists, reference it for the full template with additional guidance.\n\n## Phase 4: Hand Off for Execution\n\nBefore jumping to execution, consider whether research or design would catch wrong assumptions early:\n\n```\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] \u2014 [one sentence] [S/M/L]\n2. 
[spec-name] \u2014 [one sentence] [S/M/L]\n...\n\nBefore executing, consider the complexity of this feature:\n\nCOMPLEX (5+ files, architectural decisions, unfamiliar area):\n \u2192 $joycraft-research \u2014 gather codebase facts before committing to a design\n \u2192 $joycraft-design \u2014 make architectural decisions explicit\n \u2192 Then execute specs\n\nMEDIUM (clear scope but non-trivial):\n \u2192 $joycraft-design \u2014 make key decisions explicit before building\n \u2192 Then execute specs\n\nSIMPLE (scope is clear, < 5 files, well-understood area):\n \u2192 Skip to execution\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. Read the spec\n2. Implement\n3. Run $joycraft-session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n\nRun /clear before your next step \u2014 your artifacts are saved to files.\n```\n\n**Why:** A fresh session for execution produces better results. The interview session has too much context noise \u2014 a clean session with just the spec is more focused. Research and design catch wrong assumptions before they propagate into specs \u2014 but skip them if the scope is clear and well-understood.\n\nYou can also use `$joycraft-decompose` to re-decompose a brief if the breakdown needs adjustment, or run `$joycraft-interview` first for a lighter brainstorm before committing to the full workflow.\n',
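Phase 0 of `joycraft-new-feature.md` boils down to two quick checks; a sketch assuming a POSIX shell (the skill leaves the exact mechanism to the agent).

```bash
# Explicitly named drafts.
ls docs/briefs/*-draft.md 2>/dev/null
# Other briefs whose first 10 lines declare Status: DRAFT.
for f in docs/briefs/*.md; do
  [ -e "$f" ] && head -10 "$f" | grep -q 'Status: DRAFT' && echo "$f"
done
```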
260
263
  "joycraft-optimize.md": '---\nname: joycraft-optimize\ndescription: Audit your Claude Code or Codex session overhead \u2014 harness file sizes, plugins, MCP servers, hooks \u2014 and report actionable recommendations\n---\n\n# Optimize \u2014 Session Overhead Audit\n\nYou are auditing the user\'s AI development session for token overhead. Produce a conversational diagnostic report \u2014 no files created.\n\n## Step 1: Detect Platform\n\nCheck which platform is active:\n- **Claude Code:** Look for `.claude/` directory, `CLAUDE.md`\n- **Codex:** Look for `.agents/` directory, `AGENTS.md`\n\nIf both exist, run both checks. If neither, default to Claude Code checks and note the uncertainty.\n\n## Step 2: Audit Harness Files\n\n### Claude Code Path\n\n1. **CLAUDE.md** \u2014 count lines. Threshold: \u2264200 lines.\n2. **Skill files** \u2014 glob `.claude/skills/**/*.md`. Count lines per file. Threshold: \u2264200 lines each.\n\n### Codex Path\n\n1. **AGENTS.md** \u2014 count lines. Threshold: \u2264200 lines.\n2. **Skill files** \u2014 glob `.agents/skills/**/*.md`. Count lines per file. Threshold: \u2264200 lines each.\n\n## Step 3: Audit Plugins & MCP Servers\n\n### Claude Code Path\n\n1. **Installed plugins** \u2014 read `~/.claude/plugins/installed_plugins.json`. List plugin names and versions. If not found, report "no plugins file found."\n2. **Enabled plugins** \u2014 read `~/.claude/settings.json`, check `enabledPlugins` array. Show enabled vs installed count.\n3. **MCP servers** \u2014 read `~/.claude/settings.json`, count entries under `mcpServers`. List server names.\n\n### Codex Path\n\n1. **Plugin config** \u2014 read `~/.codex/config.toml`. List any plugin toggles. Note: Codex syncs its curated plugin marketplace at startup \u2014 this is a boot cost even if you don\'t use them.\n2. **MCP servers** \u2014 check `~/.codex/config.toml` for MCP server entries. List server names.\n\n## Step 4: Audit Hooks (Claude Code Only)\n\nRead `.claude/settings.json` in the project directory. List all hook definitions under the `hooks` key \u2014 show the event name and command for each.\n\nFor Codex: note "hook auditing not yet supported on Codex."\n\n## Step 5: Report\n\nOrganize findings by category. 
Use pass/warn indicators:\n\n```\n## Session Overhead Report\n\n### Harness Files\n- CLAUDE.md/AGENTS.md: [N] lines [PASS \u2264200 / WARN >200]\n- Skills: [N] files, [list any over 200 lines]\n\n### Plugins\n- Installed: [N] ([list names])\n- Enabled: [N] of [M] installed\n- [If 0: "No plugins \u2014 zero boot cost from plugins."]\n\n### MCP Servers\n- Count: [N] ([list names])\n- [If 0: "No MCP servers \u2014 zero boot cost from servers."]\n\n### Hooks\n- [N] hook definitions ([list event names])\n\n### Recommendations\n- [Specific, actionable items for anything over threshold]\n- [e.g., "AGENTS.md is 312 lines \u2014 consider splitting reference sections into docs/"]\n- [e.g., "3 MCP servers load at boot \u2014 disable unused ones in config"]\n```\n\n## Step 6: Further Resources\n\nEnd with:\n\n> For deeper token optimization, see:\n> - [Nate B Jones\'s token optimization techniques](https://www.youtube.com/watch?v=bDcgHzCBgmQ)\n> - [OB1 repo](https://github.com/nate-b-j/OB1) \u2014 Heavy File Ingestion skill and stupid button prompt kit\n> - [Joycraft\'s token discipline guide](docs/guides/token-discipline.md)\n\n## Edge Cases\n\n| Scenario | Behavior |\n|----------|----------|\n| Config files don\'t exist | Report "not found" for that check, don\'t error |\n| No plugins installed | Report 0 plugins \u2014 this is good, say so |\n| CLAUDE.md/AGENTS.md exactly 200 lines | PASS \u2014 threshold is \u2264200 |\n| `~/.claude/` or `~/.codex/` not accessible | Skip user-level checks, note limitation |\n| Both platforms detected | Run both audits, report separately |\n',
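The Step 2 checks in `joycraft-optimize.md` are plain line counts (the 200-line threshold is inclusive, per the edge-case table); a sketch covering both platforms' harness and skill files. The plugin, MCP, and hook audits read the JSON/TOML files named above instead.

```bash
# Harness files: PASS at 200 lines or fewer.
wc -l CLAUDE.md AGENTS.md 2>/dev/null
# Skill files over the threshold, for either platform.
find .claude/skills .agents/skills -name '*.md' 2>/dev/null \
  | xargs wc -l 2>/dev/null \
  | awk '$2 != "total" && $1 > 200 {print "WARN:", $2, "(" $1 " lines)"}'
```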
261
264
  "joycraft-research.md": '---\nname: joycraft-research\ndescription: Produce objective codebase research by isolating question generation from fact-gathering \u2014 subagent sees only questions, never the brief\n---\n\n# Research Codebase for a Feature\n\nYou are producing objective codebase research to inform a future spec or implementation. The key insight: the researching agent must never see the brief or ticket \u2014 only research questions. This prevents opinions from contaminating the facts.\n\n**Guard clause:** If the user doesn\'t provide a brief path or inline description, ask:\n"What feature or change are you researching? Provide a brief path or describe it."\n\n---\n\n## Phase 1: Generate Research Questions\n\nRead the brief and identify which zones of the codebase are relevant. Generate 5-10 research questions that are:\n- **Objective and fact-seeking** \u2014 "How does X work?" not "How should we build X?"\n- **Specific to the codebase**\n- **Answerable by reading code**\n\nWrite the questions to `docs/research/.questions-tmp.md`. **Do NOT include any content from the brief.**\n\n---\n\n## Phase 2: Spawn Research Subagent\n\nSpawn a subagent to perform the research. Pass ONLY the research questions \u2014 never the brief.\n\nSubagent prompt:\n```\nYou are researching a codebase to answer specific questions. You have NO context about why these questions are being asked.\n\nRULES:\n- Answer each question with FACTS ONLY: file paths, function signatures, data flows, patterns, dependencies\n- Do NOT recommend, suggest, or opine\n- Do NOT speculate about what should be built\n- If a question cannot be answered, say "No existing code found for this"\n- Search the codebase and read files thoroughly\n- Include code snippets only when essential evidence\n\nQUESTIONS:\n[INSERT_QUESTIONS_HERE]\n\nOUTPUT FORMAT:\n\n# Codebase Research\n\n**Date:** [today]\n**Questions answered:** [N/total]\n\n---\n\n## Q1: [question]\n[Facts only]\n\n## Q2: [question]\n[Facts only]\n```\n\n## Phase 3: Write the Research Document\n\nWrite the subagent\'s response to `docs/research/YYYY-MM-DD-feature-name.md`. Delete the temporary questions file.\n\nPresent:\n```\nResearch complete: docs/research/YYYY-MM-DD-feature-name.md\n\nThis document contains objective facts \u2014 no opinions or recommendations.\n\nRecommended next step:\n- $joycraft-design \u2014 translate research findings into architectural decisions before building\n\nIf the scope is simple (< 5 files, well-understood area, no architectural decisions):\n- $joycraft-decompose \u2014 skip design and break directly into atomic specs\n\nOther options:\n- $joycraft-new-feature \u2014 formalize into a full Feature Brief first\n- Read the research and add corrections manually\n```\n',
262
- "joycraft-session-end.md": '---\nname: joycraft-session-end\ndescription: Wrap up a session \u2014 capture discoveries, verify, prepare for PR or next session\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises \u2014 things that weren\'t in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at `docs/discoveries/YYYY-MM-DD-topic.md`. Create the `docs/discoveries/` directory if it doesn\'t exist.\n\nOnly capture what\'s NOT obvious from the code or git diff:\n- "We thought X but found Y" \u2014 assumptions that were wrong\n- "This API/library behaves differently than documented" \u2014 external gotchas\n- "This edge case needs handling in a future spec" \u2014 deferred work with context\n- "The approach in the spec didn\'t work because..." \u2014 spec-vs-reality gaps\n- Key decisions made during implementation that aren\'t in the spec\n\n**Do NOT capture:**\n- Files changed (that\'s the diff)\n- What you set out to do (that\'s the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n```markdown\n# Discoveries \u2014 [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n```\n\nIf nothing surprising happened, skip the discovery file entirely. No discovery is a good sign \u2014 the spec was accurate.\n\n## 1b. Update Context Documents\n\nIf `docs/context/` exists, quickly check whether this session revealed anything about:\n\n- **Production risks** \u2014 did you interact with or learn about production vs staging systems? Update `docs/context/production-map.md`\n- **Wrong assumptions** \u2014 did you assume something that turned out to be false? Update `docs/context/dangerous-assumptions.md`\n- **Key decisions** \u2014 did you make an architectural or tooling choice? Add a row to `docs/context/decision-log.md`\n- **Unwritten rules** \u2014 did you discover a convention or constraint not documented anywhere? Update `docs/context/institutional-knowledge.md`\n\nSkip this if nothing applies. Don\'t force it \u2014 only update when there\'s genuine new context.\n\n## 2. Run Validation\n\nRun the project\'s validation commands. Check CLAUDE.md or AGENTS.md for project-specific commands. Common checks:\n\n- Type-check (e.g., `tsc --noEmit`, `mypy`, `cargo check`)\n- Tests (e.g., `npm test`, `pytest`, `cargo test`)\n- Lint (e.g., `eslint`, `ruff`, `clippy`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in `docs/specs/` (scan recursively \u2014 specs may be in subdirectories like `docs/specs/<feature-name>/`):\n- All acceptance criteria met \u2014 update status to `Complete`\n- Partially done \u2014 update status to `In Progress`, note what\'s left\n\nIf working from a Feature Brief in `docs/briefs/`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. 
Push and PR (if autonomous git is enabled)\n\n**Check CLAUDE.md or AGENTS.md for "Git Autonomy" in the Behavioral Boundaries section.** If it says "STRICTLY ENFORCED" or the ALWAYS section includes "Push to feature branches immediately after every commit":\n\n1. **Push immediately.** Run `git push origin <branch>` \u2014 do not ask, do not hesitate.\n2. **Open a PR if the feature is complete.** Check the parent Feature Brief\'s decomposition table \u2014 if all specs are done, run `gh pr create` with a summary of all completed specs. Do not ask first.\n3. **If not all specs are done,** still push. The PR comes when the last spec is complete.\n\nIf CLAUDE.md or AGENTS.md does NOT have autonomous git rules (or has "ASK FIRST" for pushing), ask the user before pushing.\n\n## 6. Report\n\n```\nSession complete.\n- Spec: [spec name] \u2014 [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Pushed: [yes / no \u2014 and why not]\n- PR: [opened #N / not yet \u2014 N specs remaining]\n- Next: [what the next session should tackle]\n```\n\n**Tip:** Run `/new` before starting the next step. Your artifacts are saved to files \u2014 this conversation context is disposable.\n',
265
+ "joycraft-session-end.md": '---\nname: joycraft-session-end\ndescription: Wrap up a session \u2014 capture discoveries, verify, prepare for PR or next session\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises \u2014 things that weren\'t in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at `docs/discoveries/YYYY-MM-DD-topic.md`. Create the `docs/discoveries/` directory if it doesn\'t exist.\n\nOnly capture what\'s NOT obvious from the code or git diff:\n- "We thought X but found Y" \u2014 assumptions that were wrong\n- "This API/library behaves differently than documented" \u2014 external gotchas\n- "This edge case needs handling in a future spec" \u2014 deferred work with context\n- "The approach in the spec didn\'t work because..." \u2014 spec-vs-reality gaps\n- Key decisions made during implementation that aren\'t in the spec\n\n**Do NOT capture:**\n- Files changed (that\'s the diff)\n- What you set out to do (that\'s the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n```markdown\n# Discoveries \u2014 [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n```\n\nIf nothing surprising happened, skip the discovery file entirely. No discovery is a good sign \u2014 the spec was accurate.\n\n## 1b. Update Context Documents\n\nIf `docs/context/` exists, quickly check whether this session revealed anything about:\n\n- **Production risks** \u2014 did you interact with or learn about production vs staging systems? Update `docs/context/production-map.md`\n- **Wrong assumptions** \u2014 did you assume something that turned out to be false? Update `docs/context/dangerous-assumptions.md`\n- **Key decisions** \u2014 did you make an architectural or tooling choice? Add a row to `docs/context/decision-log.md`\n- **Unwritten rules** \u2014 did you discover a convention or constraint not documented anywhere? Update `docs/context/institutional-knowledge.md`\n\nSkip this if nothing applies. Don\'t force it \u2014 only update when there\'s genuine new context.\n\n## 2. Run Validation\n\nRun the project\'s validation commands. Check CLAUDE.md or AGENTS.md for project-specific commands. Common checks:\n\n- Type-check (e.g., `tsc --noEmit`, `mypy`, `cargo check`)\n- Tests (e.g., `npm test`, `pytest`, `cargo test`)\n- Lint (e.g., `eslint`, `ruff`, `clippy`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in `docs/specs/` (scan recursively \u2014 specs may be in subdirectories like `docs/specs/<feature-name>/`):\n- All acceptance criteria met \u2014 update status to `Complete`\n- Partially done \u2014 update status to `In Progress`, note what\'s left\n\nIf working from a Feature Brief in `docs/briefs/`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. 
Push and PR (if autonomous git is enabled)\n\n**Check CLAUDE.md or AGENTS.md for "Git Autonomy" in the Behavioral Boundaries section.** If it says "STRICTLY ENFORCED" or the ALWAYS section includes "Push to feature branches immediately after every commit":\n\n1. **Push immediately.** Run `git push origin <branch>` \u2014 do not ask, do not hesitate.\n2. **Open a PR if the feature is complete.** Check the parent Feature Brief\'s decomposition table \u2014 if all specs are done, run `gh pr create` with a summary of all completed specs. Do not ask first.\n3. **If not all specs are done,** still push. The PR comes when the last spec is complete.\n\nIf CLAUDE.md or AGENTS.md does NOT have autonomous git rules (or has "ASK FIRST" for pushing), ask the user before pushing.\n\n## 6. Report\n\n```\nSession complete.\n- Spec: [spec name] \u2014 [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Pushed: [yes / no \u2014 and why not]\n- PR: [opened #N / not yet \u2014 N specs remaining]\n- Next: [what the next session should tackle]\n\nRun /clear before your next step \u2014 your artifacts are saved to files.\n```\n',
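Step 2's "fix any failures before proceeding" implies the validation commands run in order and stop at the first failure. A minimal sketch of that gate, assuming a Node project and the example commands from the skill; in practice the commands listed in CLAUDE.md or AGENTS.md would replace these.

```ts
import { execSync } from "node:child_process";

// Validation gate from Step 2: each command must exit 0 before the session wraps up.
// Commands are illustrative -- substitute the project-specific ones from CLAUDE.md / AGENTS.md.
const checks = [
  { name: "type-check", cmd: "tsc --noEmit" },
  { name: "tests", cmd: "npm test" },
  { name: "lint", cmd: "eslint ." },
];

for (const check of checks) {
  try {
    execSync(check.cmd, { stdio: "inherit" });
    console.log(`${check.name}: passing`);
  } catch {
    console.error(`${check.name}: failing -- fix before updating spec status or committing`);
    process.exit(1); // stop at the first failure, per the skill's ordering
  }
}
```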
263
266
  "joycraft-tune.md": "---\nname: joycraft-tune\ndescription: Assess and upgrade your project's AI development harness \u2014 score 7 dimensions, apply fixes, show path to Level 5\n---\n\n# Tune \u2014 Project Harness Assessment & Upgrade\n\nYou are evaluating and upgrading this project's AI development harness.\n\n## Step 1: Detect Harness State\n\nSearch the codebase for: CLAUDE.md (with meaningful content), `docs/specs/`, `docs/briefs/`, `docs/discoveries/`, `.agents/skills/`, and test configuration.\n\n## Step 2: Route\n\n- **No harness** (no CLAUDE.md or just a README): Recommend `npx joycraft init` and stop.\n- **Harness exists**: Continue to assessment.\n\n## Step 3: Assess \u2014 Score 7 Dimensions (1-5 scale)\n\nRead CLAUDE.md and explore the project. Score each with specific evidence:\n\n| Dimension | What to Check |\n|-----------|--------------|\n| Spec Quality | `docs/specs/` (scan recursively) \u2014 structured? acceptance criteria? self-contained? |\n| Spec Granularity | Can each spec be done in one session? |\n| Behavioral Boundaries | ALWAYS/ASK FIRST/NEVER sections (or equivalent rules under any heading) |\n| Skills & Hooks | `.agents/skills/` files, hooks config |\n| Documentation | `docs/` structure, templates, referenced from CLAUDE.md |\n| Knowledge Capture | `docs/discoveries/`, `docs/context/*.md` \u2014 existence AND real content |\n| Testing & Validation | Test framework, CI pipeline, validation commands in CLAUDE.md |\n\nScore 1 = absent, 3 = partially there, 5 = comprehensive. Give credit for substance over format.\n\n## Step 4: Write Assessment\n\nWrite to `docs/joycraft-assessment.md` AND display it. Include: scores table, detailed findings (evidence + gap + recommendation per dimension), and an upgrade plan (up to 5 actions ordered by impact).\n\n## Step 5: Apply Upgrades\n\nApply using three tiers \u2014 do NOT ask per-item permission:\n\n**Tier 1 (silent):** Create missing dirs, install missing skills, copy missing templates, create AGENTS.md.\n\n**Before Tier 2, ask TWO things:**\n\n1. **Git autonomy:** Cautious (ask before push/PR) or Autonomous (push + PR without asking)?\n2. **Risk interview (3-5 questions, one at a time):** What could break? What services connect to prod? Unwritten rules? Off-limits files/commands? Skip if `docs/context/` already has content.\n\nFrom answers, generate: CLAUDE.md boundary rules, deny patterns configuration, `docs/context/` documents. Also recommend a permission mode (`auto` for most; `dontAsk` + allowlist for high-risk).\n\n**Tier 2 (show diff):** Add missing CLAUDE.md sections (Boundaries, Workflow, Key Files). Draft from real codebase content. Append only \u2014 never reformat existing content.\n\n**Tier 3 (confirm first):** Rewriting existing sections, overwriting customized files, suggesting test framework installs.\n\nAfter applying, append to `docs/joycraft-history.md` and show a consolidated upgrade results table.\n\n## Step 6: Show Path to Level 5\n\nShow a tailored roadmap: Level 2-5 table, specific next steps based on actual gaps, and the Level 5 north star (spec queue, autofix, holdout scenarios, self-improving harness).\n\n**Tip:** Run `$joycraft-optimize` to audit your session's token overhead \u2014 plugins, MCP servers, and harness file sizes.\n\n## Edge Cases\n\n- **CLAUDE.md is just a README:** Treat as no harness.\n- **Non-Joycraft skills:** Acknowledge, don't replace.\n- **Rules under non-standard headings:** Give credit for substance.\n- **Previous assessment exists:** Read it first. 
If nothing to upgrade, say so.\n- **Non-Joycraft content in CLAUDE.md:** Preserve as-is. Only append.\n",
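Step 1 of the tune skill is a presence check over a handful of paths. A minimal sketch of that probe, using the paths named in the skill; the "meaningful content" test for CLAUDE.md is reduced here to a non-trivial line count, which is an assumption, not the skill's own definition of substance.

```ts
import { existsSync, readFileSync } from "node:fs";

// Step 1: probe for the harness artifacts the assessment scores later.
const paths = ["docs/specs", "docs/briefs", "docs/discoveries", ".agents/skills"];

const found: Record<string, boolean> = Object.fromEntries(
  paths.map((p) => [p, existsSync(p)] as const),
);

// "Meaningful content" approximated as more than a few lines -- an assumption;
// the skill itself judges substance by reading the file.
const claudeMd = existsSync("CLAUDE.md")
  ? readFileSync("CLAUDE.md", "utf8").split("\n").length > 5
  : false;

console.log({ "CLAUDE.md (meaningful)": claudeMd, ...found });
// Nothing found -> the skill routes to `npx joycraft init` and stops.
```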
264
267
  "joycraft-verify.md": '---\nname: joycraft-verify\ndescription: Spawn an independent verifier subagent to check an implementation against its spec -- read-only, no code edits, structured pass/fail verdict\n---\n\n# Verify Implementation Against Spec\n\nThe user wants independent verification of an implementation. Your job is to find the relevant spec, extract its acceptance criteria and test plan, then spawn a separate verifier subagent that checks each criterion and produces a structured verdict.\n\n**Why a separate subagent?** Research found that agents reliably skew positive when grading their own work. Separating the agent doing the work from the agent judging it consistently outperforms self-evaluation. The verifier gets a clean context window with no implementation bias.\n\n## Step 1: Find the Spec\n\nIf the user provided a spec path (e.g., `$joycraft-verify docs/specs/my-feature/add-widget.md`), use that path directly.\n\nIf no path was provided, scan `docs/specs/` recursively for spec files (they may be in subdirectories like `docs/specs/<feature-name>/`). Pick the most recently modified `.md` file. If `docs/specs/` doesn\'t exist or is empty, tell the user:\n\n> No specs found in `docs/specs/`. Please provide a spec path: `$joycraft-verify path/to/spec.md`\n\n## Step 2: Read and Parse the Spec\n\nRead the spec file and extract:\n\n1. **Spec name** -- from the H1 title\n2. **Acceptance Criteria** -- the checklist under the `## Acceptance Criteria` section\n3. **Test Plan** -- the table under the `## Test Plan` section, including any test commands\n4. **Constraints** -- the `## Constraints` section if present\n\nIf the spec has no Acceptance Criteria section, tell the user:\n\n> This spec doesn\'t have an Acceptance Criteria section. Verification needs criteria to check against. Add acceptance criteria to the spec and try again.\n\nIf the spec has no Test Plan section, note this but proceed -- the verifier can still check criteria by reading code and running any available project tests.\n\n## Step 3: Identify Test Commands\n\nLook for test commands in these locations (in priority order):\n\n1. The spec\'s Test Plan section (look for commands in backticks or "Type" column entries like "unit", "integration", "e2e", "build")\n2. The project\'s CLAUDE.md or AGENTS.md (look for test/build commands in the Development Workflow section)\n3. Common defaults based on the project type:\n - Node.js: `npm test` or `pnpm test --run`\n - Python: `pytest`\n - Rust: `cargo test`\n - Go: `go test ./...`\n\nBuild a list of specific commands the verifier should run.\n\n## Step 4: Spawn the Verifier Subagent\n\nSpawn a concurrent subagent thread with the following prompt. Replace the placeholders with the actual content extracted in Steps 2-3.\n\n**Important:** The subagent must be given read-only constraints. It may search the codebase, read files, and run the specified test/build commands, but it must NOT edit or create any files.\n\n```\nYou are a QA verifier. Your job is to independently verify an implementation against its spec. 
You have NO context about how the implementation was done -- you are checking it fresh.\n\nRULES -- these are hard constraints, not suggestions:\n- You may search the codebase and read any file\n- You may RUN these specific test/build commands: [TEST_COMMANDS]\n- You may NOT edit, create, or delete any files\n- You may NOT run commands that modify state (no git commit, no npm install, no file writes)\n- You may NOT install packages or access the network\n- Report what you OBSERVE, not what you expect or hope\n\nSPEC NAME: [SPEC_NAME]\n\nACCEPTANCE CRITERIA:\n[ACCEPTANCE_CRITERIA]\n\nTEST PLAN:\n[TEST_PLAN]\n\nCONSTRAINTS:\n[CONSTRAINTS_OR_NONE]\n\nYOUR TASK:\nFor each acceptance criterion, determine if it PASSES or FAILS based on evidence:\n\n1. Run the test commands listed above. Record the output.\n2. For each acceptance criterion:\n a. Check if there is a corresponding test and whether it passes\n b. If no test exists, read the relevant source files to verify the criterion is met\n c. If the criterion cannot be verified by reading code or running tests, mark it MANUAL CHECK NEEDED\n3. For criteria about build/test passing, actually run the commands and report results.\n\nOUTPUT FORMAT -- you MUST use this exact format:\n\nVERIFICATION REPORT\n\n| # | Criterion | Verdict | Evidence |\n|---|-----------|---------|----------|\n| 1 | [criterion text] | PASS/FAIL/MANUAL CHECK NEEDED | [what you observed] |\n| 2 | [criterion text] | PASS/FAIL/MANUAL CHECK NEEDED | [what you observed] |\n[continue for all criteria]\n\nSUMMARY: X/Y criteria passed. [Z failures need attention. / All criteria verified.]\n\nIf any test commands fail to run (missing dependencies, wrong command, etc.), report the error as evidence for a FAIL verdict on the relevant criterion.\n```\n\n## Step 5: Format and Present the Verdict\n\nTake the subagent\'s response and present it to the user in this format:\n\n```\n## Verification Report -- [Spec Name]\n\n| # | Criterion | Verdict | Evidence |\n|---|-----------|---------|----------|\n| 1 | ... | PASS | ... |\n| 2 | ... | FAIL | ... |\n\n**Overall: X/Y criteria passed.**\n\n[If all passed:]\nAll criteria verified. Ready to commit and open a PR.\n\n[If any failed:]\nN failures need attention. Review the evidence above and fix before proceeding.\n\n[If any MANUAL CHECK NEEDED:]\nN criteria need manual verification -- they can\'t be checked by reading code or running tests alone.\n```\n\n## Step 6: Suggest Next Steps\n\nBased on the verdict:\n\n- **All PASS:** Suggest committing and opening a PR, or running `$joycraft-session-end` to capture discoveries.\n- **Some FAIL:** List the failed criteria and suggest the user fix them, then run `$joycraft-verify` again.\n- **MANUAL CHECK NEEDED items:** Explain what needs human eyes and why automation couldn\'t verify it.\n\n**Do NOT offer to fix failures yourself.** The verifier reports; the human (or implementation agent in a separate turn) decides what to do. 
This separation is the whole point.\n\n## Edge Cases\n\n| Scenario | Behavior |\n|----------|----------|\n| Spec has no Test Plan | Warn that verification is weaker without a test plan, but proceed by checking criteria through code reading and any available project-level tests |\n| All tests pass but a criterion is not testable | Mark as MANUAL CHECK NEEDED with explanation |\n| Subagent can\'t run tests (missing deps) | Report the error as FAIL evidence |\n| No specs found and no path given | Tell user to provide a spec path or create a spec first |\n| Spec status is "Complete" | Still run verification -- "Complete" means the implementer thinks it\'s done, verification confirms |\n'
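Step 2 of the verify skill boils down to pulling the checklist out of the `## Acceptance Criteria` section of the spec. A minimal sketch of that extraction, assuming GitHub-style `- [ ]` / `- [x]` checklist syntax, which the skill implies but does not mandate.

```ts
import { readFileSync } from "node:fs";

// Step 2: extract the checklist under "## Acceptance Criteria" from a spec file.
// Checkbox syntax (- [ ] / - [x]) is an assumption about how specs are written.
export function extractAcceptanceCriteria(specPath: string): string[] {
  const text = readFileSync(specPath, "utf8");
  const start = text.indexOf("## Acceptance Criteria");
  if (start === -1) return []; // the skill tells the user to add criteria and retry
  const rest = text.slice(start);
  const nextHeading = rest.indexOf("\n## ", 1);
  const section = nextHeading === -1 ? rest : rest.slice(0, nextHeading);
  return section
    .split("\n")
    .filter((line) => /^\s*- \[[ xX]\]/.test(line))
    .map((line) => line.replace(/^\s*- \[[ xX]\]\s*/, "").trim());
}
```

Each extracted criterion then becomes one row of the verifier subagent's PASS/FAIL table.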
265
268
  };
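For orientation, the SKILLS map closed above is keyed by target file name with the full markdown body as the value, so installing a skill amounts to writing each entry to the skills directory. A purely illustrative sketch of that shape, assuming a `.agents/skills/` destination as referenced by the tune skill; this is not taken from joycraft's actual installer.

```ts
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Illustrative only: materialize a { "name.md": "markdown body" } map to disk.
// The destination mirrors the `.agents/skills/` path the tune skill checks;
// joycraft's real install logic may differ.
function writeSkills(skills: Record<string, string>, dir = ".agents/skills"): void {
  mkdirSync(dir, { recursive: true });
  for (const [fileName, body] of Object.entries(skills)) {
    writeFileSync(join(dir, fileName), body, "utf8");
  }
}
```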
@@ -269,4 +272,4 @@ export {
269
272
  TEMPLATES,
270
273
  CODEX_SKILLS
271
274
  };
272
- //# sourceMappingURL=chunk-MEPNNJIE.js.map
275
+ //# sourceMappingURL=chunk-QDRX3WM6.js.map