joycraft 0.3.0 → 0.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +9 -0
- package/dist/{chunk-IJ7SLXOI.js → chunk-LLJVCCB2.js} +42 -1
- package/dist/{chunk-IJ7SLXOI.js.map → chunk-LLJVCCB2.js.map} +1 -1
- package/dist/cli.js +2 -2
- package/dist/{init-ZFFV4NO3.js → init-R33WYEFZ.js} +13 -7
- package/dist/init-R33WYEFZ.js.map +1 -0
- package/dist/{upgrade-FHBKUDAL.js → upgrade-ZE6K64XX.js} +2 -2
- package/package.json +1 -1
- package/dist/init-ZFFV4NO3.js.map +0 -1
- package/dist/{upgrade-FHBKUDAL.js.map → upgrade-ZE6K64XX.js.map} +0 -0
package/README.md
CHANGED
@@ -59,6 +59,15 @@ npx joycraft upgrade
 
 Joycraft tracks what it installed vs. what you've customized. Unmodified files update automatically. Customized files show a diff and ask before overwriting. Use `--yes` for CI.
 
+## Git Autonomy
+
+When `/tune` runs for the first time, it asks one question: **how autonomous should git be?**
+
+- **Cautious** (default) — commits freely, asks before pushing or opening PRs. Good for learning the workflow.
+- **Autonomous** — commits, pushes to feature branches, and opens PRs without asking. Good for spec-driven development where you want full send.
+
+Either way, Joycraft generates explicit git boundaries in your CLAUDE.md: commit message format (`verb: message`), specific file staging (no `git add -A`), no secrets in commits, no force-pushing.
+
 ## How It Works with AI Agents
 
 **Claude Code** reads `CLAUDE.md` automatically and discovers skills in `.claude/skills/`. The behavioral boundaries guide every action. The skills provide structured workflows accessible via `/slash-commands`.
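The `verb: message` commit format mentioned in the added README section could be checked mechanically, for example in a commit-msg hook. A hypothetical sketch, not part of Joycraft; the exact pattern (lowercase verb, colon, space, non-empty message) is an assumption about what the format means:

```typescript
// Hypothetical validator for the "verb: message" commit format described in
// the README diff above. Illustrative only — not shipped by Joycraft.
const COMMIT_FORMAT = /^[a-z]+: \S.*/;

function validCommitMessage(msg: string): boolean {
  return COMMIT_FORMAT.test(msg);
}

console.log(validCommitMessage("add: git autonomy question to /tune")); // true
console.log(validCommitMessage("Added git autonomy question"));         // false
```

A real hook would also need to read the message from `$1` (the commit-msg file path) and exit non-zero on failure.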
package/dist/{chunk-IJ7SLXOI.js → chunk-LLJVCCB2.js}
CHANGED

@@ -639,9 +639,50 @@ These are safe, additive operations. Apply them without asking:
 - Copy missing templates to \`docs/templates/\`
 - Create AGENTS.md if it doesn't exist
 
+### Git Autonomy Preference
+
+Before applying Behavioral Boundaries to CLAUDE.md, ask the user ONE question:
+
+> How autonomous should git operations be?
+> 1. **Cautious** \u2014 commits freely, asks before pushing or opening PRs *(good for learning the workflow)*
+> 2. **Autonomous** \u2014 commits, pushes to branches, and opens PRs without asking *(good for spec-driven development)*
+
+Based on their answer, use the appropriate git rules in the Behavioral Boundaries section:
+
+**If Cautious (default):**
+\`\`\`
+### ASK FIRST
+- Pushing to remote
+- Creating or merging pull requests
+- Any destructive git operation (force-push, reset --hard, branch deletion)
+
+### NEVER
+- Push directly to main/master without approval
+- Amend commits that have been pushed
+\`\`\`
+
+**If Autonomous:**
+\`\`\`
+### ALWAYS
+- Push to feature branches after each commit
+- Open a PR when all specs in a feature are complete
+- Use descriptive branch names: feature/spec-name
+
+### ASK FIRST
+- Merging PRs to main/master
+- Any destructive git operation (force-push, reset --hard, branch deletion)
+
+### NEVER
+- Push directly to main/master (always use feature branches + PR)
+- Amend commits that have been pushed to remote
+\`\`\`
+
+This is the ONLY question asked during the upgrade flow. Everything else is batch-applied.
+
 ### Tier 2: Apply and Show Diff (do it, then report)
 These modify important files but are additive (append-only). Apply them, then show what changed so the user can review. Git is the undo button.
 - Add missing sections to CLAUDE.md (Behavioral Boundaries, Development Workflow, Getting Started with Joycraft, Key Files, Common Gotchas)
+- Use the git autonomy preference from above when generating the Behavioral Boundaries section
 - Draft section content from the actual codebase \u2014 not generic placeholders. Read the project's real rules, real commands, real structure.
 - Only append \u2014 never modify or reformat existing content
 
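The Cautious and Autonomous rule sets added in this hunk lend themselves to being modeled as data when generating the Behavioral Boundaries section. A speculative sketch; the `GitBoundaries` type, the `PRESETS` name, and the renderer are invented here for illustration and are not Joycraft's actual implementation:

```typescript
// Speculative sketch: the two git-autonomy presets from the diff above as data.
// Type and variable names are invented for illustration.
type GitBoundaries = { always: string[]; askFirst: string[]; never: string[] };

const PRESETS: Record<"cautious" | "autonomous", GitBoundaries> = {
  cautious: {
    always: [],
    askFirst: [
      "Pushing to remote",
      "Creating or merging pull requests",
      "Any destructive git operation (force-push, reset --hard, branch deletion)",
    ],
    never: [
      "Push directly to main/master without approval",
      "Amend commits that have been pushed",
    ],
  },
  autonomous: {
    always: [
      "Push to feature branches after each commit",
      "Open a PR when all specs in a feature are complete",
      "Use descriptive branch names: feature/spec-name",
    ],
    askFirst: [
      "Merging PRs to main/master",
      "Any destructive git operation (force-push, reset --hard, branch deletion)",
    ],
    never: [
      "Push directly to main/master (always use feature branches + PR)",
      "Amend commits that have been pushed to remote",
    ],
  },
};

// Render one preset as the markdown block a CLAUDE.md section would receive.
// Empty sections (e.g. ALWAYS for cautious) are omitted entirely.
function renderBoundaries(p: GitBoundaries): string {
  const section = (title: string, items: string[]) =>
    items.length ? `### ${title}\n${items.map((i) => `- ${i}`).join("\n")}` : "";
  return [section("ALWAYS", p.always), section("ASK FIRST", p.askFirst), section("NEVER", p.never)]
    .filter(Boolean)
    .join("\n\n");
}
```

Keeping the rules as data makes the single upgrade-flow question a lookup rather than two hard-coded text blocks.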
@@ -1085,4 +1126,4 @@ export {
 readVersion,
 writeVersion
 };
-//# sourceMappingURL=chunk-
+//# sourceMappingURL=chunk-LLJVCCB2.js.map
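The rename in this hunk reflects content-hashed chunk filenames: when the bundled bytes change, the hash suffix changes, so `chunk-IJ7SLXOI.js` becomes `chunk-LLJVCCB2.js` and its `sourceMappingURL` follows. A rough illustration of the idea only; this is not esbuild's or tsup's actual naming algorithm:

```typescript
import { createHash } from "node:crypto";

// Rough illustration of content-hashed chunk names (NOT the real esbuild scheme):
// the name is derived from the bundle contents, so any content change renames
// both the chunk and, with it, the sourceMappingURL comment.
function chunkName(contents: string): string {
  const digest = createHash("sha256")
    .update(contents)
    .digest("base64")
    .replace(/[^A-Z0-9]/gi, "")
    .toUpperCase()
    .slice(0, 8);
  return `chunk-${digest}.js`;
}

console.log(chunkName("bundle v1") === chunkName("bundle v1")); // true (deterministic)
console.log(chunkName("bundle v1") === chunkName("bundle v2")); // false (content changed)
```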
package/dist/{chunk-IJ7SLXOI.js.map → chunk-LLJVCCB2.js.map}
CHANGED

@@ -1 +1 @@
-
{"version":3,"sources":["../src/bundled-files.ts","../src/version.ts"],"sourcesContent":["// Bundled file contents — embedded at build time since tsup bundles everything\n// and we cannot read files from the package at runtime.\n\nexport const SKILLS: Record<string, string> = {\n 'decompose.md': `---\nname: decompose\ndescription: Break a feature brief into atomic specs — small, testable, independently executable units\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently — one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief in \\`docs/briefs/\\`. If one doesn't exist yet, tell the user:\n\n> No feature brief found. Run \\`/new-feature\\` first to interview and create one, or describe the feature now and I'll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don't need a formal brief to decompose — but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. 
Bad boundaries create specs that can't be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) — always a separate spec\n- **Pure functions / business logic** — separate from I/O\n- **UI components** — separate from data fetching\n- **API endpoints / route handlers** — separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) — can be its own spec if substantial\n- **Configuration / environment** — separate from code changes\n\nAsk yourself: \"Can this piece be committed and tested without the other pieces existing?\" If yes, it's a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is \\`verb-object\\` format (e.g., \\`add-terminal-detection\\`, \\`extract-prompt-module\\`)\n- Each description is ONE sentence — if you need two, the spec is too big\n- Dependencies reference other spec numbers — keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it's too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. \"Does this breakdown match how you think about this feature?\"\n2. \"Are there any specs that feel too big or too small?\"\n3. \"Should any of these run in parallel (separate worktrees)?\"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create \\`docs/specs/YYYY-MM-DD-spec-name.md\\`. Create the \\`docs/specs/\\` directory if it doesn't exist.\n\n**Why:** Each spec must be self-contained — a fresh Claude session should be able to execute it without reading the Feature Brief. 
Copy relevant constraints and context into each spec.\n\nUse this structure:\n\n\\`\\`\\`markdown\n# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\` (or \"standalone\")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph — what changes when this spec is done?\n\n## Why\nOne sentence — what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n\\`\\`\\`\n\nIf \\`docs/templates/ATOMIC_SPEC_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\nFill in all sections — each spec must be self-contained (no \"see the brief for context\"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature.\n\n## Step 6: Recommend Execution Strategy\n\nBased on the dependency graph:\n- **Independent specs** — \"These can run in parallel worktrees\"\n- **Sequential specs** — \"Execute these in order: 1 -> 2 -> 4\"\n- **Mixed** — \"Start specs 1 and 3 in parallel. 
After 1 completes, start 2.\"\n\nUpdate the Feature Brief's Execution Strategy section with the plan (if a brief exists).\n\n## Step 7: Hand Off\n\nTell the user:\n\\`\\`\\`\nDecomposition complete:\n- [N] atomic specs created in docs/specs/\n- [N] can run in parallel, [N] are sequential\n- Estimated total: [N] sessions\n\nTo execute:\n- Sequential: Open a session, point Claude at each spec in order\n- Parallel: Use worktrees — one spec per worktree, merge when done\n- Each session should end with /session-end to capture discoveries\n\nReady to start execution?\n\\`\\`\\`\n`,\n\n 'interview.md': `---\nname: interview\ndescription: Brainstorm freely about what you want to build — yap, explore ideas, and get a structured summary you can use later\n---\n\n# Interview — Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation — not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n\"What are you thinking about building? Just talk — I'll listen and ask questions as we go.\"\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally — don't fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does \"done\" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget — what boxes are we in?\n- **What's NOT in scope?** What's tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What's the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n\"So if I'm hearing you right, you want to [summary]. The core problem is [X], and done looks like [Y]. 
Is that right?\"\n\nLet them correct and refine. Iterate until they say \"yes, that's it.\"\n\n### 4. Write a Draft Brief\n\nCreate a draft file at \\`docs/briefs/YYYY-MM-DD-topic-draft.md\\`. Create the \\`docs/briefs/\\` directory if it doesn't exist.\n\nUse this format:\n\n\\`\\`\\`markdown\n# [Topic] — Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Status:** DRAFT\n> **Origin:** /interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described — their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What \"Done\" Looks Like\n[The user's description of success — observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren't resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n\\`\\`\\`\n\n### 5. Hand Off\n\nAfter writing the draft, tell the user:\n\n\\`\\`\\`\nDraft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md\n\nWhen you're ready to move forward:\n- /new-feature — formalize this into a full Feature Brief with specs\n- /decompose — break it directly into atomic specs if scope is clear\n- Or just keep brainstorming — run /interview again anytime\n\\`\\`\\`\n\n## Guidelines\n\n- **This is NOT /new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture — not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. 
Each creates a new dated draft.\n`,\n\n 'new-feature.md': `---\nname: new-feature\ndescription: Guided feature development — interview the user, produce a Feature Brief, then decompose into atomic specs\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk — your job is to listen, then sharpen.\n\n**Why:** A thorough interview prevents wasted implementation time. Most failed features fail because the problem wasn't understood, not because the code was wrong.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does \"done\" look like? How will a user know this works?\n- What are the hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this — aggressive scoping is key)\n- Are there edge cases or error conditions we need to handle?\n- What existing code/patterns should this follow?\n\n**Interview technique:**\n- Let the user \"yap\" — don't interrupt their flow of ideas\n- After they finish, play back your understanding: \"So if I'm hearing you right...\"\n- Ask clarifying questions that force specificity: \"When you say 'handle errors,' what should the user see?\"\n- Push toward testable statements: \"How would we verify that works?\"\n\nKeep asking until you can fill out a Feature Brief. When ready, say:\n\"I have enough context. Let me write the Feature Brief for your review.\"\n\n## Phase 2: Feature Brief\n\nWrite a Feature Brief to \\`docs/briefs/YYYY-MM-DD-feature-name.md\\`. Create the \\`docs/briefs/\\` directory if it doesn't exist.\n\n**Why:** The brief is the single source of truth for what we're building. 
It prevents scope creep and gives every spec a shared reference point.\n\nUse this structure:\n\n\\`\\`\\`markdown\n# [Feature Name] — Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel worktrees (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n\\`\\`\\`\n\nIf \\`docs/templates/FEATURE_BRIEF_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- \"Does the decomposition match how you think about this?\"\n- \"Is anything in scope that shouldn't be?\"\n- \"Are the specs small enough? Can each be described in one sentence?\"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at \\`docs/specs/YYYY-MM-DD-spec-name.md\\`. Create the \\`docs/specs/\\` directory if it doesn't exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the \"Curse of Instructions\" — no spec should require holding the entire feature in context. 
Copy relevant context into each spec.\n\nUse this structure for each spec:\n\n\\`\\`\\`markdown\n# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph — what changes when this spec is done?\n\n## Why\nOne sentence — what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n\\`\\`\\`\n\nIf \\`docs/templates/ATOMIC_SPEC_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\n## Phase 4: Hand Off for Execution\n\nTell the user:\n\\`\\`\\`\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] — [one sentence] [S/M/L]\n2. [spec-name] — [one sentence] [S/M/L]\n...\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. Read the spec\n2. Implement\n3. Run /session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n\\`\\`\\`\n\n**Why:** A fresh session for execution produces better results. 
The interview session has too much context noise — a clean session with just the spec is more focused.\n\nYou can also use \\`/decompose\\` to re-decompose a brief if the breakdown needs adjustment, or run \\`/interview\\` first for a lighter brainstorm before committing to the full workflow.\n`,\n\n 'session-end.md': `---\nname: session-end\ndescription: Wrap up a session — capture discoveries, verify, prepare for PR or next session\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises — things that weren't in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at \\`docs/discoveries/YYYY-MM-DD-topic.md\\`. Create the \\`docs/discoveries/\\` directory if it doesn't exist.\n\nOnly capture what's NOT obvious from the code or git diff:\n- \"We thought X but found Y\" — assumptions that were wrong\n- \"This API/library behaves differently than documented\" — external gotchas\n- \"This edge case needs handling in a future spec\" — deferred work with context\n- \"The approach in the spec didn't work because...\" — spec-vs-reality gaps\n- Key decisions made during implementation that aren't in the spec\n\n**Do NOT capture:**\n- Files changed (that's the diff)\n- What you set out to do (that's the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n\\`\\`\\`markdown\n# Discoveries — [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n\\`\\`\\`\n\nIf nothing surprising happened, skip the discovery file entirely. No discovery is a good sign — the spec was accurate.\n\n## 2. 
Run Validation\n\nRun the project's validation commands. Check CLAUDE.md for project-specific commands. Common checks:\n\n- Type-check (e.g., \\`tsc --noEmit\\`, \\`mypy\\`, \\`cargo check\\`)\n- Tests (e.g., \\`npm test\\`, \\`pytest\\`, \\`cargo test\\`)\n- Lint (e.g., \\`eslint\\`, \\`ruff\\`, \\`clippy\\`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in \\`docs/specs/\\`:\n- All acceptance criteria met — update status to \\`Complete\\`\n- Partially done — update status to \\`In Progress\\`, note what's left\n\nIf working from a Feature Brief in \\`docs/briefs/\\`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. Report\n\n\\`\\`\\`\nSession complete.\n- Spec: [spec name] — [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Next: [what the next session should tackle, or \"ready for PR\"]\n\\`\\`\\`\n`,\n\n 'tune.md': `---\nname: tune\ndescription: Assess and upgrade your project's AI development harness — score 7 dimensions, apply fixes, show path to Level 5\n---\n\n# Tune — Project Harness Assessment & Upgrade\n\nYou are evaluating and upgrading this project's AI development harness. Follow these steps in order.\n\n## Step 1: Detect Harness State\n\nCheck the following and note what exists:\n\n1. **CLAUDE.md** — Read it if it exists. Check whether it contains meaningful content (not just a project name or generic README).\n2. **Key directories** — Check for: \\`docs/specs/\\`, \\`docs/briefs/\\`, \\`docs/discoveries/\\`, \\`docs/templates/\\`, \\`.claude/skills/\\`\n3. **Boundary framework** — Look for \\`Always\\`, \\`Ask First\\`, and \\`Never\\` sections in CLAUDE.md (or similar behavioral constraints under any heading).\n4. 
**Skills infrastructure** — Check \\`.claude/skills/\\` for installed skill files.\n5. **Test configuration** — Look for test commands in package.json, pyproject.toml, Cargo.toml, Makefile, or CI config files.\n\n## Step 2: Route Based on State\n\n### If No Harness (no CLAUDE.md, or CLAUDE.md is just a README with no structured sections):\n\nTell the user:\n- Their project has no AI development harness\n- Recommend running \\`npx joycraft init\\` to scaffold one\n- Briefly explain what it sets up: CLAUDE.md with boundaries, spec/brief templates, skills, documentation structure\n- **Stop here** — do not run the full assessment on a bare project\n\n### If Harness Exists (CLAUDE.md has structured content — boundaries, commands, architecture, or domain rules):\n\nContinue to Step 3 for the full assessment.\n\n## Step 3: Score 7 Dimensions\n\nRead CLAUDE.md thoroughly. Explore the project structure. Score each dimension on a 1-5 scale with specific evidence.\n\n### Dimension 1: Spec Quality\n\nLook in \\`docs/specs/\\` for specification files.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No specs directory or no spec files |\n| 2 | Specs exist but are informal notes or TODOs |\n| 3 | Specs have structure (sections, some criteria) but lack consistency |\n| 4 | Specs are structured with clear acceptance criteria and constraints |\n| 5 | Atomic specs: self-contained, acceptance criteria, constraints, edge cases, affected files |\n\n**Evidence:** Number of specs found, example of best/worst, whether acceptance criteria are present.\n\n### Dimension 2: Spec Granularity\n\nCan each spec be completed in a single coding session?\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No specs |\n| 2 | Specs cover entire features or epics |\n| 3 | Specs are feature-sized (multi-session but bounded) |\n| 4 | Most specs are session-sized with clear scope |\n| 5 | All specs are atomic — one session, one concern, clear done state |\n\n### Dimension 3: Behavioral 
Boundaries\n\nRead CLAUDE.md for explicit behavioral constraints.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No CLAUDE.md or no behavioral guidance |\n| 2 | CLAUDE.md exists with general instructions but no structured boundaries |\n| 3 | Some boundaries exist but not organized as Always/Ask First/Never |\n| 4 | Always/Ask First/Never sections present with reasonable coverage |\n| 5 | Comprehensive boundaries covering code style, testing, deployment, dependencies, and dangerous operations |\n\n**Important:** Projects may have strong rules under different headings (e.g., \"Critical Rules\", \"Constraints\"). Give credit for substance over format — a project with clear, enforced rules scores higher than one with empty Always/Ask First/Never sections.\n\n### Dimension 4: Skills & Hooks\n\nLook in \\`.claude/skills/\\` for skill files. Check for hooks configuration.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No .claude/ directory |\n| 2 | .claude/ exists but empty or minimal |\n| 3 | A few skills installed, no hooks |\n| 4 | Multiple relevant skills, basic hooks |\n| 5 | Comprehensive skills covering workflow, hooks for validation |\n\n### Dimension 5: Documentation\n\nExamine \\`docs/\\` directory structure and content.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No docs/ directory |\n| 2 | docs/ exists with ad-hoc files |\n| 3 | Some structure (subdirectories) but inconsistent |\n| 4 | Structured docs/ with templates and clear organization |\n| 5 | Full structure: briefs/, specs/, templates/, architecture docs, referenced from CLAUDE.md |\n\n### Dimension 6: Knowledge Capture\n\nLook for discoveries, decisions, and session notes.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No knowledge capture mechanism |\n| 2 | Ad-hoc notes in random locations |\n| 3 | A dedicated notes or decisions directory exists |\n| 4 | Structured discoveries/decisions directory with entries |\n| 5 | Active capture: discoveries with entries, session-end 
workflow, decision log |\n\n### Dimension 7: Testing & Validation\n\nLook for test config, CI setup, and validation commands.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No test configuration |\n| 2 | Test framework installed but few/no tests |\n| 3 | Tests exist with reasonable coverage |\n| 4 | Tests + CI pipeline configured |\n| 5 | Tests + CI + validation commands in CLAUDE.md + scenario tests |\n\n## Step 4: Write Assessment\n\nWrite the assessment to \\`docs/joycraft-assessment.md\\` AND display it in the conversation. Use this format:\n\n\\`\\`\\`markdown\n# Joycraft Assessment — [Project Name]\n\n**Date:** [today's date]\n**Overall Level:** [1-5, based on average score]\n\n## Scores\n\n| Dimension | Score | Summary |\n|-----------|-------|---------|\n| Spec Quality | X/5 | [one-line summary] |\n| Spec Granularity | X/5 | [one-line summary] |\n| Behavioral Boundaries | X/5 | [one-line summary] |\n| Skills & Hooks | X/5 | [one-line summary] |\n| Documentation | X/5 | [one-line summary] |\n| Knowledge Capture | X/5 | [one-line summary] |\n| Testing & Validation | X/5 | [one-line summary] |\n\n**Average:** X.X/5\n\n## Detailed Findings\n\n### [Dimension Name] — X/5\n**Evidence:** [specific files, paths, counts found]\n**Gap:** [what's missing]\n**Recommendation:** [specific action to improve]\n\n## Upgrade Plan\n\nTo reach Level [current + 1], complete these steps:\n1. [Most impactful action] — addresses [dimension] (X -> Y)\n2. [Next action] — addresses [dimension] (X -> Y)\n[up to 5 actions, ordered by impact]\n\\`\\`\\`\n\n## Step 5: Apply Upgrades\n\nImmediately after presenting the assessment, apply upgrades using the three-tier model below. Do NOT ask for per-item permission — batch everything and show a consolidated report at the end.\n\n### Tier 1: Silent Apply (just do it)\nThese are safe, additive operations. 
Apply them without asking:\n- Create missing directories (\\`docs/specs/\\`, \\`docs/briefs/\\`, \\`docs/discoveries/\\`, \\`docs/templates/\\`)\n- Install missing skills to \\`.claude/skills/\\`\n- Copy missing templates to \\`docs/templates/\\`\n- Create AGENTS.md if it doesn't exist\n\n### Tier 2: Apply and Show Diff (do it, then report)\nThese modify important files but are additive (append-only). Apply them, then show what changed so the user can review. Git is the undo button.\n- Add missing sections to CLAUDE.md (Behavioral Boundaries, Development Workflow, Getting Started with Joycraft, Key Files, Common Gotchas)\n- Draft section content from the actual codebase — not generic placeholders. Read the project's real rules, real commands, real structure.\n- Only append — never modify or reformat existing content\n\n### Tier 3: Confirm First (ask before acting)\nThese are potentially destructive or opinionated. Ask before proceeding:\n- Rewriting or reorganizing existing CLAUDE.md sections\n- Overwriting files the user has customized\n- Suggesting test framework installation or CI setup (present as recommendations, don't auto-install)\n\n### Reading a Previous Assessment\n\nIf \\`docs/joycraft-assessment.md\\` already exists, read it first. If all recommendations have been applied, report \"nothing to upgrade\" and offer to re-assess.\n\n### After Applying\n\nAppend a history entry to \\`docs/joycraft-history.md\\` (create if needed):\n\\`\\`\\`\n| [date] | [new avg score] | [change from last] | [summary of what changed] |\n\\`\\`\\`\n\nThen display a single consolidated report:\n\n\\`\\`\\`markdown\n## Upgrade Results\n\n| Dimension | Before | After | Change |\n|------------------------|--------|-------|--------|\n| Spec Quality | X/5 | X/5 | +X |\n| ... | ... | ... | ... 
|\n\n**Previous Level:** X — **New Level:** X\n\n### What Changed\n- [list each change applied]\n\n### Remaining Gaps\n- [anything still below 3.5, with specific next action]\n\\`\\`\\`\n\nUpdate \\`docs/joycraft-assessment.md\\` with the new scores and today's date.\n\n## Step 6: Show Path to Level 5\n\nAfter the upgrade report, always show the Level 5 roadmap tailored to the project's current state:\n\n\\`\\`\\`markdown\n## Path to Level 5 — Autonomous Development\n\nYou're at Level [X]. Here's what each level looks like:\n\n| Level | You | AI | Key Skill |\n|-------|-----|-----|-----------|\n| 2 | Guide direction | Multi-file changes | AI-native tooling |\n| 3 | Review diffs | Primary developer | Code review at scale |\n| 4 | Write specs, check tests | End-to-end development | Specification writing |\n| 5 | Define what + why | Specs in, software out | Systems design |\n\n### Your Next Steps Toward Level [X+1]:\n1. [Specific action based on current gaps — e.g., \"Write your first atomic spec using /new-feature\"]\n2. [Next action — e.g., \"Add vitest and write tests for your core logic\"]\n3. [Next action — e.g., \"Use /session-end consistently to build your discoveries log\"]\n\n### What Level 5 Looks Like (Your North Star):\n- A backlog of ready specs that agents pull from and execute autonomously\n- CI failures auto-generate fix specs — no human triage for regressions\n- Multi-agent execution with parallel worktrees, one spec per agent\n- External holdout scenarios (tests the agent can't see) prevent overfitting\n- CLAUDE.md evolves from discoveries — the harness improves itself\n\n### You'll Know You're at Level 5 When:\n- You describe a feature in one sentence and walk away\n- The system produces a PR with tests, docs, and discoveries — without further input\n- Failed CI runs generate their own fix specs\n- Your harness improves without you manually editing CLAUDE.md\n\nThis is a significant journey. Most teams are at Level 2. 
Getting to Level 4 with Joycraft's workflow is achievable — Level 5 requires building validation infrastructure (scenario tests, spec queues, CI feedback loops) that goes beyond what Joycraft scaffolds today. But the harness you're building now is the foundation.\n\\`\\`\\`\n\nTailor the \"Next Steps\" section based on the project's actual gaps — don't show generic advice.\n\n## Edge Cases\n\n- **Not a git repo:** Note this. Joycraft works best in a git repo.\n- **CLAUDE.md is just a README:** Treat as \"no harness.\"\n- **Non-Joycraft skills already installed:** Acknowledge them. Do not replace — suggest additions.\n- **Monorepo:** Assess the root CLAUDE.md. Note if component-level CLAUDE.md files exist.\n- **Project has rules under non-standard headings:** Give credit. Suggest reformatting as Always/Ask First/Never but acknowledge the rules are there.\n- **Assessment file missing when upgrading:** Run the full assessment first, then offer to apply.\n- **Assessment is stale:** Warn and offer to re-assess before proceeding.\n- **All recommendations already applied:** Report \"nothing to upgrade\" and stop.\n- **User declines a recommendation:** Skip it, continue, include in \"What Was Skipped.\"\n- **CLAUDE.md does not exist at all:** Create it with recommended sections, but ask the user first.\n- **Non-Joycraft content in CLAUDE.md:** Preserve exactly as-is. Only append or merge — never remove or reformat existing content.\n`,\n\n};\n\nexport const TEMPLATES: Record<string, string> = {\n 'ATOMIC_SPEC_TEMPLATE.md': `# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\` (or \"standalone\")\n> **Status:** Draft | Ready | In Progress | Complete\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / 2-3 files / ~N lines]\n\n---\n\n## What\n\nOne paragraph. What changes when this spec is done? A developer with no context should understand the change in 15 seconds.\n\n## Why\n\nOne sentence. 
What breaks, hurts, or is missing without this? Links to the parent brief if part of a larger feature.\n\n## Acceptance Criteria\n\n- [ ] [Observable behavior — what a human would see/verify]\n- [ ] [Another observable behavior]\n- [ ] [Regression: existing behavior X still works]\n- [ ] Build passes\n- [ ] Tests pass\n\n> These are your \"done\" checkboxes. If Claude says \"done\" and these aren't all green, it's not done.\n\n## Constraints\n\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n- SHOULD: [strong preference, with rationale]\n\n> Use RFC 2119 language. 2-5 constraints is typical. Zero is a red flag — every change has boundaries.\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n| Create | \\`path/to/file.ts\\` | [brief description] |\n| Modify | \\`path/to/file.ts\\` | [what specifically changes] |\n\n## Approach\n\nHow this will be implemented. Not pseudo-code — describe the strategy, data flow, and key decisions. Name one rejected alternative and why it was rejected.\n\n_Scale to complexity: 3 sentences for a bug fix, 1 page max for a feature. If you need more than a page, this spec is too big — decompose further._\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n| [what could go wrong] | [what should happen] |\n\n> Skip for trivial changes. Required for anything touching user input, data, or external APIs.`,\n\n 'FEATURE_BRIEF_TEMPLATE.md': `# [Feature Name] — Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\n\nWhat are we building and why? 
This is the \"yap\" distilled — the full picture in 2-4 paragraphs.\n\n## User Stories\n\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n\n- NOT: [tempting but deferred]\n\n## Decomposition\n\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Agent teams (parallel teammates within phases)\n- [ ] Parallel worktrees (specs are independent)\n\n## Success Criteria\n\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]`,\n\n 'IMPLEMENTATION_PLAN_TEMPLATE.md': `# [Feature Name] — Implementation Plan\n\n> **Design Spec:** \\`docs/specs/YYYY-MM-DD-feature-name.md\\`\n> **Date:** YYYY-MM-DD\n> **Estimated Tasks:** [number]\n\n---\n\n## Prerequisites\n\n- [ ] Design spec is approved\n- [ ] Branch created (if warranted): \\`feature/feature-name\\`\n- [ ] Required context loaded: [list any docs Claude should read first]\n\n## Task 1: [Descriptive Name]\n\n**Goal:** One sentence — what is true after this task that wasn't true before.\n\n**Files:**\n- \\`path/to/file.ts\\` — [what changes]\n\n**Steps:**\n1. [Concrete action]\n2. [Next concrete action]\n\n**Verification:**\n- [ ] [How to confirm this task worked]\n\n**Commit:** \\`feat: description\\`\n\n---\n\n## Task N: Final Verification\n\n**Goal:** Confirm everything works end-to-end.\n\n**Steps:**\n1. Run full type-check\n2. Run linter\n3. Run tests\n4. 
Walk through verification checklist from design spec\n\n**Verification:**\n- [ ] All design spec verification items pass\n- [ ] No regressions in existing functionality`,\n\n 'BOUNDARY_FRAMEWORK.md': `# Boundary Framework\n\n> Add this to the TOP of your CLAUDE.md, before any project context.\n> Customize the specific rules per project, but keep the three-tier structure.\n\n---\n\n## Behavioral Boundaries\n\n### ALWAYS (do these without asking)\n- Run type-check and lint before every commit\n- Commit after completing each discrete task (atomic commits)\n- Follow patterns in existing code — match existing code style\n- Check the active implementation plan before starting work\n\n### ASK FIRST (pause and confirm before doing these)\n- Adding new dependencies\n- Modifying database schema, migrations, or data models\n- Changing authentication or authorization flows\n- Deviating from an approved implementation plan\n- Any destructive operation (deleting files, dropping tables, force-pushing)\n- Modifying CI/CD, deployment, or infrastructure configuration\n\n### NEVER (do not do these under any circumstances)\n- Push to production or main branch without explicit approval\n- Delete specs, plans, or documentation\n- Modify environment variables or secrets\n- Skip type-checking or linting to \"save time\"\n- Make changes outside the scope of the current spec/plan\n- Commit code that doesn't build\n- Remove or weaken existing tests\n- Hardcode secrets, API keys, or credentials`,\n 'examples/example-brief.md': `# Add User Notifications — Feature Brief\n\n> **Date:** 2026-03-15\n> **Project:** acme-web\n> **Status:** Specs Ready\n\n---\n\n## Vision\n\nOur users have no idea when things happen in their account. A teammate comments on their pull request, a deployment finishes, a billing threshold is hit — they find out by accident, minutes or hours later. 
This is the #1 complaint in our last user survey.\n\nWe are building a notification system that delivers real-time and batched notifications across in-app, email, and (later) Slack channels. Users will have fine-grained control over what they receive and how. When this ships, no important event goes unnoticed, and no user gets buried in noise they didn't ask for.\n\nThe system is designed to be extensible — new event types plug in without touching the notification infrastructure. We start with three event types (PR comments, deploy status, billing alerts) and prove the pattern works before expanding.\n\n## User Stories\n\n- As a developer, I want to see a notification badge in the app when someone comments on my PR so that I can respond quickly\n- As a team lead, I want to receive an email when a production deployment fails so that I can coordinate the response\n- As a billing admin, I want to get alerted when usage exceeds 80% of our plan limit so that I can upgrade before service is disrupted\n- As any user, I want to control which notifications I receive and through which channels so that I am not overwhelmed\n\n## Hard Constraints\n\n- MUST: All notifications go through a single event bus — no direct coupling between event producers and delivery channels\n- MUST: Email delivery uses the existing SendGrid integration (do not add a new email provider)\n- MUST: Respect user preferences before delivering — never send a notification the user has opted out of\n- MUST NOT: Store notification content in plaintext in the database — use the existing encryption-at-rest pattern\n- MUST NOT: Send more than 50 emails per user per day (batch if necessary)\n\n## Out of Scope\n\n- NOT: Slack/Discord integration (Phase 2)\n- NOT: Push notifications / mobile (Phase 2)\n- NOT: Notification templates with rich HTML — plain text and simple markdown only for now\n- NOT: Admin dashboard for monitoring notification delivery rates\n- NOT: Retroactive notifications for events that 
happened before the feature ships\n\n## Decomposition\n\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | add-notification-preferences-api | Create REST endpoints for users to read and update their notification preferences | None | M |\n| 2 | add-event-bus-infrastructure | Set up the internal event bus that decouples event producers from notification delivery | None | M |\n| 3 | add-notification-delivery-service | Build the service that consumes events, checks preferences, and dispatches to channels (in-app, email) | Spec 1, Spec 2 | L |\n| 4 | add-in-app-notification-ui | Add notification bell, dropdown, and badge count to the app header | Spec 3 | M |\n| 5 | add-email-batching | Implement daily digest batching for email notifications that exceed the per-user threshold | Spec 3 | S |\n\n## Execution Strategy\n\n- [x] Agent teams (parallel teammates within phases, sequential between phases)\n\n\\`\\`\\`\nPhase 1: Teammate A -> Spec 1 (preferences API), Teammate B -> Spec 2 (event bus)\nPhase 2: Teammate A -> Spec 3 (delivery service) — depends on Phase 1\nPhase 3: Teammate A -> Spec 4 (UI), Teammate B -> Spec 5 (batching) — both depend on Spec 3\n\\`\\`\\`\n\n## Success Criteria\n\n- [ ] User updates notification preferences via API, and subsequent events respect those preferences\n- [ ] A PR comment event triggers an in-app notification visible in the UI within 2 seconds\n- [ ] A deploy failure event sends an email to subscribed users via SendGrid\n- [ ] When email threshold (50/day) is exceeded, remaining notifications are batched into a daily digest\n- [ ] No regressions in existing PR, deployment, or billing features\n\n## External Scenarios\n\n| Scenario | What It Tests | Pass Criteria |\n|----------|--------------|---------------|\n| opt-out-respected | User disables email for deploy events, deploy fails | No email sent, in-app notification still appears |\n| batch-threshold | Send 
51 email-eligible events for one user in a day | 50 individual emails + 1 digest containing the overflow |\n| preference-persistence | User sets preferences, logs out, logs back in | Preferences are unchanged |\n`,\n\n 'examples/example-spec.md': `# Add Notification Preferences API — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/2026-03-15-add-user-notifications.md\\`\n> **Status:** Ready\n> **Date:** 2026-03-15\n> **Estimated scope:** 1 session / 4 files / ~250 lines\n\n---\n\n## What\n\nAdd REST API endpoints that let users read and update their notification preferences. Each user gets a preferences record with per-event-type, per-channel toggles (e.g., \"PR comments: in-app=on, email=off\"). Preferences default to all-on for new users and are stored encrypted alongside the user profile.\n\n## Why\n\nThe notification delivery service (Spec 3) needs to check preferences before dispatching. Without this API, there is no way for users to control what they receive, and we cannot build the delivery pipeline.\n\n## Acceptance Criteria\n\n- [ ] \\`GET /api/v1/notifications/preferences\\` returns the current user's preferences as JSON\n- [ ] \\`PATCH /api/v1/notifications/preferences\\` updates one or more preference fields and returns the updated record\n- [ ] New users get default preferences (all channels enabled for all event types) on first read\n- [ ] Preferences are validated — unknown event types or channels return 400\n- [ ] Preferences are stored using the existing encryption-at-rest pattern (\\`EncryptedJsonColumn\\`)\n- [ ] Endpoint requires authentication (returns 401 for unauthenticated requests)\n- [ ] Build passes\n- [ ] Tests pass (unit + integration)\n\n## Constraints\n\n- MUST: Use the existing \\`EncryptedJsonColumn\\` utility for storage — do not roll a new encryption pattern\n- MUST: Follow the existing REST controller pattern in \\`src/controllers/\\`\n- MUST NOT: Expose other users' preferences (scope queries to authenticated user only)\n- 
SHOULD: Return the full preferences object on PATCH (not just the changed fields), so the frontend can replace state without merging\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n| Create | \\`src/controllers/notification-preferences.controller.ts\\` | New controller with GET and PATCH handlers |\n| Create | \\`src/models/notification-preferences.model.ts\\` | Sequelize model with EncryptedJsonColumn for preferences blob |\n| Create | \\`src/migrations/20260315-add-notification-preferences.ts\\` | Database migration to create notification_preferences table |\n| Create | \\`tests/controllers/notification-preferences.test.ts\\` | Unit and integration tests for both endpoints |\n| Modify | \\`src/routes/index.ts\\` | Register the new controller routes |\n\n## Approach\n\nCreate a \\`NotificationPreferences\\` model backed by a single \\`notification_preferences\\` table with columns: \\`id\\`, \\`user_id\\` (unique FK), \\`preferences\\` (EncryptedJsonColumn), \\`created_at\\`, \\`updated_at\\`. The \\`preferences\\` column stores a JSON blob shaped like \\`{ \"pr_comment\": { \"in_app\": true, \"email\": true }, \"deploy_status\": { ... } }\\`.\n\nThe GET endpoint does a find-or-create: if no record exists for the user, create one with defaults and return it. The PATCH endpoint deep-merges the request body into the existing preferences, validates the result against a known schema of event types and channels, and saves.\n\n**Rejected alternative:** Storing preferences as individual rows (one per event-type-channel pair). This would make queries more complex and would require N rows per user instead of 1. 
The JSON blob approach is simpler and matches how the frontend will consume the data.\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n| PATCH with empty body \\`{}\\` | Return 200 with unchanged preferences (no-op) |\n| PATCH with unknown event type \\`{\"foo\": {\"email\": true}}\\` | Return 400 with validation error listing valid event types |\n| GET for user with no existing record | Create default preferences, return 200 |\n| Concurrent PATCH requests | Last-write-wins (optimistic, no locking) — acceptable for user preferences |\n`,\n\n};\n","import { readFileSync, writeFileSync, existsSync } from 'node:fs';\nimport { join } from 'node:path';\nimport { createHash } from 'node:crypto';\n\nconst VERSION_FILE = '.joycraft-version';\n\nexport interface VersionInfo {\n version: string;\n files: Record<string, string>;\n}\n\nexport function hashContent(content: string): string {\n return createHash('sha256').update(content).digest('hex');\n}\n\nexport function readVersion(dir: string): VersionInfo | null {\n const filePath = join(dir, VERSION_FILE);\n if (!existsSync(filePath)) return null;\n try {\n const raw = readFileSync(filePath, 'utf-8');\n const parsed = JSON.parse(raw);\n if (typeof parsed.version === 'string' && typeof parsed.files === 'object') {\n return parsed as VersionInfo;\n }\n return null;\n } catch {\n return null;\n }\n}\n\nexport function writeVersion(dir: string, version: string, files: Record<string, string>): void {\n const filePath = join(dir, VERSION_FILE);\n const data: VersionInfo = { version, files };\n writeFileSync(filePath, JSON.stringify(data, null, 2) + '\\n', 
'utf-8');\n}\n"],"mappings":";;;AAGO,IAAM,SAAiC;AAAA,EAC5C,gBAAgB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAmIhB,gBAAgB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAiGhB,kBAAkB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;A
AAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAkKlB,kBAAkB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA4ElB,WAAW;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAiRb;AAEO,IAAM,YAAoC;AAAA,EAC/C,2BAA2B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;
AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAwD3B,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA0C7B,mCAAmC;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA8CnC,yBAAyB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAgCzB,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA4E7B,4BAA4B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AA+D9B;;;ACriCA,SAAS,cAAc,eAAe,kBAAkB;AACxD,SAAS,YAAY;AACrB,SAAS,kBAAkB;AAE3B,IAAM,eAAe;AAOd,SAAS,YAAY,SAAyB;AACnD,SAAO,WAAW,QAAQ,EAAE,OAAO,OAAO,EAAE,OAAO,KAAK;AAC1D;AAEO,SAAS,YAAY,KAAiC;AAC3D,QAAM,WAAW,KAAK,KAAK,YAAY;AACvC,MAAI,CAAC,WAAW,QAAQ,EAAG,QAAO;AAClC,MAAI;AACF,UAAM,MAAM,aAAa,UAAU,OAAO;AAC1C,UAAM,SAAS,KAAK,MAAM,GAAG;AAC7B,QAAI,OAAO,OAAO,YAAY,YAAY,OAAO,OAAO,UAAU,UAAU;AAC1E,aAAO;AAAA,IACT;AACA,WAAO;AAAA,EACT,QAAQ;AACN,WAAO;AAAA,EACT;AACF;AAEO,SAAS,aAAa,KAAa,SAAiB,OAAqC;AAC9F,QAAM,WAAW,KAAK,KAAK,YAAY;AACvC,QAAM,OAAoB,EAAE,SAAS,MAAM;AAC3C,gBAAc,UAAU,
KAAK,UAAU,MAAM,MAAM,CAAC,IAAI,MAAM,OAAO;AACvE;","names":[]}
{"version":3,"sources":["../src/bundled-files.ts","../src/version.ts"],"sourcesContent":["// Bundled file contents — embedded at build time since tsup bundles everything\n// and we cannot read files from the package at runtime.\n\nexport const SKILLS: Record<string, string> = {\n 'decompose.md': `---\nname: decompose\ndescription: Break a feature brief into atomic specs — small, testable, independently executable units\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently — one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief in \\`docs/briefs/\\`. If one doesn't exist yet, tell the user:\n\n> No feature brief found. Run \\`/new-feature\\` first to interview and create one, or describe the feature now and I'll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don't need a formal brief to decompose — but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. 
Bad boundaries create specs that can't be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) — always a separate spec\n- **Pure functions / business logic** — separate from I/O\n- **UI components** — separate from data fetching\n- **API endpoints / route handlers** — separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) — can be its own spec if substantial\n- **Configuration / environment** — separate from code changes\n\nAsk yourself: \"Can this piece be committed and tested without the other pieces existing?\" If yes, it's a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is \\`verb-object\\` format (e.g., \\`add-terminal-detection\\`, \\`extract-prompt-module\\`)\n- Each description is ONE sentence — if you need two, the spec is too big\n- Dependencies reference other spec numbers — keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it's too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. \"Does this breakdown match how you think about this feature?\"\n2. \"Are there any specs that feel too big or too small?\"\n3. \"Should any of these run in parallel (separate worktrees)?\"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create \\`docs/specs/YYYY-MM-DD-spec-name.md\\`. Create the \\`docs/specs/\\` directory if it doesn't exist.\n\n**Why:** Each spec must be self-contained — a fresh Claude session should be able to execute it without reading the Feature Brief. 
Copy relevant constraints and context into each spec.\n\nUse this structure:\n\n\\`\\`\\`markdown\n# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\` (or \"standalone\")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph — what changes when this spec is done?\n\n## Why\nOne sentence — what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n\\`\\`\\`\n\nIf \\`docs/templates/ATOMIC_SPEC_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\nFill in all sections — each spec must be self-contained (no \"see the brief for context\"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature.\n\n## Step 6: Recommend Execution Strategy\n\nBased on the dependency graph:\n- **Independent specs** — \"These can run in parallel worktrees\"\n- **Sequential specs** — \"Execute these in order: 1 -> 2 -> 4\"\n- **Mixed** — \"Start specs 1 and 3 in parallel. 
After 1 completes, start 2.\"\n\nUpdate the Feature Brief's Execution Strategy section with the plan (if a brief exists).\n\n## Step 7: Hand Off\n\nTell the user:\n\\`\\`\\`\nDecomposition complete:\n- [N] atomic specs created in docs/specs/\n- [N] can run in parallel, [N] are sequential\n- Estimated total: [N] sessions\n\nTo execute:\n- Sequential: Open a session, point Claude at each spec in order\n- Parallel: Use worktrees — one spec per worktree, merge when done\n- Each session should end with /session-end to capture discoveries\n\nReady to start execution?\n\\`\\`\\`\n`,\n\n 'interview.md': `---\nname: interview\ndescription: Brainstorm freely about what you want to build — yap, explore ideas, and get a structured summary you can use later\n---\n\n# Interview — Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation — not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n\"What are you thinking about building? Just talk — I'll listen and ask questions as we go.\"\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally — don't fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does \"done\" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget — what boxes are we in?\n- **What's NOT in scope?** What's tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What's the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n\"So if I'm hearing you right, you want to [summary]. The core problem is [X], and done looks like [Y]. 
Is that right?\"\n\nLet them correct and refine. Iterate until they say \"yes, that's it.\"\n\n### 4. Write a Draft Brief\n\nCreate a draft file at \\`docs/briefs/YYYY-MM-DD-topic-draft.md\\`. Create the \\`docs/briefs/\\` directory if it doesn't exist.\n\nUse this format:\n\n\\`\\`\\`markdown\n# [Topic] — Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Status:** DRAFT\n> **Origin:** /interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described — their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What \"Done\" Looks Like\n[The user's description of success — observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren't resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n\\`\\`\\`\n\n### 5. Hand Off\n\nAfter writing the draft, tell the user:\n\n\\`\\`\\`\nDraft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md\n\nWhen you're ready to move forward:\n- /new-feature — formalize this into a full Feature Brief with specs\n- /decompose — break it directly into atomic specs if scope is clear\n- Or just keep brainstorming — run /interview again anytime\n\\`\\`\\`\n\n## Guidelines\n\n- **This is NOT /new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture — not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. 
Each creates a new dated draft.\n`,\n\n 'new-feature.md': `---\nname: new-feature\ndescription: Guided feature development — interview the user, produce a Feature Brief, then decompose into atomic specs\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk — your job is to listen, then sharpen.\n\n**Why:** A thorough interview prevents wasted implementation time. Most failed features fail because the problem wasn't understood, not because the code was wrong.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does \"done\" look like? How will a user know this works?\n- What are the hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this — aggressive scoping is key)\n- Are there edge cases or error conditions we need to handle?\n- What existing code/patterns should this follow?\n\n**Interview technique:**\n- Let the user \"yap\" — don't interrupt their flow of ideas\n- After they finish, play back your understanding: \"So if I'm hearing you right...\"\n- Ask clarifying questions that force specificity: \"When you say 'handle errors,' what should the user see?\"\n- Push toward testable statements: \"How would we verify that works?\"\n\nKeep asking until you can fill out a Feature Brief. When ready, say:\n\"I have enough context. Let me write the Feature Brief for your review.\"\n\n## Phase 2: Feature Brief\n\nWrite a Feature Brief to \\`docs/briefs/YYYY-MM-DD-feature-name.md\\`. Create the \\`docs/briefs/\\` directory if it doesn't exist.\n\n**Why:** The brief is the single source of truth for what we're building. 
It prevents scope creep and gives every spec a shared reference point.\n\nUse this structure:\n\n\\`\\`\\`markdown\n# [Feature Name] — Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel worktrees (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n\\`\\`\\`\n\nIf \\`docs/templates/FEATURE_BRIEF_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- \"Does the decomposition match how you think about this?\"\n- \"Is anything in scope that shouldn't be?\"\n- \"Are the specs small enough? Can each be described in one sentence?\"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at \\`docs/specs/YYYY-MM-DD-spec-name.md\\`. Create the \\`docs/specs/\\` directory if it doesn't exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the \"Curse of Instructions\" — no spec should require holding the entire feature in context. 
Copy relevant context into each spec.\n\nUse this structure for each spec:\n\n\\`\\`\\`markdown\n# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph — what changes when this spec is done?\n\n## Why\nOne sentence — what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n\\`\\`\\`\n\nIf \\`docs/templates/ATOMIC_SPEC_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\n## Phase 4: Hand Off for Execution\n\nTell the user:\n\\`\\`\\`\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] — [one sentence] [S/M/L]\n2. [spec-name] — [one sentence] [S/M/L]\n...\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. Read the spec\n2. Implement\n3. Run /session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n\\`\\`\\`\n\n**Why:** A fresh session for execution produces better results. 
The interview session has too much context noise — a clean session with just the spec is more focused.\n\nYou can also use \\`/decompose\\` to re-decompose a brief if the breakdown needs adjustment, or run \\`/interview\\` first for a lighter brainstorm before committing to the full workflow.\n`,\n\n 'session-end.md': `---\nname: session-end\ndescription: Wrap up a session — capture discoveries, verify, prepare for PR or next session\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises — things that weren't in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at \\`docs/discoveries/YYYY-MM-DD-topic.md\\`. Create the \\`docs/discoveries/\\` directory if it doesn't exist.\n\nOnly capture what's NOT obvious from the code or git diff:\n- \"We thought X but found Y\" — assumptions that were wrong\n- \"This API/library behaves differently than documented\" — external gotchas\n- \"This edge case needs handling in a future spec\" — deferred work with context\n- \"The approach in the spec didn't work because...\" — spec-vs-reality gaps\n- Key decisions made during implementation that aren't in the spec\n\n**Do NOT capture:**\n- Files changed (that's the diff)\n- What you set out to do (that's the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n\\`\\`\\`markdown\n# Discoveries — [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n\\`\\`\\`\n\nIf nothing surprising happened, skip the discovery file entirely. No discovery is a good sign — the spec was accurate.\n\n## 2. 
Run Validation\n\nRun the project's validation commands. Check CLAUDE.md for project-specific commands. Common checks:\n\n- Type-check (e.g., \\`tsc --noEmit\\`, \\`mypy\\`, \\`cargo check\\`)\n- Tests (e.g., \\`npm test\\`, \\`pytest\\`, \\`cargo test\\`)\n- Lint (e.g., \\`eslint\\`, \\`ruff\\`, \\`clippy\\`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in \\`docs/specs/\\`:\n- All acceptance criteria met — update status to \\`Complete\\`\n- Partially done — update status to \\`In Progress\\`, note what's left\n\nIf working from a Feature Brief in \\`docs/briefs/\\`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. Report\n\n\\`\\`\\`\nSession complete.\n- Spec: [spec name] — [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Next: [what the next session should tackle, or \"ready for PR\"]\n\\`\\`\\`\n`,\n\n 'tune.md': `---\nname: tune\ndescription: Assess and upgrade your project's AI development harness — score 7 dimensions, apply fixes, show path to Level 5\n---\n\n# Tune — Project Harness Assessment & Upgrade\n\nYou are evaluating and upgrading this project's AI development harness. Follow these steps in order.\n\n## Step 1: Detect Harness State\n\nCheck the following and note what exists:\n\n1. **CLAUDE.md** — Read it if it exists. Check whether it contains meaningful content (not just a project name or generic README).\n2. **Key directories** — Check for: \\`docs/specs/\\`, \\`docs/briefs/\\`, \\`docs/discoveries/\\`, \\`docs/templates/\\`, \\`.claude/skills/\\`\n3. **Boundary framework** — Look for \\`Always\\`, \\`Ask First\\`, and \\`Never\\` sections in CLAUDE.md (or similar behavioral constraints under any heading).\n4. 
**Skills infrastructure** — Check \\`.claude/skills/\\` for installed skill files.\n5. **Test configuration** — Look for test commands in package.json, pyproject.toml, Cargo.toml, Makefile, or CI config files.\n\n## Step 2: Route Based on State\n\n### If No Harness (no CLAUDE.md, or CLAUDE.md is just a README with no structured sections):\n\nTell the user:\n- Their project has no AI development harness\n- Recommend running \\`npx joycraft init\\` to scaffold one\n- Briefly explain what it sets up: CLAUDE.md with boundaries, spec/brief templates, skills, documentation structure\n- **Stop here** — do not run the full assessment on a bare project\n\n### If Harness Exists (CLAUDE.md has structured content — boundaries, commands, architecture, or domain rules):\n\nContinue to Step 3 for the full assessment.\n\n## Step 3: Score 7 Dimensions\n\nRead CLAUDE.md thoroughly. Explore the project structure. Score each dimension on a 1-5 scale with specific evidence.\n\n### Dimension 1: Spec Quality\n\nLook in \\`docs/specs/\\` for specification files.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No specs directory or no spec files |\n| 2 | Specs exist but are informal notes or TODOs |\n| 3 | Specs have structure (sections, some criteria) but lack consistency |\n| 4 | Specs are structured with clear acceptance criteria and constraints |\n| 5 | Atomic specs: self-contained, acceptance criteria, constraints, edge cases, affected files |\n\n**Evidence:** Number of specs found, example of best/worst, whether acceptance criteria are present.\n\n### Dimension 2: Spec Granularity\n\nCan each spec be completed in a single coding session?\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No specs |\n| 2 | Specs cover entire features or epics |\n| 3 | Specs are feature-sized (multi-session but bounded) |\n| 4 | Most specs are session-sized with clear scope |\n| 5 | All specs are atomic — one session, one concern, clear done state |\n\n### Dimension 3: Behavioral 
Boundaries\n\nRead CLAUDE.md for explicit behavioral constraints.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No CLAUDE.md or no behavioral guidance |\n| 2 | CLAUDE.md exists with general instructions but no structured boundaries |\n| 3 | Some boundaries exist but not organized as Always/Ask First/Never |\n| 4 | Always/Ask First/Never sections present with reasonable coverage |\n| 5 | Comprehensive boundaries covering code style, testing, deployment, dependencies, and dangerous operations |\n\n**Important:** Projects may have strong rules under different headings (e.g., \"Critical Rules\", \"Constraints\"). Give credit for substance over format — a project with clear, enforced rules scores higher than one with empty Always/Ask First/Never sections.\n\n### Dimension 4: Skills & Hooks\n\nLook in \\`.claude/skills/\\` for skill files. Check for hooks configuration.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No .claude/ directory |\n| 2 | .claude/ exists but empty or minimal |\n| 3 | A few skills installed, no hooks |\n| 4 | Multiple relevant skills, basic hooks |\n| 5 | Comprehensive skills covering workflow, hooks for validation |\n\n### Dimension 5: Documentation\n\nExamine \\`docs/\\` directory structure and content.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No docs/ directory |\n| 2 | docs/ exists with ad-hoc files |\n| 3 | Some structure (subdirectories) but inconsistent |\n| 4 | Structured docs/ with templates and clear organization |\n| 5 | Full structure: briefs/, specs/, templates/, architecture docs, referenced from CLAUDE.md |\n\n### Dimension 6: Knowledge Capture\n\nLook for discoveries, decisions, and session notes.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No knowledge capture mechanism |\n| 2 | Ad-hoc notes in random locations |\n| 3 | A dedicated notes or decisions directory exists |\n| 4 | Structured discoveries/decisions directory with entries |\n| 5 | Active capture: discoveries with entries, session-end 
workflow, decision log |\n\n### Dimension 7: Testing & Validation\n\nLook for test config, CI setup, and validation commands.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No test configuration |\n| 2 | Test framework installed but few/no tests |\n| 3 | Tests exist with reasonable coverage |\n| 4 | Tests + CI pipeline configured |\n| 5 | Tests + CI + validation commands in CLAUDE.md + scenario tests |\n\n## Step 4: Write Assessment\n\nWrite the assessment to \\`docs/joycraft-assessment.md\\` AND display it in the conversation. Use this format:\n\n\\`\\`\\`markdown\n# Joycraft Assessment — [Project Name]\n\n**Date:** [today's date]\n**Overall Level:** [1-5, based on average score]\n\n## Scores\n\n| Dimension | Score | Summary |\n|-----------|-------|---------|\n| Spec Quality | X/5 | [one-line summary] |\n| Spec Granularity | X/5 | [one-line summary] |\n| Behavioral Boundaries | X/5 | [one-line summary] |\n| Skills & Hooks | X/5 | [one-line summary] |\n| Documentation | X/5 | [one-line summary] |\n| Knowledge Capture | X/5 | [one-line summary] |\n| Testing & Validation | X/5 | [one-line summary] |\n\n**Average:** X.X/5\n\n## Detailed Findings\n\n### [Dimension Name] — X/5\n**Evidence:** [specific files, paths, counts found]\n**Gap:** [what's missing]\n**Recommendation:** [specific action to improve]\n\n## Upgrade Plan\n\nTo reach Level [current + 1], complete these steps:\n1. [Most impactful action] — addresses [dimension] (X -> Y)\n2. [Next action] — addresses [dimension] (X -> Y)\n[up to 5 actions, ordered by impact]\n\\`\\`\\`\n\n## Step 5: Apply Upgrades\n\nImmediately after presenting the assessment, apply upgrades using the three-tier model below. Do NOT ask for per-item permission — batch everything and show a consolidated report at the end.\n\n### Tier 1: Silent Apply (just do it)\nThese are safe, additive operations. 
Apply them without asking:\n- Create missing directories (\\`docs/specs/\\`, \\`docs/briefs/\\`, \\`docs/discoveries/\\`, \\`docs/templates/\\`)\n- Install missing skills to \\`.claude/skills/\\`\n- Copy missing templates to \\`docs/templates/\\`\n- Create AGENTS.md if it doesn't exist\n\n### Git Autonomy Preference\n\nBefore applying Behavioral Boundaries to CLAUDE.md, ask the user ONE question:\n\n> How autonomous should git operations be?\n> 1. **Cautious** — commits freely, asks before pushing or opening PRs *(good for learning the workflow)*\n> 2. **Autonomous** — commits, pushes to branches, and opens PRs without asking *(good for spec-driven development)*\n\nBased on their answer, use the appropriate git rules in the Behavioral Boundaries section:\n\n**If Cautious (default):**\n\\`\\`\\`\n### ASK FIRST\n- Pushing to remote\n- Creating or merging pull requests\n- Any destructive git operation (force-push, reset --hard, branch deletion)\n\n### NEVER\n- Push directly to main/master without approval\n- Amend commits that have been pushed\n\\`\\`\\`\n\n**If Autonomous:**\n\\`\\`\\`\n### ALWAYS\n- Push to feature branches after each commit\n- Open a PR when all specs in a feature are complete\n- Use descriptive branch names: feature/spec-name\n\n### ASK FIRST\n- Merging PRs to main/master\n- Any destructive git operation (force-push, reset --hard, branch deletion)\n\n### NEVER\n- Push directly to main/master (always use feature branches + PR)\n- Amend commits that have been pushed to remote\n\\`\\`\\`\n\nThis is the ONLY question asked during the upgrade flow. Everything else is batch-applied.\n\n### Tier 2: Apply and Show Diff (do it, then report)\nThese modify important files but are additive (append-only). Apply them, then show what changed so the user can review. 
Git is the undo button.\n- Add missing sections to CLAUDE.md (Behavioral Boundaries, Development Workflow, Getting Started with Joycraft, Key Files, Common Gotchas)\n- Use the git autonomy preference from above when generating the Behavioral Boundaries section\n- Draft section content from the actual codebase — not generic placeholders. Read the project's real rules, real commands, real structure.\n- Only append — never modify or reformat existing content\n\n### Tier 3: Confirm First (ask before acting)\nThese are potentially destructive or opinionated. Ask before proceeding:\n- Rewriting or reorganizing existing CLAUDE.md sections\n- Overwriting files the user has customized\n- Suggesting test framework installation or CI setup (present as recommendations, don't auto-install)\n\n### Reading a Previous Assessment\n\nIf \\`docs/joycraft-assessment.md\\` already exists, read it first. If all recommendations have been applied, report \"nothing to upgrade\" and offer to re-assess.\n\n### After Applying\n\nAppend a history entry to \\`docs/joycraft-history.md\\` (create if needed):\n\\`\\`\\`\n| [date] | [new avg score] | [change from last] | [summary of what changed] |\n\\`\\`\\`\n\nThen display a single consolidated report:\n\n\\`\\`\\`markdown\n## Upgrade Results\n\n| Dimension | Before | After | Change |\n|------------------------|--------|-------|--------|\n| Spec Quality | X/5 | X/5 | +X |\n| ... | ... | ... | ... |\n\n**Previous Level:** X — **New Level:** X\n\n### What Changed\n- [list each change applied]\n\n### Remaining Gaps\n- [anything still below 3.5, with specific next action]\n\\`\\`\\`\n\nUpdate \\`docs/joycraft-assessment.md\\` with the new scores and today's date.\n\n## Step 6: Show Path to Level 5\n\nAfter the upgrade report, always show the Level 5 roadmap tailored to the project's current state:\n\n\\`\\`\\`markdown\n## Path to Level 5 — Autonomous Development\n\nYou're at Level [X]. 
Here's what each level looks like:\n\n| Level | You | AI | Key Skill |\n|-------|-----|-----|-----------|\n| 2 | Guide direction | Multi-file changes | AI-native tooling |\n| 3 | Review diffs | Primary developer | Code review at scale |\n| 4 | Write specs, check tests | End-to-end development | Specification writing |\n| 5 | Define what + why | Specs in, software out | Systems design |\n\n### Your Next Steps Toward Level [X+1]:\n1. [Specific action based on current gaps — e.g., \"Write your first atomic spec using /new-feature\"]\n2. [Next action — e.g., \"Add vitest and write tests for your core logic\"]\n3. [Next action — e.g., \"Use /session-end consistently to build your discoveries log\"]\n\n### What Level 5 Looks Like (Your North Star):\n- A backlog of ready specs that agents pull from and execute autonomously\n- CI failures auto-generate fix specs — no human triage for regressions\n- Multi-agent execution with parallel worktrees, one spec per agent\n- External holdout scenarios (tests the agent can't see) prevent overfitting\n- CLAUDE.md evolves from discoveries — the harness improves itself\n\n### You'll Know You're at Level 5 When:\n- You describe a feature in one sentence and walk away\n- The system produces a PR with tests, docs, and discoveries — without further input\n- Failed CI runs generate their own fix specs\n- Your harness improves without you manually editing CLAUDE.md\n\nThis is a significant journey. Most teams are at Level 2. Getting to Level 4 with Joycraft's workflow is achievable — Level 5 requires building validation infrastructure (scenario tests, spec queues, CI feedback loops) that goes beyond what Joycraft scaffolds today. But the harness you're building now is the foundation.\n\\`\\`\\`\n\nTailor the \"Next Steps\" section based on the project's actual gaps — don't show generic advice.\n\n## Edge Cases\n\n- **Not a git repo:** Note this. 
Joycraft works best in a git repo.\n- **CLAUDE.md is just a README:** Treat as \"no harness.\"\n- **Non-Joycraft skills already installed:** Acknowledge them. Do not replace — suggest additions.\n- **Monorepo:** Assess the root CLAUDE.md. Note if component-level CLAUDE.md files exist.\n- **Project has rules under non-standard headings:** Give credit. Suggest reformatting as Always/Ask First/Never but acknowledge the rules are there.\n- **Assessment file missing when upgrading:** Run the full assessment first, then offer to apply.\n- **Assessment is stale:** Warn and offer to re-assess before proceeding.\n- **All recommendations already applied:** Report \"nothing to upgrade\" and stop.\n- **User declines a recommendation:** Skip it, continue, include in \"What Was Skipped.\"\n- **CLAUDE.md does not exist at all:** Create it with recommended sections, but ask the user first.\n- **Non-Joycraft content in CLAUDE.md:** Preserve exactly as-is. Only append or merge — never remove or reformat existing content.\n`,\n\n};\n\nexport const TEMPLATES: Record<string, string> = {\n 'ATOMIC_SPEC_TEMPLATE.md': `# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\` (or \"standalone\")\n> **Status:** Draft | Ready | In Progress | Complete\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / 2-3 files / ~N lines]\n\n---\n\n## What\n\nOne paragraph. What changes when this spec is done? A developer with no context should understand the change in 15 seconds.\n\n## Why\n\nOne sentence. What breaks, hurts, or is missing without this? Links to the parent brief if part of a larger feature.\n\n## Acceptance Criteria\n\n- [ ] [Observable behavior — what a human would see/verify]\n- [ ] [Another observable behavior]\n- [ ] [Regression: existing behavior X still works]\n- [ ] Build passes\n- [ ] Tests pass\n\n> These are your \"done\" checkboxes. 
If Claude says \"done\" and these aren't all green, it's not done.\n\n## Constraints\n\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n- SHOULD: [strong preference, with rationale]\n\n> Use RFC 2119 language. 2-5 constraints is typical. Zero is a red flag — every change has boundaries.\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n| Create | \\`path/to/file.ts\\` | [brief description] |\n| Modify | \\`path/to/file.ts\\` | [what specifically changes] |\n\n## Approach\n\nHow this will be implemented. Not pseudo-code — describe the strategy, data flow, and key decisions. Name one rejected alternative and why it was rejected.\n\n_Scale to complexity: 3 sentences for a bug fix, 1 page max for a feature. If you need more than a page, this spec is too big — decompose further._\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n| [what could go wrong] | [what should happen] |\n\n> Skip for trivial changes. Required for anything touching user input, data, or external APIs.`,\n\n 'FEATURE_BRIEF_TEMPLATE.md': `# [Feature Name] — Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\n\nWhat are we building and why? This is the \"yap\" distilled — the full picture in 2-4 paragraphs.\n\n## User Stories\n\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n\n- NOT: [tempting but deferred]\n\n## Decomposition\n\n| # | Spec Name | Description | Dependencies | Est. 
Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Agent teams (parallel teammates within phases)\n- [ ] Parallel worktrees (specs are independent)\n\n## Success Criteria\n\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]`,\n\n 'IMPLEMENTATION_PLAN_TEMPLATE.md': `# [Feature Name] — Implementation Plan\n\n> **Design Spec:** \\`docs/specs/YYYY-MM-DD-feature-name.md\\`\n> **Date:** YYYY-MM-DD\n> **Estimated Tasks:** [number]\n\n---\n\n## Prerequisites\n\n- [ ] Design spec is approved\n- [ ] Branch created (if warranted): \\`feature/feature-name\\`\n- [ ] Required context loaded: [list any docs Claude should read first]\n\n## Task 1: [Descriptive Name]\n\n**Goal:** One sentence — what is true after this task that wasn't true before.\n\n**Files:**\n- \\`path/to/file.ts\\` — [what changes]\n\n**Steps:**\n1. [Concrete action]\n2. [Next concrete action]\n\n**Verification:**\n- [ ] [How to confirm this task worked]\n\n**Commit:** \\`feat: description\\`\n\n---\n\n## Task N: Final Verification\n\n**Goal:** Confirm everything works end-to-end.\n\n**Steps:**\n1. Run full type-check\n2. Run linter\n3. Run tests\n4. 
Walk through verification checklist from design spec\n\n**Verification:**\n- [ ] All design spec verification items pass\n- [ ] No regressions in existing functionality`,\n\n 'BOUNDARY_FRAMEWORK.md': `# Boundary Framework\n\n> Add this to the TOP of your CLAUDE.md, before any project context.\n> Customize the specific rules per project, but keep the three-tier structure.\n\n---\n\n## Behavioral Boundaries\n\n### ALWAYS (do these without asking)\n- Run type-check and lint before every commit\n- Commit after completing each discrete task (atomic commits)\n- Follow patterns in existing code — match existing code style\n- Check the active implementation plan before starting work\n\n### ASK FIRST (pause and confirm before doing these)\n- Adding new dependencies\n- Modifying database schema, migrations, or data models\n- Changing authentication or authorization flows\n- Deviating from an approved implementation plan\n- Any destructive operation (deleting files, dropping tables, force-pushing)\n- Modifying CI/CD, deployment, or infrastructure configuration\n\n### NEVER (do not do these under any circumstances)\n- Push to production or main branch without explicit approval\n- Delete specs, plans, or documentation\n- Modify environment variables or secrets\n- Skip type-checking or linting to \"save time\"\n- Make changes outside the scope of the current spec/plan\n- Commit code that doesn't build\n- Remove or weaken existing tests\n- Hardcode secrets, API keys, or credentials`,\n 'examples/example-brief.md': `# Add User Notifications — Feature Brief\n\n> **Date:** 2026-03-15\n> **Project:** acme-web\n> **Status:** Specs Ready\n\n---\n\n## Vision\n\nOur users have no idea when things happen in their account. A teammate comments on their pull request, a deployment finishes, a billing threshold is hit — they find out by accident, minutes or hours later. 
This is the #1 complaint in our last user survey.\n\nWe are building a notification system that delivers real-time and batched notifications across in-app, email, and (later) Slack channels. Users will have fine-grained control over what they receive and how. When this ships, no important event goes unnoticed, and no user gets buried in noise they didn't ask for.\n\nThe system is designed to be extensible — new event types plug in without touching the notification infrastructure. We start with three event types (PR comments, deploy status, billing alerts) and prove the pattern works before expanding.\n\n## User Stories\n\n- As a developer, I want to see a notification badge in the app when someone comments on my PR so that I can respond quickly\n- As a team lead, I want to receive an email when a production deployment fails so that I can coordinate the response\n- As a billing admin, I want to get alerted when usage exceeds 80% of our plan limit so that I can upgrade before service is disrupted\n- As any user, I want to control which notifications I receive and through which channels so that I am not overwhelmed\n\n## Hard Constraints\n\n- MUST: All notifications go through a single event bus — no direct coupling between event producers and delivery channels\n- MUST: Email delivery uses the existing SendGrid integration (do not add a new email provider)\n- MUST: Respect user preferences before delivering — never send a notification the user has opted out of\n- MUST NOT: Store notification content in plaintext in the database — use the existing encryption-at-rest pattern\n- MUST NOT: Send more than 50 emails per user per day (batch if necessary)\n\n## Out of Scope\n\n- NOT: Slack/Discord integration (Phase 2)\n- NOT: Push notifications / mobile (Phase 2)\n- NOT: Notification templates with rich HTML — plain text and simple markdown only for now\n- NOT: Admin dashboard for monitoring notification delivery rates\n- NOT: Retroactive notifications for events that 
happened before the feature ships\n\n## Decomposition\n\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | add-notification-preferences-api | Create REST endpoints for users to read and update their notification preferences | None | M |\n| 2 | add-event-bus-infrastructure | Set up the internal event bus that decouples event producers from notification delivery | None | M |\n| 3 | add-notification-delivery-service | Build the service that consumes events, checks preferences, and dispatches to channels (in-app, email) | Spec 1, Spec 2 | L |\n| 4 | add-in-app-notification-ui | Add notification bell, dropdown, and badge count to the app header | Spec 3 | M |\n| 5 | add-email-batching | Implement daily digest batching for email notifications that exceed the per-user threshold | Spec 3 | S |\n\n## Execution Strategy\n\n- [x] Agent teams (parallel teammates within phases, sequential between phases)\n\n\\`\\`\\`\nPhase 1: Teammate A -> Spec 1 (preferences API), Teammate B -> Spec 2 (event bus)\nPhase 2: Teammate A -> Spec 3 (delivery service) — depends on Phase 1\nPhase 3: Teammate A -> Spec 4 (UI), Teammate B -> Spec 5 (batching) — both depend on Spec 3\n\\`\\`\\`\n\n## Success Criteria\n\n- [ ] User updates notification preferences via API, and subsequent events respect those preferences\n- [ ] A PR comment event triggers an in-app notification visible in the UI within 2 seconds\n- [ ] A deploy failure event sends an email to subscribed users via SendGrid\n- [ ] When email threshold (50/day) is exceeded, remaining notifications are batched into a daily digest\n- [ ] No regressions in existing PR, deployment, or billing features\n\n## External Scenarios\n\n| Scenario | What It Tests | Pass Criteria |\n|----------|--------------|---------------|\n| opt-out-respected | User disables email for deploy events, deploy fails | No email sent, in-app notification still appears |\n| batch-threshold | Send 
51 email-eligible events for one user in a day | 50 individual emails + 1 digest containing the overflow |\n| preference-persistence | User sets preferences, logs out, logs back in | Preferences are unchanged |\n`,\n\n 'examples/example-spec.md': `# Add Notification Preferences API — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/2026-03-15-add-user-notifications.md\\`\n> **Status:** Ready\n> **Date:** 2026-03-15\n> **Estimated scope:** 1 session / 4 files / ~250 lines\n\n---\n\n## What\n\nAdd REST API endpoints that let users read and update their notification preferences. Each user gets a preferences record with per-event-type, per-channel toggles (e.g., \"PR comments: in-app=on, email=off\"). Preferences default to all-on for new users and are stored encrypted alongside the user profile.\n\n## Why\n\nThe notification delivery service (Spec 3) needs to check preferences before dispatching. Without this API, there is no way for users to control what they receive, and we cannot build the delivery pipeline.\n\n## Acceptance Criteria\n\n- [ ] \\`GET /api/v1/notifications/preferences\\` returns the current user's preferences as JSON\n- [ ] \\`PATCH /api/v1/notifications/preferences\\` updates one or more preference fields and returns the updated record\n- [ ] New users get default preferences (all channels enabled for all event types) on first read\n- [ ] Preferences are validated — unknown event types or channels return 400\n- [ ] Preferences are stored using the existing encryption-at-rest pattern (\\`EncryptedJsonColumn\\`)\n- [ ] Endpoint requires authentication (returns 401 for unauthenticated requests)\n- [ ] Build passes\n- [ ] Tests pass (unit + integration)\n\n## Constraints\n\n- MUST: Use the existing \\`EncryptedJsonColumn\\` utility for storage — do not roll a new encryption pattern\n- MUST: Follow the existing REST controller pattern in \\`src/controllers/\\`\n- MUST NOT: Expose other users' preferences (scope queries to authenticated user only)\n- 
SHOULD: Return the full preferences object on PATCH (not just the changed fields), so the frontend can replace state without merging\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n| Create | \\`src/controllers/notification-preferences.controller.ts\\` | New controller with GET and PATCH handlers |\n| Create | \\`src/models/notification-preferences.model.ts\\` | Sequelize model with EncryptedJsonColumn for preferences blob |\n| Create | \\`src/migrations/20260315-add-notification-preferences.ts\\` | Database migration to create notification_preferences table |\n| Create | \\`tests/controllers/notification-preferences.test.ts\\` | Unit and integration tests for both endpoints |\n| Modify | \\`src/routes/index.ts\\` | Register the new controller routes |\n\n## Approach\n\nCreate a \\`NotificationPreferences\\` model backed by a single \\`notification_preferences\\` table with columns: \\`id\\`, \\`user_id\\` (unique FK), \\`preferences\\` (EncryptedJsonColumn), \\`created_at\\`, \\`updated_at\\`. The \\`preferences\\` column stores a JSON blob shaped like \\`{ \"pr_comment\": { \"in_app\": true, \"email\": true }, \"deploy_status\": { ... } }\\`.\n\nThe GET endpoint does a find-or-create: if no record exists for the user, create one with defaults and return it. The PATCH endpoint deep-merges the request body into the existing preferences, validates the result against a known schema of event types and channels, and saves.\n\n**Rejected alternative:** Storing preferences as individual rows (one per event-type-channel pair). This would make queries more complex and would require N rows per user instead of 1. 
The JSON blob approach is simpler and matches how the frontend will consume the data.\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n| PATCH with empty body \\`{}\\` | Return 200 with unchanged preferences (no-op) |\n| PATCH with unknown event type \\`{\"foo\": {\"email\": true}}\\` | Return 400 with validation error listing valid event types |\n| GET for user with no existing record | Create default preferences, return 200 |\n| Concurrent PATCH requests | Last-write-wins (optimistic, no locking) — acceptable for user preferences |\n`,\n\n};\n","import { readFileSync, writeFileSync, existsSync } from 'node:fs';\nimport { join } from 'node:path';\nimport { createHash } from 'node:crypto';\n\nconst VERSION_FILE = '.joycraft-version';\n\nexport interface VersionInfo {\n version: string;\n files: Record<string, string>;\n}\n\nexport function hashContent(content: string): string {\n return createHash('sha256').update(content).digest('hex');\n}\n\nexport function readVersion(dir: string): VersionInfo | null {\n const filePath = join(dir, VERSION_FILE);\n if (!existsSync(filePath)) return null;\n try {\n const raw = readFileSync(filePath, 'utf-8');\n const parsed = JSON.parse(raw);\n if (typeof parsed.version === 'string' && typeof parsed.files === 'object') {\n return parsed as VersionInfo;\n }\n return null;\n } catch {\n return null;\n }\n}\n\nexport function writeVersion(dir: string, version: string, files: Record<string, string>): void {\n const filePath = join(dir, VERSION_FILE);\n const data: VersionInfo = { version, files };\n writeFileSync(filePath, JSON.stringify(data, null, 2) + '\\n', 
'utf-8');\n}\n"],"mappings":";;;AAGO,IAAM,SAAiC;AAAA,EAC5C,gBAAgB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAmIhB,gBAAgB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAiGhB,kBAAkB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;A
AAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAkKlB,kBAAkB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA4ElB,WAAW;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AA0Tb;AAEO
,IAAM,YAAoC;AAAA,EAC/C,2BAA2B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAwD3B,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA0C7B,mCAAmC;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA8CnC,yBAAyB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAgCzB,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA4E7B,4BAA4B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AA+D9B;;;AC9kCA,SAAS,cAAc,eAAe,kBAAkB;AACxD,SAAS,YAAY;AACrB,SAAS,kBAAkB;AAE3B,IAAM,eAAe;AAOd,SAAS,YAAY,SAAyB;AACnD,SAAO,WAAW,QAAQ,EAAE,OAAO,OAAO,EAAE,OAAO,KAAK;AAC1D;AAEO,SAAS,YAAY,KAAiC;AAC3D,QAAM,WAAW,KAAK,KAAK,YAAY;AACvC,MAAI,CAAC,WAAW,QAAQ,EAAG,QAAO;AAClC,MAAI;AACF,UAAM,MAAM,aAAa,UAAU,OAAO;AAC1C,UAAM,SAAS,KAAK,MAAM,GAAG;AAC7B,QAAI,OAAO,OAAO,YAAY,YAA
Y,OAAO,OAAO,UAAU,UAAU;AAC1E,aAAO;AAAA,IACT;AACA,WAAO;AAAA,EACT,QAAQ;AACN,WAAO;AAAA,EACT;AACF;AAEO,SAAS,aAAa,KAAa,SAAiB,OAAqC;AAC9F,QAAM,WAAW,KAAK,KAAK,YAAY;AACvC,QAAM,OAAoB,EAAE,SAAS,MAAM;AAC3C,gBAAc,UAAU,KAAK,UAAU,MAAM,MAAM,CAAC,IAAI,MAAM,OAAO;AACvE;","names":[]}

package/dist/cli.js
CHANGED
@@ -5,11 +5,11 @@ import { Command } from "commander";
 var program = new Command();
 program.name("joycraft").description("Scaffold and upgrade AI development harnesses").version("0.1.0");
 program.command("init").description("Scaffold the Joycraft harness into the current project").argument("[dir]", "Target directory", ".").option("--force", "Overwrite existing files").action(async (dir, opts) => {
-  const { init } = await import("./init-
+  const { init } = await import("./init-R33WYEFZ.js");
   await init(dir, { force: opts.force ?? false });
 });
 program.command("upgrade").description("Upgrade installed Joycraft templates and skills to latest").argument("[dir]", "Target directory", ".").option("--yes", "Auto-accept all updates").action(async (dir, opts) => {
-  const { upgrade } = await import("./upgrade-
+  const { upgrade } = await import("./upgrade-ZE6K64XX.js");
   await upgrade(dir, { yes: opts.yes ?? false });
 });
 program.command("check-version").description("Check if a newer version of Joycraft is available").action(async () => {

package/dist/{init-ZFFV4NO3.js → init-R33WYEFZ.js}
CHANGED
@@ -4,7 +4,7 @@ import {
   TEMPLATES,
   hashContent,
   writeVersion
-} from "./chunk-
+} from "./chunk-LLJVCCB2.js";
 
 // src/init.ts
 import { mkdirSync, existsSync as existsSync2, writeFileSync, readFileSync as readFileSync2 } from "fs";
@@ -238,20 +238,26 @@ function generateBoundariesSection() {
 
 ### ALWAYS
 - Run tests and type-check before committing
--
+- Use \`verb: concise message\` format for commits
+- Commit after completing each discrete task (atomic commits)
+- Stage specific files by name (not \`git add -A\` or \`git add .\`)
 - Follow existing code patterns and style
 
 ### ASK FIRST
+- Pushing to remote
+- Creating or merging pull requests
 - Adding new dependencies
 - Modifying database schema or data models
 - Changing authentication or authorization flows
-- Any destructive operation (
+- Any destructive git operation (force-push, reset --hard, branch deletion)
 
 ### NEVER
-- Push to main
+- Push directly to main/master without approval
+- Commit .env files, secrets, or credentials
+- Use --no-verify to skip hooks
+- Amend commits that have been pushed
 - Skip type-checking or linting
-- Commit code that doesn't build
-- Hardcode secrets, API keys, or credentials`;
+- Commit code that doesn't build`;
 }
 function generateWorkflowSection(stack) {
 return `## Development Workflow
@@ -549,4 +555,4 @@ function printSummary(result, stack) {
 export {
   init
 };
-//# sourceMappingURL=init-
+//# sourceMappingURL=init-R33WYEFZ.js.map

package/dist/init-R33WYEFZ.js.map
ADDED
@@ -0,0 +1 @@
+
{"version":3,"sources":["../src/init.ts","../src/detect.ts","../src/improve-claude-md.ts","../src/agents-md.ts"],"sourcesContent":["import { mkdirSync, existsSync, writeFileSync, readFileSync } from 'node:fs';\nimport { join, basename, resolve, dirname } from 'node:path';\nimport { detectStack } from './detect.js';\nimport { generateCLAUDEMd } from './improve-claude-md.js';\nimport { generateAgentsMd } from './agents-md.js';\nimport { SKILLS, TEMPLATES } from './bundled-files.js';\nimport { writeVersion, hashContent } from './version.js';\n\nexport interface InitOptions {\n force: boolean;\n}\n\ninterface InitResult {\n created: string[];\n skipped: string[];\n modified: string[];\n warnings: string[];\n}\n\nfunction ensureDir(dir: string): void {\n if (!existsSync(dir)) {\n mkdirSync(dir, { recursive: true });\n }\n}\n\nfunction writeFile(path: string, content: string, force: boolean, result: InitResult): void {\n if (existsSync(path) && !force) {\n result.skipped.push(path);\n return;\n }\n writeFileSync(path, content, 'utf-8');\n result.created.push(path);\n}\n\nexport async function init(dir: string, opts: InitOptions): Promise<void> {\n const targetDir = resolve(dir);\n const result: InitResult = { created: [], skipped: [], modified: [], warnings: [] };\n\n // Detect stack\n const stack = await detectStack(targetDir);\n\n // 1. Create docs/ subdirectories\n const docsDirs = ['briefs', 'specs', 'discoveries', 'contracts', 'decisions'];\n for (const sub of docsDirs) {\n ensureDir(join(targetDir, 'docs', sub));\n }\n\n // 2. Copy skill files to .claude/skills/<name>/SKILL.md\n const skillsDir = join(targetDir, '.claude', 'skills');\n for (const [filename, content] of Object.entries(SKILLS)) {\n const skillName = filename.replace(/\\.md$/, '');\n const skillDir = join(skillsDir, skillName);\n ensureDir(skillDir);\n writeFile(join(skillDir, 'SKILL.md'), content, opts.force, result);\n }\n\n // 3. 
Copy template files to docs/templates/\n const templatesDir = join(targetDir, 'docs', 'templates');\n ensureDir(templatesDir);\n for (const [filename, content] of Object.entries(TEMPLATES)) {\n ensureDir(dirname(join(templatesDir, filename)));\n writeFile(join(templatesDir, filename), content, opts.force, result);\n }\n\n // 4. Handle CLAUDE.md — only create if missing, never modify existing (unless --force)\n const claudeMdPath = join(targetDir, 'CLAUDE.md');\n if (existsSync(claudeMdPath) && !opts.force) {\n result.skipped.push(claudeMdPath);\n } else {\n const projectName = basename(targetDir);\n const content = generateCLAUDEMd(projectName, stack);\n writeFileSync(claudeMdPath, content, 'utf-8');\n result.created.push(claudeMdPath);\n }\n\n // 5. Handle AGENTS.md — only create if missing, never modify existing (unless --force)\n const agentsMdPath = join(targetDir, 'AGENTS.md');\n if (existsSync(agentsMdPath) && !opts.force) {\n result.skipped.push(agentsMdPath);\n } else {\n const projectName = basename(targetDir);\n const content = generateAgentsMd(projectName, stack);\n writeFileSync(agentsMdPath, content, 'utf-8');\n result.created.push(agentsMdPath);\n }\n\n // 6. Write .joycraft-version with hashes of all managed files\n const fileHashes: Record<string, string> = {};\n for (const [filename, content] of Object.entries(SKILLS)) {\n const skillName = filename.replace(/\\.md$/, '');\n fileHashes[join('.claude', 'skills', skillName, 'SKILL.md')] = hashContent(content);\n }\n for (const [filename, content] of Object.entries(TEMPLATES)) {\n fileHashes[join('docs', 'templates', filename)] = hashContent(content);\n }\n writeVersion(targetDir, '0.1.0', fileHashes);\n\n // 7. 
Install version check hook\n const hooksDir = join(targetDir, '.claude', 'hooks');\n ensureDir(hooksDir);\n const hookScript = `// Joycraft version check — runs on Claude Code session start\nimport { readFileSync } from 'node:fs';\nimport { join } from 'node:path';\ntry {\n const data = JSON.parse(readFileSync(join(process.cwd(), '.joycraft-version'), 'utf-8'));\n const res = await fetch('https://registry.npmjs.org/joycraft/latest', { signal: AbortSignal.timeout(3000) });\n if (res.ok) {\n const latest = (await res.json()).version;\n if (data.version !== latest) console.log('Joycraft ' + latest + ' available (you have ' + data.version + '). Run: npx joycraft upgrade');\n }\n} catch {}\n`;\n writeFile(join(hooksDir, 'joycraft-version-check.mjs'), hookScript, opts.force, result);\n\n // Update .claude/settings.json with SessionStart hook\n const settingsPath = join(targetDir, '.claude', 'settings.json');\n let settings: Record<string, unknown> = {};\n if (existsSync(settingsPath)) {\n try {\n settings = JSON.parse(readFileSync(settingsPath, 'utf-8'));\n } catch {\n // If settings.json is malformed, start fresh\n }\n }\n if (!settings.hooks) settings.hooks = {};\n const hooksConfig = settings.hooks as Record<string, unknown>;\n if (!hooksConfig.SessionStart) hooksConfig.SessionStart = [];\n const sessionStartHooks = hooksConfig.SessionStart as Array<Record<string, unknown>>;\n const hasJoycraftHook = sessionStartHooks.some(h => {\n const innerHooks = h.hooks as Array<Record<string, unknown>> | undefined;\n return innerHooks?.some(ih => typeof ih.command === 'string' && ih.command.includes('joycraft'));\n });\n if (!hasJoycraftHook) {\n sessionStartHooks.push({\n matcher: '',\n hooks: [{\n type: 'command',\n command: 'node .claude/hooks/joycraft-version-check.mjs',\n }],\n });\n writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + '\\n', 'utf-8');\n result.created.push(settingsPath);\n }\n\n // 8. 
Check .gitignore for .claude/ exclusion\n const gitignorePath = join(targetDir, '.gitignore');\n if (existsSync(gitignorePath)) {\n const gitignore = readFileSync(gitignorePath, 'utf-8');\n if (/^\\.claude\\/?$/m.test(gitignore) || /^\\.claude\\/\\*$/m.test(gitignore)) {\n result.warnings.push(\n '.claude/ is in your .gitignore — teammates won\\'t get Joycraft skills.\\n' +\n ' Add this line to .gitignore to fix: !.claude/skills/'\n );\n }\n }\n\n // 9. Print summary\n printSummary(result, stack);\n}\n\nfunction printSummary(result: InitResult, stack: import('./detect.js').StackInfo): void {\n console.log('\\nJoycraft initialized!\\n');\n\n if (stack.language !== 'unknown') {\n const fw = stack.framework ? ` + ${stack.framework}` : '';\n console.log(` Detected stack: ${stack.language}${fw} (${stack.packageManager})`);\n } else {\n console.log(' Detected stack: unknown (no recognized manifest found)');\n }\n\n if (result.created.length > 0) {\n console.log(`\\n Created ${result.created.length} file(s):`);\n for (const f of result.created) {\n console.log(` + ${f}`);\n }\n }\n\n if (result.modified.length > 0) {\n console.log(`\\n Modified ${result.modified.length} file(s):`);\n for (const f of result.modified) {\n console.log(` ~ ${f}`);\n }\n }\n\n if (result.skipped.length > 0) {\n console.log(`\\n Skipped ${result.skipped.length} file(s) (already exist, use --force to overwrite):`);\n for (const f of result.skipped) {\n console.log(` - ${f}`);\n }\n }\n\n if (result.warnings.length > 0) {\n console.log('\\n Warnings:');\n for (const w of result.warnings) {\n console.log(` ⚠ ${w}`);\n }\n }\n\n const hasExistingClaude = result.skipped.some(f => f.endsWith('CLAUDE.md'));\n\n console.log('\\n Next steps:');\n if (hasExistingClaude) {\n console.log(' 1. Run Claude Code and try /tune to assess and improve your existing CLAUDE.md');\n } else {\n console.log(' 1. Review and customize the generated CLAUDE.md for your project');\n }\n console.log(' 2. 
Try /new-feature to start building with the spec-driven workflow');\n console.log(' 3. Commit .claude/skills/ and docs/ so your team gets the same workflow');\n console.log('');\n}\n","import { readFileSync, existsSync } from 'node:fs';\nimport { join } from 'node:path';\n\nexport interface StackInfo {\n language: string;\n packageManager: string;\n commands: {\n build?: string;\n test?: string;\n lint?: string;\n typecheck?: string;\n deploy?: string;\n };\n framework?: string;\n}\n\nfunction readFile(path: string): string | null {\n try {\n return readFileSync(path, 'utf-8');\n } catch {\n return null;\n }\n}\n\nfunction detectNodeFramework(pkg: { dependencies?: Record<string, string>; devDependencies?: Record<string, string> }): string | undefined {\n const allDeps = { ...pkg.dependencies, ...pkg.devDependencies };\n if (allDeps['next']) return 'Next.js';\n if (allDeps['nuxt']) return 'Nuxt';\n if (allDeps['@remix-run/node'] || allDeps['@remix-run/react']) return 'Remix';\n if (allDeps['express']) return 'Express';\n if (allDeps['fastify']) return 'Fastify';\n if (allDeps['react']) return 'React';\n if (allDeps['vue']) return 'Vue';\n if (allDeps['svelte']) return 'Svelte';\n return undefined;\n}\n\nfunction detectNodeTestFramework(pkg: { devDependencies?: Record<string, string>; dependencies?: Record<string, string> }): string | undefined {\n const allDeps = { ...pkg.dependencies, ...pkg.devDependencies };\n if (allDeps['vitest']) return 'vitest';\n if (allDeps['jest']) return 'jest';\n if (allDeps['mocha']) return 'mocha';\n return undefined;\n}\n\nfunction detectNodePackageManager(dir: string): string {\n if (existsSync(join(dir, 'pnpm-lock.yaml'))) return 'pnpm';\n if (existsSync(join(dir, 'yarn.lock'))) return 'yarn';\n if (existsSync(join(dir, 'bun.lockb')) || existsSync(join(dir, 'bun.lock'))) return 'bun';\n return 'npm';\n}\n\nfunction detectNode(dir: string): StackInfo | null {\n const raw = readFile(join(dir, 'package.json'));\n if (raw === null) 
return null;\n\n let pkg: Record<string, unknown>;\n try {\n pkg = JSON.parse(raw);\n } catch {\n return null;\n }\n\n const pm = detectNodePackageManager(dir);\n const run = pm === 'npm' ? 'npm run' : pm;\n const scripts = (pkg.scripts ?? {}) as Record<string, string>;\n const framework = detectNodeFramework(pkg as { dependencies?: Record<string, string>; devDependencies?: Record<string, string> });\n const testFramework = detectNodeTestFramework(pkg as { devDependencies?: Record<string, string>; dependencies?: Record<string, string> });\n\n const commands: StackInfo['commands'] = {};\n if (scripts.build) commands.build = `${run} build`;\n else commands.build = `${run} build`;\n if (scripts.test) commands.test = `${run} test`;\n else if (testFramework) commands.test = `${run} test`;\n else commands.test = `${pm === 'npm' ? 'npm' : pm} test`;\n if (scripts.lint) commands.lint = `${run} lint`;\n if (scripts.typecheck) commands.typecheck = `${run} typecheck`;\n else if ((pkg.devDependencies as Record<string, string> | undefined)?.['typescript']) {\n commands.typecheck = 'tsc --noEmit';\n }\n\n return {\n language: 'node',\n packageManager: pm,\n commands,\n framework,\n };\n}\n\nfunction detectPythonFramework(content: string): string | undefined {\n if (/fastapi/i.test(content)) return 'FastAPI';\n if (/django/i.test(content)) return 'Django';\n if (/flask/i.test(content)) return 'Flask';\n return undefined;\n}\n\nfunction detectPython(dir: string): StackInfo | null {\n const pyproject = readFile(join(dir, 'pyproject.toml'));\n if (pyproject !== null) {\n const isPoetry = /\\[tool\\.poetry\\]/.test(pyproject);\n const isUv = existsSync(join(dir, 'uv.lock'));\n\n let pm: string;\n let run: string;\n if (isUv) {\n pm = 'uv';\n run = 'uv run';\n } else if (isPoetry) {\n pm = 'poetry';\n run = 'poetry run';\n } else {\n pm = 'pip';\n run = 'python -m';\n }\n\n const framework = detectPythonFramework(pyproject);\n const hasPytest = /pytest/i.test(pyproject);\n\n return 
{\n language: 'python',\n packageManager: pm,\n commands: {\n build: `${pm === 'poetry' ? 'poetry' : pm} build`,\n test: hasPytest ? `${run} pytest` : `${run} pytest`,\n lint: `${run} ruff check .`,\n },\n framework,\n };\n }\n\n const requirements = readFile(join(dir, 'requirements.txt'));\n if (requirements !== null) {\n const framework = detectPythonFramework(requirements);\n return {\n language: 'python',\n packageManager: 'pip',\n commands: {\n build: 'pip install -e .',\n test: 'python -m pytest',\n lint: 'python -m ruff check .',\n },\n framework,\n };\n }\n\n return null;\n}\n\nfunction detectRust(dir: string): StackInfo | null {\n const cargo = readFile(join(dir, 'Cargo.toml'));\n if (cargo === null) return null;\n\n let framework: string | undefined;\n if (/actix-web/.test(cargo)) framework = 'Actix';\n else if (/axum/.test(cargo)) framework = 'Axum';\n else if (/rocket/.test(cargo)) framework = 'Rocket';\n\n return {\n language: 'rust',\n packageManager: 'cargo',\n commands: {\n build: 'cargo build',\n test: 'cargo test',\n lint: 'cargo clippy',\n },\n framework,\n };\n}\n\nfunction detectGo(dir: string): StackInfo | null {\n const gomod = readFile(join(dir, 'go.mod'));\n if (gomod === null) return null;\n\n let framework: string | undefined;\n if (/github\\.com\\/gin-gonic\\/gin/.test(gomod)) framework = 'Gin';\n else if (/github\\.com\\/gofiber\\/fiber/.test(gomod)) framework = 'Fiber';\n else if (/github\\.com\\/labstack\\/echo/.test(gomod)) framework = 'Echo';\n\n return {\n language: 'go',\n packageManager: 'go',\n commands: {\n build: 'go build ./...',\n test: 'go test ./...',\n lint: 'golangci-lint run',\n },\n framework,\n };\n}\n\nfunction detectSwift(dir: string): StackInfo | null {\n const pkg = readFile(join(dir, 'Package.swift'));\n if (pkg === null) return null;\n\n return {\n language: 'swift',\n packageManager: 'swift',\n commands: {\n build: 'swift build',\n test: 'swift test',\n },\n };\n}\n\nfunction detectMakefile(dir: string): 
StackInfo | null {\n const makefile = readFile(join(dir, 'Makefile'));\n if (makefile === null) return null;\n\n const commands: StackInfo['commands'] = {};\n commands.build = 'make build';\n if (/^test:/m.test(makefile)) commands.test = 'make test';\n if (/^lint:/m.test(makefile)) commands.lint = 'make lint';\n\n return {\n language: 'unknown',\n packageManager: 'make',\n commands,\n };\n}\n\nfunction detectDockerfile(dir: string): StackInfo | null {\n if (!existsSync(join(dir, 'Dockerfile'))) return null;\n\n return {\n language: 'unknown',\n packageManager: 'docker',\n commands: {\n build: 'docker build .',\n },\n };\n}\n\nexport async function detectStack(dir: string): Promise<StackInfo> {\n const detectors = [\n detectNode,\n detectPython,\n detectRust,\n detectGo,\n detectSwift,\n detectMakefile,\n detectDockerfile,\n ];\n\n for (const detect of detectors) {\n const result = detect(dir);\n if (result) return result;\n }\n\n return { language: 'unknown', packageManager: '', commands: {} };\n}\n","import type { StackInfo } from './detect.js';\n\ninterface Section {\n header: string;\n content: string;\n}\n\nfunction parseSections(markdown: string): Section[] {\n const lines = markdown.split('\\n');\n const sections: Section[] = [];\n let currentHeader = '';\n let currentLines: string[] = [];\n\n for (const line of lines) {\n if (line.startsWith('## ')) {\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n currentHeader = line;\n currentLines = [];\n } else {\n currentLines.push(line);\n }\n }\n\n // Push the last section\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n\n return sections;\n}\n\nfunction hasSection(sections: Section[], pattern: RegExp): boolean {\n return sections.some(s => pattern.test(s.header));\n}\n\nfunction generateCommandsBlock(stack: StackInfo): string {\n const lines: 
string[] = ['```bash'];\n if (stack.commands.build) lines.push(`# Build\\n${stack.commands.build}`);\n if (stack.commands.test) lines.push(`# Test\\n${stack.commands.test}`);\n if (stack.commands.lint) lines.push(`# Lint\\n${stack.commands.lint}`);\n if (stack.commands.typecheck) lines.push(`# Type check\\n${stack.commands.typecheck}`);\n if (stack.commands.deploy) lines.push(`# Deploy\\n${stack.commands.deploy}`);\n lines.push('```');\n return lines.join('\\n');\n}\n\nfunction generateBoundariesSection(): string {\n return `## Behavioral Boundaries\n\n### ALWAYS\n- Run tests and type-check before committing\n- Use \\`verb: concise message\\` format for commits\n- Commit after completing each discrete task (atomic commits)\n- Stage specific files by name (not \\`git add -A\\` or \\`git add .\\`)\n- Follow existing code patterns and style\n\n### ASK FIRST\n- Pushing to remote\n- Creating or merging pull requests\n- Adding new dependencies\n- Modifying database schema or data models\n- Changing authentication or authorization flows\n- Any destructive git operation (force-push, reset --hard, branch deletion)\n\n### NEVER\n- Push directly to main/master without approval\n- Commit .env files, secrets, or credentials\n- Use --no-verify to skip hooks\n- Amend commits that have been pushed\n- Skip type-checking or linting\n- Commit code that doesn't build`;\n}\n\nfunction generateWorkflowSection(stack: StackInfo): string {\n return `## Development Workflow\n\n${generateCommandsBlock(stack)}`;\n}\n\nfunction generateArchitectureSection(): string {\n return `## Architecture\n\n_TODO: Add a brief description of your project's architecture and key directories._`;\n}\n\nfunction generateKeyFilesSection(): string {\n return `## Key Files\n\n| File | Purpose |\n|------|---------|\n| _TODO_ | _Add key files and their purposes_ |`;\n}\n\nfunction generateGotchasSection(): string {\n return `## Common Gotchas\n\n_TODO: Add any gotchas, quirks, or non-obvious behaviors that 
developers should know about._`;\n}\n\nfunction generateGettingStartedSection(): string {\n return `## Getting Started with Joycraft\n\nThis project uses [Joycraft](https://github.com/maksutovic/joycraft) for AI development workflow. Available skills:\n\n| Skill | Purpose |\n|-------|---------|\n| \\`/tune\\` | Assess your harness, apply upgrades, see path to Level 5 |\n| \\`/new-feature\\` | Interview -> Feature Brief -> Atomic Specs |\n| \\`/interview\\` | Lightweight brainstorm — yap about ideas, get a structured summary |\n| \\`/decompose\\` | Break a brief into small, testable specs |\n| \\`/session-end\\` | Capture discoveries, verify, commit |\n\nRun \\`/tune\\` to see where your project stands and what to improve next.`;\n}\n\nexport function improveCLAUDEMd(existing: string, stack: StackInfo): string {\n const sections = parseSections(existing);\n const additions: string[] = [];\n\n if (!hasSection(sections, /behavioral\\s*boundar/i)) {\n additions.push(generateBoundariesSection());\n }\n\n if (!hasSection(sections, /development\\s*workflow/i) && !hasSection(sections, /workflow/i)) {\n additions.push(generateWorkflowSection(stack));\n }\n\n if (!hasSection(sections, /architecture/i)) {\n additions.push(generateArchitectureSection());\n }\n\n if (!hasSection(sections, /key\\s*files/i)) {\n additions.push(generateKeyFilesSection());\n }\n\n if (!hasSection(sections, /common\\s*gotchas/i) && !hasSection(sections, /gotchas/i)) {\n additions.push(generateGotchasSection());\n }\n\n if (!hasSection(sections, /getting\\s*started.*joycraft/i) && !hasSection(sections, /joycraft.*skills/i)) {\n additions.push(generateGettingStartedSection());\n }\n\n if (additions.length === 0) {\n return existing;\n }\n\n // Append missing sections\n const trimmed = existing.trimEnd();\n return trimmed + '\\n\\n' + additions.join('\\n\\n') + '\\n';\n}\n\nexport function generateCLAUDEMd(projectName: string, stack: StackInfo): string {\n const frameworkNote = stack.framework ? 
` (${stack.framework})` : '';\n const langLabel = stack.language === 'unknown' ? '' : ` | **Stack:** ${stack.language}${frameworkNote}`;\n\n const lines: string[] = [\n `# ${projectName}`,\n '',\n `**Component:** _TODO: describe what this project is_${langLabel}`,\n '',\n '---',\n '',\n generateBoundariesSection(),\n '',\n generateWorkflowSection(stack),\n '',\n generateArchitectureSection(),\n '',\n generateKeyFilesSection(),\n '',\n generateGotchasSection(),\n '',\n generateGettingStartedSection(),\n '',\n ];\n\n return lines.join('\\n');\n}\n","import type { StackInfo } from './detect.js';\n\ninterface Section {\n header: string;\n content: string;\n}\n\nfunction parseSections(markdown: string): Section[] {\n const lines = markdown.split('\\n');\n const sections: Section[] = [];\n let currentHeader = '';\n let currentLines: string[] = [];\n\n for (const line of lines) {\n if (line.startsWith('## ')) {\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n currentHeader = line;\n currentLines = [];\n } else {\n currentLines.push(line);\n }\n }\n\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n\n return sections;\n}\n\nfunction hasSection(sections: Section[], pattern: RegExp): boolean {\n return sections.some(s => pattern.test(s.header));\n}\n\nfunction generateCommandsBlock(stack: StackInfo): string {\n const lines: string[] = ['```bash'];\n if (stack.commands.build) lines.push(stack.commands.build);\n if (stack.commands.test) lines.push(stack.commands.test);\n if (stack.commands.lint) lines.push(stack.commands.lint);\n if (stack.commands.typecheck) lines.push(stack.commands.typecheck);\n if (stack.commands.deploy) lines.push(stack.commands.deploy);\n lines.push('```');\n return lines.join('\\n');\n}\n\nfunction generateBoundariesSection(): string {\n return `## Behavioral Boundaries\n\n### ALWAYS\n- 
Run tests and type-check before committing\n- Follow existing code patterns and style\n\n### ASK FIRST\n- Adding new dependencies\n- Changing auth or data models\n- Any destructive operation\n\n### NEVER\n- Push to main without approval\n- Skip tests or type-checking\n- Hardcode secrets or credentials`;\n}\n\nfunction generateDevelopmentSection(stack: StackInfo): string {\n return `## Development\n\n${generateCommandsBlock(stack)}`;\n}\n\nfunction generateArchitectureSection(): string {\n return `## Architecture\n\n_TODO: Add a compact directory tree and one-paragraph summary._`;\n}\n\nfunction generateKeyFilesSection(): string {\n return `## Key Files\n\n| File | Purpose |\n|------|---------|\n| _TODO_ | _Add key files_ |`;\n}\n\nexport function generateAgentsMd(projectName: string, stack: StackInfo): string {\n const frameworkNote = stack.framework ? ` (${stack.framework})` : '';\n const langLabel = stack.language === 'unknown' ? '' : ` | **Stack:** ${stack.language}${frameworkNote}`;\n\n const lines: string[] = [\n `# ${projectName}`,\n '',\n `**Component:** _TODO: describe what this project is_${langLabel}`,\n '',\n '> Auto-generated by Joycraft. 
See CLAUDE.md for detailed instructions.',\n '',\n '---',\n '',\n generateBoundariesSection(),\n '',\n generateArchitectureSection(),\n '',\n generateKeyFilesSection(),\n '',\n generateDevelopmentSection(stack),\n '',\n ];\n\n return lines.join('\\n');\n}\n\nexport function improveAgentsMd(existing: string, stack: StackInfo): string {\n const sections = parseSections(existing);\n const additions: string[] = [];\n\n if (!hasSection(sections, /behavioral\\s*boundar/i)) {\n additions.push(generateBoundariesSection());\n }\n\n if (!hasSection(sections, /architecture/i)) {\n additions.push(generateArchitectureSection());\n }\n\n if (!hasSection(sections, /key\\s*files/i)) {\n additions.push(generateKeyFilesSection());\n }\n\n if (!hasSection(sections, /development/i)) {\n additions.push(generateDevelopmentSection(stack));\n }\n\n if (additions.length === 0) {\n return existing;\n }\n\n const trimmed = existing.trimEnd();\n return trimmed + '\\n\\n' + additions.join('\\n\\n') + '\\n';\n}\n"],"mappings":";;;;;;;;;AAAA,SAAS,WAAW,cAAAA,aAAY,eAAe,gBAAAC,qBAAoB;AACnE,SAAS,QAAAC,OAAM,UAAU,SAAS,eAAe;;;ACDjD,SAAS,cAAc,kBAAkB;AACzC,SAAS,YAAY;AAerB,SAAS,SAAS,MAA6B;AAC7C,MAAI;AACF,WAAO,aAAa,MAAM,OAAO;AAAA,EACnC,QAAQ;AACN,WAAO;AAAA,EACT;AACF;AAEA,SAAS,oBAAoB,KAA8G;AACzI,QAAM,UAAU,EAAE,GAAG,IAAI,cAAc,GAAG,IAAI,gBAAgB;AAC9D,MAAI,QAAQ,MAAM,EAAG,QAAO;AAC5B,MAAI,QAAQ,MAAM,EAAG,QAAO;AAC5B,MAAI,QAAQ,iBAAiB,KAAK,QAAQ,kBAAkB,EAAG,QAAO;AACtE,MAAI,QAAQ,SAAS,EAAG,QAAO;AAC/B,MAAI,QAAQ,SAAS,EAAG,QAAO;AAC/B,MAAI,QAAQ,OAAO,EAAG,QAAO;AAC7B,MAAI,QAAQ,KAAK,EAAG,QAAO;AAC3B,MAAI,QAAQ,QAAQ,EAAG,QAAO;AAC9B,SAAO;AACT;AAEA,SAAS,wBAAwB,KAA8G;AAC7I,QAAM,UAAU,EAAE,GAAG,IAAI,cAAc,GAAG,IAAI,gBAAgB;AAC9D,MAAI,QAAQ,QAAQ,EAAG,QAAO;AAC9B,MAAI,QAAQ,MAAM,EAAG,QAAO;AAC5B,MAAI,QAAQ,OAAO,EAAG,QAAO;AAC7B,SAAO;AACT;AAEA,SAAS,yBAAyB,KAAqB;AACrD,MAAI,WAAW,KAAK,KAAK,gBAAgB,CAAC,EAAG,QAAO;AACpD,MAAI,WAAW,KAAK,KAAK,WAAW,CAAC,EAAG,QAAO;AAC/C,MAAI,WAAW,KAAK,KAAK,WAAW,CAAC,KAAK,WAAW,KAAK,KAAK,UAAU,CAAC,EAAG,QAAO;AACpF,SAAO;AACT;AAE
A,SAAS,WAAW,KAA+B;AACjD,QAAM,MAAM,SAAS,KAAK,KAAK,cAAc,CAAC;AAC9C,MAAI,QAAQ,KAAM,QAAO;AAEzB,MAAI;AACJ,MAAI;AACF,UAAM,KAAK,MAAM,GAAG;AAAA,EACtB,QAAQ;AACN,WAAO;AAAA,EACT;AAEA,QAAM,KAAK,yBAAyB,GAAG;AACvC,QAAM,MAAM,OAAO,QAAQ,YAAY;AACvC,QAAM,UAAW,IAAI,WAAW,CAAC;AACjC,QAAM,YAAY,oBAAoB,GAA0F;AAChI,QAAM,gBAAgB,wBAAwB,GAA0F;AAExI,QAAM,WAAkC,CAAC;AACzC,MAAI,QAAQ,MAAO,UAAS,QAAQ,GAAG,GAAG;AAAA,MACrC,UAAS,QAAQ,GAAG,GAAG;AAC5B,MAAI,QAAQ,KAAM,UAAS,OAAO,GAAG,GAAG;AAAA,WAC/B,cAAe,UAAS,OAAO,GAAG,GAAG;AAAA,MACzC,UAAS,OAAO,GAAG,OAAO,QAAQ,QAAQ,EAAE;AACjD,MAAI,QAAQ,KAAM,UAAS,OAAO,GAAG,GAAG;AACxC,MAAI,QAAQ,UAAW,UAAS,YAAY,GAAG,GAAG;AAAA,WACxC,IAAI,kBAAyD,YAAY,GAAG;AACpF,aAAS,YAAY;AAAA,EACvB;AAEA,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB;AAAA,IACA;AAAA,EACF;AACF;AAEA,SAAS,sBAAsB,SAAqC;AAClE,MAAI,WAAW,KAAK,OAAO,EAAG,QAAO;AACrC,MAAI,UAAU,KAAK,OAAO,EAAG,QAAO;AACpC,MAAI,SAAS,KAAK,OAAO,EAAG,QAAO;AACnC,SAAO;AACT;AAEA,SAAS,aAAa,KAA+B;AACnD,QAAM,YAAY,SAAS,KAAK,KAAK,gBAAgB,CAAC;AACtD,MAAI,cAAc,MAAM;AACtB,UAAM,WAAW,mBAAmB,KAAK,SAAS;AAClD,UAAM,OAAO,WAAW,KAAK,KAAK,SAAS,CAAC;AAE5C,QAAI;AACJ,QAAI;AACJ,QAAI,MAAM;AACR,WAAK;AACL,YAAM;AAAA,IACR,WAAW,UAAU;AACnB,WAAK;AACL,YAAM;AAAA,IACR,OAAO;AACL,WAAK;AACL,YAAM;AAAA,IACR;AAEA,UAAM,YAAY,sBAAsB,SAAS;AACjD,UAAM,YAAY,UAAU,KAAK,SAAS;AAE1C,WAAO;AAAA,MACL,UAAU;AAAA,MACV,gBAAgB;AAAA,MAChB,UAAU;AAAA,QACR,OAAO,GAAG,OAAO,WAAW,WAAW,EAAE;AAAA,QACzC,MAAM,YAAY,GAAG,GAAG,YAAY,GAAG,GAAG;AAAA,QAC1C,MAAM,GAAG,GAAG;AAAA,MACd;AAAA,MACA;AAAA,IACF;AAAA,EACF;AAEA,QAAM,eAAe,SAAS,KAAK,KAAK,kBAAkB,CAAC;AAC3D,MAAI,iBAAiB,MAAM;AACzB,UAAM,YAAY,sBAAsB,YAAY;AACpD,WAAO;AAAA,MACL,UAAU;AAAA,MACV,gBAAgB;AAAA,MAChB,UAAU;AAAA,QACR,OAAO;AAAA,QACP,MAAM;AAAA,QACN,MAAM;AAAA,MACR;AAAA,MACA;AAAA,IACF;AAAA,EACF;AAEA,SAAO;AACT;AAEA,SAAS,WAAW,KAA+B;AACjD,QAAM,QAAQ,SAAS,KAAK,KAAK,YAAY,CAAC;AAC9C,MAAI,UAAU,KAAM,QAAO;AAE3B,MAAI;AACJ,MAAI,YAAY,KAAK,KAAK,EAAG,aAAY;AAAA,WAChC,OAAO,KAAK,KAAK,EAAG,aAAY;AAAA,WAChC,SAAS,KAAK,KAAK,EAAG,aAAY;AAE3C,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAA
O;AAAA,MACP,MAAM;AAAA,MACN,MAAM;AAAA,IACR;AAAA,IACA;AAAA,EACF;AACF;AAEA,SAAS,SAAS,KAA+B;AAC/C,QAAM,QAAQ,SAAS,KAAK,KAAK,QAAQ,CAAC;AAC1C,MAAI,UAAU,KAAM,QAAO;AAE3B,MAAI;AACJ,MAAI,8BAA8B,KAAK,KAAK,EAAG,aAAY;AAAA,WAClD,8BAA8B,KAAK,KAAK,EAAG,aAAY;AAAA,WACvD,8BAA8B,KAAK,KAAK,EAAG,aAAY;AAEhE,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAAO;AAAA,MACP,MAAM;AAAA,MACN,MAAM;AAAA,IACR;AAAA,IACA;AAAA,EACF;AACF;AAEA,SAAS,YAAY,KAA+B;AAClD,QAAM,MAAM,SAAS,KAAK,KAAK,eAAe,CAAC;AAC/C,MAAI,QAAQ,KAAM,QAAO;AAEzB,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAAO;AAAA,MACP,MAAM;AAAA,IACR;AAAA,EACF;AACF;AAEA,SAAS,eAAe,KAA+B;AACrD,QAAM,WAAW,SAAS,KAAK,KAAK,UAAU,CAAC;AAC/C,MAAI,aAAa,KAAM,QAAO;AAE9B,QAAM,WAAkC,CAAC;AACzC,WAAS,QAAQ;AACjB,MAAI,UAAU,KAAK,QAAQ,EAAG,UAAS,OAAO;AAC9C,MAAI,UAAU,KAAK,QAAQ,EAAG,UAAS,OAAO;AAE9C,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB;AAAA,EACF;AACF;AAEA,SAAS,iBAAiB,KAA+B;AACvD,MAAI,CAAC,WAAW,KAAK,KAAK,YAAY,CAAC,EAAG,QAAO;AAEjD,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAAO;AAAA,IACT;AAAA,EACF;AACF;AAEA,eAAsB,YAAY,KAAiC;AACjE,QAAM,YAAY;AAAA,IAChB;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF;AAEA,aAAW,UAAU,WAAW;AAC9B,UAAM,SAAS,OAAO,GAAG;AACzB,QAAI,OAAQ,QAAO;AAAA,EACrB;AAEA,SAAO,EAAE,UAAU,WAAW,gBAAgB,IAAI,UAAU,CAAC,EAAE;AACjE;;;ACpNA,SAAS,sBAAsB,OAA0B;AACvD,QAAM,QAAkB,CAAC,SAAS;AAClC,MAAI,MAAM,SAAS,MAAO,OAAM,KAAK;AAAA,EAAY,MAAM,SAAS,KAAK,EAAE;AACvE,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK;AAAA,EAAW,MAAM,SAAS,IAAI,EAAE;AACpE,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK;AAAA,EAAW,MAAM,SAAS,IAAI,EAAE;AACpE,MAAI,MAAM,SAAS,UAAW,OAAM,KAAK;AAAA,EAAiB,MAAM,SAAS,SAAS,EAAE;AACpF,MAAI,MAAM,SAAS,OAAQ,OAAM,KAAK;AAAA,EAAa,MAAM,SAAS,MAAM,EAAE;AAC1E,QAAM,KAAK,KAAK;AAChB,SAAO,MAAM,KAAK,IAAI;AACxB;AAEA,SAAS,4BAAoC;AAC3C,SAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAwBT;AAEA,SAAS,wBAAwB,OAA0B;AACzD,SAAO;AAAA;AAAA,EAEP,sBAAsB,KAAK,CAAC;AAC9B;AAEA,SAAS,8BAAsC;
AAC7C,SAAO;AAAA;AAAA;AAGT;AAEA,SAAS,0BAAkC;AACzC,SAAO;AAAA;AAAA;AAAA;AAAA;AAKT;AAEA,SAAS,yBAAiC;AACxC,SAAO;AAAA;AAAA;AAGT;AAEA,SAAS,gCAAwC;AAC/C,SAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAaT;AAuCO,SAAS,iBAAiB,aAAqB,OAA0B;AAC9E,QAAM,gBAAgB,MAAM,YAAY,KAAK,MAAM,SAAS,MAAM;AAClE,QAAM,YAAY,MAAM,aAAa,YAAY,KAAK,iBAAiB,MAAM,QAAQ,GAAG,aAAa;AAErG,QAAM,QAAkB;AAAA,IACtB,KAAK,WAAW;AAAA,IAChB;AAAA,IACA,uDAAuD,SAAS;AAAA,IAChE;AAAA,IACA;AAAA,IACA;AAAA,IACA,0BAA0B;AAAA,IAC1B;AAAA,IACA,wBAAwB,KAAK;AAAA,IAC7B;AAAA,IACA,4BAA4B;AAAA,IAC5B;AAAA,IACA,wBAAwB;AAAA,IACxB;AAAA,IACA,uBAAuB;AAAA,IACvB;AAAA,IACA,8BAA8B;AAAA,IAC9B;AAAA,EACF;AAEA,SAAO,MAAM,KAAK,IAAI;AACxB;;;AChJA,SAASC,uBAAsB,OAA0B;AACvD,QAAM,QAAkB,CAAC,SAAS;AAClC,MAAI,MAAM,SAAS,MAAO,OAAM,KAAK,MAAM,SAAS,KAAK;AACzD,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK,MAAM,SAAS,IAAI;AACvD,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK,MAAM,SAAS,IAAI;AACvD,MAAI,MAAM,SAAS,UAAW,OAAM,KAAK,MAAM,SAAS,SAAS;AACjE,MAAI,MAAM,SAAS,OAAQ,OAAM,KAAK,MAAM,SAAS,MAAM;AAC3D,QAAM,KAAK,KAAK;AAChB,SAAO,MAAM,KAAK,IAAI;AACxB;AAEA,SAASC,6BAAoC;AAC3C,SAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAeT;AAEA,SAAS,2BAA2B,OAA0B;AAC5D,SAAO;AAAA;AAAA,EAEPD,uBAAsB,KAAK,CAAC;AAC9B;AAEA,SAASE,+BAAsC;AAC7C,SAAO;AAAA;AAAA;AAGT;AAEA,SAASC,2BAAkC;AACzC,SAAO;AAAA;AAAA;AAAA;AAAA;AAKT;AAEO,SAAS,iBAAiB,aAAqB,OAA0B;AAC9E,QAAM,gBAAgB,MAAM,YAAY,KAAK,MAAM,SAAS,MAAM;AAClE,QAAM,YAAY,MAAM,aAAa,YAAY,KAAK,iBAAiB,MAAM,QAAQ,GAAG,aAAa;AAErG,QAAM,QAAkB;AAAA,IACtB,KAAK,WAAW;AAAA,IAChB;AAAA,IACA,uDAAuD,SAAS;AAAA,IAChE;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACAF,2BAA0B;AAAA,IAC1B;AAAA,IACAC,6BAA4B;AAAA,IAC5B;AAAA,IACAC,yBAAwB;AAAA,IACxB;AAAA,IACA,2BAA2B,KAAK;AAAA,IAChC;AAAA,EACF;AAEA,SAAO,MAAM,KAAK,IAAI;AACxB;;;AH1FA,SAAS,UAAU,KAAmB;AACpC,MAAI,CAACC,YAAW,GAAG,GAAG;AACpB,cAAU,KAAK,EAAE,WAAW,KAAK,CAAC;AAAA,EACpC;AACF;AAEA,SAAS,UAAU,MAAc,SAAiB,OAAgB,QAA0B;AAC1F,MAAIA,YAAW,IAAI,KAAK,CAAC,OAAO;AAC9B,WAAO,QAAQ,KAAK,IAAI;AACxB;AAAA,EACF;AACA,gBAAc,MAAM,SAAS,OAAO;AACpC,SAAO,QAAQ,K
AAK,IAAI;AAC1B;AAEA,eAAsB,KAAK,KAAa,MAAkC;AACxE,QAAM,YAAY,QAAQ,GAAG;AAC7B,QAAM,SAAqB,EAAE,SAAS,CAAC,GAAG,SAAS,CAAC,GAAG,UAAU,CAAC,GAAG,UAAU,CAAC,EAAE;AAGlF,QAAM,QAAQ,MAAM,YAAY,SAAS;AAGzC,QAAM,WAAW,CAAC,UAAU,SAAS,eAAe,aAAa,WAAW;AAC5E,aAAW,OAAO,UAAU;AAC1B,cAAUC,MAAK,WAAW,QAAQ,GAAG,CAAC;AAAA,EACxC;AAGA,QAAM,YAAYA,MAAK,WAAW,WAAW,QAAQ;AACrD,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,MAAM,GAAG;AACxD,UAAM,YAAY,SAAS,QAAQ,SAAS,EAAE;AAC9C,UAAM,WAAWA,MAAK,WAAW,SAAS;AAC1C,cAAU,QAAQ;AAClB,cAAUA,MAAK,UAAU,UAAU,GAAG,SAAS,KAAK,OAAO,MAAM;AAAA,EACnE;AAGA,QAAM,eAAeA,MAAK,WAAW,QAAQ,WAAW;AACxD,YAAU,YAAY;AACtB,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,SAAS,GAAG;AAC3D,cAAU,QAAQA,MAAK,cAAc,QAAQ,CAAC,CAAC;AAC/C,cAAUA,MAAK,cAAc,QAAQ,GAAG,SAAS,KAAK,OAAO,MAAM;AAAA,EACrE;AAGA,QAAM,eAAeA,MAAK,WAAW,WAAW;AAChD,MAAID,YAAW,YAAY,KAAK,CAAC,KAAK,OAAO;AAC3C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC,OAAO;AACL,UAAM,cAAc,SAAS,SAAS;AACtC,UAAM,UAAU,iBAAiB,aAAa,KAAK;AACnD,kBAAc,cAAc,SAAS,OAAO;AAC5C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC;AAGA,QAAM,eAAeC,MAAK,WAAW,WAAW;AAChD,MAAID,YAAW,YAAY,KAAK,CAAC,KAAK,OAAO;AAC3C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC,OAAO;AACL,UAAM,cAAc,SAAS,SAAS;AACtC,UAAM,UAAU,iBAAiB,aAAa,KAAK;AACnD,kBAAc,cAAc,SAAS,OAAO;AAC5C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC;AAGA,QAAM,aAAqC,CAAC;AAC5C,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,MAAM,GAAG;AACxD,UAAM,YAAY,SAAS,QAAQ,SAAS,EAAE;AAC9C,eAAWC,MAAK,WAAW,UAAU,WAAW,UAAU,CAAC,IAAI,YAAY,OAAO;AAAA,EACpF;AACA,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,SAAS,GAAG;AAC3D,eAAWA,MAAK,QAAQ,aAAa,QAAQ,CAAC,IAAI,YAAY,OAAO;AAAA,EACvE;AACA,eAAa,WAAW,SAAS,UAAU;AAG3C,QAAM,WAAWA,MAAK,WAAW,WAAW,OAAO;AACnD,YAAU,QAAQ;AAClB,QAAM,aAAa;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAYnB,YAAUA,MAAK,UAAU,4BAA4B,GAAG,YAAY,KAAK,OAAO,MAAM;AAGtF,QAAM,eAAeA,MAAK,WAAW,WAAW,eAAe;AAC/D,MAAI,WAAoC,CAAC;AACzC,MAAID,YAAW,YAAY,GAAG;AAC5B,QAAI;AACF,iBAAW,KAAK,MAAME,cAAa,cAAc,OAAO,CAAC;AAAA,IAC3D,QAAQ;AAAA,IAER;AAAA,EACF;AACA,MAAI,CAAC,SAAS,MAAO,UAAS,QAAQ,CAAC;AACvC,QAAM,cAAc,SAAS;AAC7B,MAAI,CAAC,YAAY,aAAc,aAAY,eAAe,CAAC;AAC3D,QAAM,oBAAoB,Y
AAY;AACtC,QAAM,kBAAkB,kBAAkB,KAAK,OAAK;AAClD,UAAM,aAAa,EAAE;AACrB,WAAO,YAAY,KAAK,QAAM,OAAO,GAAG,YAAY,YAAY,GAAG,QAAQ,SAAS,UAAU,CAAC;AAAA,EACjG,CAAC;AACD,MAAI,CAAC,iBAAiB;AACpB,sBAAkB,KAAK;AAAA,MACrB,SAAS;AAAA,MACT,OAAO,CAAC;AAAA,QACN,MAAM;AAAA,QACN,SAAS;AAAA,MACX,CAAC;AAAA,IACH,CAAC;AACD,kBAAc,cAAc,KAAK,UAAU,UAAU,MAAM,CAAC,IAAI,MAAM,OAAO;AAC7E,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC;AAGA,QAAM,gBAAgBD,MAAK,WAAW,YAAY;AAClD,MAAID,YAAW,aAAa,GAAG;AAC7B,UAAM,YAAYE,cAAa,eAAe,OAAO;AACrD,QAAI,iBAAiB,KAAK,SAAS,KAAK,kBAAkB,KAAK,SAAS,GAAG;AACzE,aAAO,SAAS;AAAA,QACd;AAAA,MAEF;AAAA,IACF;AAAA,EACF;AAGA,eAAa,QAAQ,KAAK;AAC5B;AAEA,SAAS,aAAa,QAAoB,OAA8C;AACtF,UAAQ,IAAI,2BAA2B;AAEvC,MAAI,MAAM,aAAa,WAAW;AAChC,UAAM,KAAK,MAAM,YAAY,MAAM,MAAM,SAAS,KAAK;AACvD,YAAQ,IAAI,qBAAqB,MAAM,QAAQ,GAAG,EAAE,KAAK,MAAM,cAAc,GAAG;AAAA,EAClF,OAAO;AACL,YAAQ,IAAI,0DAA0D;AAAA,EACxE;AAEA,MAAI,OAAO,QAAQ,SAAS,GAAG;AAC7B,YAAQ,IAAI;AAAA,YAAe,OAAO,QAAQ,MAAM,WAAW;AAC3D,eAAW,KAAK,OAAO,SAAS;AAC9B,cAAQ,IAAI,SAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,MAAI,OAAO,SAAS,SAAS,GAAG;AAC9B,YAAQ,IAAI;AAAA,aAAgB,OAAO,SAAS,MAAM,WAAW;AAC7D,eAAW,KAAK,OAAO,UAAU;AAC/B,cAAQ,IAAI,SAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,MAAI,OAAO,QAAQ,SAAS,GAAG;AAC7B,YAAQ,IAAI;AAAA,YAAe,OAAO,QAAQ,MAAM,qDAAqD;AACrG,eAAW,KAAK,OAAO,SAAS;AAC9B,cAAQ,IAAI,SAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,MAAI,OAAO,SAAS,SAAS,GAAG;AAC9B,YAAQ,IAAI,eAAe;AAC3B,eAAW,KAAK,OAAO,UAAU;AAC/B,cAAQ,IAAI,cAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,QAAM,oBAAoB,OAAO,QAAQ,KAAK,OAAK,EAAE,SAAS,WAAW,CAAC;AAE1E,UAAQ,IAAI,iBAAiB;AAC7B,MAAI,mBAAmB;AACrB,YAAQ,IAAI,oFAAoF;AAAA,EAClG,OAAO;AACL,YAAQ,IAAI,sEAAsE;AAAA,EACpF;AACA,UAAQ,IAAI,yEAAyE;AACrF,UAAQ,IAAI,6EAA6E;AACzF,UAAQ,IAAI,EAAE;AAChB;","names":["existsSync","readFileSync","join","generateCommandsBlock","generateBoundariesSection","generateArchitectureSection","generateKeyFilesSection","existsSync","join","readFileSync"]}
@@ -5,7 +5,7 @@ import {
  5   5 |   hashContent,
  6   6 |   readVersion,
  7   7 |   writeVersion
  8     | - } from "./chunk-
      8 | + } from "./chunk-LLJVCCB2.js";
  9   9 |
 10  10 | // src/upgrade.ts
 11  11 | import { existsSync, readFileSync, writeFileSync, mkdirSync } from "fs";
@@ -138,4 +138,4 @@ function getPackageVersion() {
138 138 | export {
139 139 |   upgrade
140 140 | };
141     | - //# sourceMappingURL=upgrade-
    141 | + //# sourceMappingURL=upgrade-ZE6K64XX.js.map
package/package.json
CHANGED
package/dist/init-ZFFV4NO3.js.map
DELETED
@@ -1 +0,0 @@
  1     | -
{"version":3,"sources":["../src/init.ts","../src/detect.ts","../src/improve-claude-md.ts","../src/agents-md.ts"],"sourcesContent":["import { mkdirSync, existsSync, writeFileSync, readFileSync } from 'node:fs';\nimport { join, basename, resolve, dirname } from 'node:path';\nimport { detectStack } from './detect.js';\nimport { generateCLAUDEMd } from './improve-claude-md.js';\nimport { generateAgentsMd } from './agents-md.js';\nimport { SKILLS, TEMPLATES } from './bundled-files.js';\nimport { writeVersion, hashContent } from './version.js';\n\nexport interface InitOptions {\n force: boolean;\n}\n\ninterface InitResult {\n created: string[];\n skipped: string[];\n modified: string[];\n warnings: string[];\n}\n\nfunction ensureDir(dir: string): void {\n if (!existsSync(dir)) {\n mkdirSync(dir, { recursive: true });\n }\n}\n\nfunction writeFile(path: string, content: string, force: boolean, result: InitResult): void {\n if (existsSync(path) && !force) {\n result.skipped.push(path);\n return;\n }\n writeFileSync(path, content, 'utf-8');\n result.created.push(path);\n}\n\nexport async function init(dir: string, opts: InitOptions): Promise<void> {\n const targetDir = resolve(dir);\n const result: InitResult = { created: [], skipped: [], modified: [], warnings: [] };\n\n // Detect stack\n const stack = await detectStack(targetDir);\n\n // 1. Create docs/ subdirectories\n const docsDirs = ['briefs', 'specs', 'discoveries', 'contracts', 'decisions'];\n for (const sub of docsDirs) {\n ensureDir(join(targetDir, 'docs', sub));\n }\n\n // 2. Copy skill files to .claude/skills/<name>/SKILL.md\n const skillsDir = join(targetDir, '.claude', 'skills');\n for (const [filename, content] of Object.entries(SKILLS)) {\n const skillName = filename.replace(/\\.md$/, '');\n const skillDir = join(skillsDir, skillName);\n ensureDir(skillDir);\n writeFile(join(skillDir, 'SKILL.md'), content, opts.force, result);\n }\n\n // 3. 
Copy template files to docs/templates/\n const templatesDir = join(targetDir, 'docs', 'templates');\n ensureDir(templatesDir);\n for (const [filename, content] of Object.entries(TEMPLATES)) {\n ensureDir(dirname(join(templatesDir, filename)));\n writeFile(join(templatesDir, filename), content, opts.force, result);\n }\n\n // 4. Handle CLAUDE.md — only create if missing, never modify existing (unless --force)\n const claudeMdPath = join(targetDir, 'CLAUDE.md');\n if (existsSync(claudeMdPath) && !opts.force) {\n result.skipped.push(claudeMdPath);\n } else {\n const projectName = basename(targetDir);\n const content = generateCLAUDEMd(projectName, stack);\n writeFileSync(claudeMdPath, content, 'utf-8');\n result.created.push(claudeMdPath);\n }\n\n // 5. Handle AGENTS.md — only create if missing, never modify existing (unless --force)\n const agentsMdPath = join(targetDir, 'AGENTS.md');\n if (existsSync(agentsMdPath) && !opts.force) {\n result.skipped.push(agentsMdPath);\n } else {\n const projectName = basename(targetDir);\n const content = generateAgentsMd(projectName, stack);\n writeFileSync(agentsMdPath, content, 'utf-8');\n result.created.push(agentsMdPath);\n }\n\n // 6. Write .joycraft-version with hashes of all managed files\n const fileHashes: Record<string, string> = {};\n for (const [filename, content] of Object.entries(SKILLS)) {\n const skillName = filename.replace(/\\.md$/, '');\n fileHashes[join('.claude', 'skills', skillName, 'SKILL.md')] = hashContent(content);\n }\n for (const [filename, content] of Object.entries(TEMPLATES)) {\n fileHashes[join('docs', 'templates', filename)] = hashContent(content);\n }\n writeVersion(targetDir, '0.1.0', fileHashes);\n\n // 7. 
Install version check hook\n const hooksDir = join(targetDir, '.claude', 'hooks');\n ensureDir(hooksDir);\n const hookScript = `// Joycraft version check — runs on Claude Code session start\nimport { readFileSync } from 'node:fs';\nimport { join } from 'node:path';\ntry {\n const data = JSON.parse(readFileSync(join(process.cwd(), '.joycraft-version'), 'utf-8'));\n const res = await fetch('https://registry.npmjs.org/joycraft/latest', { signal: AbortSignal.timeout(3000) });\n if (res.ok) {\n const latest = (await res.json()).version;\n if (data.version !== latest) console.log('Joycraft ' + latest + ' available (you have ' + data.version + '). Run: npx joycraft upgrade');\n }\n} catch {}\n`;\n writeFile(join(hooksDir, 'joycraft-version-check.mjs'), hookScript, opts.force, result);\n\n // Update .claude/settings.json with SessionStart hook\n const settingsPath = join(targetDir, '.claude', 'settings.json');\n let settings: Record<string, unknown> = {};\n if (existsSync(settingsPath)) {\n try {\n settings = JSON.parse(readFileSync(settingsPath, 'utf-8'));\n } catch {\n // If settings.json is malformed, start fresh\n }\n }\n if (!settings.hooks) settings.hooks = {};\n const hooksConfig = settings.hooks as Record<string, unknown>;\n if (!hooksConfig.SessionStart) hooksConfig.SessionStart = [];\n const sessionStartHooks = hooksConfig.SessionStart as Array<Record<string, unknown>>;\n const hasJoycraftHook = sessionStartHooks.some(h => {\n const innerHooks = h.hooks as Array<Record<string, unknown>> | undefined;\n return innerHooks?.some(ih => typeof ih.command === 'string' && ih.command.includes('joycraft'));\n });\n if (!hasJoycraftHook) {\n sessionStartHooks.push({\n matcher: '',\n hooks: [{\n type: 'command',\n command: 'node .claude/hooks/joycraft-version-check.mjs',\n }],\n });\n writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + '\\n', 'utf-8');\n result.created.push(settingsPath);\n }\n\n // 8. 
Check .gitignore for .claude/ exclusion\n const gitignorePath = join(targetDir, '.gitignore');\n if (existsSync(gitignorePath)) {\n const gitignore = readFileSync(gitignorePath, 'utf-8');\n if (/^\\.claude\\/?$/m.test(gitignore) || /^\\.claude\\/\\*$/m.test(gitignore)) {\n result.warnings.push(\n '.claude/ is in your .gitignore — teammates won\\'t get Joycraft skills.\\n' +\n ' Add this line to .gitignore to fix: !.claude/skills/'\n );\n }\n }\n\n // 9. Print summary\n printSummary(result, stack);\n}\n\nfunction printSummary(result: InitResult, stack: import('./detect.js').StackInfo): void {\n console.log('\\nJoycraft initialized!\\n');\n\n if (stack.language !== 'unknown') {\n const fw = stack.framework ? ` + ${stack.framework}` : '';\n console.log(` Detected stack: ${stack.language}${fw} (${stack.packageManager})`);\n } else {\n console.log(' Detected stack: unknown (no recognized manifest found)');\n }\n\n if (result.created.length > 0) {\n console.log(`\\n Created ${result.created.length} file(s):`);\n for (const f of result.created) {\n console.log(` + ${f}`);\n }\n }\n\n if (result.modified.length > 0) {\n console.log(`\\n Modified ${result.modified.length} file(s):`);\n for (const f of result.modified) {\n console.log(` ~ ${f}`);\n }\n }\n\n if (result.skipped.length > 0) {\n console.log(`\\n Skipped ${result.skipped.length} file(s) (already exist, use --force to overwrite):`);\n for (const f of result.skipped) {\n console.log(` - ${f}`);\n }\n }\n\n if (result.warnings.length > 0) {\n console.log('\\n Warnings:');\n for (const w of result.warnings) {\n console.log(` ⚠ ${w}`);\n }\n }\n\n const hasExistingClaude = result.skipped.some(f => f.endsWith('CLAUDE.md'));\n\n console.log('\\n Next steps:');\n if (hasExistingClaude) {\n console.log(' 1. Run Claude Code and try /tune to assess and improve your existing CLAUDE.md');\n } else {\n console.log(' 1. Review and customize the generated CLAUDE.md for your project');\n }\n console.log(' 2. 
Try /new-feature to start building with the spec-driven workflow');\n console.log(' 3. Commit .claude/skills/ and docs/ so your team gets the same workflow');\n console.log('');\n}\n","import { readFileSync, existsSync } from 'node:fs';\nimport { join } from 'node:path';\n\nexport interface StackInfo {\n language: string;\n packageManager: string;\n commands: {\n build?: string;\n test?: string;\n lint?: string;\n typecheck?: string;\n deploy?: string;\n };\n framework?: string;\n}\n\nfunction readFile(path: string): string | null {\n try {\n return readFileSync(path, 'utf-8');\n } catch {\n return null;\n }\n}\n\nfunction detectNodeFramework(pkg: { dependencies?: Record<string, string>; devDependencies?: Record<string, string> }): string | undefined {\n const allDeps = { ...pkg.dependencies, ...pkg.devDependencies };\n if (allDeps['next']) return 'Next.js';\n if (allDeps['nuxt']) return 'Nuxt';\n if (allDeps['@remix-run/node'] || allDeps['@remix-run/react']) return 'Remix';\n if (allDeps['express']) return 'Express';\n if (allDeps['fastify']) return 'Fastify';\n if (allDeps['react']) return 'React';\n if (allDeps['vue']) return 'Vue';\n if (allDeps['svelte']) return 'Svelte';\n return undefined;\n}\n\nfunction detectNodeTestFramework(pkg: { devDependencies?: Record<string, string>; dependencies?: Record<string, string> }): string | undefined {\n const allDeps = { ...pkg.dependencies, ...pkg.devDependencies };\n if (allDeps['vitest']) return 'vitest';\n if (allDeps['jest']) return 'jest';\n if (allDeps['mocha']) return 'mocha';\n return undefined;\n}\n\nfunction detectNodePackageManager(dir: string): string {\n if (existsSync(join(dir, 'pnpm-lock.yaml'))) return 'pnpm';\n if (existsSync(join(dir, 'yarn.lock'))) return 'yarn';\n if (existsSync(join(dir, 'bun.lockb')) || existsSync(join(dir, 'bun.lock'))) return 'bun';\n return 'npm';\n}\n\nfunction detectNode(dir: string): StackInfo | null {\n const raw = readFile(join(dir, 'package.json'));\n if (raw === null) 
return null;\n\n let pkg: Record<string, unknown>;\n try {\n pkg = JSON.parse(raw);\n } catch {\n return null;\n }\n\n const pm = detectNodePackageManager(dir);\n const run = pm === 'npm' ? 'npm run' : pm;\n const scripts = (pkg.scripts ?? {}) as Record<string, string>;\n const framework = detectNodeFramework(pkg as { dependencies?: Record<string, string>; devDependencies?: Record<string, string> });\n const testFramework = detectNodeTestFramework(pkg as { devDependencies?: Record<string, string>; dependencies?: Record<string, string> });\n\n const commands: StackInfo['commands'] = {};\n if (scripts.build) commands.build = `${run} build`;\n else commands.build = `${run} build`;\n if (scripts.test) commands.test = `${run} test`;\n else if (testFramework) commands.test = `${run} test`;\n else commands.test = `${pm === 'npm' ? 'npm' : pm} test`;\n if (scripts.lint) commands.lint = `${run} lint`;\n if (scripts.typecheck) commands.typecheck = `${run} typecheck`;\n else if ((pkg.devDependencies as Record<string, string> | undefined)?.['typescript']) {\n commands.typecheck = 'tsc --noEmit';\n }\n\n return {\n language: 'node',\n packageManager: pm,\n commands,\n framework,\n };\n}\n\nfunction detectPythonFramework(content: string): string | undefined {\n if (/fastapi/i.test(content)) return 'FastAPI';\n if (/django/i.test(content)) return 'Django';\n if (/flask/i.test(content)) return 'Flask';\n return undefined;\n}\n\nfunction detectPython(dir: string): StackInfo | null {\n const pyproject = readFile(join(dir, 'pyproject.toml'));\n if (pyproject !== null) {\n const isPoetry = /\\[tool\\.poetry\\]/.test(pyproject);\n const isUv = existsSync(join(dir, 'uv.lock'));\n\n let pm: string;\n let run: string;\n if (isUv) {\n pm = 'uv';\n run = 'uv run';\n } else if (isPoetry) {\n pm = 'poetry';\n run = 'poetry run';\n } else {\n pm = 'pip';\n run = 'python -m';\n }\n\n const framework = detectPythonFramework(pyproject);\n const hasPytest = /pytest/i.test(pyproject);\n\n return 
{\n language: 'python',\n packageManager: pm,\n commands: {\n build: `${pm === 'poetry' ? 'poetry' : pm} build`,\n test: hasPytest ? `${run} pytest` : `${run} pytest`,\n lint: `${run} ruff check .`,\n },\n framework,\n };\n }\n\n const requirements = readFile(join(dir, 'requirements.txt'));\n if (requirements !== null) {\n const framework = detectPythonFramework(requirements);\n return {\n language: 'python',\n packageManager: 'pip',\n commands: {\n build: 'pip install -e .',\n test: 'python -m pytest',\n lint: 'python -m ruff check .',\n },\n framework,\n };\n }\n\n return null;\n}\n\nfunction detectRust(dir: string): StackInfo | null {\n const cargo = readFile(join(dir, 'Cargo.toml'));\n if (cargo === null) return null;\n\n let framework: string | undefined;\n if (/actix-web/.test(cargo)) framework = 'Actix';\n else if (/axum/.test(cargo)) framework = 'Axum';\n else if (/rocket/.test(cargo)) framework = 'Rocket';\n\n return {\n language: 'rust',\n packageManager: 'cargo',\n commands: {\n build: 'cargo build',\n test: 'cargo test',\n lint: 'cargo clippy',\n },\n framework,\n };\n}\n\nfunction detectGo(dir: string): StackInfo | null {\n const gomod = readFile(join(dir, 'go.mod'));\n if (gomod === null) return null;\n\n let framework: string | undefined;\n if (/github\\.com\\/gin-gonic\\/gin/.test(gomod)) framework = 'Gin';\n else if (/github\\.com\\/gofiber\\/fiber/.test(gomod)) framework = 'Fiber';\n else if (/github\\.com\\/labstack\\/echo/.test(gomod)) framework = 'Echo';\n\n return {\n language: 'go',\n packageManager: 'go',\n commands: {\n build: 'go build ./...',\n test: 'go test ./...',\n lint: 'golangci-lint run',\n },\n framework,\n };\n}\n\nfunction detectSwift(dir: string): StackInfo | null {\n const pkg = readFile(join(dir, 'Package.swift'));\n if (pkg === null) return null;\n\n return {\n language: 'swift',\n packageManager: 'swift',\n commands: {\n build: 'swift build',\n test: 'swift test',\n },\n };\n}\n\nfunction detectMakefile(dir: string): 
StackInfo | null {\n const makefile = readFile(join(dir, 'Makefile'));\n if (makefile === null) return null;\n\n const commands: StackInfo['commands'] = {};\n commands.build = 'make build';\n if (/^test:/m.test(makefile)) commands.test = 'make test';\n if (/^lint:/m.test(makefile)) commands.lint = 'make lint';\n\n return {\n language: 'unknown',\n packageManager: 'make',\n commands,\n };\n}\n\nfunction detectDockerfile(dir: string): StackInfo | null {\n if (!existsSync(join(dir, 'Dockerfile'))) return null;\n\n return {\n language: 'unknown',\n packageManager: 'docker',\n commands: {\n build: 'docker build .',\n },\n };\n}\n\nexport async function detectStack(dir: string): Promise<StackInfo> {\n const detectors = [\n detectNode,\n detectPython,\n detectRust,\n detectGo,\n detectSwift,\n detectMakefile,\n detectDockerfile,\n ];\n\n for (const detect of detectors) {\n const result = detect(dir);\n if (result) return result;\n }\n\n return { language: 'unknown', packageManager: '', commands: {} };\n}\n","import type { StackInfo } from './detect.js';\n\ninterface Section {\n header: string;\n content: string;\n}\n\nfunction parseSections(markdown: string): Section[] {\n const lines = markdown.split('\\n');\n const sections: Section[] = [];\n let currentHeader = '';\n let currentLines: string[] = [];\n\n for (const line of lines) {\n if (line.startsWith('## ')) {\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n currentHeader = line;\n currentLines = [];\n } else {\n currentLines.push(line);\n }\n }\n\n // Push the last section\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n\n return sections;\n}\n\nfunction hasSection(sections: Section[], pattern: RegExp): boolean {\n return sections.some(s => pattern.test(s.header));\n}\n\nfunction generateCommandsBlock(stack: StackInfo): string {\n const lines: 
string[] = ['```bash'];\n if (stack.commands.build) lines.push(`# Build\\n${stack.commands.build}`);\n if (stack.commands.test) lines.push(`# Test\\n${stack.commands.test}`);\n if (stack.commands.lint) lines.push(`# Lint\\n${stack.commands.lint}`);\n if (stack.commands.typecheck) lines.push(`# Type check\\n${stack.commands.typecheck}`);\n if (stack.commands.deploy) lines.push(`# Deploy\\n${stack.commands.deploy}`);\n lines.push('```');\n return lines.join('\\n');\n}\n\nfunction generateBoundariesSection(): string {\n return `## Behavioral Boundaries\n\n### ALWAYS\n- Run tests and type-check before committing\n- Commit after completing each discrete task\n- Follow existing code patterns and style\n\n### ASK FIRST\n- Adding new dependencies\n- Modifying database schema or data models\n- Changing authentication or authorization flows\n- Any destructive operation (deleting files, force-pushing)\n\n### NEVER\n- Push to main branch without explicit approval\n- Skip type-checking or linting\n- Commit code that doesn't build\n- Hardcode secrets, API keys, or credentials`;\n}\n\nfunction generateWorkflowSection(stack: StackInfo): string {\n return `## Development Workflow\n\n${generateCommandsBlock(stack)}`;\n}\n\nfunction generateArchitectureSection(): string {\n return `## Architecture\n\n_TODO: Add a brief description of your project's architecture and key directories._`;\n}\n\nfunction generateKeyFilesSection(): string {\n return `## Key Files\n\n| File | Purpose |\n|------|---------|\n| _TODO_ | _Add key files and their purposes_ |`;\n}\n\nfunction generateGotchasSection(): string {\n return `## Common Gotchas\n\n_TODO: Add any gotchas, quirks, or non-obvious behaviors that developers should know about._`;\n}\n\nfunction generateGettingStartedSection(): string {\n return `## Getting Started with Joycraft\n\nThis project uses [Joycraft](https://github.com/maksutovic/joycraft) for AI development workflow. 
Available skills:\n\n| Skill | Purpose |\n|-------|---------|\n| \\`/tune\\` | Assess your harness, apply upgrades, see path to Level 5 |\n| \\`/new-feature\\` | Interview -> Feature Brief -> Atomic Specs |\n| \\`/interview\\` | Lightweight brainstorm — yap about ideas, get a structured summary |\n| \\`/decompose\\` | Break a brief into small, testable specs |\n| \\`/session-end\\` | Capture discoveries, verify, commit |\n\nRun \\`/tune\\` to see where your project stands and what to improve next.`;\n}\n\nexport function improveCLAUDEMd(existing: string, stack: StackInfo): string {\n const sections = parseSections(existing);\n const additions: string[] = [];\n\n if (!hasSection(sections, /behavioral\\s*boundar/i)) {\n additions.push(generateBoundariesSection());\n }\n\n if (!hasSection(sections, /development\\s*workflow/i) && !hasSection(sections, /workflow/i)) {\n additions.push(generateWorkflowSection(stack));\n }\n\n if (!hasSection(sections, /architecture/i)) {\n additions.push(generateArchitectureSection());\n }\n\n if (!hasSection(sections, /key\\s*files/i)) {\n additions.push(generateKeyFilesSection());\n }\n\n if (!hasSection(sections, /common\\s*gotchas/i) && !hasSection(sections, /gotchas/i)) {\n additions.push(generateGotchasSection());\n }\n\n if (!hasSection(sections, /getting\\s*started.*joycraft/i) && !hasSection(sections, /joycraft.*skills/i)) {\n additions.push(generateGettingStartedSection());\n }\n\n if (additions.length === 0) {\n return existing;\n }\n\n // Append missing sections\n const trimmed = existing.trimEnd();\n return trimmed + '\\n\\n' + additions.join('\\n\\n') + '\\n';\n}\n\nexport function generateCLAUDEMd(projectName: string, stack: StackInfo): string {\n const frameworkNote = stack.framework ? ` (${stack.framework})` : '';\n const langLabel = stack.language === 'unknown' ? 
'' : ` | **Stack:** ${stack.language}${frameworkNote}`;\n\n const lines: string[] = [\n `# ${projectName}`,\n '',\n `**Component:** _TODO: describe what this project is_${langLabel}`,\n '',\n '---',\n '',\n generateBoundariesSection(),\n '',\n generateWorkflowSection(stack),\n '',\n generateArchitectureSection(),\n '',\n generateKeyFilesSection(),\n '',\n generateGotchasSection(),\n '',\n generateGettingStartedSection(),\n '',\n ];\n\n return lines.join('\\n');\n}\n","import type { StackInfo } from './detect.js';\n\ninterface Section {\n header: string;\n content: string;\n}\n\nfunction parseSections(markdown: string): Section[] {\n const lines = markdown.split('\\n');\n const sections: Section[] = [];\n let currentHeader = '';\n let currentLines: string[] = [];\n\n for (const line of lines) {\n if (line.startsWith('## ')) {\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n currentHeader = line;\n currentLines = [];\n } else {\n currentLines.push(line);\n }\n }\n\n if (currentHeader || currentLines.length > 0) {\n sections.push({ header: currentHeader, content: currentLines.join('\\n') });\n }\n\n return sections;\n}\n\nfunction hasSection(sections: Section[], pattern: RegExp): boolean {\n return sections.some(s => pattern.test(s.header));\n}\n\nfunction generateCommandsBlock(stack: StackInfo): string {\n const lines: string[] = ['```bash'];\n if (stack.commands.build) lines.push(stack.commands.build);\n if (stack.commands.test) lines.push(stack.commands.test);\n if (stack.commands.lint) lines.push(stack.commands.lint);\n if (stack.commands.typecheck) lines.push(stack.commands.typecheck);\n if (stack.commands.deploy) lines.push(stack.commands.deploy);\n lines.push('```');\n return lines.join('\\n');\n}\n\nfunction generateBoundariesSection(): string {\n return `## Behavioral Boundaries\n\n### ALWAYS\n- Run tests and type-check before committing\n- Follow existing code patterns and 
style\n\n### ASK FIRST\n- Adding new dependencies\n- Changing auth or data models\n- Any destructive operation\n\n### NEVER\n- Push to main without approval\n- Skip tests or type-checking\n- Hardcode secrets or credentials`;\n}\n\nfunction generateDevelopmentSection(stack: StackInfo): string {\n return `## Development\n\n${generateCommandsBlock(stack)}`;\n}\n\nfunction generateArchitectureSection(): string {\n return `## Architecture\n\n_TODO: Add a compact directory tree and one-paragraph summary._`;\n}\n\nfunction generateKeyFilesSection(): string {\n return `## Key Files\n\n| File | Purpose |\n|------|---------|\n| _TODO_ | _Add key files_ |`;\n}\n\nexport function generateAgentsMd(projectName: string, stack: StackInfo): string {\n const frameworkNote = stack.framework ? ` (${stack.framework})` : '';\n const langLabel = stack.language === 'unknown' ? '' : ` | **Stack:** ${stack.language}${frameworkNote}`;\n\n const lines: string[] = [\n `# ${projectName}`,\n '',\n `**Component:** _TODO: describe what this project is_${langLabel}`,\n '',\n '> Auto-generated by Joycraft. 
See CLAUDE.md for detailed instructions.',\n '',\n '---',\n '',\n generateBoundariesSection(),\n '',\n generateArchitectureSection(),\n '',\n generateKeyFilesSection(),\n '',\n generateDevelopmentSection(stack),\n '',\n ];\n\n return lines.join('\\n');\n}\n\nexport function improveAgentsMd(existing: string, stack: StackInfo): string {\n const sections = parseSections(existing);\n const additions: string[] = [];\n\n if (!hasSection(sections, /behavioral\\s*boundar/i)) {\n additions.push(generateBoundariesSection());\n }\n\n if (!hasSection(sections, /architecture/i)) {\n additions.push(generateArchitectureSection());\n }\n\n if (!hasSection(sections, /key\\s*files/i)) {\n additions.push(generateKeyFilesSection());\n }\n\n if (!hasSection(sections, /development/i)) {\n additions.push(generateDevelopmentSection(stack));\n }\n\n if (additions.length === 0) {\n return existing;\n }\n\n const trimmed = existing.trimEnd();\n return trimmed + '\\n\\n' + additions.join('\\n\\n') + '\\n';\n}\n"],"mappings":";;;;;;;;;AAAA,SAAS,WAAW,cAAAA,aAAY,eAAe,gBAAAC,qBAAoB;AACnE,SAAS,QAAAC,OAAM,UAAU,SAAS,eAAe;;;ACDjD,SAAS,cAAc,kBAAkB;AACzC,SAAS,YAAY;AAerB,SAAS,SAAS,MAA6B;AAC7C,MAAI;AACF,WAAO,aAAa,MAAM,OAAO;AAAA,EACnC,QAAQ;AACN,WAAO;AAAA,EACT;AACF;AAEA,SAAS,oBAAoB,KAA8G;AACzI,QAAM,UAAU,EAAE,GAAG,IAAI,cAAc,GAAG,IAAI,gBAAgB;AAC9D,MAAI,QAAQ,MAAM,EAAG,QAAO;AAC5B,MAAI,QAAQ,MAAM,EAAG,QAAO;AAC5B,MAAI,QAAQ,iBAAiB,KAAK,QAAQ,kBAAkB,EAAG,QAAO;AACtE,MAAI,QAAQ,SAAS,EAAG,QAAO;AAC/B,MAAI,QAAQ,SAAS,EAAG,QAAO;AAC/B,MAAI,QAAQ,OAAO,EAAG,QAAO;AAC7B,MAAI,QAAQ,KAAK,EAAG,QAAO;AAC3B,MAAI,QAAQ,QAAQ,EAAG,QAAO;AAC9B,SAAO;AACT;AAEA,SAAS,wBAAwB,KAA8G;AAC7I,QAAM,UAAU,EAAE,GAAG,IAAI,cAAc,GAAG,IAAI,gBAAgB;AAC9D,MAAI,QAAQ,QAAQ,EAAG,QAAO;AAC9B,MAAI,QAAQ,MAAM,EAAG,QAAO;AAC5B,MAAI,QAAQ,OAAO,EAAG,QAAO;AAC7B,SAAO;AACT;AAEA,SAAS,yBAAyB,KAAqB;AACrD,MAAI,WAAW,KAAK,KAAK,gBAAgB,CAAC,EAAG,QAAO;AACpD,MAAI,WAAW,KAAK,KAAK,WAAW,CAAC,EAAG,QAAO;AAC/C,MAAI,WAAW,KAAK,KAAK,WAAW,CAAC,KAAK,WAAW,KAAK,KAAK,UAAU,CAAC,EAAG,QAAO;AACpF,SAAO;AACT;AAE
A,SAAS,WAAW,KAA+B;AACjD,QAAM,MAAM,SAAS,KAAK,KAAK,cAAc,CAAC;AAC9C,MAAI,QAAQ,KAAM,QAAO;AAEzB,MAAI;AACJ,MAAI;AACF,UAAM,KAAK,MAAM,GAAG;AAAA,EACtB,QAAQ;AACN,WAAO;AAAA,EACT;AAEA,QAAM,KAAK,yBAAyB,GAAG;AACvC,QAAM,MAAM,OAAO,QAAQ,YAAY;AACvC,QAAM,UAAW,IAAI,WAAW,CAAC;AACjC,QAAM,YAAY,oBAAoB,GAA0F;AAChI,QAAM,gBAAgB,wBAAwB,GAA0F;AAExI,QAAM,WAAkC,CAAC;AACzC,MAAI,QAAQ,MAAO,UAAS,QAAQ,GAAG,GAAG;AAAA,MACrC,UAAS,QAAQ,GAAG,GAAG;AAC5B,MAAI,QAAQ,KAAM,UAAS,OAAO,GAAG,GAAG;AAAA,WAC/B,cAAe,UAAS,OAAO,GAAG,GAAG;AAAA,MACzC,UAAS,OAAO,GAAG,OAAO,QAAQ,QAAQ,EAAE;AACjD,MAAI,QAAQ,KAAM,UAAS,OAAO,GAAG,GAAG;AACxC,MAAI,QAAQ,UAAW,UAAS,YAAY,GAAG,GAAG;AAAA,WACxC,IAAI,kBAAyD,YAAY,GAAG;AACpF,aAAS,YAAY;AAAA,EACvB;AAEA,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB;AAAA,IACA;AAAA,EACF;AACF;AAEA,SAAS,sBAAsB,SAAqC;AAClE,MAAI,WAAW,KAAK,OAAO,EAAG,QAAO;AACrC,MAAI,UAAU,KAAK,OAAO,EAAG,QAAO;AACpC,MAAI,SAAS,KAAK,OAAO,EAAG,QAAO;AACnC,SAAO;AACT;AAEA,SAAS,aAAa,KAA+B;AACnD,QAAM,YAAY,SAAS,KAAK,KAAK,gBAAgB,CAAC;AACtD,MAAI,cAAc,MAAM;AACtB,UAAM,WAAW,mBAAmB,KAAK,SAAS;AAClD,UAAM,OAAO,WAAW,KAAK,KAAK,SAAS,CAAC;AAE5C,QAAI;AACJ,QAAI;AACJ,QAAI,MAAM;AACR,WAAK;AACL,YAAM;AAAA,IACR,WAAW,UAAU;AACnB,WAAK;AACL,YAAM;AAAA,IACR,OAAO;AACL,WAAK;AACL,YAAM;AAAA,IACR;AAEA,UAAM,YAAY,sBAAsB,SAAS;AACjD,UAAM,YAAY,UAAU,KAAK,SAAS;AAE1C,WAAO;AAAA,MACL,UAAU;AAAA,MACV,gBAAgB;AAAA,MAChB,UAAU;AAAA,QACR,OAAO,GAAG,OAAO,WAAW,WAAW,EAAE;AAAA,QACzC,MAAM,YAAY,GAAG,GAAG,YAAY,GAAG,GAAG;AAAA,QAC1C,MAAM,GAAG,GAAG;AAAA,MACd;AAAA,MACA;AAAA,IACF;AAAA,EACF;AAEA,QAAM,eAAe,SAAS,KAAK,KAAK,kBAAkB,CAAC;AAC3D,MAAI,iBAAiB,MAAM;AACzB,UAAM,YAAY,sBAAsB,YAAY;AACpD,WAAO;AAAA,MACL,UAAU;AAAA,MACV,gBAAgB;AAAA,MAChB,UAAU;AAAA,QACR,OAAO;AAAA,QACP,MAAM;AAAA,QACN,MAAM;AAAA,MACR;AAAA,MACA;AAAA,IACF;AAAA,EACF;AAEA,SAAO;AACT;AAEA,SAAS,WAAW,KAA+B;AACjD,QAAM,QAAQ,SAAS,KAAK,KAAK,YAAY,CAAC;AAC9C,MAAI,UAAU,KAAM,QAAO;AAE3B,MAAI;AACJ,MAAI,YAAY,KAAK,KAAK,EAAG,aAAY;AAAA,WAChC,OAAO,KAAK,KAAK,EAAG,aAAY;AAAA,WAChC,SAAS,KAAK,KAAK,EAAG,aAAY;AAE3C,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAA
O;AAAA,MACP,MAAM;AAAA,MACN,MAAM;AAAA,IACR;AAAA,IACA;AAAA,EACF;AACF;AAEA,SAAS,SAAS,KAA+B;AAC/C,QAAM,QAAQ,SAAS,KAAK,KAAK,QAAQ,CAAC;AAC1C,MAAI,UAAU,KAAM,QAAO;AAE3B,MAAI;AACJ,MAAI,8BAA8B,KAAK,KAAK,EAAG,aAAY;AAAA,WAClD,8BAA8B,KAAK,KAAK,EAAG,aAAY;AAAA,WACvD,8BAA8B,KAAK,KAAK,EAAG,aAAY;AAEhE,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAAO;AAAA,MACP,MAAM;AAAA,MACN,MAAM;AAAA,IACR;AAAA,IACA;AAAA,EACF;AACF;AAEA,SAAS,YAAY,KAA+B;AAClD,QAAM,MAAM,SAAS,KAAK,KAAK,eAAe,CAAC;AAC/C,MAAI,QAAQ,KAAM,QAAO;AAEzB,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAAO;AAAA,MACP,MAAM;AAAA,IACR;AAAA,EACF;AACF;AAEA,SAAS,eAAe,KAA+B;AACrD,QAAM,WAAW,SAAS,KAAK,KAAK,UAAU,CAAC;AAC/C,MAAI,aAAa,KAAM,QAAO;AAE9B,QAAM,WAAkC,CAAC;AACzC,WAAS,QAAQ;AACjB,MAAI,UAAU,KAAK,QAAQ,EAAG,UAAS,OAAO;AAC9C,MAAI,UAAU,KAAK,QAAQ,EAAG,UAAS,OAAO;AAE9C,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB;AAAA,EACF;AACF;AAEA,SAAS,iBAAiB,KAA+B;AACvD,MAAI,CAAC,WAAW,KAAK,KAAK,YAAY,CAAC,EAAG,QAAO;AAEjD,SAAO;AAAA,IACL,UAAU;AAAA,IACV,gBAAgB;AAAA,IAChB,UAAU;AAAA,MACR,OAAO;AAAA,IACT;AAAA,EACF;AACF;AAEA,eAAsB,YAAY,KAAiC;AACjE,QAAM,YAAY;AAAA,IAChB;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,EACF;AAEA,aAAW,UAAU,WAAW;AAC9B,UAAM,SAAS,OAAO,GAAG;AACzB,QAAI,OAAQ,QAAO;AAAA,EACrB;AAEA,SAAO,EAAE,UAAU,WAAW,gBAAgB,IAAI,UAAU,CAAC,EAAE;AACjE;;;ACpNA,SAAS,sBAAsB,OAA0B;AACvD,QAAM,QAAkB,CAAC,SAAS;AAClC,MAAI,MAAM,SAAS,MAAO,OAAM,KAAK;AAAA,EAAY,MAAM,SAAS,KAAK,EAAE;AACvE,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK;AAAA,EAAW,MAAM,SAAS,IAAI,EAAE;AACpE,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK;AAAA,EAAW,MAAM,SAAS,IAAI,EAAE;AACpE,MAAI,MAAM,SAAS,UAAW,OAAM,KAAK;AAAA,EAAiB,MAAM,SAAS,SAAS,EAAE;AACpF,MAAI,MAAM,SAAS,OAAQ,OAAM,KAAK;AAAA,EAAa,MAAM,SAAS,MAAM,EAAE;AAC1E,QAAM,KAAK,KAAK;AAChB,SAAO,MAAM,KAAK,IAAI;AACxB;AAEA,SAAS,4BAAoC;AAC3C,SAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAkBT;AAEA,SAAS,wBAAwB,OAA0B;AACzD,SAAO;AAAA;AAAA,EAEP,sBAAsB,KAAK,CAAC;AAC9B;AAEA,SAAS,8BAAsC;AAC7C,SAAO;AAAA;AAAA;AAGT;AAEA
,SAAS,0BAAkC;AACzC,SAAO;AAAA;AAAA;AAAA;AAAA;AAKT;AAEA,SAAS,yBAAiC;AACxC,SAAO;AAAA;AAAA;AAGT;AAEA,SAAS,gCAAwC;AAC/C,SAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAaT;AAuCO,SAAS,iBAAiB,aAAqB,OAA0B;AAC9E,QAAM,gBAAgB,MAAM,YAAY,KAAK,MAAM,SAAS,MAAM;AAClE,QAAM,YAAY,MAAM,aAAa,YAAY,KAAK,iBAAiB,MAAM,QAAQ,GAAG,aAAa;AAErG,QAAM,QAAkB;AAAA,IACtB,KAAK,WAAW;AAAA,IAChB;AAAA,IACA,uDAAuD,SAAS;AAAA,IAChE;AAAA,IACA;AAAA,IACA;AAAA,IACA,0BAA0B;AAAA,IAC1B;AAAA,IACA,wBAAwB,KAAK;AAAA,IAC7B;AAAA,IACA,4BAA4B;AAAA,IAC5B;AAAA,IACA,wBAAwB;AAAA,IACxB;AAAA,IACA,uBAAuB;AAAA,IACvB;AAAA,IACA,8BAA8B;AAAA,IAC9B;AAAA,EACF;AAEA,SAAO,MAAM,KAAK,IAAI;AACxB;;;AC1IA,SAASC,uBAAsB,OAA0B;AACvD,QAAM,QAAkB,CAAC,SAAS;AAClC,MAAI,MAAM,SAAS,MAAO,OAAM,KAAK,MAAM,SAAS,KAAK;AACzD,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK,MAAM,SAAS,IAAI;AACvD,MAAI,MAAM,SAAS,KAAM,OAAM,KAAK,MAAM,SAAS,IAAI;AACvD,MAAI,MAAM,SAAS,UAAW,OAAM,KAAK,MAAM,SAAS,SAAS;AACjE,MAAI,MAAM,SAAS,OAAQ,OAAM,KAAK,MAAM,SAAS,MAAM;AAC3D,QAAM,KAAK,KAAK;AAChB,SAAO,MAAM,KAAK,IAAI;AACxB;AAEA,SAASC,6BAAoC;AAC3C,SAAO;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAeT;AAEA,SAAS,2BAA2B,OAA0B;AAC5D,SAAO;AAAA;AAAA,EAEPD,uBAAsB,KAAK,CAAC;AAC9B;AAEA,SAASE,+BAAsC;AAC7C,SAAO;AAAA;AAAA;AAGT;AAEA,SAASC,2BAAkC;AACzC,SAAO;AAAA;AAAA;AAAA;AAAA;AAKT;AAEO,SAAS,iBAAiB,aAAqB,OAA0B;AAC9E,QAAM,gBAAgB,MAAM,YAAY,KAAK,MAAM,SAAS,MAAM;AAClE,QAAM,YAAY,MAAM,aAAa,YAAY,KAAK,iBAAiB,MAAM,QAAQ,GAAG,aAAa;AAErG,QAAM,QAAkB;AAAA,IACtB,KAAK,WAAW;AAAA,IAChB;AAAA,IACA,uDAAuD,SAAS;AAAA,IAChE;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACA;AAAA,IACAF,2BAA0B;AAAA,IAC1B;AAAA,IACAC,6BAA4B;AAAA,IAC5B;AAAA,IACAC,yBAAwB;AAAA,IACxB;AAAA,IACA,2BAA2B,KAAK;AAAA,IAChC;AAAA,EACF;AAEA,SAAO,MAAM,KAAK,IAAI;AACxB;;;AH1FA,SAAS,UAAU,KAAmB;AACpC,MAAI,CAACC,YAAW,GAAG,GAAG;AACpB,cAAU,KAAK,EAAE,WAAW,KAAK,CAAC;AAAA,EACpC;AACF;AAEA,SAAS,UAAU,MAAc,SAAiB,OAAgB,QAA0B;AAC1F,MAAIA,YAAW,IAAI,KAAK,CAAC,OAAO;AAC9B,WAAO,QAAQ,KAAK,IAAI;AACxB;AAAA,EACF;AACA,gBAAc,MAAM,SAAS,OAAO;AACpC,SAAO,QAAQ,KAAK,IAAI;AAC1B;AAEA,eAAsB,KAAK
,KAAa,MAAkC;AACxE,QAAM,YAAY,QAAQ,GAAG;AAC7B,QAAM,SAAqB,EAAE,SAAS,CAAC,GAAG,SAAS,CAAC,GAAG,UAAU,CAAC,GAAG,UAAU,CAAC,EAAE;AAGlF,QAAM,QAAQ,MAAM,YAAY,SAAS;AAGzC,QAAM,WAAW,CAAC,UAAU,SAAS,eAAe,aAAa,WAAW;AAC5E,aAAW,OAAO,UAAU;AAC1B,cAAUC,MAAK,WAAW,QAAQ,GAAG,CAAC;AAAA,EACxC;AAGA,QAAM,YAAYA,MAAK,WAAW,WAAW,QAAQ;AACrD,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,MAAM,GAAG;AACxD,UAAM,YAAY,SAAS,QAAQ,SAAS,EAAE;AAC9C,UAAM,WAAWA,MAAK,WAAW,SAAS;AAC1C,cAAU,QAAQ;AAClB,cAAUA,MAAK,UAAU,UAAU,GAAG,SAAS,KAAK,OAAO,MAAM;AAAA,EACnE;AAGA,QAAM,eAAeA,MAAK,WAAW,QAAQ,WAAW;AACxD,YAAU,YAAY;AACtB,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,SAAS,GAAG;AAC3D,cAAU,QAAQA,MAAK,cAAc,QAAQ,CAAC,CAAC;AAC/C,cAAUA,MAAK,cAAc,QAAQ,GAAG,SAAS,KAAK,OAAO,MAAM;AAAA,EACrE;AAGA,QAAM,eAAeA,MAAK,WAAW,WAAW;AAChD,MAAID,YAAW,YAAY,KAAK,CAAC,KAAK,OAAO;AAC3C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC,OAAO;AACL,UAAM,cAAc,SAAS,SAAS;AACtC,UAAM,UAAU,iBAAiB,aAAa,KAAK;AACnD,kBAAc,cAAc,SAAS,OAAO;AAC5C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC;AAGA,QAAM,eAAeC,MAAK,WAAW,WAAW;AAChD,MAAID,YAAW,YAAY,KAAK,CAAC,KAAK,OAAO;AAC3C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC,OAAO;AACL,UAAM,cAAc,SAAS,SAAS;AACtC,UAAM,UAAU,iBAAiB,aAAa,KAAK;AACnD,kBAAc,cAAc,SAAS,OAAO;AAC5C,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC;AAGA,QAAM,aAAqC,CAAC;AAC5C,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,MAAM,GAAG;AACxD,UAAM,YAAY,SAAS,QAAQ,SAAS,EAAE;AAC9C,eAAWC,MAAK,WAAW,UAAU,WAAW,UAAU,CAAC,IAAI,YAAY,OAAO;AAAA,EACpF;AACA,aAAW,CAAC,UAAU,OAAO,KAAK,OAAO,QAAQ,SAAS,GAAG;AAC3D,eAAWA,MAAK,QAAQ,aAAa,QAAQ,CAAC,IAAI,YAAY,OAAO;AAAA,EACvE;AACA,eAAa,WAAW,SAAS,UAAU;AAG3C,QAAM,WAAWA,MAAK,WAAW,WAAW,OAAO;AACnD,YAAU,QAAQ;AAClB,QAAM,aAAa;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAYnB,YAAUA,MAAK,UAAU,4BAA4B,GAAG,YAAY,KAAK,OAAO,MAAM;AAGtF,QAAM,eAAeA,MAAK,WAAW,WAAW,eAAe;AAC/D,MAAI,WAAoC,CAAC;AACzC,MAAID,YAAW,YAAY,GAAG;AAC5B,QAAI;AACF,iBAAW,KAAK,MAAME,cAAa,cAAc,OAAO,CAAC;AAAA,IAC3D,QAAQ;AAAA,IAER;AAAA,EACF;AACA,MAAI,CAAC,SAAS,MAAO,UAAS,QAAQ,CAAC;AACvC,QAAM,cAAc,SAAS;AAC7B,MAAI,CAAC,YAAY,aAAc,aAAY,eAAe,CAAC;AAC3D,QAAM,oBAAoB,YAAY;AACtC,QAAM,kBAAkB,kBAAkB,K
AAK,OAAK;AAClD,UAAM,aAAa,EAAE;AACrB,WAAO,YAAY,KAAK,QAAM,OAAO,GAAG,YAAY,YAAY,GAAG,QAAQ,SAAS,UAAU,CAAC;AAAA,EACjG,CAAC;AACD,MAAI,CAAC,iBAAiB;AACpB,sBAAkB,KAAK;AAAA,MACrB,SAAS;AAAA,MACT,OAAO,CAAC;AAAA,QACN,MAAM;AAAA,QACN,SAAS;AAAA,MACX,CAAC;AAAA,IACH,CAAC;AACD,kBAAc,cAAc,KAAK,UAAU,UAAU,MAAM,CAAC,IAAI,MAAM,OAAO;AAC7E,WAAO,QAAQ,KAAK,YAAY;AAAA,EAClC;AAGA,QAAM,gBAAgBD,MAAK,WAAW,YAAY;AAClD,MAAID,YAAW,aAAa,GAAG;AAC7B,UAAM,YAAYE,cAAa,eAAe,OAAO;AACrD,QAAI,iBAAiB,KAAK,SAAS,KAAK,kBAAkB,KAAK,SAAS,GAAG;AACzE,aAAO,SAAS;AAAA,QACd;AAAA,MAEF;AAAA,IACF;AAAA,EACF;AAGA,eAAa,QAAQ,KAAK;AAC5B;AAEA,SAAS,aAAa,QAAoB,OAA8C;AACtF,UAAQ,IAAI,2BAA2B;AAEvC,MAAI,MAAM,aAAa,WAAW;AAChC,UAAM,KAAK,MAAM,YAAY,MAAM,MAAM,SAAS,KAAK;AACvD,YAAQ,IAAI,qBAAqB,MAAM,QAAQ,GAAG,EAAE,KAAK,MAAM,cAAc,GAAG;AAAA,EAClF,OAAO;AACL,YAAQ,IAAI,0DAA0D;AAAA,EACxE;AAEA,MAAI,OAAO,QAAQ,SAAS,GAAG;AAC7B,YAAQ,IAAI;AAAA,YAAe,OAAO,QAAQ,MAAM,WAAW;AAC3D,eAAW,KAAK,OAAO,SAAS;AAC9B,cAAQ,IAAI,SAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,MAAI,OAAO,SAAS,SAAS,GAAG;AAC9B,YAAQ,IAAI;AAAA,aAAgB,OAAO,SAAS,MAAM,WAAW;AAC7D,eAAW,KAAK,OAAO,UAAU;AAC/B,cAAQ,IAAI,SAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,MAAI,OAAO,QAAQ,SAAS,GAAG;AAC7B,YAAQ,IAAI;AAAA,YAAe,OAAO,QAAQ,MAAM,qDAAqD;AACrG,eAAW,KAAK,OAAO,SAAS;AAC9B,cAAQ,IAAI,SAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,MAAI,OAAO,SAAS,SAAS,GAAG;AAC9B,YAAQ,IAAI,eAAe;AAC3B,eAAW,KAAK,OAAO,UAAU;AAC/B,cAAQ,IAAI,cAAS,CAAC,EAAE;AAAA,IAC1B;AAAA,EACF;AAEA,QAAM,oBAAoB,OAAO,QAAQ,KAAK,OAAK,EAAE,SAAS,WAAW,CAAC;AAE1E,UAAQ,IAAI,iBAAiB;AAC7B,MAAI,mBAAmB;AACrB,YAAQ,IAAI,oFAAoF;AAAA,EAClG,OAAO;AACL,YAAQ,IAAI,sEAAsE;AAAA,EACpF;AACA,UAAQ,IAAI,yEAAyE;AACrF,UAAQ,IAAI,6EAA6E;AACzF,UAAQ,IAAI,EAAE;AAChB;","names":["existsSync","readFileSync","join","generateCommandsBlock","generateBoundariesSection","generateArchitectureSection","generateKeyFilesSection","existsSync","join","readFileSync"]}
File without changes