joycraft 0.3.1 → 0.4.0
This diff shows the changes between publicly released versions of the package, as published to the registry.
- package/README.md +14 -14
- package/dist/{chunk-LLJVCCB2.js → chunk-FNGCEYUY.js} +180 -29
- package/dist/chunk-FNGCEYUY.js.map +1 -0
- package/dist/cli.js +2 -2
- package/dist/{init-R33WYEFZ.js → init-XHJDJIZW.js} +244 -41
- package/dist/init-XHJDJIZW.js.map +1 -0
- package/dist/{upgrade-ZE6K64XX.js → upgrade-NOHZWQMO.js} +3 -3
- package/dist/upgrade-NOHZWQMO.js.map +1 -0
- package/package.json +1 -1
- package/dist/chunk-LLJVCCB2.js.map +0 -1
- package/dist/init-R33WYEFZ.js.map +0 -1
- package/dist/upgrade-ZE6K64XX.js.map +0 -1
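The renamed `dist/` files are content-hashed bundles: any change to the embedded content produces a new filename (e.g. `chunk-LLJVCCB2.js` becomes `chunk-FNGCEYUY.js`). A small sketch of pairing old and new bundle names by their stem (a hypothetical helper, not part of joycraft):

```javascript
// Sketch (assumption, not joycraft code): pair renamed content-hashed
// bundles by stripping the hash suffix and matching on the stem.
function pairRenamedBundles(oldFiles, newFiles) {
  const stem = (f) => f.replace(/-[A-Z0-9]+\.js$/, "");
  const byStem = new Map(newFiles.map((f) => [stem(f), f]));
  return oldFiles
    .map((f) => ({ from: f, to: byStem.get(stem(f)) }))
    .filter((p) => p.to && p.to !== p.from); // keep only actual renames
}
```

Running this over the file list above pairs `chunk-LLJVCCB2.js` with `chunk-FNGCEYUY.js` and `init-R33WYEFZ.js` with `init-XHJDJIZW.js`, while unhashed files like `cli.js` drop out.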
package/README.md
CHANGED
@@ -17,11 +17,11 @@ That's it. Joycraft auto-detects your tech stack and creates:
 - **CLAUDE.md** with behavioral boundaries (Always / Ask First / Never) and correct build/test/lint commands
 - **AGENTS.md** for Codex compatibility
 - **Claude Code skills** installed to `.claude/skills/`:
-  - `/tune` — Assess your harness, apply upgrades, see your path to Level 5
-  - `/new-feature` — Interview → Feature Brief → Atomic Specs
-  - `/interview` — Lightweight brainstorm — yap about ideas, get a structured summary
-  - `/decompose` — Break a brief into small, testable specs
-  - `/session-end` — Capture discoveries, verify, commit
+  - `/joycraft-tune` — Assess your harness, apply upgrades, see your path to Level 5
+  - `/joycraft-new-feature` — Interview → Feature Brief → Atomic Specs
+  - `/joycraft-interview` — Lightweight brainstorm — yap about ideas, get a structured summary
+  - `/joycraft-decompose` — Break a brief into small, testable specs
+  - `/joycraft-session-end` — Capture discoveries, verify, commit
 - **docs/** structure — `briefs/`, `specs/`, `discoveries/`, `contracts/`, `decisions/`
 - **Templates** — Atomic spec, feature brief, implementation plan, boundary framework
 
@@ -36,11 +36,11 @@ Frameworks auto-detected: Next.js, FastAPI, Django, Flask, Actix, Axum, Express,
 After init, open Claude Code and use the installed skills:
 
 ```
-/tune # Assess your harness, apply upgrades, see path to Level 5
-/interview # Brainstorm freely — yap about ideas, get a structured summary
-/new-feature # Interview → Feature Brief → Atomic Specs → ready to execute
-/decompose # Break any feature into small, independent specs
-/session-end # Wrap up — discoveries, verification, commit
+/joycraft-tune # Assess your harness, apply upgrades, see path to Level 5
+/joycraft-interview # Brainstorm freely — yap about ideas, get a structured summary
+/joycraft-new-feature # Interview → Feature Brief → Atomic Specs → ready to execute
+/joycraft-decompose # Break any feature into small, independent specs
+/joycraft-session-end # Wrap up — discoveries, verification, commit
 ```
 
 The core loop:
@@ -61,7 +61,7 @@ Joycraft tracks what it installed vs. what you've customized. Unmodified files u
 
 ## Git Autonomy
 
-When `/tune` runs for the first time, it asks one question: **how autonomous should git be?**
+When `/joycraft-tune` runs for the first time, it asks one question: **how autonomous should git be?**
 
 - **Cautious** (default) — commits freely, asks before pushing or opening PRs. Good for learning the workflow.
 - **Autonomous** — commits, pushes to feature branches, and opens PRs without asking. Good for spec-driven development where you want full send.
@@ -101,15 +101,15 @@ Joycraft's approach is synthesized from several sources:
 
 **Spec-driven development.** Instead of prompting AI in conversation, you write structured specifications — Feature Briefs that capture the *what* and *why*, then Atomic Specs that break work into small, testable, independently executable units. Each spec is self-contained: an agent can pick it up without reading anything else. This follows [Addy Osmani's](https://addyosmani.com/blog/good-spec/) principles for AI-consumable specs and [GitHub's Spec Kit](https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/) 4-phase process (Specify → Plan → Tasks → Implement).
 
-**Context isolation.** [Boris Cherny](https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens) (Head of Claude Code at Anthropic) recommends: interview in one session, write the spec, then execute in a *fresh session* with clean context. Joycraft's `/new-feature` → `/decompose` → execute workflow enforces this naturally. The interview session captures intent; the execution session has only the spec.
+**Context isolation.** [Boris Cherny](https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens) (Head of Claude Code at Anthropic) recommends: interview in one session, write the spec, then execute in a *fresh session* with clean context. Joycraft's `/joycraft-new-feature` → `/joycraft-decompose` → execute workflow enforces this naturally. The interview session captures intent; the execution session has only the spec.
 
 **Behavioral boundaries.** CLAUDE.md isn't a suggestion box — it's a contract. Joycraft installs a three-tier boundary framework (Always / Ask First / Never) that prevents the most common AI development failures: overwriting user files, skipping tests, pushing without approval, hardcoding secrets. This is [Addy Osmani's](https://addyosmani.com/blog/good-spec/) "boundaries" principle made concrete.
 
-**Knowledge capture over session notes.** Most session notes are never re-read. Joycraft's `/session-end` skill captures only *discoveries* — assumptions that were wrong, APIs that behaved unexpectedly, decisions made during implementation that aren't in the spec. If nothing surprising happened, you capture nothing. This keeps the signal-to-noise ratio high.
+**Knowledge capture over session notes.** Most session notes are never re-read. Joycraft's `/joycraft-session-end` skill captures only *discoveries* — assumptions that were wrong, APIs that behaved unexpectedly, decisions made during implementation that aren't in the spec. If nothing surprising happened, you capture nothing. This keeps the signal-to-noise ratio high.
 
 **External holdout scenarios.** [StrongDM's Software Factory](https://factory.strongdm.ai/) proved that AI agents will [actively game visible test suites](https://palisaderesearch.org/blog/specification-gaming). Their solution: scenarios that live *outside* the codebase, invisible to the agent during development. Like a holdout set in ML, this prevents overfitting. Joycraft provides the template for building these.
 
-**The 5-level framework.** [Dan Shapiro's levels](https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/) give you a map. Level 2 (Junior Developer) is where most teams plateau. Level 3 (Developer as Manager) means your life is diffs. Level 4 (Developer as PM) means you write specs, not code. Level 5 (Dark Factory) means specs in, software out. Joycraft's `/tune` assessment tells you where you are and what to do next.
+**The 5-level framework.** [Dan Shapiro's levels](https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/) give you a map. Level 2 (Junior Developer) is where most teams plateau. Level 3 (Developer as Manager) means your life is diffs. Level 4 (Developer as PM) means you write specs, not code. Level 5 (Dark Factory) means specs in, software out. Joycraft's `/joycraft-tune` assessment tells you where you are and what to do next.
 
 ## Standing on the Shoulders of Giants
 
package/dist/{chunk-LLJVCCB2.js → chunk-FNGCEYUY.js}
CHANGED

@@ -2,8 +2,8 @@
 
 // src/bundled-files.ts
 var SKILLS = {
-  "decompose.md": `---
-name: decompose
+  "joycraft-decompose.md": `---
+name: joycraft-decompose
 description: Break a feature brief into atomic specs \u2014 small, testable, independently executable units
 ---
 
@@ -15,7 +15,7 @@ You have a Feature Brief (or the user has described a feature). Your job is to d
 
 Look for a Feature Brief in \`docs/briefs/\`. If one doesn't exist yet, tell the user:
 
-> No feature brief found. Run \`/new-feature\` first to interview and create one, or describe the feature now and I'll work from your description.
+> No feature brief found. Run \`/joycraft-new-feature\` first to interview and create one, or describe the feature now and I'll work from your description.
 
 If the user describes the feature inline, work from that description directly. You don't need a formal brief to decompose \u2014 but recommend creating one for complex features.
 
@@ -127,13 +127,13 @@ Decomposition complete:
 To execute:
 - Sequential: Open a session, point Claude at each spec in order
 - Parallel: Use worktrees \u2014 one spec per worktree, merge when done
-- Each session should end with /session-end to capture discoveries
+- Each session should end with /joycraft-session-end to capture discoveries
 
 Ready to start execution?
 \`\`\`
 `,
-  "interview.md": `---
-name: interview
+  "joycraft-interview.md": `---
+name: joycraft-interview
 description: Brainstorm freely about what you want to build \u2014 yap, explore ideas, and get a structured summary you can use later
 ---
 
@@ -179,7 +179,7 @@ Use this format:
 
 > **Date:** YYYY-MM-DD
 > **Status:** DRAFT
-> **Origin:** /interview session
+> **Origin:** /joycraft-interview session
 
 ---
 
@@ -215,21 +215,21 @@ After writing the draft, tell the user:
 Draft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md
 
 When you're ready to move forward:
-- /new-feature \u2014 formalize this into a full Feature Brief with specs
-- /decompose \u2014 break it directly into atomic specs if scope is clear
-- Or just keep brainstorming \u2014 run /interview again anytime
+- /joycraft-new-feature \u2014 formalize this into a full Feature Brief with specs
+- /joycraft-decompose \u2014 break it directly into atomic specs if scope is clear
+- Or just keep brainstorming \u2014 run /joycraft-interview again anytime
 \`\`\`
 
 ## Guidelines
 
-- **This is NOT /new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.
+- **This is NOT /joycraft-new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.
 - **Let the user lead.** Your job is to listen, clarify, and capture \u2014 not to structure or direct.
 - **Mark everything as DRAFT.** The output is a starting point, not a commitment.
 - **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.
 - **Multiple interviews are fine.** The user might run this several times as their thinking evolves. Each creates a new dated draft.
 `,
-  "new-feature.md": `---
-name: new-feature
+  "joycraft-new-feature.md": `---
+name: joycraft-new-feature
 description: Guided feature development \u2014 interview the user, produce a Feature Brief, then decompose into atomic specs
 ---
 
@@ -379,7 +379,7 @@ Recommended execution:
 To execute: Start a fresh session per spec. Each session should:
 1. Read the spec
 2. Implement
-3. Run /session-end to capture discoveries
+3. Run /joycraft-session-end to capture discoveries
 4. Commit and PR
 
 Ready to start?
@@ -387,10 +387,10 @@ Ready to start?
 
 **Why:** A fresh session for execution produces better results. The interview session has too much context noise \u2014 a clean session with just the spec is more focused.
 
-You can also use \`/decompose\` to re-decompose a brief if the breakdown needs adjustment, or run \`/interview\` first for a lighter brainstorm before committing to the full workflow.
+You can also use \`/joycraft-decompose\` to re-decompose a brief if the breakdown needs adjustment, or run \`/joycraft-interview\` first for a lighter brainstorm before committing to the full workflow.
 `,
-  "session-end.md": `---
-name: session-end
+  "joycraft-session-end.md": `---
+name: joycraft-session-end
 description: Wrap up a session \u2014 capture discoveries, verify, prepare for PR or next session
 ---
 
@@ -432,6 +432,17 @@ Use this format:
 
 If nothing surprising happened, skip the discovery file entirely. No discovery is a good sign \u2014 the spec was accurate.
 
+## 1b. Update Context Documents
+
+If \`docs/context/\` exists, quickly check whether this session revealed anything about:
+
+- **Production risks** \u2014 did you interact with or learn about production vs staging systems? \u2192 Update \`docs/context/production-map.md\`
+- **Wrong assumptions** \u2014 did the agent (or you) assume something that turned out to be false? \u2192 Update \`docs/context/dangerous-assumptions.md\`
+- **Key decisions** \u2014 did you make an architectural or tooling choice? \u2192 Add a row to \`docs/context/decision-log.md\`
+- **Unwritten rules** \u2014 did you discover a convention or constraint not documented anywhere? \u2192 Update \`docs/context/institutional-knowledge.md\`
+
+Skip this if nothing applies. Don't force it \u2014 only update when there's genuine new context.
+
 ## 2. Run Validation
 
 Run the project's validation commands. Check CLAUDE.md for project-specific commands. Common checks:
@@ -464,8 +475,8 @@ Session complete.
 - Next: [what the next session should tackle, or "ready for PR"]
 \`\`\`
 `,
-  "tune.md": `---
-name: tune
+  "joycraft-tune.md": `---
+name: joycraft-tune
 description: Assess and upgrade your project's AI development harness \u2014 score 7 dimensions, apply fixes, show path to Level 5
 ---
 
@@ -565,17 +576,19 @@ Examine \`docs/\` directory structure and content.
 | 4 | Structured docs/ with templates and clear organization |
 | 5 | Full structure: briefs/, specs/, templates/, architecture docs, referenced from CLAUDE.md |
 
-### Dimension 6: Knowledge Capture
+### Dimension 6: Knowledge Capture & Contextual Stewardship
 
-Look for discoveries, decisions, and
+Look for discoveries, decisions, session notes, and context documents.
 
 | Score | Criteria |
 |-------|----------|
 | 1 | No knowledge capture mechanism |
-| 2 | Ad-hoc notes
-| 3 |
-| 4 |
-| 5 |
+| 2 | Ad-hoc notes or a discoveries directory with no entries |
+| 3 | Discoveries directory with some entries, or context docs exist but empty |
+| 4 | Active discoveries + at least 2 context docs with content (production-map, dangerous-assumptions, decision-log, institutional-knowledge) |
+| 5 | Full contextual stewardship: discoveries with entries, all 4 context docs maintained, session-end workflow in active use |
+
+**Check for:** \`docs/discoveries/\`, \`docs/context/production-map.md\`, \`docs/context/dangerous-assumptions.md\`, \`docs/context/decision-log.md\`, \`docs/context/institutional-knowledge.md\`. Score based on both existence AND whether they have real content (not just templates).
 
 ### Dimension 7: Testing & Validation
 
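The new rubric scores "existence AND real content (not just templates)". One way such a check could look, as a heuristic sketch (this is an assumption, not joycraft's actual scorer): treat a context doc as maintained only if it has at least one table row or bullet that isn't an `_Example:` placeholder, a header row, or scaffold.

```javascript
// Heuristic sketch (assumption): does a context doc have real entries,
// or only the template's scaffolding and _Example:_ placeholders?
function hasRealContent(markdown) {
  const lines = markdown.split("\n").map((l) => l.trim());
  return lines.some((t, i) => {
    const isBullet = t.startsWith("- ") && !t.startsWith("- [ ]"); // skip empty checklists
    const isRow = t.startsWith("|") && !/^[|\s:-]+$/.test(t);      // skip separator rows
    if (!isBullet && !isRow) return false;                         // entries live in rows/bullets
    if (t.includes("_Example:")) return false;                     // template placeholders
    if (isRow && /^[|\s:-]+$/.test(lines[i + 1] || "")) return false; // header row before separator
    return true;
  });
}
```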
@@ -677,7 +690,42 @@ Based on their answer, use the appropriate git rules in the Behavioral Boundarie
 - Amend commits that have been pushed to remote
 \`\`\`
 
-
+### Risk Interview
+
+Before applying upgrades, ask 3-5 targeted questions to capture what's dangerous in this project. Skip this if \`docs/context/production-map.md\` or \`docs/context/dangerous-assumptions.md\` already exist (offer to update instead).
+
+**Question 1:** "What could this agent break that would ruin your day? Think: production databases, live APIs, billing systems, user data, infrastructure."
+
+From the answer, generate:
+- NEVER rules for CLAUDE.md (e.g., "NEVER connect to production DB at postgres://prod.example.com")
+- Deny patterns for .claude/settings.json (e.g., deny Bash commands containing production hostnames)
+
+**Question 2:** "What external services does this project connect to? Which are production vs. staging/dev?"
+
+From the answer, generate:
+- \`docs/context/production-map.md\` documenting what's real vs safe to touch
+- Include: service name, URL/endpoint, environment (prod/staging/dev), what happens if corrupted
+
+**Question 3:** "What are the unwritten rules a new developer would need months to learn about this project?"
+
+From the answer, generate:
+- Additions to CLAUDE.md boundaries (new ALWAYS/ASK FIRST/NEVER rules)
+- \`docs/context/dangerous-assumptions.md\` with "Agent might assume X, but actually Y"
+
+**Question 4 (optional):** "What happened last time something went wrong with an automated tool or deploy?"
+
+If the user has a story, capture the lesson as a specific NEVER rule and add to dangerous-assumptions.md.
+
+**Question 5:** "Any files, directories, or commands that should be completely off-limits?"
+
+From the answer, generate deny rules for .claude/settings.json and add to NEVER section.
+
+**Rules for the interview:**
+- Ask questions ONE AT A TIME, not all at once
+- If the user says "nothing" or "skip", respect that and move on
+- Keep it to 2-3 minutes total \u2014 don't interrogate
+- Generate artifacts immediately after the interview, don't wait for all questions
+- This is the SECOND and LAST set of questions during /joycraft-tune (first is git autonomy)
 
 ### Tier 2: Apply and Show Diff (do it, then report)
 These modify important files but are additive (append-only). Apply them, then show what changed so the user can review. Git is the undo button.
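The risk interview turns answers into deny patterns for `.claude/settings.json`. A sketch of generating such a fragment from a list of production hostnames; the `permissions.deny` shape follows Claude Code's settings file, but treat the exact rule syntax below as an assumption:

```javascript
// Sketch (assumption): build a .claude/settings.json fragment that denies
// Bash commands mentioning production hostnames gathered in the interview.
function denyRulesForHosts(hosts) {
  return {
    permissions: {
      deny: hosts.map((h) => `Bash(*${h}*)`), // one deny rule per hostname
    },
  };
}

const fragment = denyRulesForHosts(["prod.example.com", "db.internal"]);
console.log(JSON.stringify(fragment, null, 2));
```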
@@ -741,9 +789,9 @@ You're at Level [X]. Here's what each level looks like:
 | 5 | Define what + why | Specs in, software out | Systems design |
 
 ### Your Next Steps Toward Level [X+1]:
-1. [Specific action based on current gaps \u2014 e.g., "Write your first atomic spec using /new-feature"]
+1. [Specific action based on current gaps \u2014 e.g., "Write your first atomic spec using /joycraft-new-feature"]
 2. [Next action \u2014 e.g., "Add vitest and write tests for your core logic"]
-3. [Next action \u2014 e.g., "Use /session-end consistently to build your discoveries log"]
+3. [Next action \u2014 e.g., "Use /joycraft-session-end consistently to build your discoveries log"]
 
 ### What Level 5 Looks Like (Your North Star):
 - A backlog of ready specs that agents pull from and execute autonomously
@@ -1088,6 +1136,109 @@ The GET endpoint does a find-or-create: if no record exists for the user, create
 | PATCH with unknown event type \`{"foo": {"email": true}}\` | Return 400 with validation error listing valid event types |
 | GET for user with no existing record | Create default preferences, return 200 |
 | Concurrent PATCH requests | Last-write-wins (optimistic, no locking) \u2014 acceptable for user preferences |
+`,
+  "context/dangerous-assumptions.md": `# Dangerous Assumptions
+
+> Things the AI agent might assume that are wrong in this project.
+> Generated by Joycraft risk interview. Update when you discover new gotchas.
+
+## Assumptions
+
+| Agent Might Assume | But Actually | Impact If Wrong |
+|-------------------|-------------|----------------|
+| _Example: All databases are dev/test_ | _The default connection is production_ | _Data loss_ |
+| _Example: Deleting and recreating is safe_ | _Some resources have manual config not in code_ | _Hours of manual recovery_ |
+
+## Historical Incidents
+
+| Date | What Happened | Lesson | Rule Added |
+|------|-------------|--------|------------|
+| _Example: 2026-03-15_ | _Agent deleted staging infra thinking it was temp_ | _Always verify environment before destructive ops_ | _NEVER: Delete cloud resources without listing them first_ |
+`,
+  "context/decision-log.md": `# Decision Log
+
+> Why choices were made, not just what was chosen.
+> Update this when making architectural, tooling, or process decisions.
+> This is the institutional memory that prevents re-litigating settled questions.
+
+## Decisions
+
+| Date | Decision | Why | Alternatives Rejected | Revisit When |
+|------|----------|-----|----------------------|-------------|
+| _Example: 2026-03-15_ | _Use Supabase over Firebase_ | _Postgres flexibility, row-level security, self-hostable_ | _Firebase (vendor lock-in), PlanetScale (no RLS)_ | _If we need real-time sync beyond Supabase's capabilities_ |
+
+## Principles
+
+_Capture recurring decision patterns here \u2014 they save time on future choices._
+
+- _Example: "Prefer tools we can self-host over pure SaaS \u2014 reduces vendor risk"_
+- _Example: "Choose boring technology for infrastructure, cutting-edge only for core differentiators"_
+`,
+  "context/institutional-knowledge.md": `# Institutional Knowledge
+
+> Unwritten rules, team conventions, and organizational context that AI agents can't derive from code.
+> This is the knowledge that takes a new developer months to absorb.
+> Update when you catch yourself saying "oh, you didn't know about that?"
+
+## Team Conventions
+
+_Things everyone on the team knows but nobody wrote down._
+
+- _Example: "We never deploy on Fridays"_
+- _Example: "The CEO reviews all UI changes before they ship"_
+- _Example: "PR titles must reference the Jira ticket number"_
+
+## Organizational Constraints
+
+_Business rules, compliance requirements, or political realities that affect technical decisions._
+
+- _Example: "Legal requires all user data to be stored in EU regions"_
+- _Example: "The payments team owns the billing schema \u2014 never modify without their approval"_
+- _Example: "We have an informal agreement with Vendor X about API rate limits"_
+
+## Historical Context
+
+_Why things are the way they are \u2014 especially when it looks wrong._
+
+- _Example: "The auth module uses an old pattern because it predates our TypeScript migration \u2014 don't refactor without a spec"_
+- _Example: "The caching layer has a 5-second TTL because we had a consistency bug in 2025 \u2014 increasing it requires careful testing"_
+
+## People & Ownership
+
+_Who owns what, who to ask, who cares about what._
+
+- _Example: "Alice owns the payment pipeline \u2014 all changes need her review"_
+- _Example: "The data team is sensitive about query performance on the analytics tables"_
+`,
+  "context/production-map.md": `# Production Map
+
+> What's real, what's staging, what's safe to touch.
+> Generated by Joycraft risk interview. Update as your infrastructure evolves.
+
+## Services
+
+| Service | Environment | URL/Endpoint | Impact if Corrupted |
+|---------|-------------|-------------|-------------------|
+| _Example: Main DB_ | _Production_ | _postgres://prod.example.com_ | _1.9M user records lost_ |
+| _Example: Staging DB_ | _Staging_ | _postgres://staging.example.com_ | _Test data only, safe to reset_ |
+
+## Secrets & Credentials
+
+| Secret | Location | Notes |
+|--------|----------|-------|
+| _Example: DATABASE_URL_ | _.env.local_ | _Production connection \u2014 NEVER commit_ |
+
+## Safe to Touch
+
+- [ ] Staging environment at [URL]
+- [ ] Test/fixture data in [location]
+- [ ] Development API keys
+
+## NEVER Touch Without Explicit Approval
+
+- [ ] Production database
+- [ ] Live API endpoints
+- [ ] User-facing infrastructure
 `
 };
 
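The decision-log template above stores one table row per decision. A hypothetical helper for formatting such a row with the template's columns (Date, Decision, Why, Alternatives Rejected, Revisit When); this is illustrative, not part of joycraft:

```javascript
// Hypothetical helper (assumption): format one decision-log.md table row
// matching the template's five columns.
function decisionRow({ date, decision, why, rejected, revisit }) {
  const esc = (s) => String(s).replace(/\|/g, "\\|"); // keep cell pipes literal
  return `| ${[date, decision, why, rejected, revisit].map((v) => esc(v)).join(" | ")} |`;
}
```

Appending the returned string to the `## Decisions` table keeps the log machine-readable for later tooling.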
@@ -1126,4 +1277,4 @@ export {
   readVersion,
   writeVersion
 };
-//# sourceMappingURL=chunk-
+//# sourceMappingURL=chunk-FNGCEYUY.js.map
package/dist/chunk-FNGCEYUY.js.map
ADDED

@@ -0,0 +1 @@
{"version":3,"sources":["../src/bundled-files.ts","../src/version.ts"],"sourcesContent":["// Bundled file contents — embedded at build time\n\nexport const SKILLS: Record<string, string> = {\n 'joycraft-decompose.md': `---\nname: joycraft-decompose\ndescription: Break a feature brief into atomic specs — small, testable, independently executable units\n---\n\n# Decompose Feature into Atomic Specs\n\nYou have a Feature Brief (or the user has described a feature). Your job is to decompose it into atomic specs that can be executed independently — one spec per session.\n\n## Step 1: Verify the Brief Exists\n\nLook for a Feature Brief in \\`docs/briefs/\\`. If one doesn't exist yet, tell the user:\n\n> No feature brief found. Run \\`/joycraft-new-feature\\` first to interview and create one, or describe the feature now and I'll work from your description.\n\nIf the user describes the feature inline, work from that description directly. You don't need a formal brief to decompose — but recommend creating one for complex features.\n\n## Step 2: Identify Natural Boundaries\n\n**Why:** Good boundaries make specs independently testable and committable. 
Bad boundaries create specs that can't be verified without other specs also being done.\n\nRead the brief (or description) and identify natural split points:\n\n- **Data layer changes** (schemas, types, migrations) — always a separate spec\n- **Pure functions / business logic** — separate from I/O\n- **UI components** — separate from data fetching\n- **API endpoints / route handlers** — separate from business logic\n- **Test infrastructure** (mocks, fixtures, helpers) — can be its own spec if substantial\n- **Configuration / environment** — separate from code changes\n\nAsk yourself: \"Can this piece be committed and tested without the other pieces existing?\" If yes, it's a good boundary.\n\n## Step 3: Build the Decomposition Table\n\nFor each atomic spec, define:\n\n| # | Spec Name | Description | Dependencies | Size |\n|---|-----------|-------------|--------------|------|\n\n**Rules:**\n- Each spec name is \\`verb-object\\` format (e.g., \\`add-terminal-detection\\`, \\`extract-prompt-module\\`)\n- Each description is ONE sentence — if you need two, the spec is too big\n- Dependencies reference other spec numbers — keep the dependency graph shallow\n- More than 2 dependencies on a single spec = it's too big, split further\n- Aim for 3-7 specs per feature. Fewer than 3 = probably not decomposed enough. More than 10 = the feature brief is too big\n\n## Step 4: Present and Iterate\n\nShow the decomposition table to the user. Ask:\n1. \"Does this breakdown match how you think about this feature?\"\n2. \"Are there any specs that feel too big or too small?\"\n3. \"Should any of these run in parallel (separate worktrees)?\"\n\nIterate until the user approves.\n\n## Step 5: Generate Atomic Specs\n\nFor each approved row, create \\`docs/specs/YYYY-MM-DD-spec-name.md\\`. Create the \\`docs/specs/\\` directory if it doesn't exist.\n\n**Why:** Each spec must be self-contained — a fresh Claude session should be able to execute it without reading the Feature Brief. 
Copy relevant constraints and context into each spec.\n\nUse this structure:\n\n\\`\\`\\`markdown\n# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\` (or \"standalone\")\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph — what changes when this spec is done?\n\n## Why\nOne sentence — what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n\\`\\`\\`\n\nIf \\`docs/templates/ATOMIC_SPEC_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\nFill in all sections — each spec must be self-contained (no \"see the brief for context\"). Copy relevant constraints from the Feature Brief into each spec. Write acceptance criteria specific to THIS spec, not the whole feature.\n\n## Step 6: Recommend Execution Strategy\n\nBased on the dependency graph:\n- **Independent specs** — \"These can run in parallel worktrees\"\n- **Sequential specs** — \"Execute these in order: 1 -> 2 -> 4\"\n- **Mixed** — \"Start specs 1 and 3 in parallel. 
After 1 completes, start 2.\"\n\nUpdate the Feature Brief's Execution Strategy section with the plan (if a brief exists).\n\n## Step 7: Hand Off\n\nTell the user:\n\\`\\`\\`\nDecomposition complete:\n- [N] atomic specs created in docs/specs/\n- [N] can run in parallel, [N] are sequential\n- Estimated total: [N] sessions\n\nTo execute:\n- Sequential: Open a session, point Claude at each spec in order\n- Parallel: Use worktrees — one spec per worktree, merge when done\n- Each session should end with /joycraft-session-end to capture discoveries\n\nReady to start execution?\n\\`\\`\\`\n`,\n\n 'joycraft-interview.md': `---\nname: joycraft-interview\ndescription: Brainstorm freely about what you want to build — yap, explore ideas, and get a structured summary you can use later\n---\n\n# Interview — Idea Exploration\n\nYou are helping the user brainstorm and explore what they want to build. This is a lightweight, low-pressure conversation — not a formal spec process. Let them yap.\n\n## How to Run the Interview\n\n### 1. Open the Floor\n\nStart with something like:\n\"What are you thinking about building? Just talk — I'll listen and ask questions as we go.\"\n\nLet the user talk freely. Do not interrupt their flow. Do not push toward structure yet.\n\n### 2. Ask Clarifying Questions\n\nAs they talk, weave in questions naturally — don't fire them all at once:\n\n- **What problem does this solve?** Who feels the pain today?\n- **What does \"done\" look like?** If this worked perfectly, what would a user see?\n- **What are the constraints?** Time, tech, team, budget — what boxes are we in?\n- **What's NOT in scope?** What's tempting but should be deferred?\n- **What are the edge cases?** What could go wrong? What's the weird input?\n- **What exists already?** Are we building on something or starting fresh?\n\n### 3. Play Back Understanding\n\nAfter the user has gotten their ideas out, reflect back:\n\"So if I'm hearing you right, you want to [summary]. 
The core problem is [X], and done looks like [Y]. Is that right?\"\n\nLet them correct and refine. Iterate until they say \"yes, that's it.\"\n\n### 4. Write a Draft Brief\n\nCreate a draft file at \\`docs/briefs/YYYY-MM-DD-topic-draft.md\\`. Create the \\`docs/briefs/\\` directory if it doesn't exist.\n\nUse this format:\n\n\\`\\`\\`markdown\n# [Topic] — Draft Brief\n\n> **Date:** YYYY-MM-DD\n> **Status:** DRAFT\n> **Origin:** /joycraft-interview session\n\n---\n\n## The Idea\n[2-3 paragraphs capturing what the user described — their words, their framing]\n\n## Problem\n[What pain or gap this addresses]\n\n## What \"Done\" Looks Like\n[The user's description of success — observable outcomes]\n\n## Constraints\n- [constraint 1]\n- [constraint 2]\n\n## Open Questions\n- [things that came up but weren't resolved]\n- [decisions that need more thought]\n\n## Out of Scope (for now)\n- [things explicitly deferred]\n\n## Raw Notes\n[Any additional context, quotes, or tangents worth preserving]\n\\`\\`\\`\n\n### 5. Hand Off\n\nAfter writing the draft, tell the user:\n\n\\`\\`\\`\nDraft brief saved to docs/briefs/YYYY-MM-DD-topic-draft.md\n\nWhen you're ready to move forward:\n- /joycraft-new-feature — formalize this into a full Feature Brief with specs\n- /joycraft-decompose — break it directly into atomic specs if scope is clear\n- Or just keep brainstorming — run /joycraft-interview again anytime\n\\`\\`\\`\n\n## Guidelines\n\n- **This is NOT /joycraft-new-feature.** Do not push toward formal briefs, decomposition tables, or atomic specs. The point is exploration.\n- **Let the user lead.** Your job is to listen, clarify, and capture — not to structure or direct.\n- **Mark everything as DRAFT.** The output is a starting point, not a commitment.\n- **Keep it short.** The draft brief should be 1-2 pages max. Capture the essence, not every detail.\n- **Multiple interviews are fine.** The user might run this several times as their thinking evolves. 
Each creates a new dated draft.\n`,\n\n 'joycraft-new-feature.md': `---\nname: joycraft-new-feature\ndescription: Guided feature development — interview the user, produce a Feature Brief, then decompose into atomic specs\n---\n\n# New Feature Workflow\n\nYou are starting a new feature. Follow this process in order. Do not skip steps.\n\n## Phase 1: Interview\n\nInterview the user about what they want to build. Let them talk — your job is to listen, then sharpen.\n\n**Why:** A thorough interview prevents wasted implementation time. Most failed features fail because the problem wasn't understood, not because the code was wrong.\n\n**Ask about:**\n- What problem does this solve? Who is affected?\n- What does \"done\" look like? How will a user know this works?\n- What are the hard constraints? (business rules, tech limitations, deadlines)\n- What is explicitly NOT in scope? (push hard on this — aggressive scoping is key)\n- Are there edge cases or error conditions we need to handle?\n- What existing code/patterns should this follow?\n\n**Interview technique:**\n- Let the user \"yap\" — don't interrupt their flow of ideas\n- After they finish, play back your understanding: \"So if I'm hearing you right...\"\n- Ask clarifying questions that force specificity: \"When you say 'handle errors,' what should the user see?\"\n- Push toward testable statements: \"How would we verify that works?\"\n\nKeep asking until you can fill out a Feature Brief. When ready, say:\n\"I have enough context. Let me write the Feature Brief for your review.\"\n\n## Phase 2: Feature Brief\n\nWrite a Feature Brief to \\`docs/briefs/YYYY-MM-DD-feature-name.md\\`. Create the \\`docs/briefs/\\` directory if it doesn't exist.\n\n**Why:** The brief is the single source of truth for what we're building. 
It prevents scope creep and gives every spec a shared reference point.\n\nUse this structure:\n\n\\`\\`\\`markdown\n# [Feature Name] — Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\nWhat are we building and why? The full picture in 2-4 paragraphs.\n\n## User Stories\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n- NOT: [tempting but deferred]\n\n## Decomposition\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Parallel worktrees (specs are independent)\n- [ ] Mixed\n\n## Success Criteria\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]\n\\`\\`\\`\n\nIf \\`docs/templates/FEATURE_BRIEF_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\nPresent the brief to the user. Focus review on:\n- \"Does the decomposition match how you think about this?\"\n- \"Is anything in scope that shouldn't be?\"\n- \"Are the specs small enough? Can each be described in one sentence?\"\n\nIterate until approved.\n\n## Phase 3: Generate Atomic Specs\n\nFor each row in the decomposition table, create a self-contained spec file at \\`docs/specs/YYYY-MM-DD-spec-name.md\\`. Create the \\`docs/specs/\\` directory if it doesn't exist.\n\n**Why:** Each spec must be understandable WITHOUT reading the Feature Brief. This prevents the \"Curse of Instructions\" — no spec should require holding the entire feature in context. 
Copy relevant context into each spec.\n\nUse this structure for each spec:\n\n\\`\\`\\`markdown\n# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\`\n> **Status:** Ready\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / N files / ~N lines]\n\n---\n\n## What\nOne paragraph — what changes when this spec is done?\n\n## Why\nOne sentence — what breaks or is missing without this?\n\n## Acceptance Criteria\n- [ ] [Observable behavior]\n- [ ] Build passes\n- [ ] Tests pass\n\n## Constraints\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n\n## Affected Files\n| Action | File | What Changes |\n|--------|------|-------------|\n\n## Approach\nStrategy, data flow, key decisions. Name one rejected alternative.\n\n## Edge Cases\n| Scenario | Expected Behavior |\n|----------|------------------|\n\\`\\`\\`\n\nIf \\`docs/templates/ATOMIC_SPEC_TEMPLATE.md\\` exists, reference it for the full template with additional guidance.\n\n## Phase 4: Hand Off for Execution\n\nTell the user:\n\\`\\`\\`\nFeature Brief and [N] atomic specs are ready.\n\nSpecs:\n1. [spec-name] — [one sentence] [S/M/L]\n2. [spec-name] — [one sentence] [S/M/L]\n...\n\nRecommended execution:\n- [Parallel/Sequential/Mixed strategy]\n- Estimated: [N] sessions total\n\nTo execute: Start a fresh session per spec. Each session should:\n1. Read the spec\n2. Implement\n3. Run /joycraft-session-end to capture discoveries\n4. Commit and PR\n\nReady to start?\n\\`\\`\\`\n\n**Why:** A fresh session for execution produces better results. 
The interview session has too much context noise — a clean session with just the spec is more focused.\n\nYou can also use \\`/joycraft-decompose\\` to re-decompose a brief if the breakdown needs adjustment, or run \\`/joycraft-interview\\` first for a lighter brainstorm before committing to the full workflow.\n`,\n\n 'joycraft-session-end.md': `---\nname: joycraft-session-end\ndescription: Wrap up a session — capture discoveries, verify, prepare for PR or next session\n---\n\n# Session Wrap-Up\n\nBefore ending this session, complete these steps in order.\n\n## 1. Capture Discoveries\n\n**Why:** Discoveries are the surprises — things that weren't in the spec or that contradicted expectations. They prevent future sessions from hitting the same walls.\n\nCheck: did anything surprising happen during this session? If yes, create or update a discovery file at \\`docs/discoveries/YYYY-MM-DD-topic.md\\`. Create the \\`docs/discoveries/\\` directory if it doesn't exist.\n\nOnly capture what's NOT obvious from the code or git diff:\n- \"We thought X but found Y\" — assumptions that were wrong\n- \"This API/library behaves differently than documented\" — external gotchas\n- \"This edge case needs handling in a future spec\" — deferred work with context\n- \"The approach in the spec didn't work because...\" — spec-vs-reality gaps\n- Key decisions made during implementation that aren't in the spec\n\n**Do NOT capture:**\n- Files changed (that's the diff)\n- What you set out to do (that's the spec)\n- Step-by-step narrative of the session (nobody re-reads these)\n\nUse this format:\n\n\\`\\`\\`markdown\n# Discoveries — [topic]\n\n**Date:** YYYY-MM-DD\n**Spec:** [link to spec if applicable]\n\n## [Discovery title]\n**Expected:** [what we thought would happen]\n**Actual:** [what actually happened]\n**Impact:** [what this means for future work]\n\\`\\`\\`\n\nIf nothing surprising happened, skip the discovery file entirely. 
No discovery is a good sign — the spec was accurate.\n\n## 1b. Update Context Documents\n\nIf \\`docs/context/\\` exists, quickly check whether this session revealed anything about:\n\n- **Production risks** — did you interact with or learn about production vs staging systems? → Update \\`docs/context/production-map.md\\`\n- **Wrong assumptions** — did the agent (or you) assume something that turned out to be false? → Update \\`docs/context/dangerous-assumptions.md\\`\n- **Key decisions** — did you make an architectural or tooling choice? → Add a row to \\`docs/context/decision-log.md\\`\n- **Unwritten rules** — did you discover a convention or constraint not documented anywhere? → Update \\`docs/context/institutional-knowledge.md\\`\n\nSkip this if nothing applies. Don't force it — only update when there's genuine new context.\n\n## 2. Run Validation\n\nRun the project's validation commands. Check CLAUDE.md for project-specific commands. Common checks:\n\n- Type-check (e.g., \\`tsc --noEmit\\`, \\`mypy\\`, \\`cargo check\\`)\n- Tests (e.g., \\`npm test\\`, \\`pytest\\`, \\`cargo test\\`)\n- Lint (e.g., \\`eslint\\`, \\`ruff\\`, \\`clippy\\`)\n\nFix any failures before proceeding.\n\n## 3. Update Spec Status\n\nIf working from an atomic spec in \\`docs/specs/\\`:\n- All acceptance criteria met — update status to \\`Complete\\`\n- Partially done — update status to \\`In Progress\\`, note what's left\n\nIf working from a Feature Brief in \\`docs/briefs/\\`, check off completed specs in the decomposition table.\n\n## 4. Commit\n\nCommit all changes including the discovery file (if created) and spec status updates. The commit message should reference the spec if applicable.\n\n## 5. 
Report\n\n\\`\\`\\`\nSession complete.\n- Spec: [spec name] — [Complete / In Progress]\n- Build: [passing / failing]\n- Discoveries: [N items / none]\n- Next: [what the next session should tackle, or \"ready for PR\"]\n\\`\\`\\`\n`,\n\n 'joycraft-tune.md': `---\nname: joycraft-tune\ndescription: Assess and upgrade your project's AI development harness — score 7 dimensions, apply fixes, show path to Level 5\n---\n\n# Tune — Project Harness Assessment & Upgrade\n\nYou are evaluating and upgrading this project's AI development harness. Follow these steps in order.\n\n## Step 1: Detect Harness State\n\nCheck the following and note what exists:\n\n1. **CLAUDE.md** — Read it if it exists. Check whether it contains meaningful content (not just a project name or generic README).\n2. **Key directories** — Check for: \\`docs/specs/\\`, \\`docs/briefs/\\`, \\`docs/discoveries/\\`, \\`docs/templates/\\`, \\`.claude/skills/\\`\n3. **Boundary framework** — Look for \\`Always\\`, \\`Ask First\\`, and \\`Never\\` sections in CLAUDE.md (or similar behavioral constraints under any heading).\n4. **Skills infrastructure** — Check \\`.claude/skills/\\` for installed skill files.\n5. **Test configuration** — Look for test commands in package.json, pyproject.toml, Cargo.toml, Makefile, or CI config files.\n\n## Step 2: Route Based on State\n\n### If No Harness (no CLAUDE.md, or CLAUDE.md is just a README with no structured sections):\n\nTell the user:\n- Their project has no AI development harness\n- Recommend running \\`npx joycraft init\\` to scaffold one\n- Briefly explain what it sets up: CLAUDE.md with boundaries, spec/brief templates, skills, documentation structure\n- **Stop here** — do not run the full assessment on a bare project\n\n### If Harness Exists (CLAUDE.md has structured content — boundaries, commands, architecture, or domain rules):\n\nContinue to Step 3 for the full assessment.\n\n## Step 3: Score 7 Dimensions\n\nRead CLAUDE.md thoroughly. 
Explore the project structure. Score each dimension on a 1-5 scale with specific evidence.\n\n### Dimension 1: Spec Quality\n\nLook in \\`docs/specs/\\` for specification files.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No specs directory or no spec files |\n| 2 | Specs exist but are informal notes or TODOs |\n| 3 | Specs have structure (sections, some criteria) but lack consistency |\n| 4 | Specs are structured with clear acceptance criteria and constraints |\n| 5 | Atomic specs: self-contained, acceptance criteria, constraints, edge cases, affected files |\n\n**Evidence:** Number of specs found, example of best/worst, whether acceptance criteria are present.\n\n### Dimension 2: Spec Granularity\n\nCan each spec be completed in a single coding session?\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No specs |\n| 2 | Specs cover entire features or epics |\n| 3 | Specs are feature-sized (multi-session but bounded) |\n| 4 | Most specs are session-sized with clear scope |\n| 5 | All specs are atomic — one session, one concern, clear done state |\n\n### Dimension 3: Behavioral Boundaries\n\nRead CLAUDE.md for explicit behavioral constraints.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No CLAUDE.md or no behavioral guidance |\n| 2 | CLAUDE.md exists with general instructions but no structured boundaries |\n| 3 | Some boundaries exist but not organized as Always/Ask First/Never |\n| 4 | Always/Ask First/Never sections present with reasonable coverage |\n| 5 | Comprehensive boundaries covering code style, testing, deployment, dependencies, and dangerous operations |\n\n**Important:** Projects may have strong rules under different headings (e.g., \"Critical Rules\", \"Constraints\"). Give credit for substance over format — a project with clear, enforced rules scores higher than one with empty Always/Ask First/Never sections.\n\n### Dimension 4: Skills & Hooks\n\nLook in \\`.claude/skills/\\` for skill files. 
Check for hooks configuration.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No .claude/ directory |\n| 2 | .claude/ exists but empty or minimal |\n| 3 | A few skills installed, no hooks |\n| 4 | Multiple relevant skills, basic hooks |\n| 5 | Comprehensive skills covering workflow, hooks for validation |\n\n### Dimension 5: Documentation\n\nExamine \\`docs/\\` directory structure and content.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No docs/ directory |\n| 2 | docs/ exists with ad-hoc files |\n| 3 | Some structure (subdirectories) but inconsistent |\n| 4 | Structured docs/ with templates and clear organization |\n| 5 | Full structure: briefs/, specs/, templates/, architecture docs, referenced from CLAUDE.md |\n\n### Dimension 6: Knowledge Capture & Contextual Stewardship\n\nLook for discoveries, decisions, session notes, and context documents.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No knowledge capture mechanism |\n| 2 | Ad-hoc notes or a discoveries directory with no entries |\n| 3 | Discoveries directory with some entries, or context docs exist but empty |\n| 4 | Active discoveries + at least 2 context docs with content (production-map, dangerous-assumptions, decision-log, institutional-knowledge) |\n| 5 | Full contextual stewardship: discoveries with entries, all 4 context docs maintained, session-end workflow in active use |\n\n**Check for:** \\`docs/discoveries/\\`, \\`docs/context/production-map.md\\`, \\`docs/context/dangerous-assumptions.md\\`, \\`docs/context/decision-log.md\\`, \\`docs/context/institutional-knowledge.md\\`. 
Score based on both existence AND whether they have real content (not just templates).\n\n### Dimension 7: Testing & Validation\n\nLook for test config, CI setup, and validation commands.\n\n| Score | Criteria |\n|-------|----------|\n| 1 | No test configuration |\n| 2 | Test framework installed but few/no tests |\n| 3 | Tests exist with reasonable coverage |\n| 4 | Tests + CI pipeline configured |\n| 5 | Tests + CI + validation commands in CLAUDE.md + scenario tests |\n\n## Step 4: Write Assessment\n\nWrite the assessment to \\`docs/joycraft-assessment.md\\` AND display it in the conversation. Use this format:\n\n\\`\\`\\`markdown\n# Joycraft Assessment — [Project Name]\n\n**Date:** [today's date]\n**Overall Level:** [1-5, based on average score]\n\n## Scores\n\n| Dimension | Score | Summary |\n|-----------|-------|---------|\n| Spec Quality | X/5 | [one-line summary] |\n| Spec Granularity | X/5 | [one-line summary] |\n| Behavioral Boundaries | X/5 | [one-line summary] |\n| Skills & Hooks | X/5 | [one-line summary] |\n| Documentation | X/5 | [one-line summary] |\n| Knowledge Capture | X/5 | [one-line summary] |\n| Testing & Validation | X/5 | [one-line summary] |\n\n**Average:** X.X/5\n\n## Detailed Findings\n\n### [Dimension Name] — X/5\n**Evidence:** [specific files, paths, counts found]\n**Gap:** [what's missing]\n**Recommendation:** [specific action to improve]\n\n## Upgrade Plan\n\nTo reach Level [current + 1], complete these steps:\n1. [Most impactful action] — addresses [dimension] (X -> Y)\n2. [Next action] — addresses [dimension] (X -> Y)\n[up to 5 actions, ordered by impact]\n\\`\\`\\`\n\n## Step 5: Apply Upgrades\n\nImmediately after presenting the assessment, apply upgrades using the three-tier model below. Do NOT ask for per-item permission — batch everything and show a consolidated report at the end.\n\n### Tier 1: Silent Apply (just do it)\nThese are safe, additive operations. 
Apply them without asking:\n- Create missing directories (\\`docs/specs/\\`, \\`docs/briefs/\\`, \\`docs/discoveries/\\`, \\`docs/templates/\\`)\n- Install missing skills to \\`.claude/skills/\\`\n- Copy missing templates to \\`docs/templates/\\`\n- Create AGENTS.md if it doesn't exist\n\n### Git Autonomy Preference\n\nBefore applying Behavioral Boundaries to CLAUDE.md, ask the user ONE question:\n\n> How autonomous should git operations be?\n> 1. **Cautious** — commits freely, asks before pushing or opening PRs *(good for learning the workflow)*\n> 2. **Autonomous** — commits, pushes to branches, and opens PRs without asking *(good for spec-driven development)*\n\nBased on their answer, use the appropriate git rules in the Behavioral Boundaries section:\n\n**If Cautious (default):**\n\\`\\`\\`\n### ASK FIRST\n- Pushing to remote\n- Creating or merging pull requests\n- Any destructive git operation (force-push, reset --hard, branch deletion)\n\n### NEVER\n- Push directly to main/master without approval\n- Amend commits that have been pushed\n\\`\\`\\`\n\n**If Autonomous:**\n\\`\\`\\`\n### ALWAYS\n- Push to feature branches after each commit\n- Open a PR when all specs in a feature are complete\n- Use descriptive branch names: feature/spec-name\n\n### ASK FIRST\n- Merging PRs to main/master\n- Any destructive git operation (force-push, reset --hard, branch deletion)\n\n### NEVER\n- Push directly to main/master (always use feature branches + PR)\n- Amend commits that have been pushed to remote\n\\`\\`\\`\n\n### Risk Interview\n\nBefore applying upgrades, ask 3-5 targeted questions to capture what's dangerous in this project. Skip this if \\`docs/context/production-map.md\\` or \\`docs/context/dangerous-assumptions.md\\` already exist (offer to update instead).\n\n**Question 1:** \"What could this agent break that would ruin your day? 
Think: production databases, live APIs, billing systems, user data, infrastructure.\"\n\nFrom the answer, generate:\n- NEVER rules for CLAUDE.md (e.g., \"NEVER connect to production DB at postgres://prod.example.com\")\n- Deny patterns for .claude/settings.json (e.g., deny Bash commands containing production hostnames)\n\n**Question 2:** \"What external services does this project connect to? Which are production vs. staging/dev?\"\n\nFrom the answer, generate:\n- \\`docs/context/production-map.md\\` documenting what's real vs safe to touch\n- Include: service name, URL/endpoint, environment (prod/staging/dev), what happens if corrupted\n\n**Question 3:** \"What are the unwritten rules a new developer would need months to learn about this project?\"\n\nFrom the answer, generate:\n- Additions to CLAUDE.md boundaries (new ALWAYS/ASK FIRST/NEVER rules)\n- \\`docs/context/dangerous-assumptions.md\\` with \"Agent might assume X, but actually Y\"\n\n**Question 4 (optional):** \"What happened last time something went wrong with an automated tool or deploy?\"\n\nIf the user has a story, capture the lesson as a specific NEVER rule and add to dangerous-assumptions.md.\n\n**Question 5:** \"Any files, directories, or commands that should be completely off-limits?\"\n\nFrom the answer, generate deny rules for .claude/settings.json and add to NEVER section.\n\n**Rules for the interview:**\n- Ask questions ONE AT A TIME, not all at once\n- If the user says \"nothing\" or \"skip\", respect that and move on\n- Keep it to 2-3 minutes total — don't interrogate\n- Generate artifacts immediately after the interview, don't wait for all questions\n- This is the SECOND and LAST set of questions during /joycraft-tune (first is git autonomy)\n\n### Tier 2: Apply and Show Diff (do it, then report)\nThese modify important files but are additive (append-only). Apply them, then show what changed so the user can review. 
Git is the undo button.\n- Add missing sections to CLAUDE.md (Behavioral Boundaries, Development Workflow, Getting Started with Joycraft, Key Files, Common Gotchas)\n- Use the git autonomy preference from above when generating the Behavioral Boundaries section\n- Draft section content from the actual codebase — not generic placeholders. Read the project's real rules, real commands, real structure.\n- Only append — never modify or reformat existing content\n\n### Tier 3: Confirm First (ask before acting)\nThese are potentially destructive or opinionated. Ask before proceeding:\n- Rewriting or reorganizing existing CLAUDE.md sections\n- Overwriting files the user has customized\n- Suggesting test framework installation or CI setup (present as recommendations, don't auto-install)\n\n### Reading a Previous Assessment\n\nIf \\`docs/joycraft-assessment.md\\` already exists, read it first. If all recommendations have been applied, report \"nothing to upgrade\" and offer to re-assess.\n\n### After Applying\n\nAppend a history entry to \\`docs/joycraft-history.md\\` (create if needed):\n\\`\\`\\`\n| [date] | [new avg score] | [change from last] | [summary of what changed] |\n\\`\\`\\`\n\nThen display a single consolidated report:\n\n\\`\\`\\`markdown\n## Upgrade Results\n\n| Dimension | Before | After | Change |\n|------------------------|--------|-------|--------|\n| Spec Quality | X/5 | X/5 | +X |\n| ... | ... | ... | ... |\n\n**Previous Level:** X — **New Level:** X\n\n### What Changed\n- [list each change applied]\n\n### What Was Skipped\n- [recommendations the user declined, if any]\n\n### Remaining Gaps\n- [anything still below 3.5, with specific next action]\n\\`\\`\\`\n\nUpdate \\`docs/joycraft-assessment.md\\` with the new scores and today's date.\n\n## Step 6: Show Path to Level 5\n\nAfter the upgrade report, always show the Level 5 roadmap tailored to the project's current state:\n\n\\`\\`\\`markdown\n## Path to Level 5 — Autonomous Development\n\nYou're at Level [X]. 
Here's what each level looks like:\n\n| Level | You | AI | Key Skill |\n|-------|-----|-----|-----------|\n| 2 | Guide direction | Multi-file changes | AI-native tooling |\n| 3 | Review diffs | Primary developer | Code review at scale |\n| 4 | Write specs, check tests | End-to-end development | Specification writing |\n| 5 | Define what + why | Specs in, software out | Systems design |\n\n### Your Next Steps Toward Level [X+1]:\n1. [Specific action based on current gaps — e.g., \"Write your first atomic spec using /joycraft-new-feature\"]\n2. [Next action — e.g., \"Add vitest and write tests for your core logic\"]\n3. [Next action — e.g., \"Use /joycraft-session-end consistently to build your discoveries log\"]\n\n### What Level 5 Looks Like (Your North Star):\n- A backlog of ready specs that agents pull from and execute autonomously\n- CI failures auto-generate fix specs — no human triage for regressions\n- Multi-agent execution with parallel worktrees, one spec per agent\n- External holdout scenarios (tests the agent can't see) prevent overfitting\n- CLAUDE.md evolves from discoveries — the harness improves itself\n\n### You'll Know You're at Level 5 When:\n- You describe a feature in one sentence and walk away\n- The system produces a PR with tests, docs, and discoveries — without further input\n- Failed CI runs generate their own fix specs\n- Your harness improves without you manually editing CLAUDE.md\n\nThis is a significant journey. Most teams are at Level 2. Getting to Level 4 with Joycraft's workflow is achievable — Level 5 requires building validation infrastructure (scenario tests, spec queues, CI feedback loops) that goes beyond what Joycraft scaffolds today. But the harness you're building now is the foundation.\n\\`\\`\\`\n\nTailor the \"Next Steps\" section based on the project's actual gaps — don't show generic advice.\n\n## Edge Cases\n\n- **Not a git repo:** Note this. 
Joycraft works best in a git repo.\n- **CLAUDE.md is just a README:** Treat as \"no harness.\"\n- **Non-Joycraft skills already installed:** Acknowledge them. Do not replace — suggest additions.\n- **Monorepo:** Assess the root CLAUDE.md. Note if component-level CLAUDE.md files exist.\n- **Project has rules under non-standard headings:** Give credit. Suggest reformatting as Always/Ask First/Never but acknowledge the rules are there.\n- **Assessment file missing when upgrading:** Run the full assessment first, then offer to apply.\n- **Assessment is stale:** Warn and offer to re-assess before proceeding.\n- **All recommendations already applied:** Report \"nothing to upgrade\" and stop.\n- **User declines a recommendation:** Skip it, continue, include in \"What Was Skipped.\"\n- **CLAUDE.md does not exist at all:** Create it with recommended sections, but ask the user first.\n- **Non-Joycraft content in CLAUDE.md:** Preserve exactly as-is. Only append or merge — never remove or reformat existing content.\n`,\n\n};\n\nexport const TEMPLATES: Record<string, string> = {\n 'ATOMIC_SPEC_TEMPLATE.md': `# [Verb + Object] — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/YYYY-MM-DD-feature-name.md\\` (or \"standalone\")\n> **Status:** Draft | Ready | In Progress | Complete\n> **Date:** YYYY-MM-DD\n> **Estimated scope:** [1 session / 2-3 files / ~N lines]\n\n---\n\n## What\n\nOne paragraph. What changes when this spec is done? A developer with no context should understand the change in 15 seconds.\n\n## Why\n\nOne sentence. What breaks, hurts, or is missing without this? Link to the parent brief if this is part of a larger feature.\n\n## Acceptance Criteria\n\n- [ ] [Observable behavior — what a human would see/verify]\n- [ ] [Another observable behavior]\n- [ ] [Regression: existing behavior X still works]\n- [ ] Build passes\n- [ ] Tests pass\n\n> These are your \"done\" checkboxes. 
If Claude says \"done\" and these aren't all green, it's not done.\n\n## Constraints\n\n- MUST: [hard requirement]\n- MUST NOT: [hard prohibition]\n- SHOULD: [strong preference, with rationale]\n\n> Use RFC 2119 language. 2-5 constraints is typical. Zero is a red flag — every change has boundaries.\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n| Create | \\`path/to/file.ts\\` | [brief description] |\n| Modify | \\`path/to/file.ts\\` | [what specifically changes] |\n\n## Approach\n\nHow this will be implemented. Not pseudo-code — describe the strategy, data flow, and key decisions. Name one rejected alternative and why it was rejected.\n\n_Scale to complexity: 3 sentences for a bug fix, 1 page max for a feature. If you need more than a page, this spec is too big — decompose further._\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n| [what could go wrong] | [what should happen] |\n\n> Skip for trivial changes. Required for anything touching user input, data, or external APIs.`,\n\n 'FEATURE_BRIEF_TEMPLATE.md': `# [Feature Name] — Feature Brief\n\n> **Date:** YYYY-MM-DD\n> **Project:** [project name]\n> **Status:** Interview | Decomposing | Specs Ready | In Progress | Complete\n\n---\n\n## Vision\n\nWhat are we building and why? This is the \"yap\" distilled — the full picture in 2-4 paragraphs.\n\n## User Stories\n\n- As a [role], I want [capability] so that [benefit]\n\n## Hard Constraints\n\n- MUST: [constraint that every spec must respect]\n- MUST NOT: [prohibition that every spec must respect]\n\n## Out of Scope\n\n- NOT: [tempting but deferred]\n\n## Decomposition\n\n| # | Spec Name | Description | Dependencies | Est. 
Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | [verb-object] | [one sentence] | None | [S/M/L] |\n\n## Execution Strategy\n\n- [ ] Sequential (specs have chain dependencies)\n- [ ] Agent teams (parallel teammates within phases)\n- [ ] Parallel worktrees (specs are independent)\n\n## Success Criteria\n\n- [ ] [End-to-end behavior 1]\n- [ ] [No regressions in existing features]`,\n\n 'IMPLEMENTATION_PLAN_TEMPLATE.md': `# [Feature Name] — Implementation Plan\n\n> **Design Spec:** \\`docs/specs/YYYY-MM-DD-feature-name.md\\`\n> **Date:** YYYY-MM-DD\n> **Estimated Tasks:** [number]\n\n---\n\n## Prerequisites\n\n- [ ] Design spec is approved\n- [ ] Branch created (if warranted): \\`feature/feature-name\\`\n- [ ] Required context loaded: [list any docs Claude should read first]\n\n## Task 1: [Descriptive Name]\n\n**Goal:** One sentence — what is true after this task that wasn't true before.\n\n**Files:**\n- \\`path/to/file.ts\\` — [what changes]\n\n**Steps:**\n1. [Concrete action]\n2. [Next concrete action]\n\n**Verification:**\n- [ ] [How to confirm this task worked]\n\n**Commit:** \\`feat: description\\`\n\n---\n\n## Task N: Final Verification\n\n**Goal:** Confirm everything works end-to-end.\n\n**Steps:**\n1. Run full type-check\n2. Run linter\n3. Run tests\n4. 
Walk through verification checklist from design spec\n\n**Verification:**\n- [ ] All design spec verification items pass\n- [ ] No regressions in existing functionality`,\n\n 'BOUNDARY_FRAMEWORK.md': `# Boundary Framework\n\n> Add this to the TOP of your CLAUDE.md, before any project context.\n> Customize the specific rules per project, but keep the three-tier structure.\n\n---\n\n## Behavioral Boundaries\n\n### ALWAYS (do these without asking)\n- Run type-check and lint before every commit\n- Commit after completing each discrete task (atomic commits)\n- Follow patterns in existing code — match existing code style\n- Check the active implementation plan before starting work\n\n### ASK FIRST (pause and confirm before doing these)\n- Adding new dependencies\n- Modifying database schema, migrations, or data models\n- Changing authentication or authorization flows\n- Deviating from an approved implementation plan\n- Any destructive operation (deleting files, dropping tables, force-pushing)\n- Modifying CI/CD, deployment, or infrastructure configuration\n\n### NEVER (do not do these under any circumstances)\n- Push to production or main branch without explicit approval\n- Delete specs, plans, or documentation\n- Modify environment variables or secrets\n- Skip type-checking or linting to \"save time\"\n- Make changes outside the scope of the current spec/plan\n- Commit code that doesn't build\n- Remove or weaken existing tests\n- Hardcode secrets, API keys, or credentials`,\n 'examples/example-brief.md': `# Add User Notifications — Feature Brief\n\n> **Date:** 2026-03-15\n> **Project:** acme-web\n> **Status:** Specs Ready\n\n---\n\n## Vision\n\nOur users have no idea when things happen in their account. A teammate comments on their pull request, a deployment finishes, a billing threshold is hit — they find out by accident, minutes or hours later. 
This is the #1 complaint in our last user survey.\n\nWe are building a notification system that delivers real-time and batched notifications across in-app, email, and (later) Slack channels. Users will have fine-grained control over what they receive and how. When this ships, no important event goes unnoticed, and no user gets buried in noise they didn't ask for.\n\nThe system is designed to be extensible — new event types plug in without touching the notification infrastructure. We start with three event types (PR comments, deploy status, billing alerts) and prove the pattern works before expanding.\n\n## User Stories\n\n- As a developer, I want to see a notification badge in the app when someone comments on my PR so that I can respond quickly\n- As a team lead, I want to receive an email when a production deployment fails so that I can coordinate the response\n- As a billing admin, I want to get alerted when usage exceeds 80% of our plan limit so that I can upgrade before service is disrupted\n- As any user, I want to control which notifications I receive and through which channels so that I am not overwhelmed\n\n## Hard Constraints\n\n- MUST: All notifications go through a single event bus — no direct coupling between event producers and delivery channels\n- MUST: Email delivery uses the existing SendGrid integration (do not add a new email provider)\n- MUST: Respect user preferences before delivering — never send a notification the user has opted out of\n- MUST NOT: Store notification content in plaintext in the database — use the existing encryption-at-rest pattern\n- MUST NOT: Send more than 50 emails per user per day (batch if necessary)\n\n## Out of Scope\n\n- NOT: Slack/Discord integration (Phase 2)\n- NOT: Push notifications / mobile (Phase 2)\n- NOT: Notification templates with rich HTML — plain text and simple markdown only for now\n- NOT: Admin dashboard for monitoring notification delivery rates\n- NOT: Retroactive notifications for events that 
happened before the feature ships\n\n## Decomposition\n\n| # | Spec Name | Description | Dependencies | Est. Size |\n|---|-----------|-------------|--------------|-----------|\n| 1 | add-notification-preferences-api | Create REST endpoints for users to read and update their notification preferences | None | M |\n| 2 | add-event-bus-infrastructure | Set up the internal event bus that decouples event producers from notification delivery | None | M |\n| 3 | add-notification-delivery-service | Build the service that consumes events, checks preferences, and dispatches to channels (in-app, email) | Spec 1, Spec 2 | L |\n| 4 | add-in-app-notification-ui | Add notification bell, dropdown, and badge count to the app header | Spec 3 | M |\n| 5 | add-email-batching | Implement daily digest batching for email notifications that exceed the per-user threshold | Spec 3 | S |\n\n## Execution Strategy\n\n- [x] Agent teams (parallel teammates within phases, sequential between phases)\n\n\\`\\`\\`\nPhase 1: Teammate A -> Spec 1 (preferences API), Teammate B -> Spec 2 (event bus)\nPhase 2: Teammate A -> Spec 3 (delivery service) — depends on Phase 1\nPhase 3: Teammate A -> Spec 4 (UI), Teammate B -> Spec 5 (batching) — both depend on Spec 3\n\\`\\`\\`\n\n## Success Criteria\n\n- [ ] User updates notification preferences via API, and subsequent events respect those preferences\n- [ ] A PR comment event triggers an in-app notification visible in the UI within 2 seconds\n- [ ] A deploy failure event sends an email to subscribed users via SendGrid\n- [ ] When email threshold (50/day) is exceeded, remaining notifications are batched into a daily digest\n- [ ] No regressions in existing PR, deployment, or billing features\n\n## External Scenarios\n\n| Scenario | What It Tests | Pass Criteria |\n|----------|--------------|---------------|\n| opt-out-respected | User disables email for deploy events, deploy fails | No email sent, in-app notification still appears |\n| batch-threshold | Send 
51 email-eligible events for one user in a day | 50 individual emails + 1 digest containing the overflow |\n| preference-persistence | User sets preferences, logs out, logs back in | Preferences are unchanged |\n`,\n\n 'examples/example-spec.md': `# Add Notification Preferences API — Atomic Spec\n\n> **Parent Brief:** \\`docs/briefs/2026-03-15-add-user-notifications.md\\`\n> **Status:** Ready\n> **Date:** 2026-03-15\n> **Estimated scope:** 1 session / 4 files / ~250 lines\n\n---\n\n## What\n\nAdd REST API endpoints that let users read and update their notification preferences. Each user gets a preferences record with per-event-type, per-channel toggles (e.g., \"PR comments: in-app=on, email=off\"). Preferences default to all-on for new users and are stored encrypted alongside the user profile.\n\n## Why\n\nThe notification delivery service (Spec 3) needs to check preferences before dispatching. Without this API, there is no way for users to control what they receive, and we cannot build the delivery pipeline.\n\n## Acceptance Criteria\n\n- [ ] \\`GET /api/v1/notifications/preferences\\` returns the current user's preferences as JSON\n- [ ] \\`PATCH /api/v1/notifications/preferences\\` updates one or more preference fields and returns the updated record\n- [ ] New users get default preferences (all channels enabled for all event types) on first read\n- [ ] Preferences are validated — unknown event types or channels return 400\n- [ ] Preferences are stored using the existing encryption-at-rest pattern (\\`EncryptedJsonColumn\\`)\n- [ ] Endpoint requires authentication (returns 401 for unauthenticated requests)\n- [ ] Build passes\n- [ ] Tests pass (unit + integration)\n\n## Constraints\n\n- MUST: Use the existing \\`EncryptedJsonColumn\\` utility for storage — do not roll a new encryption pattern\n- MUST: Follow the existing REST controller pattern in \\`src/controllers/\\`\n- MUST NOT: Expose other users' preferences (scope queries to authenticated user only)\n- 
SHOULD: Return the full preferences object on PATCH (not just the changed fields), so the frontend can replace state without merging\n\n## Affected Files\n\n| Action | File | What Changes |\n|--------|------|-------------|\n| Create | \\`src/controllers/notification-preferences.controller.ts\\` | New controller with GET and PATCH handlers |\n| Create | \\`src/models/notification-preferences.model.ts\\` | Sequelize model with EncryptedJsonColumn for preferences blob |\n| Create | \\`src/migrations/20260315-add-notification-preferences.ts\\` | Database migration to create notification_preferences table |\n| Create | \\`tests/controllers/notification-preferences.test.ts\\` | Unit and integration tests for both endpoints |\n| Modify | \\`src/routes/index.ts\\` | Register the new controller routes |\n\n## Approach\n\nCreate a \\`NotificationPreferences\\` model backed by a single \\`notification_preferences\\` table with columns: \\`id\\`, \\`user_id\\` (unique FK), \\`preferences\\` (EncryptedJsonColumn), \\`created_at\\`, \\`updated_at\\`. The \\`preferences\\` column stores a JSON blob shaped like \\`{ \"pr_comment\": { \"in_app\": true, \"email\": true }, \"deploy_status\": { ... } }\\`.\n\nThe GET endpoint does a find-or-create: if no record exists for the user, create one with defaults and return it. The PATCH endpoint deep-merges the request body into the existing preferences, validates the result against a known schema of event types and channels, and saves.\n\n**Rejected alternative:** Storing preferences as individual rows (one per event-type-channel pair). This would make queries more complex and would require N rows per user instead of 1. 
The JSON blob approach is simpler and matches how the frontend will consume the data.\n\n## Edge Cases\n\n| Scenario | Expected Behavior |\n|----------|------------------|\n| PATCH with empty body \\`{}\\` | Return 200 with unchanged preferences (no-op) |\n| PATCH with unknown event type \\`{\"foo\": {\"email\": true}}\\` | Return 400 with validation error listing valid event types |\n| GET for user with no existing record | Create default preferences, return 200 |\n| Concurrent PATCH requests | Last-write-wins (optimistic, no locking) — acceptable for user preferences |\n`,\n\n 'context/dangerous-assumptions.md': `# Dangerous Assumptions\n\n> Things the AI agent might assume that are wrong in this project.\n> Generated by Joycraft risk interview. Update when you discover new gotchas.\n\n## Assumptions\n\n| Agent Might Assume | But Actually | Impact If Wrong |\n|-------------------|-------------|----------------|\n| _Example: All databases are dev/test_ | _The default connection is production_ | _Data loss_ |\n| _Example: Deleting and recreating is safe_ | _Some resources have manual config not in code_ | _Hours of manual recovery_ |\n\n## Historical Incidents\n\n| Date | What Happened | Lesson | Rule Added |\n|------|-------------|--------|------------|\n| _Example: 2026-03-15_ | _Agent deleted staging infra thinking it was temp_ | _Always verify environment before destructive ops_ | _NEVER: Delete cloud resources without listing them first_ |\n`,\n\n 'context/decision-log.md': `# Decision Log\n\n> Why choices were made, not just what was chosen.\n> Update this when making architectural, tooling, or process decisions.\n> This is the institutional memory that prevents re-litigating settled questions.\n\n## Decisions\n\n| Date | Decision | Why | Alternatives Rejected | Revisit When |\n|------|----------|-----|----------------------|-------------|\n| _Example: 2026-03-15_ | _Use Supabase over Firebase_ | _Postgres flexibility, row-level security, self-hostable_ | 
_Firebase (vendor lock-in), PlanetScale (no RLS)_ | _If we need real-time sync beyond Supabase's capabilities_ |\n\n## Principles\n\n_Capture recurring decision patterns here — they save time on future choices._\n\n- _Example: \"Prefer tools we can self-host over pure SaaS — reduces vendor risk\"_\n- _Example: \"Choose boring technology for infrastructure, cutting-edge only for core differentiators\"_\n`,\n\n 'context/institutional-knowledge.md': `# Institutional Knowledge\n\n> Unwritten rules, team conventions, and organizational context that AI agents can't derive from code.\n> This is the knowledge that takes a new developer months to absorb.\n> Update when you catch yourself saying \"oh, you didn't know about that?\"\n\n## Team Conventions\n\n_Things everyone on the team knows but nobody wrote down._\n\n- _Example: \"We never deploy on Fridays\"_\n- _Example: \"The CEO reviews all UI changes before they ship\"_\n- _Example: \"PR titles must reference the Jira ticket number\"_\n\n## Organizational Constraints\n\n_Business rules, compliance requirements, or political realities that affect technical decisions._\n\n- _Example: \"Legal requires all user data to be stored in EU regions\"_\n- _Example: \"The payments team owns the billing schema — never modify without their approval\"_\n- _Example: \"We have an informal agreement with Vendor X about API rate limits\"_\n\n## Historical Context\n\n_Why things are the way they are — especially when it looks wrong._\n\n- _Example: \"The auth module uses an old pattern because it predates our TypeScript migration — don't refactor without a spec\"_\n- _Example: \"The caching layer has a 5-second TTL because we had a consistency bug in 2025 — increasing it requires careful testing\"_\n\n## People & Ownership\n\n_Who owns what, who to ask, who cares about what._\n\n- _Example: \"Alice owns the payment pipeline — all changes need her review\"_\n- _Example: \"The data team is sensitive about query performance on the analytics 
tables\"_\n`,\n\n 'context/production-map.md': `# Production Map\n\n> What's real, what's staging, what's safe to touch.\n> Generated by Joycraft risk interview. Update as your infrastructure evolves.\n\n## Services\n\n| Service | Environment | URL/Endpoint | Impact if Corrupted |\n|---------|-------------|-------------|-------------------|\n| _Example: Main DB_ | _Production_ | _postgres://prod.example.com_ | _1.9M user records lost_ |\n| _Example: Staging DB_ | _Staging_ | _postgres://staging.example.com_ | _Test data only, safe to reset_ |\n\n## Secrets & Credentials\n\n| Secret | Location | Notes |\n|--------|----------|-------|\n| _Example: DATABASE_URL_ | _.env.local_ | _Production connection — NEVER commit_ |\n\n## Safe to Touch\n\n- [ ] Staging environment at [URL]\n- [ ] Test/fixture data in [location]\n- [ ] Development API keys\n\n## NEVER Touch Without Explicit Approval\n\n- [ ] Production database\n- [ ] Live API endpoints\n- [ ] User-facing infrastructure\n`,\n\n};\n","import { readFileSync, writeFileSync, existsSync } from 'node:fs';\nimport { join } from 'node:path';\nimport { createHash } from 'node:crypto';\n\nconst VERSION_FILE = '.joycraft-version';\n\nexport interface VersionInfo {\n version: string;\n files: Record<string, string>;\n}\n\nexport function hashContent(content: string): string {\n return createHash('sha256').update(content).digest('hex');\n}\n\nexport function readVersion(dir: string): VersionInfo | null {\n const filePath = join(dir, VERSION_FILE);\n if (!existsSync(filePath)) return null;\n try {\n const raw = readFileSync(filePath, 'utf-8');\n const parsed = JSON.parse(raw);\n if (typeof parsed.version === 'string' && typeof parsed.files === 'object') {\n return parsed as VersionInfo;\n }\n return null;\n } catch {\n return null;\n }\n}\n\nexport function writeVersion(dir: string, version: string, files: Record<string, string>): void {\n const filePath = join(dir, VERSION_FILE);\n const data: VersionInfo = { version, files };\n 
writeFileSync(filePath, JSON.stringify(data, null, 2) + '\\n', 'utf-8');\n}\n"],"mappings":";;;AAEO,IAAM,SAAiC;AAAA,EAC5C,yBAAyB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAmIzB,yBAAyB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAiGzB,2BAA2B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAA
A;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAkK3B,2BAA2B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAuF3B,oBAAoB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;
AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AA+VtB;AAEO,IAAM,YAAoC;AAAA,EAC/C,2BAA2B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAwD3B,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA0C7B,mCAAmC;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA8CnC,yBAAyB;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAgCzB,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA4E7B,4BAA4B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EA+D5B,oCAAoC;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;
AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAmBpC,2BAA2B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAoB3B,sCAAsC;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAqCtC,6BAA6B;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AA+B/B;;;ACxuCA,SAAS,cAAc,eAAe,kBAAkB;AACxD,SAAS,YAAY;AACrB,SAAS,kBAAkB;AAE3B,IAAM,eAAe;AAOd,SAAS,YAAY,SAAyB;AACnD,SAAO,WAAW,QAAQ,EAAE,OAAO,OAAO,EAAE,OAAO,KAAK;AAC1D;AAEO,SAAS,YAAY,KAAiC;AAC3D,QAAM,WAAW,KAAK,KAAK,YAAY;AACvC,MAAI,CAAC,WAAW,QAAQ,EAAG,QAAO;AAClC,MAAI;AACF,UAAM,MAAM,aAAa,UAAU,OAAO;AAC1C,UAAM,SAAS,KAAK,MAAM,GAAG;AAC7B,QAAI,OAAO,OAAO,YAAY,YAAY,OAAO,OAAO,UAAU,UAAU;AAC1E,aAAO;AAAA,IACT;AACA,WAAO;AAAA,EACT,QAAQ;AACN,WAAO;AAAA,EACT;AACF;AAEO,SAAS,aAAa,KAAa,SAAiB,OAAqC;AAC9F,QAAM,WAAW,KAAK,KAAK,YAAY;AACvC,QAAM,OAAoB,EAAE,SAAS,MAAM;AAC3C,gBAAc,UAAU,KAAK,UAAU,MAAM,MAAM,CAAC,IAAI,MAAM,OAAO;AACvE;","names":[]}
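The bundled chunk above embeds (via its source map's `sourcesContent`) the original TypeScript for the `.joycraft-version` manifest: `hashContent` produces a SHA-256 hex digest per file, `writeVersion` serializes `{ version, files }` as pretty-printed JSON with a trailing newline, and `readVersion` accepts the parse only if `version` is a string and `files` is an object. A minimal standalone sketch of that round-trip, with illustrative manifest values that are not taken from the package:

```typescript
// Sketch of the .joycraft-version round-trip from the embedded source above.
// The manifest contents here are hypothetical examples.
import { createHash } from "node:crypto";

interface VersionInfo {
  version: string;
  files: Record<string, string>;
}

// Same digest scheme as hashContent: sha256 over the file body, hex-encoded.
function hashContent(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// Serialize the way writeVersion does: pretty JSON plus a trailing newline.
const manifest: VersionInfo = {
  version: "0.4.0",
  files: { "CLAUDE.md": hashContent("# CLAUDE.md\n") },
};
const raw = JSON.stringify(manifest, null, 2) + "\n";

// readVersion's shape check: string version and object files, else null.
const parsed = JSON.parse(raw);
const valid =
  typeof parsed.version === "string" && typeof parsed.files === "object";
console.log(valid, parsed.files["CLAUDE.md"]);
```

On upgrade, comparing a file's current digest against the stored one tells the CLI whether the user has locally modified a scaffolded file since it was installed.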
package/dist/cli.js
CHANGED

@@ -5,11 +5,11 @@ import { Command } from "commander";
 5  5 |   var program = new Command();
 6  6 |   program.name("joycraft").description("Scaffold and upgrade AI development harnesses").version("0.1.0");
 7  7 |   program.command("init").description("Scaffold the Joycraft harness into the current project").argument("[dir]", "Target directory", ".").option("--force", "Overwrite existing files").action(async (dir, opts) => {
 8    | -   const { init } = await import("./init-
    8 | +   const { init } = await import("./init-XHJDJIZW.js");
 9  9 |     await init(dir, { force: opts.force ?? false });
10 10 |   });
11 11 |   program.command("upgrade").description("Upgrade installed Joycraft templates and skills to latest").argument("[dir]", "Target directory", ".").option("--yes", "Auto-accept all updates").action(async (dir, opts) => {
12    | -   const { upgrade } = await import("./upgrade-
   12 | +   const { upgrade } = await import("./upgrade-NOHZWQMO.js");
13 13 |     await upgrade(dir, { yes: opts.yes ?? false });
14 14 |   });
15 15 |   program.command("check-version").description("Check if a newer version of Joycraft is available").action(async () => {