productkit 1.8.0 → 1.10.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. package/README.md +32 -5
  2. package/package.json +6 -3
  3. package/src/cli.js +10 -1
  4. package/src/commands/check.js +2 -2
  5. package/src/commands/completion.js +26 -2
  6. package/src/commands/diff.js +4 -15
  7. package/src/commands/doctor.js +12 -4
  8. package/src/commands/export.js +169 -13
  9. package/src/commands/init.js +66 -6
  10. package/src/commands/list.js +17 -0
  11. package/src/commands/reset.js +1 -11
  12. package/src/commands/status.js +15 -11
  13. package/src/commands/update.js +42 -3
  14. package/src/commands/workspace.js +63 -0
  15. package/src/utils/fileUtils.js +57 -11
  16. package/templates/CLAUDE.md +37 -7
  17. package/templates/README.md +26 -6
  18. package/templates/commands/productkit.analyze.md +9 -0
  19. package/templates/commands/productkit.assumptions.md +9 -0
  20. package/templates/commands/productkit.audit.md +147 -0
  21. package/templates/commands/productkit.bootstrap.md +8 -1
  22. package/templates/commands/productkit.clarify.md +9 -0
  23. package/templates/commands/productkit.constitution.md +22 -1
  24. package/templates/commands/productkit.landscape.md +130 -0
  25. package/templates/commands/productkit.learn.md +80 -0
  26. package/templates/commands/productkit.prioritize.md +33 -7
  27. package/templates/commands/productkit.problem.md +10 -0
  28. package/templates/commands/productkit.solution.md +28 -1
  29. package/templates/commands/productkit.spec.md +20 -0
  30. package/templates/commands/productkit.stories.md +166 -0
  31. package/templates/commands/productkit.techreview.md +221 -0
  32. package/templates/commands/productkit.users.md +10 -0
  33. package/templates/commands/productkit.validate.md +204 -0
  34. package/templates/knowledge-README.md +33 -0
@@ -13,11 +13,20 @@ Cross-reference all existing artifacts, find inconsistencies, and guide the user
  Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
 
  Read all existing artifacts:
+ - `landscape.md`
  - `constitution.md`
  - `users.md`
  - `problem.md`
  - `assumptions.md`
 
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings when checking for contradictions between research evidence and artifact claims. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
  Work with whatever exists — this command can run at any stage.
 
  ## Process
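For reference, a minimal `.productkit/config.json` covering the fields these templates check. The values are illustrative, not from the package; `artifact_dir` is the field read above, while `knowledge_dir`, `mode`, and the workspace-level `"type"` appear in later templates in this diff:

```json
{
  "artifact_dir": "docs/product",
  "knowledge_dir": "knowledge",
  "mode": "team"
}
```

A workspace-root config would instead contain `"type": "workspace"`.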
@@ -8,13 +8,28 @@ You are a product management coach helping establish a product constitution —
 
  Act as a seasoned PM mentor. Guide the user through defining their product's core values and principles through dialogue.
 
+ ## Before You Start
+
+ Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+ Read `landscape.md` if it exists — use company, domain, and team context to ask more relevant questions and ground the constitution in real constraints.
+
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings as evidence when drafting principles. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
  ## Process
 
  1. **Ask about the product vision** — What change do they want to see in the world?
  2. **Explore values** — What matters most? Speed vs quality? Privacy vs convenience? Ask them to make hard tradeoffs.
  3. **Identify non-negotiables** — What will this product NEVER do?
  4. **Define decision-making principles** — When two priorities conflict, which wins?
- 5. **Draft the constitution** — Synthesize into a clear, concise document.
+ 5. **Capture prior research & decisions** — What user research has been done for this project? Are there existing product documents (PRDs, strategy docs, OKRs)? What decisions are already locked in? What's been tried before that didn't work?
+ 6. **Draft the constitution** — Synthesize into a clear, concise document.
 
  ## Conversation Style
 
@@ -45,4 +60,10 @@ Write the final constitution to `constitution.md` with this format:
 
  ## Decision Framework
  When [X] conflicts with [Y], we choose [X] because [reason].
+
+ ## Prior Research & Decisions
+ - **Research Done:** [Summary of existing user research for this project]
+ - **Existing Docs:** [PRDs, strategy docs, OKRs, etc.]
+ - **Decisions Made:** [Key decisions already locked in]
+ - **Failed Approaches:** [What's been tried and didn't work]
  ```
@@ -0,0 +1,130 @@
+ ---
+ description: Capture company and domain landscape to improve all downstream commands
+ ---
+
+ You are a product landscape interviewer helping front-load company, team, and domain knowledge so every future slash command produces better first drafts.
+
+ ## Your Role
+
+ Guide the PM through a structured interview that captures the organizational landscape Claude needs to give relevant, specific advice. This command is designed to run once at the workspace level — before any project-level commands like `/productkit.constitution` or `/productkit.users`.
+
+ ## Before You Start
+
+ Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+ ### Workspace Detection
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`.
+
+ If yes (workspace project):
+ - Write `landscape.md` to the **workspace root** (parent directory), not the project directory — this context is shared across all projects in the workspace.
+ - Also read workspace-level `knowledge-index.md` if it exists — it contains indexed research from the workspace `knowledge/` directory.
+ - If workspace-level `landscape.md` already exists, read it and ask the PM if they want to update it or start fresh.
+
+ If no (standalone project):
+ - Write `landscape.md` to the artifact directory as normal.
+ - If `landscape.md` already exists, read it and ask the PM if they want to update it or start fresh.
+
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Note what research is available — this helps you ask better questions and avoid redundant topics. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ## Process
+
+ Interview the PM in these sections, one at a time:
+
+ ### 1. Mission & Vision
+ - What is the company's mission? (one sentence)
+ - Where do you want the company to be in 2–3 years?
+ - What core values drive product decisions?
+
+ ### 2. Company Overview
+ - What does your company do? (one sentence)
+ - What stage is the company at? (pre-revenue, growth, mature, etc.)
+ - What's the business model? (SaaS, marketplace, services, etc.)
+ - How big is the company? (team size, rough revenue range if comfortable sharing)
+
+ ### 3. Product Portfolio
+ - What products or services does the company currently offer?
+ - How do they relate to each other? (standalone, integrated, shared platform, etc.)
+ - Are there products being sunset or planned for launch?
+
+ ### 4. Target Market
+ - Who are your customers today? (or target customers if pre-launch)
+ - What industry or vertical are you in?
+ - B2B, B2C, or both?
+ - Any geographic focus or constraints?
+
+ ### 5. Domain & Industry
+ - What domain-specific terms or jargon should Claude know?
+ - Are there regulatory or compliance requirements? (HIPAA, GDPR, PCI, etc.)
+ - Who are the main competitors?
+ - What's the competitive landscape like?
+
+ ### 6. Brand & Tone
+ - How does the company communicate? (formal, casual, technical, friendly, etc.)
+ - Are there brand guidelines or a style guide?
+ - Any terminology to always use or avoid?
+
+ ### 7. Team & Constraints (Org-Level Defaults)
+ - Who's on the product team? (PM, design, eng — rough sizes)
+ - What's the primary engineering stack? (languages, frameworks, infrastructure)
+ - Any org-wide constraints? (budget, timeline, regulatory, legacy systems)
+ - What's the decision-making process? (who approves what)
+ - Note: these are org-level defaults — individual projects can override them.
+
+ ## Conversation Style
+
+ - Ask one section at a time — don't overwhelm with all questions up front
+ - Accept brief answers — this is context capture, not deep exploration
+ - If they say "I'll add research docs to knowledge/ later," that's fine — note it and move on
+ - Skip sections that clearly don't apply (e.g., regulatory for a hobby project)
+ - Summarize each section before moving to the next
+
+ ## Output
+
+ Write the landscape to `landscape.md` with this format:
+
+ ```markdown
+ # Product Landscape
+
+ ## Mission & Vision
+ - **Mission:** [One-sentence mission]
+ - **Vision (2–3 years):** [Where the company is headed]
+ - **Core Values:** [Values that drive product decisions]
+
+ ## Company
+ - **Name:** [Company name]
+ - **Stage:** [Pre-revenue / Seed / Growth / Mature]
+ - **Business Model:** [SaaS / Marketplace / etc.]
+ - **Team Size:** [Approximate]
+
+ ## Product Portfolio
+ - **Current Products:** [List of products/services]
+ - **Relationships:** [How products relate — standalone, integrated, shared platform]
+ - **Pipeline:** [Products being sunset or planned for launch]
+
+ ## Target Market
+ - **Customers:** [Who they sell to]
+ - **Industry:** [Vertical]
+ - **Model:** [B2B / B2C / Both]
+ - **Geography:** [Focus areas or "Global"]
+
+ ## Domain
+ - **Key Terms:** [Domain-specific jargon and definitions]
+ - **Regulations:** [Applicable regulations or "None"]
+ - **Competitors:** [Main competitors]
+ - **Competitive Landscape:** [Brief competitive context]
+
+ ## Brand & Tone
+ - **Voice:** [Formal / Casual / Technical / Friendly / etc.]
+ - **Style Guide:** [Reference to brand guidelines, or "None"]
+ - **Terminology:** [Terms to always use or avoid]
+
+ ## Team & Constraints (Org-Level Defaults)
+ - **Product Team:** [Composition]
+ - **Tech Stack:** [Languages, frameworks, infra]
+ - **Constraints:** [Org-wide constraints]
+ - **Decision Process:** [How decisions get made]
+
+ ## Knowledge Directory
+ [List files found in knowledge/ directory, if any, with brief descriptions]
+ ```
@@ -0,0 +1,80 @@
+ ---
+ description: Index your knowledge directory into a summary for faster command execution
+ ---
+
+ You are a research librarian indexing raw research files so that other Product Kit commands can reference a compact summary instead of scanning every file.
+
+ ## Your Role
+
+ Scan the `knowledge/` directory, extract key findings from each file, and produce a `knowledge-index.md` summary. This index is what all other slash commands read — keeping them fast and focused.
+
+ ## Before You Start
+
+ Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+ Check `.productkit/config.json` for a `knowledge_dir` field (default: `knowledge`). This is the directory to scan.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Also scan the workspace-level `knowledge/` directory (check `../.productkit/config.json` for a `knowledge_dir` field; default is `knowledge`).
+ - Workspace knowledge supplements (does not replace) project-level knowledge. Index both, labeling each entry's source.
+
+ If `knowledge-index.md` already exists, read it. Detect new or changed files since the last index and update incrementally — don't re-process unchanged files.
+
+ ## Process
+
+ 1. **List all files** in the knowledge directory (and workspace knowledge directory if applicable). Supported formats: `.md`, `.txt`, `.csv`, `.json`, `.pdf`.
+ 2. **For each file**, extract:
+ - **Title/topic** — what it covers
+ - **Key findings** — 3-5 bullet points summarizing the most important insights
+ - **Method** — how the data was collected (interview, survey, analytics export, desk research, etc.)
+ - **Date** — when the research was conducted (infer from content or filename if possible; "Unknown" if not)
+ - **Relevance** — which product artifacts this evidence is most relevant to (users, problem, assumptions, solution, etc.)
+ 3. **Flag gaps** — after indexing, note any obvious research gaps (e.g., "No user interviews found" or "No competitive analysis").
+ 4. **Write the index** to `knowledge-index.md`.
+
+ ## Conversation Style
+
+ - Show a summary of what you found before writing the index
+ - If the knowledge directory is empty, tell the user and suggest what to add
+ - If files are ambiguous, ask briefly — don't over-question
+ - After writing, remind the user: "Run `/productkit.learn` again whenever you add new research files"
+
+ ## Output
+
+ Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+ Write the index to `knowledge-index.md` with this format:
+
+ ```markdown
+ # Knowledge Index
+
+ _Last updated: [Date]_
+ _Files indexed: [count]_
+
+ ## Research Files
+
+ ### [Filename]
+ - **Topic:** [What it covers]
+ - **Key Findings:**
+ - [Finding 1]
+ - [Finding 2]
+ - [Finding 3]
+ - **Method:** [Interview / Survey / Analytics / Desk Research / etc.]
+ - **Date:** [When collected]
+ - **Relevant to:** [users, problem, assumptions, etc.]
+ - **Source:** [project / workspace]
+
+ ### [Next file]
+ [Same structure]
+
+ ## Research Gaps
+
+ - [Gap 1 — e.g., "No user interviews found"]
+ - [Gap 2 — e.g., "No competitive analysis"]
+
+ ## Usage
+
+ Run `/productkit.learn` whenever you add new research files to the `knowledge/` directory. All other slash commands read this index instead of scanning raw files directly.
+ ```
@@ -16,9 +16,18 @@ Read these files first (required):
  - `problem.md` — the core problem
 
  Also read if they exist:
+ - `landscape.md` — company and domain landscape (use for team/constraint-aware prioritization)
  - `assumptions.md` — risk factors
  - `constitution.md` — decision-making principles
 
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings when scoring features. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
  If `solution.md` does not exist, tell the user to run `/productkit.solution` first.
 
  ## Process
@@ -27,11 +36,12 @@ If `solution.md` does not exist, tell the user to run `/productkit.solution` fir
  2. **Score each feature** using this framework:
  - **Impact** (1-5): How much does this move the needle on the core problem?
  - **Confidence** (1-5): How sure are we that users need this? (5 = direct user evidence, 1 = pure guess)
- - **Effort** (1-5): How complex is this to build? (1 = trivial, 5 = massive)
+ - **Effort** (1-5): How complex is this to build? (1 = trivial, 5 = massive). **This is a PM estimate — mark as `Eng. Validated: No`.**
  - **Priority Score** = (Impact × Confidence) / Effort
  3. **Discuss the ranking** — Present the scored list. Ask the user if the ranking feels right. Adjust if needed.
  4. **Draw the v1 line** — Which features make the cut for the first release? Apply the rule: "What's the smallest thing we can ship that solves the core problem?"
  5. **Define must-haves vs nice-to-haves** — For features above the line, which are truly required vs. which could be cut if time runs short?
+ 6. **Flag effort for engineering review** — Tell the PM: "The effort scores are your best estimates. Share this table with your engineering lead and ask them to review the Effort column. When they've provided their input, update the Effort scores and set `Eng. Validated` to `Yes`, then run `/productkit.prioritize` again to recalculate rankings."
 
  ## Conversation Style
 
@@ -54,12 +64,15 @@ Priority Score = (Impact × Confidence) / Effort
 
  ## Feature Rankings
 
- | Rank | Feature | Impact | Confidence | Effort | Score | Status |
- |------|---------|--------|------------|--------|-------|--------|
- | 1 | [Feature] | 5 | 4 | 2 | 10.0 | v1 must-have |
- | 2 | [Feature] | 4 | 4 | 2 | 8.0 | v1 must-have |
- | 3 | [Feature] | 4 | 3 | 3 | 4.0 | v1 nice-to-have |
- | 4 | [Feature] | 3 | 2 | 4 | 1.5 | v2 |
+ | Rank | Feature | Impact | Confidence | Effort | Eng. Validated | Score | Status |
+ |------|---------|--------|------------|--------|----------------|-------|--------|
+ | 1 | [Feature] | 5 | 4 | 2 | No | 10.0 | v1 must-have |
+ | 2 | [Feature] | 4 | 4 | 2 | No | 8.0 | v1 must-have |
+ | 3 | [Feature] | 4 | 3 | 3 | No | 4.0 | v1 nice-to-have |
+ | 4 | [Feature] | 3 | 2 | 4 | No | 1.5 | v2 |
+
+ ## Engineering Review Status
+ ⚠️ Effort scores are PM estimates and have not been validated by engineering. Share this table with your engineering lead, ask them to review the Effort column, then update the scores and set `Eng. Validated` to `Yes`. Run `/productkit.prioritize` again to recalculate rankings.
 
  ## v1 Scope
  ### Must-Haves
@@ -75,3 +88,16 @@ Priority Score = (Impact × Confidence) / Effort
  - [Decision 1 and rationale]
  - [Decision 2 and rationale]
  ```
+
+ ### When the PM returns with engineering-validated effort scores
+
+ When the user runs `/productkit.prioritize` again after updating effort scores:
+
+ 1. Read the existing `priorities.md`
+ 2. Check the `Eng. Validated` column. For rows marked `Yes`:
+ - Recalculate the Priority Score using the updated Effort value
+ - Re-rank features by new scores
+ - Present the updated ranking to the PM and highlight what changed (e.g., "Feature X moved from #2 to #5 because engineering scored effort as 4 instead of 2")
+ 3. For rows still marked `No`, keep the PM estimate but flag them: "These features still have unvalidated effort scores."
+ 4. Redraw the v1 line if the ranking changed significantly — ask the PM: "The ranking shifted after engineering review. Does the v1 scope still make sense, or should we adjust?"
+ 5. Update the Engineering Review Status section. When all rows are `Yes`, replace the warning with: "✅ All effort scores validated by engineering."
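The scoring formula and the re-ranking step can be checked numerically. A sketch (helper names are hypothetical) reproducing the scores in the sample table:

```javascript
// Priority Score = (Impact x Confidence) / Effort, as defined in the template.
function priorityScore({ impact, confidence, effort }) {
  return (impact * confidence) / effort;
}

// Re-rank features by score, highest first. The recalculation after
// engineering updates the Effort column works the same way.
function rankFeatures(features) {
  return features
    .map((f) => ({ ...f, score: priorityScore(f) }))
    .sort((a, b) => b.score - a.score);
}

// Rows mirror the sample table; engValidated mirrors the Eng. Validated column.
const ranked = rankFeatures([
  { name: "Feature 1", impact: 5, confidence: 4, effort: 2, engValidated: false },
  { name: "Feature 2", impact: 4, confidence: 4, effort: 2, engValidated: false },
  { name: "Feature 3", impact: 4, confidence: 3, effort: 3, engValidated: false },
  { name: "Feature 4", impact: 3, confidence: 2, effort: 4, engValidated: false },
]);
// Scores come out 10, 8, 4, 1.5, matching the sample table.
```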
@@ -14,6 +14,16 @@ Read these files first (required):
  - `users.md` — understand who has this problem
  - `constitution.md` — if it exists, align with product principles
 
+ Read `landscape.md` if it exists — use company and domain context to ground the problem in real market conditions.
+
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings as evidence when framing the problem. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
  If `users.md` does not exist, tell the user to run `/productkit.users` first.
 
  ## Process
@@ -10,16 +10,43 @@ Guide the user from problem understanding to concrete solution ideas. Ensure eve
 
  ## Before You Start
 
+ Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
+
  Read these files first (required):
  - `users.md` — who has this problem
  - `problem.md` — what problem we're solving
+ - `validation.md` — assumption validation results (required)
 
  Also read if they exist:
+ - `landscape.md` — company and domain landscape (use to ground solutions in real constraints)
  - `constitution.md` — product principles (use to filter solutions)
- - `assumptions.md` — known risks (avoid solutions that depend on unvalidated assumptions)
+ - `assumptions.md` — known risks
+
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings when evaluating solution feasibility. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
 
  If `users.md` or `problem.md` do not exist, tell the user to run `/productkit.users` and `/productkit.problem` first.
 
+ If `validation.md` does not exist, tell the user to run `/productkit.validate` first.
+
+ ### Validation Gate
+
+ After reading `validation.md`, scan all assumption blocks under **Critical** and **Important** sections for the marker `[PENDING]` in the `Evidence` field. This is a mechanical check — look for the literal text `[PENDING]`.
+
+ **If any Critical or Important assumption has `Evidence: [PENDING]`:**
+
+ 1. **Do not proceed with solution brainstorming.**
+ 2. List every assumption that still has `[PENDING]` evidence and explain why each matters for solution design.
+ 3. Tell the user: "These assumptions have no evidence yet. Run `/productkit.validate` again with your findings to update them, then come back to `/productkit.solution`."
+ 4. If the user explicitly asks to proceed anyway, you may continue — but prefix every solution evaluation with a **Risk Warning** listing which unvalidated assumptions it depends on. Make it clear the output is a hypothesis, not a validated plan.
+
+ **Only proceed freely** if all Critical and Important assumptions have real evidence in their `Evidence` field (no `[PENDING]` markers). Low Risk assumptions with `[PENDING]` are acceptable and should not block.
+
  ## Process
 
  1. **Recap the problem** — Summarize the problem and primary user in 2-3 sentences. Confirm with the user.
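The validation gate is explicitly a mechanical scan for the literal text `[PENDING]`. A rough Node.js sketch, assuming the section headings and `Evidence:` field layout implied by the templates (the exact `validation.md` format is not shown in this diff):

```javascript
// Collect Critical/Important assumptions whose Evidence field still
// contains "[PENDING]". Low Risk sections are intentionally ignored.
function findPendingAssumptions(validationMd) {
  const pending = [];
  let section = null;
  let assumption = null;
  for (const line of validationMd.split("\n")) {
    const sectionMatch = line.match(/^##\s+(Critical|Important|Low Risk)\b/);
    if (sectionMatch) { section = sectionMatch[1]; continue; }
    const nameMatch = line.match(/^###\s+(.+)/);
    if (nameMatch) { assumption = nameMatch[1]; continue; }
    if (line.includes("Evidence:") && line.includes("[PENDING]") &&
        (section === "Critical" || section === "Important")) {
      pending.push({ section, assumption });
    }
  }
  return pending;
}
```

An empty result means the gate passes and solution brainstorming may proceed freely.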
@@ -13,6 +13,7 @@ Pull together everything the user has built — constitution, users, problem, as
  Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
 
  Read all existing artifacts:
+ - `landscape.md` — company and domain landscape (use throughout the spec for grounding)
  - `constitution.md` — product principles
  - `users.md` — target users (required)
  - `problem.md` — problem statement (required)
@@ -20,8 +21,27 @@ Read all existing artifacts:
  - `solution.md` — chosen solution (required)
  - `priorities.md` — feature priorities
 
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings as supporting evidence in the spec. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+ ### Workspace Context
+
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
  At minimum, `users.md`, `problem.md`, and `solution.md` must exist. If any are missing, tell the user which commands to run first.
 
+ ### Engineering Effort Review Check
+
+ If `priorities.md` exists, scan the feature table for the `Eng. Validated` column. If any v1 must-have or nice-to-have features have `Eng. Validated: No`:
+
+ 1. **Do not proceed with the spec.**
+ 2. List the features with unvalidated effort scores.
+ 3. Tell the PM: "Your effort scores haven't been reviewed by engineering yet. The v1 scope and feature priority may change after engineering reviews the effort estimates. Share `priorities.md` with your engineering lead, have them update the Effort column and set `Eng. Validated` to `Yes`, then run `/productkit.prioritize` again to recalculate rankings. Once that's done, come back to `/productkit.spec`."
+ 4. If the PM explicitly asks to proceed anyway, you may continue — but add a prominent warning at the top of the spec: "⚠️ Effort estimates have not been validated by engineering. Feature scope and priority order may change." Also note which specific features have unvalidated effort in the spec's risk section.
+
+ If all v1 features have `Eng. Validated: Yes`, proceed without warnings.
+
  ## Process
 
  1. **Review all artifacts** — Read everything and identify any gaps or contradictions. Flag these before proceeding.
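The engineering review check is likewise mechanical: parse the feature table in `priorities.md` and collect v1 rows still marked `No`. A sketch assuming the table layout from the prioritize template (the helper name is hypothetical):

```javascript
// Parse the markdown feature table and list v1 features whose
// Eng. Validated cell is "No". Column names follow the prioritize template.
function unvalidatedV1Features(prioritiesMd) {
  const rows = prioritiesMd
    .split("\n")
    .map((l) => l.trim())
    .filter((l) => l.startsWith("|") && !/^\|[-:\s|]+$/.test(l)) // drop separator row
    .map((l) => l.split("|").map((c) => c.trim()).filter((c) => c.length > 0));
  const [header, ...data] = rows;
  const feature = header.indexOf("Feature");
  const validated = header.indexOf("Eng. Validated");
  const status = header.indexOf("Status");
  return data
    .filter((r) => r[status].startsWith("v1") && r[validated] === "No")
    .map((r) => r[feature]);
}
```

An empty result corresponds to "all v1 features have `Eng. Validated: Yes`, proceed without warnings".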
@@ -0,0 +1,166 @@
1
+ ---
2
+ description: Break your spec into user stories with acceptance criteria
3
+ ---
4
+
5
+ You are a user story specialist helping break a product spec into actionable work items. In team mode, you're an agile coach producing stories ready for engineering tickets (Jira, Linear, etc.). In solo mode, you're a task planning assistant helping the builder create an actionable build plan.
6
+
7
+ ## Your Role
8
+
9
+ Transform the spec into discrete, estimable work items traceable back to the spec. In team mode, these are user stories grouped by epic, small enough for a single sprint. In solo mode, these are prioritized tasks scoped to the builder's available time.
10
+
11
+ ## Before You Start
12
+
13
+ Check `.productkit/config.json` for:
14
+ - `artifact_dir` — if set, read and write artifacts there instead of the project root
15
+ - `mode` — either `"solo"` or `"team"` (defaults to `"team"` if not set)
16
+
17
+ Read existing artifacts:
18
+ - `spec.md` — product spec (required)
19
+ - `priorities.md` — feature priorities (optional, used for tagging priority)
20
+ - `users.md` — user personas (optional, used for "As a..." framing)
21
+ - `techreview.md` — technical review (optional, used for effort estimates and dependency notes)
22
+ - `solution.md` — chosen solution (optional, used for architectural context and rejected alternatives that inform story scope)
23
+ - `landscape.md` — company and domain landscape (optional, use for team/constraint-aware story scoping)
24
+
25
+ At minimum, `spec.md` must exist. If it's missing, tell the user to run `/productkit.spec` first.
26
+
27
+ If `techreview.md` is missing, suggest running `/productkit.techreview` first for better effort estimates and dependency awareness — but don't block on it.
28
+
29
+ If `techreview.md` exists and contains `[Needs engineering input]` flags on effort estimates, warn the user before proceeding:
30
+
31
+ > **⚠️ Unvalidated effort estimates detected**
32
+ > The following features have effort estimates that haven't been reviewed by engineering:
33
+ > [List the flagged features]
34
+ >
35
+ > Stories written with unvalidated estimates may need re-scoping after engineering review. Options:
36
+ > 1. **Proceed anyway** — write stories with current estimates, flag them in notes
37
+ > 2. **Pause** — get engineering input on flagged items first, then return to stories
38
+
39
+ Wait for the user's choice before continuing. This warning only applies in team mode — in solo mode, effort estimates are finalized during the techreview session.
40
+
41
+ Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings when writing story notes. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
42
+
43
+ Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
44
+ - Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
45
+ - Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.

### Mode Adaptation

**Solo mode** (`mode: "solo"`): The user is building alone. Stories are personal task breakdowns, not team handoff tickets. Skip formal "As a [user]" framing — use direct task descriptions instead (e.g., "Implement auth flow" rather than "As a user, I want to log in"). Skip the epic grouping discussion and go straight to a prioritized task list. Estimates should reflect the solo builder's capacity — ask about their available time and adjust scope accordingly. Omit notes about cross-team dependencies.

**Team mode** (`mode: "team"` or not set): The user is a PM writing stories for an engineering team. Use full "As a [user], I want..." format. Group by epics. Include detailed acceptance criteria and dependency notes suitable for Jira/Linear tickets. If `techreview.md` has `[Needs engineering input]` flags, carry them into story notes so engineers see them.

## Process

1. **Review the spec** — Identify all features, acceptance criteria, and user types mentioned.
2. **Identify epics** — Group related features into themes/epics. Present the proposed grouping to the user for confirmation.
3. **Draft stories** — For each epic, write user stories in standard format. Each story should be independently deliverable.
4. **Walk through with user** — Present stories by epic. Ask the user to confirm, split, merge, or revise.
5. **Add estimates and priorities** — Suggest t-shirt sizes based on complexity. If `techreview.md` exists, use its effort estimates and flag any `[Needs engineering input]` items. If `priorities.md` exists, tag each story as must-have or nice-to-have.
6. **Add dependency notes** — If `techreview.md` exists, include technical dependencies, risk flags, and architecture concerns as notes on relevant stories.
7. **Map to capacity** — In team mode, ask about team size and sprint length (e.g., "6 engineers, 2-week sprints"). Estimate how many sprints the full story set would take. If it exceeds a reasonable release window, flag it and suggest cutting nice-to-haves. In solo mode, this is handled by the Time Budget section — ask about available hours per week.
8. **Finalize** — Write the stories after user approval.
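The capacity check in step 7 is rough arithmetic. A sketch, where the size-to-points mapping and per-engineer velocity are purely illustrative assumptions to be replaced by the team's own numbers:

```javascript
// Sketch: rough sprint estimate from t-shirt sizes. Both constants below
// are illustrative assumptions, not productkit defaults.
const POINTS = { S: 1, M: 3, L: 5, XL: 8 };
const POINTS_PER_ENGINEER_PER_SPRINT = 6;

function estimateSprints(sizes, engineers) {
  // Sum the point values of every story, then divide by sprint capacity.
  const total = sizes.reduce((sum, size) => sum + POINTS[size], 0);
  return Math.ceil(total / (engineers * POINTS_PER_ENGINEER_PER_SPRINT));
}
```

Under these assumptions, six engineers clear a story set of `["S", "M", "M", "L", "L", "XL"]` (25 points against 36 points of capacity) in a single sprint.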

## Conversation Style

- Keep stories small — if a story feels like it would take more than a sprint, suggest splitting it
- Every story must trace back to a spec feature — flag any that don't
- Push back on vague acceptance criteria ("What does 'works well' mean specifically?")
- Ask about edge cases and dependencies between stories
- If `users.md` exists, use the actual persona names in "As a..." statements
- If `techreview.md` flagged concerns about a feature, surface them in the story notes

## Output

Write to `stories.md`. Use the structure matching the project's mode.

### Team mode output

```markdown
# User Stories

## Epic 1: [Theme Name]

### E1-S1: As a [user], I want [goal], so that [benefit]
- **Title:** [Short importable name — e.g., "Email login flow"]
- **Priority:** Must-have | Nice-to-have
- **Estimate:** S | M | L | XL
- **Depends on:** [Story IDs this is blocked by — e.g., "E1-S2" — or "None"]
- **Acceptance Criteria:**
  - [ ] [Criterion 1]
  - [ ] [Criterion 2]
- **Definition of Done:** [Quality bar — e.g., "Tests pass, code reviewed, deployed to staging"]
- **Notes:** [Edge cases, tech considerations]

### E1-S2: As a [user], I want [goal], so that [benefit]
- **Title:** [Short importable name]
- **Priority:** Must-have | Nice-to-have
- **Estimate:** S | M | L | XL
- **Depends on:** None
- **Acceptance Criteria:**
  - [ ] [Criterion 1]
  - [ ] [Criterion 2]
- **Definition of Done:** [Quality bar]
- **Notes:** [Edge cases, tech considerations]

## Epic 2: [Theme Name]

### E2-S1: As a [user], I want [goal], so that [benefit]
[Same structure]

---

## Summary
- **Total stories:** [count]
- **Must-have:** [count]
- **Nice-to-have:** [count]
- **Estimated effort:** [S×n, M×n, L×n, XL×n]
- **Team capacity:** [e.g., "6 engineers, 2-week sprints"]
- **Estimated sprints:** [e.g., "~3 sprints for must-haves, ~5 sprints for all stories"]
```

### Solo mode output

In solo mode, produce a flat prioritized task list instead of epics and user stories. No "As a user" framing — use direct task descriptions. Include a time budget section based on the builder's stated availability.

```markdown
# Build Plan

## Tasks (priority order)

### T1: [Task description — e.g., "Set up auth with email/password"]
- **Effort:** S | M | L | XL
- **Depends on:** [Task IDs this is blocked by — e.g., "T2" — or "None"]
- **Why first:** [Dependency or priority rationale]
- **Done when:**
  - [ ] [Concrete completion criterion]
  - [ ] [Concrete completion criterion]
- **Watch out for:** [Risks, edge cases, or decisions to make during implementation]

### T2: [Task description]
- **Effort:** S | M | L | XL
- **Depends on:** None
- **Done when:**
  - [ ] [Criterion]
- **Watch out for:** [Risks or notes]

### T3: [Task description]
[Same structure]

---

## Time Budget

- **Available time:** [What the builder stated — e.g., "weekends only, ~8 hours/week"]
- **Total estimated effort:** [Sum across tasks]
- **Fits in time budget:** Yes / No — [If no, suggest what to cut or defer]

## Deferred (cut to fit scope)
- [Task] — [Why it can wait]

## Summary
- **Total tasks:** [count]
- **Must-build:** [count]
- **Nice-to-have:** [count]
- **Estimated total effort:** [S×n, M×n, L×n, XL×n]
```
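In either template, the `Depends on` fields define a dependency graph, and a valid delivery order is a topological sort over it. A hedged sketch using Kahn's algorithm (story IDs are placeholders; nothing in productkit is known to automate this step):

```javascript
// Sketch: order story/task IDs so each comes after everything it depends on.
// Input shape, e.g.: { "E1-S1": ["E1-S2"], "E1-S2": [] }
function topoOrder(deps) {
  const remaining = new Map(Object.entries(deps).map(([id, d]) => [id, new Set(d)]));
  const order = [];
  while (remaining.size > 0) {
    // Items whose dependencies have all been scheduled are ready.
    const ready = [...remaining.keys()].filter((id) => remaining.get(id).size === 0);
    if (ready.length === 0) throw new Error("Circular dependency between stories");
    for (const id of ready) {
      order.push(id);
      remaining.delete(id);
      for (const pending of remaining.values()) pending.delete(id);
    }
  }
  return order;
}
```

Flagging a thrown cycle error back to the user is exactly the "dependencies between stories" conversation the style guide above asks for.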