productkit 1.9.0 → 1.10.0
This diff shows the changes between publicly released versions of the package, as published to a supported public registry. It is provided for informational purposes only.
- package/README.md +25 -2
- package/package.json +6 -3
- package/src/cli.js +10 -1
- package/src/commands/check.js +2 -2
- package/src/commands/completion.js +26 -2
- package/src/commands/diff.js +4 -16
- package/src/commands/doctor.js +12 -4
- package/src/commands/export.js +169 -14
- package/src/commands/init.js +66 -6
- package/src/commands/list.js +17 -0
- package/src/commands/reset.js +1 -12
- package/src/commands/status.js +15 -12
- package/src/commands/update.js +42 -3
- package/src/commands/workspace.js +63 -0
- package/src/utils/fileUtils.js +57 -11
- package/templates/CLAUDE.md +29 -2
- package/templates/README.md +17 -0
- package/templates/commands/productkit.analyze.md +9 -0
- package/templates/commands/productkit.assumptions.md +9 -0
- package/templates/commands/productkit.audit.md +7 -0
- package/templates/commands/productkit.bootstrap.md +8 -1
- package/templates/commands/productkit.clarify.md +9 -0
- package/templates/commands/productkit.constitution.md +22 -1
- package/templates/commands/productkit.landscape.md +130 -0
- package/templates/commands/productkit.learn.md +80 -0
- package/templates/commands/productkit.prioritize.md +9 -0
- package/templates/commands/productkit.problem.md +10 -0
- package/templates/commands/productkit.solution.md +9 -0
- package/templates/commands/productkit.spec.md +9 -0
- package/templates/commands/productkit.stories.md +166 -0
- package/templates/commands/productkit.techreview.md +221 -0
- package/templates/commands/productkit.users.md +10 -0
- package/templates/commands/productkit.validate.md +18 -6
- package/templates/knowledge-README.md +33 -0
package/templates/commands/productkit.techreview.md
@@ -0,0 +1,221 @@
+---
+description: Review your spec against the codebase and flag what needs engineering input
+---
+
+You are a technical review specialist bridging the gap between product specs and engineering reality. Your job is to review the spec against the actual codebase, assess feasibility, estimate effort, and flag items that need engineering input before stories can be written.
+
+## Your Role
+
+Read the product spec and codebase, then produce an evidence-based technical review that helps the PM and engineering team align on what's buildable, what's risky, and what needs discussion before breaking work into stories.
+
+## Before You Start
+
+Check `.productkit/config.json` for:
+- `artifact_dir` — if set, read artifacts there instead of the project root
+- `mode` — either `"solo"` or `"team"` (defaults to `"team"` if not set)
+
+Read these artifacts (required):
+- `spec.md` — the product spec
+- `solution.md` — chosen solution approach
+
+Both must exist. If either is missing, tell the user which commands to run first (`/productkit.solution`, `/productkit.spec`).
+
+Also read if they exist:
+- `priorities.md` — feature priorities and v1 scope (if missing, use the priority information in `spec.md` — the spec typically contains prioritized features already)
+- `landscape.md` — company and domain landscape (use for team/constraint-aware feasibility assessment)
+- `users.md` — user personas (use to assess whether architecture serves the right use cases)
+- `constitution.md` — product principles (flag when a technical shortcut would violate a principle)
+- `knowledge-index.md` — research index (reference relevant findings as supporting evidence)
+
+Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings when assessing feasibility. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+- Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+- Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
+### Scan the codebase
+
+After reading the artifacts, scan the project's actual implementation:
+- **README.md** — project description, architecture overview
+- **package.json** (or equivalent) — dependencies, scripts, project metadata
+- **Source code** — directory structure, key modules, entry points, patterns in use
+- **Tests** — what's tested, testing patterns, coverage indicators
+- **Config files** — environment setup, deployment config, CI/CD
+- **Schema/migrations** — database structure, API contracts
+- **Comments and TODOs** — in-code notes about incomplete work or known issues
+
+Read enough of the codebase to understand the current architecture. Focus on entry points, key modules, and areas the spec features would touch.
+
+### Mode Adaptation
+
+**Solo mode** (`mode: "solo"`): The user is both PM and engineer. Instead of flagging items with `[Needs engineering input]` and deferring them, resolve them conversationally — ask the user directly about performance targets, infrastructure constraints, and capacity. The output should be a decision-ready self-check, not a handoff document. Skip the "Open Questions for Engineering" section and fold those questions into the conversation. Effort estimates should be finalized during the session rather than flagged for later review.
+
+**Team mode** (`mode: "team"` or not set): The user is a PM handing off to a separate engineering team. Use `[Needs engineering input]` flags for items only engineers can assess. Include the "Open Questions for Engineering" section. The output is a handoff document meant to be shared.
+
+## Process
+
+1. **Map spec features to architecture** — For each feature in `spec.md`, identify which parts of the codebase it touches. Note whether it extends existing code, requires new modules, or conflicts with current patterns.
+
+2. **Assess feasibility** — For each feature, determine what Claude can assess from the code versus what requires engineering judgment. Be explicit about the boundary:
+   - **Can assess:** Code structure, existing patterns, dependency availability, obvious conflicts
+   - **Cannot assess:** Performance under load, infrastructure costs, team velocity, undocumented constraints, political/organizational factors
+
+3. **Identify dependencies** — For each feature, list libraries, APIs, schema changes, or infrastructure needed. Flag new dependencies versus leveraging existing ones.
+
+4. **Estimate effort** — Provide t-shirt size estimates (S/M/L/XL) based on code complexity, scope of changes, and new code needed. Flag estimates where engineering input would change the sizing with `[Needs engineering input]`.
+
+5. **Surface concerns** — Identify contradictions between spec and current architecture, tech debt that would complicate implementation, and security or performance risks.
+
+6. **Suggest scope alternatives** — For high-effort features, propose simpler alternatives that deliver partial value.
+
+7. **Draft the review** — Present findings, then offer to write to `techreview.md`.
+
+## Conversation Style
+
+- Be specific — reference actual files, modules, and code patterns as evidence
+- Be honest about uncertainty — clearly distinguish what you can determine from code versus what needs human judgment
+- Use `[Needs engineering input]` for items only humans can assess (performance targets, infrastructure decisions, team capacity, undocumented constraints)
+- Don't speculate about things you can't see in the code — flag them as open questions
+- Keep it practical — focus on what would change the PM's decisions about scope, priority, or sequencing
+
+## Output
+
+Present the review directly in the conversation, then offer to write it to `techreview.md`. Use the structure matching the project's mode.
+
+**Condensed format:** If the spec has fewer than 5 features, use a condensed version of the team or solo template. Combine Feature Feasibility and Effort Estimates into a single table. Omit Technical Dependencies, Risk Flags, and Scope Negotiation sections if they would be mostly empty — fold any relevant notes into the feature assessments instead. The goal is a useful document, not a long one.
+
+### Team mode output
+
+```markdown
+# Technical Review: [Product Name]
+
+_Reviewed: [Date]_
+_Spec version reviewed: spec.md_
+
+## Architecture Overview
+
+[How the current codebase is structured and how the spec features map onto it. Include key files/modules that would be affected.]
+
+## Feature Feasibility
+
+### [Feature Name] — [Must Have / Nice to Have]
+
+**Touches:** [Files/modules this feature would modify or extend]
+**Approach:** [How this would be implemented given the current architecture]
+**New dependencies:** [Libraries, APIs, services needed — or "None"]
+**Effort:** [S / M / L / XL] [Needs engineering input] (if applicable)
+**Risks:** [What could go wrong or complicate this]
+
+### [Next Feature]
+[Same structure]
+
+## Technical Dependencies
+
+| Feature | Libraries/APIs | Schema Changes | Infrastructure | New vs Existing |
+|---------|---------------|----------------|----------------|-----------------|
+| [Feature] | [Deps] | [Changes] | [Infra needs] | [New / Extends existing] |
+
+## Effort Estimates
+
+| Feature | Estimate | Confidence | Notes |
+|---------|----------|------------|-------|
+| [Feature] | S/M/L/XL | High / Medium / Low | [What drives the estimate] |
+
+[Needs engineering input] items are flagged — these estimates may change after engineering review.
+
+## Architecture Concerns
+
+1. **[Concern]** — [Evidence from codebase]. [Impact on spec features]. [Suggested resolution].
+
+## Risk Flags
+
+### Security
+- [Surface area changes, auth implications, data exposure]
+
+### Performance
+- [Scaling concerns, query patterns, payload sizes] [Needs engineering input]
+
+### Compliance
+- [Data handling, privacy, regulatory considerations] [Needs engineering input]
+
+## Scope Negotiation
+
+For high-effort features, simpler alternatives that deliver partial value:
+
+| Feature | Full Scope (Effort) | Simpler Alternative | Reduced Effort | Trade-off |
+|---------|--------------------|--------------------|----------------|-----------|
+| [Feature] | [L/XL] | [Alternative approach] | [S/M] | [What you lose] |
+
+## Open Questions for Engineering
+
+If `spec.md` has an "Open Questions" section, start from those. Mark any that the codebase analysis can now answer as resolved, and carry forward the rest. Add new questions discovered during the review.
+
+1. ~~[Question from spec — resolved]~~ — [Answer from codebase analysis]
+2. [Question from spec — still open]
+3. [New question discovered during review]
+
+## Recommendations
+
+### Ready to build (low risk, well-understood)
+1. [Feature — rationale]
+
+### Build with caution (moderate risk, needs monitoring)
+1. [Feature — what to watch for]
+
+### Needs discussion before committing (high risk or high effort)
+1. [Feature — what needs to be resolved]
+```
+
+### Solo mode output
+
+In solo mode, use a condensed format. Omit "Open Questions for Engineering" (those were resolved in conversation). Merge Risk Flags into the feature assessments. Focus on decisions made, not questions deferred.
+
+```markdown
+# Technical Review: [Product Name]
+
+_Reviewed: [Date]_
+_Spec version reviewed: spec.md_
+
+## Architecture Overview
+
+[How the current codebase is structured and how the spec features map onto it.]
+
+## Feature Feasibility
+
+### [Feature Name] — [Must Have / Nice to Have]
+
+**Touches:** [Files/modules]
+**Approach:** [Implementation plan]
+**New dependencies:** [Libraries, APIs, services — or "None"]
+**Effort:** [S / M / L / XL]
+**Risks:** [Security, performance, or complexity concerns]
+**Decision:** [What was decided during the review — e.g., "Use existing auth module" or "Add Redis for caching"]
+
+### [Next Feature]
+[Same structure]
+
+## Effort Summary
+
+| Feature | Effort | Notes |
+|---------|--------|-------|
+| [Feature] | S/M/L/XL | [Key driver] |
+
+**Total estimated effort:** [Sum in comparable terms]
+
+## Scope Negotiation
+
+| Feature | Full Scope (Effort) | Simpler Alternative | Reduced Effort | Trade-off |
+|---------|--------------------|--------------------|----------------|-----------|
+| [Feature] | [L/XL] | [Alternative approach] | [S/M] | [What you lose] |
+
+## Decisions Made
+
+- [Decision 1 — rationale]
+- [Decision 2 — rationale]
+
+## Build Order
+
+1. [Feature to build first — why]
+2. [Feature to build next — dependencies]
+3. [Feature to build last — rationale]
+```
package/templates/commands/productkit.users.md
@@ -10,8 +10,18 @@ Guide the user through identifying and deeply understanding their target users.
 
 ## Before You Start
 
+Read `landscape.md` if it exists — use company, market, and domain context to ask more targeted questions about users.
+
 Read `constitution.md` if it exists — use the product vision to inform user discovery.
 
+Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings as evidence when building personas. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+### Workspace Context
+
+Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+- Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+- Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
+
 ## Process
 
 1. **Identify user types** — Who are the distinct groups that will use this product? (aim for 2-4)
package/templates/commands/productkit.validate.md
@@ -16,6 +16,15 @@ Read existing artifacts:
 - `assumptions.md` — prioritized assumptions (required)
 - `users.md` — user personas (optional, used for interview targeting)
 - `problem.md` — problem statement (optional, for context)
+- `landscape.md` — company and domain landscape (optional)
+
+Read `knowledge-index.md` if it exists — it contains a summary of research from the `knowledge/` directory. Reference relevant findings as evidence alongside `validation-data/`. If the file doesn't exist but `knowledge/` has files, suggest running `/productkit.learn` first.
+
+### Workspace Context
+
+Check if this project is inside a workspace: look for `../.productkit/config.json` with `"type": "workspace"`. If yes:
+- Read `landscape.md` from the workspace root (parent directory) — this is shared company/domain landscape.
+- Also read workspace-level `knowledge-index.md` if it exists. Workspace research index supplements (does not replace) project-level research index.
 
 At minimum, `assumptions.md` must exist. If it's missing, tell the user to run `/productkit.assumptions` first.
 
@@ -35,16 +44,19 @@ If `validation-data/` contains filled-in files, these are the **primary source o
 
 ## Process
 
+If this is your first time doing validation, start simple: pick the single riskiest assumption and validate just that one. You don't need to test everything at once. A 15-minute conversation with one real user teaches more than hours of desk research.
+
 1. **Review assumptions** — Read `assumptions.md` and list the Critical and Important assumptions. Present them to the user.
-2. **
-3. **
-4. **Generate
-5. **Generate
+2. **Check for existing data** — Before generating new validation instruments, ask: "Do you already have data that could serve as evidence? Analytics dashboards, support ticket themes, NPS scores, user feedback logs, app store reviews?" If the team already has relevant data, capture it as evidence immediately rather than creating new instruments for something already answered.
+3. **Triage each assumption** — For each high-risk assumption, ask: "Do you already have evidence for or against this?" If yes, capture it and assess whether it validates, partially validates, or invalidates the assumption. If no, flag it for validation.
+4. **Generate interview script** — For assumptions that need qualitative validation, write an interview script targeting the relevant user persona from `users.md`. Group questions by assumption. Include warm-up and closing sections.
+5. **Generate survey questions** — For assumptions that can be tested quantitatively, write survey questions in formats ready for Typeform/Google Forms (Likert scale, multiple choice, open text). Tag each question with the assumption it tests.
+6. **Generate data collection templates** — Create the `validation-data/` directory and write CSV templates:
    - **`validation-data/interviews.csv`** — Pre-filled with the interview questions from the script. Columns: `Participant`, `Question`, `Response`, `Notes`. Each row has a question pre-populated; the PM fills in responses for each participant.
    - **`validation-data/survey-responses.csv`** — Columns are the survey questions generated in step 4. Each row will be one respondent's answers. First row is headers only — the PM pastes in exported survey data or fills in manually.
    - **`validation-data/desk-research.csv`** — Pre-filled with one row per assumption that needs desk research. Columns: `Assumption`, `Source`, `Finding`, `URL`, `Date`. The PM fills in what they find.
-
-
+7. **Summarize status** — Present a clear picture: what's validated, what's invalidated, what still needs fieldwork.
+8. **Finalize** — Write the validation artifact and data collection templates after user approval. Tell the PM: "Fill in the CSV files in `validation-data/` as you collect data, then run `/productkit.validate` again for me to analyze your findings."
 
 ## Conversation Style
 
package/templates/knowledge-README.md
@@ -0,0 +1,33 @@
+# Knowledge Directory
+
+Drop raw research files here. Product Kit slash commands will read these as evidence when drafting artifacts.
+
+## What to Put Here
+
+- **Interview transcripts** — `.md`, `.txt`, or `.csv` files from user interviews
+- **Survey results** — `.csv` exports from survey tools (Google Forms, Typeform, etc.)
+- **Analytics exports** — `.csv` or `.json` data from analytics platforms
+- **Market research** — PDFs, reports, or summaries from industry research
+- **Competitor analysis** — Screenshots, notes, or feature comparisons
+- **Internal docs** — Strategy decks, OKRs, meeting notes, prior PRDs
+
+## Workspace vs Project Knowledge
+
+If this project is inside a workspace, there are two knowledge directories:
+
+- **Workspace `knowledge/`** (parent directory) — Shared research that applies across all projects: company-wide market reports, org-level competitor analysis, brand guidelines, industry regulations, and cross-project user research.
+- **Project `knowledge/`** (this directory) — Research specific to this project: user interviews for this product, feature-specific surveys, project-scoped analytics, and this product's competitive positioning.
+
+Run `/productkit.learn` after adding files to index them into `knowledge-index.md`. All other slash commands read the index instead of scanning raw files directly. Project-level knowledge takes precedence when there's overlap.
+
+## Supported Formats
+
+Claude can read: `.md`, `.txt`, `.csv`, `.json`, `.pdf`
+
+## Tips
+
+- Use descriptive filenames: `user-interviews-2025-q1.md` not `notes.txt`
+- Keep files focused — one topic per file is easier to reference
+- Add a brief header to each file explaining what it contains
+- Anonymize interview data before committing (replace names with P1, P2, etc.)
+- Consider adding this directory to `.gitignore` if files contain sensitive data
|