productkit 1.7.0 → 1.9.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +43 -4
- package/package.json +1 -1
- package/src/cli.js +23 -1
- package/src/commands/diff.js +69 -0
- package/src/commands/doctor.js +115 -0
- package/src/commands/export.js +51 -0
- package/src/commands/init.js +16 -7
- package/src/commands/reset.js +6 -2
- package/src/commands/status.js +4 -1
- package/src/utils/fileUtils.js +12 -0
- package/templates/CLAUDE.md +12 -6
- package/templates/README.md +12 -6
- package/templates/commands/productkit.analyze.md +3 -1
- package/templates/commands/productkit.assumptions.md +3 -1
- package/templates/commands/productkit.audit.md +140 -0
- package/templates/commands/productkit.bootstrap.md +81 -0
- package/templates/commands/productkit.clarify.md +3 -1
- package/templates/commands/productkit.constitution.md +3 -1
- package/templates/commands/productkit.prioritize.md +27 -8
- package/templates/commands/productkit.problem.md +3 -1
- package/templates/commands/productkit.solution.md +22 -2
- package/templates/commands/productkit.spec.md +15 -2
- package/templates/commands/productkit.users.md +3 -1
- package/templates/commands/productkit.validate.md +192 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026 Douno
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
package/README.md
CHANGED
@@ -4,6 +4,8 @@ Slash-command-driven product thinking toolkit for [Claude Code](https://claude.c
 
 Product Kit gives PMs a structured workflow for validating product ideas — user personas, problem statements, assumptions mapping — all through guided AI conversations.
 
+**[Read the full guide →](https://iamquechua.github.io/product-kit/)**
+
 ## Prerequisites
 
 - **Node.js** 18 or later
@@ -43,6 +45,19 @@ cd my-project
 
 This scaffolds a project with slash commands, a `CLAUDE.md` context file, and a `.productkit/` config directory.
 
+For existing projects:
+
+```bash
+cd my-existing-project
+productkit init --existing
+```
+
+To keep artifacts out of the project root (recommended for busy codebases):
+
+```bash
+productkit init --existing --artifact-dir docs/product
+```
+
 ### 2. Open Claude Code
 
 ```bash
@@ -59,17 +74,20 @@ Each command starts a guided conversation. Claude asks questions, pushes back on
 | 2 | `/productkit.users` | Define target user personas through dialogue | `users.md` |
 | 3 | `/productkit.problem` | Frame the problem statement grounded in user research | `problem.md` |
 | 4 | `/productkit.assumptions` | Extract and prioritize hidden assumptions | `assumptions.md` |
-| 5 | `/productkit.
-| 6 | `/productkit.
-| 7 | `/productkit.
+| 5 | `/productkit.validate` | Validate assumptions with interviews and surveys | `validation.md` |
+| 6 | `/productkit.solution` | Brainstorm and evaluate solution ideas | `solution.md` |
+| 7 | `/productkit.prioritize` | Score and rank features for v1 | `priorities.md` |
+| 8 | `/productkit.spec` | Generate a complete product spec | `spec.md` |
 | — | `/productkit.clarify` | Resolve ambiguities and contradictions across artifacts | Updates existing files |
 | — | `/productkit.analyze` | Run a consistency and completeness check | Analysis in chat |
+| — | `/productkit.bootstrap` | Auto-draft all artifacts from existing codebase | All missing artifacts |
+| — | `/productkit.audit` | Compare spec against codebase, surface gaps | `audit.md` |
 
 Commands build on each other — `/productkit.problem` reads your `users.md`, `/productkit.solution` reads your problem and users, and `/productkit.spec` synthesizes everything into a single document. You can run `/productkit.clarify` and `/productkit.analyze` at any stage to check your work.
 
 ### 4. Review your artifacts
 
-After running the commands, your project
+After running the commands, your project contains:
 
 ```
 my-project/
@@ -77,9 +95,11 @@ my-project/
 ├── users.md # User personas
 ├── problem.md # Problem statement
 ├── assumptions.md # Prioritized assumptions
+├── validation.md # Validation results & scripts
 ├── solution.md # Chosen solution
 ├── priorities.md # Ranked feature list
 ├── spec.md # Complete product spec
+├── audit.md # Spec vs codebase audit (on demand)
 ├── .productkit/config.json
 ├── .claude/commands/ # Slash command prompts
 ├── CLAUDE.md
@@ -87,6 +107,8 @@ my-project/
 └── .gitignore
 ```
 
+If you used `--artifact-dir docs/product`, artifacts live in `docs/product/` instead of the project root.
+
 These markdown files are your product foundation — share them with your team, commit them to git, or hand `spec.md` to engineering.
 
 ## CLI Commands
@@ -95,13 +117,30 @@ These markdown files are your product foundation — share them with your team,
 |---------|-------------|
 | `productkit init <name>` | Scaffold a new project |
 | `productkit init --existing` | Add Product Kit to the current directory |
+| `productkit init --minimal` | Skip constitution, start with users/problem |
+| `productkit init --artifact-dir <dir>` | Store artifacts in a custom directory |
 | `productkit status` | Show progress — which artifacts exist and what's next |
+| `productkit export` | Export all artifacts as a single combined markdown file |
+| `productkit export --output <file>` | Export to a custom filename |
+| `productkit diff` | Show what changed in artifacts since last commit |
+| `productkit diff --staged` | Show staged artifact changes |
+| `productkit doctor` | Check project health (missing files, outdated commands) |
 | `productkit update` | Refresh slash commands to the latest version |
 | `productkit reset` | Remove all artifacts and start over |
 | `productkit list` | Show available slash commands with descriptions |
 | `productkit completion` | Output shell completion script (bash/zsh) |
 | `productkit check` | Verify Claude Code is installed |
 
+## Cowork Plugin (No CLI Required)
+
+If you prefer Claude Cowork over the command line, Product Kit is also available as a Cowork plugin. Same guided workflows, no terminal needed.
+
+1. Download [`product-kit-plugin.zip`](https://github.com/iamquechua/product-kit/releases/download/latest-plugin/product-kit-plugin.zip) from [GitHub Releases](https://github.com/iamquechua/product-kit/releases/tag/latest-plugin)
+2. In Cowork, go to **Plugins → + → Upload plugin**
+3. Select the zip file
+
+Once installed, type `/product-kit:users`, `/product-kit:problem`, etc. in Cowork's chat. See [plugin/README.md](plugin/README.md) for details.
+
 ## How It Works
 
 Product Kit is a thin scaffolding tool. The real work happens in slash commands — markdown prompt files that live in `.claude/commands/`. When you type `/productkit.users` in Claude Code, it reads the prompt file and starts a guided conversation.
package/package.json
CHANGED
package/src/cli.js
CHANGED
@@ -9,18 +9,23 @@ const updateCommand = require('./commands/update');
 const resetCommand = require('./commands/reset');
 const listCommand = require('./commands/list');
 const completionCommand = require('./commands/completion');
+const exportCommand = require('./commands/export');
+const diffCommand = require('./commands/diff');
+const doctorCommand = require('./commands/doctor');
 
 const program = new Command();
 
 program
   .name('productkit')
   .description(chalk.cyan.bold('Product thinking toolkit for Claude Code'))
-  .version('1.
+  .version('1.9.0');
 
 program
   .command('init [projectName]')
   .description('Initialize a new product research project')
   .option('--existing', 'Add Product Kit to the current directory')
+  .option('--minimal', 'Skip constitution, start with users/problem')
+  .option('--artifact-dir <dir>', 'Directory for artifacts (default: project root)')
   .action(initCommand);
 
 program
@@ -55,6 +60,23 @@ program
   .option('--shell <shell>', 'Shell type (bash or zsh)')
   .action(completionCommand);
 
+program
+  .command('export')
+  .description('Export all artifacts as a single combined markdown file')
+  .option('--output <file>', 'Output filename', 'export.md')
+  .action(exportCommand);
+
+program
+  .command('diff')
+  .description('Show what changed since last commit across artifacts')
+  .option('--staged', 'Show staged changes instead of unstaged')
+  .action(diffCommand);
+
+program
+  .command('doctor')
+  .description('Check project health (missing files, outdated commands, etc.)')
+  .action(doctorCommand);
+
 program.parse(process.argv);
 
 if (process.argv.length === 2) {
package/src/commands/diff.js
ADDED
@@ -0,0 +1,69 @@
+const fs = require('fs-extra');
+const path = require('path');
+const chalk = require('chalk');
+const { execSync } = require('child_process');
+const { getArtifactDir } = require('../utils/fileUtils');
+
+const ARTIFACT_FILES = [
+  'constitution.md',
+  'users.md',
+  'problem.md',
+  'assumptions.md',
+  'validation.md',
+  'solution.md',
+  'priorities.md',
+  'spec.md',
+];
+
+async function diff(options) {
+  const root = process.cwd();
+  const configPath = path.join(root, '.productkit', 'config.json');
+
+  if (!fs.existsSync(configPath)) {
+    console.error(chalk.red('Not a Product Kit project.'));
+    console.log('Run: productkit init <name>');
+    process.exit(1);
+  }
+
+  // Check if git is available
+  try {
+    execSync('git rev-parse --git-dir', { cwd: root, stdio: 'ignore' });
+  } catch {
+    console.error(chalk.red('Not a git repository. The diff command requires git.'));
+    process.exit(1);
+  }
+
+  const artifactDir = getArtifactDir(root);
+  const relDir = path.relative(root, artifactDir);
+  const existing = ARTIFACT_FILES
+    .map(f => relDir && relDir !== '.' ? path.join(relDir, f) : f)
+    .filter(f => fs.existsSync(path.join(root, f)));
+
+  if (existing.length === 0) {
+    console.error(chalk.red('No artifacts found. Run some slash commands first.'));
+    process.exit(1);
+  }
+
+  const gitArgs = options.staged ? ['diff', '--cached'] : ['diff'];
+  const cmd = ['git', ...gitArgs, '--', ...existing].join(' ');
+
+  let output;
+  try {
+    output = execSync(cmd, { cwd: root, encoding: 'utf-8' });
+  } catch (err) {
+    // git diff returns exit code 1 when there are differences in some configs
+    output = err.stdout || '';
+  }
+
+  if (!output) {
+    console.log(chalk.yellow('No changes to artifacts since last commit.'));
+    if (!options.staged) {
+      console.log(chalk.dim('Tip: use --staged to see staged changes.'));
+    }
+    return;
+  }
+
+  console.log(output);
+}
+
+module.exports = diff;
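The new `diff` command scopes `git diff` to artifact files by joining the pieces into a single shell string. A minimal sketch of that construction (the helper name `buildGitDiffCommand` is hypothetical; the logic is lifted from the diff above):

```javascript
// Sketch (not from the package): how the diff command assembles its git
// invocation from the --staged flag and the list of existing artifact files.
function buildGitDiffCommand(options, existingFiles) {
  // --cached restricts the diff to the staging area.
  const gitArgs = options.staged ? ['diff', '--cached'] : ['diff'];
  // `--` separates flags from pathspecs, limiting the diff to artifacts.
  return ['git', ...gitArgs, '--', ...existingFiles].join(' ');
}

console.log(buildGitDiffCommand({ staged: true }, ['users.md', 'docs/product/spec.md']));
// → git diff --cached -- users.md docs/product/spec.md
```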
package/src/commands/doctor.js
ADDED
@@ -0,0 +1,115 @@
+const fs = require('fs-extra');
+const path = require('path');
+const chalk = require('chalk');
+const { execSync } = require('child_process');
+
+async function doctor() {
+  const root = process.cwd();
+  const configPath = path.join(root, '.productkit', 'config.json');
+
+  if (!fs.existsSync(configPath)) {
+    console.error(chalk.red('Not a Product Kit project.'));
+    console.log('Run: productkit init <name>');
+    process.exit(1);
+  }
+
+  const results = { pass: 0, warn: 0, fail: 0 };
+
+  function pass(msg) { results.pass++; console.log(chalk.green(`  pass  ${msg}`)); }
+  function warn(msg) { results.warn++; console.log(chalk.yellow(`  warn  ${msg}`)); }
+  function fail(msg) { results.fail++; console.log(chalk.red(`  fail  ${msg}`)); }
+
+  console.log();
+  console.log(chalk.bold('Project health check'));
+  console.log();
+
+  // 1. Config file
+  try {
+    const config = fs.readJsonSync(configPath);
+    if (config.version) {
+      pass('Config file is valid');
+    } else {
+      warn('Config file missing version field');
+    }
+  } catch {
+    fail('Config file is not valid JSON');
+  }
+
+  // 2. Commands directory
+  const commandsDir = path.join(root, '.claude', 'commands');
+  if (fs.existsSync(commandsDir)) {
+    pass('Commands directory exists');
+  } else {
+    fail('Commands directory missing (.claude/commands/)');
+  }
+
+  // 3. Check for expected command templates
+  const templatesDir = path.join(__dirname, '..', '..', 'templates', 'commands');
+  const expectedCommands = fs.readdirSync(templatesDir);
+
+  // Account for minimal mode
+  let config = {};
+  try { config = fs.readJsonSync(configPath); } catch {}
+  const skippable = config.minimal ? ['productkit.constitution.md'] : [];
+
+  const missing = [];
+  for (const cmd of expectedCommands) {
+    if (skippable.includes(cmd)) continue;
+    if (!fs.existsSync(path.join(commandsDir, cmd))) {
+      missing.push(cmd);
+    }
+  }
+
+  if (missing.length === 0) {
+    pass('All expected command templates present');
+  } else {
+    for (const m of missing) {
+      fail(`Missing command template: ${m}`);
+    }
+  }
+
+  // 4. Check for outdated commands
+  const outdated = [];
+  for (const cmd of expectedCommands) {
+    if (skippable.includes(cmd)) continue;
+    const installedPath = path.join(commandsDir, cmd);
+    if (!fs.existsSync(installedPath)) continue;
+
+    const installed = fs.readFileSync(installedPath, 'utf-8');
+    const bundled = fs.readFileSync(path.join(templatesDir, cmd), 'utf-8');
+    if (installed !== bundled) {
+      outdated.push(cmd);
+    }
+  }
+
+  if (outdated.length === 0) {
+    pass('All command templates up to date');
+  } else {
+    for (const o of outdated) {
+      warn(`Outdated command: ${o} — run productkit update`);
+    }
+  }
+
+  // 5. Git initialized
+  try {
+    execSync('git rev-parse --git-dir', { cwd: root, stdio: 'ignore' });
+    pass('Git repository initialized');
+  } catch {
+    warn('No git repository — consider running git init');
+  }
+
+  // Summary
+  console.log();
+  const parts = [];
+  if (results.pass > 0) parts.push(chalk.green(`${results.pass} passed`));
+  if (results.warn > 0) parts.push(chalk.yellow(`${results.warn} warning(s)`));
+  if (results.fail > 0) parts.push(chalk.red(`${results.fail} failed`));
+  console.log(chalk.bold('Result: ') + parts.join(', '));
+  console.log();
+
+  if (results.fail > 0) {
+    process.exit(1);
+  }
+}
+
+module.exports = doctor;
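`doctor` tallies its checks into pass/warn/fail counters, then derives both the summary line and the exit code from them. A sketch of that final step (hypothetical helper `summarize`; chalk coloring omitted so the strings are plain):

```javascript
// Sketch: how doctor's summary line and exit code fall out of the tallies.
// Only non-zero buckets appear in the summary; any failure means exit code 1.
function summarize(results) {
  const parts = [];
  if (results.pass > 0) parts.push(`${results.pass} passed`);
  if (results.warn > 0) parts.push(`${results.warn} warning(s)`);
  if (results.fail > 0) parts.push(`${results.fail} failed`);
  return {
    line: 'Result: ' + parts.join(', '),
    exitCode: results.fail > 0 ? 1 : 0,
  };
}

console.log(summarize({ pass: 4, warn: 1, fail: 0 }).line);
// → Result: 4 passed, 1 warning(s)
```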
package/src/commands/export.js
ADDED
@@ -0,0 +1,51 @@
+const fs = require('fs-extra');
+const path = require('path');
+const chalk = require('chalk');
+const { getArtifactDir } = require('../utils/fileUtils');
+
+const ARTIFACTS = [
+  { file: 'constitution.md', label: 'Constitution' },
+  { file: 'users.md', label: 'Users' },
+  { file: 'problem.md', label: 'Problem' },
+  { file: 'assumptions.md', label: 'Assumptions' },
+  { file: 'validation.md', label: 'Validation' },
+  { file: 'solution.md', label: 'Solution' },
+  { file: 'priorities.md', label: 'Priorities' },
+  { file: 'spec.md', label: 'Spec' },
+];
+
+async function exportCommand(options) {
+  const root = process.cwd();
+  const configPath = path.join(root, '.productkit', 'config.json');
+
+  if (!fs.existsSync(configPath)) {
+    console.error(chalk.red('Not a Product Kit project.'));
+    console.log('Run: productkit init <name>');
+    process.exit(1);
+  }
+
+  const artifactDir = getArtifactDir(root);
+  const existing = ARTIFACTS.filter(a => fs.existsSync(path.join(artifactDir, a.file)));
+
+  if (existing.length === 0) {
+    console.error(chalk.red('No artifacts found. Run some slash commands first.'));
+    process.exit(1);
+  }
+
+  const sections = [];
+  for (const artifact of existing) {
+    const content = fs.readFileSync(path.join(artifactDir, artifact.file), 'utf-8');
+    sections.push(content);
+  }
+
+  const projectName = path.basename(root);
+  const header = `# ${projectName} — Product Kit Export\n\n_Exported: ${new Date().toISOString().split('T')[0]}_\n\n---\n`;
+  const combined = header + sections.join('\n\n---\n\n') + '\n';
+
+  const outputFile = options.output || 'export.md';
+  fs.writeFileSync(path.join(root, outputFile), combined);
+
+  console.log(chalk.green.bold(`Exported ${existing.length} artifact(s) to ${outputFile}`));
+}
+
+module.exports = exportCommand;
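The export format is a title header followed by artifact sections joined with `---` horizontal rules. A sketch of the assembly (hypothetical helper `buildExport`, mirroring the string-building in the file above, with the date passed in for determinism):

```javascript
// Sketch: the combined-markdown layout productkit export produces —
// one header block, then each artifact separated by a horizontal rule.
function buildExport(projectName, sections, dateISO) {
  const header = `# ${projectName} — Product Kit Export\n\n_Exported: ${dateISO}_\n\n---\n`;
  return header + sections.join('\n\n---\n\n') + '\n';
}

const combined = buildExport('my-project', ['# Users\n…', '# Problem\n…'], '2026-01-01');
console.log(combined.startsWith('# my-project — Product Kit Export'));
// → true
```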
package/src/commands/init.js
CHANGED
@@ -2,7 +2,7 @@ const fs = require('fs-extra');
 const path = require('path');
 const chalk = require('chalk');
 
-function scaffold(projectRoot, projectName) {
+function scaffold(projectRoot, projectName, minimal, artifactDir) {
   const templatesDir = path.join(__dirname, '..', '..', 'templates');
 
   // Create directories
@@ -10,15 +10,24 @@ function scaffold(projectRoot, projectName) {
   fs.ensureDirSync(path.join(projectRoot, '.claude', 'commands'));
 
   // Write config
-
+  const config = {
     version: '1.0.0',
     created: new Date().toISOString(),
-  }
+  };
+  if (minimal) {
+    config.minimal = true;
+  }
+  if (artifactDir) {
+    config.artifact_dir = artifactDir;
+    fs.ensureDirSync(path.join(projectRoot, artifactDir));
+  }
+  fs.writeJsonSync(path.join(projectRoot, '.productkit', 'config.json'), config, { spaces: 2 });
 
   // Copy slash command templates
   const commandsDir = path.join(templatesDir, 'commands');
   const commandFiles = fs.readdirSync(commandsDir);
   for (const file of commandFiles) {
+    if (minimal && file === 'productkit.constitution.md') continue;
     fs.copyFileSync(
       path.join(commandsDir, file),
       path.join(projectRoot, '.claude', 'commands', file)
@@ -61,13 +70,13 @@ async function init(projectName, options) {
   }
 
   try {
-    scaffold(projectRoot, path.basename(projectRoot));
+    scaffold(projectRoot, path.basename(projectRoot), options.minimal, options.artifactDir);
 
     console.log(chalk.green.bold('Product Kit added to existing project!'));
     console.log();
     console.log(chalk.cyan('Next steps:'));
     console.log('  1. claude');
-    console.log(
+    console.log(`  2. /productkit.${options.minimal ? 'users' : 'constitution'}`);
     console.log();
   } catch (error) {
     console.error(chalk.red('Error initializing:'), error.message);
@@ -89,7 +98,7 @@ async function init(projectName, options) {
   }
 
   try {
-    scaffold(projectRoot, projectName);
+    scaffold(projectRoot, projectName, options.minimal, options.artifactDir);
 
     // Init git repo
     const { execSync } = require('child_process');
@@ -104,7 +113,7 @@ async function init(projectName, options) {
     console.log(chalk.cyan('Next steps:'));
     console.log(`  1. cd ${projectName}`);
     console.log('  2. claude');
-    console.log(
+    console.log(`  3. /productkit.${options.minimal ? 'users' : 'constitution'}`);
     console.log();
   } catch (error) {
     console.error(chalk.red('Error initializing project:'), error.message);
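The config written by `scaffold` gains two optional fields in this release: `minimal` and `artifact_dir`. A sketch of the resulting object (hypothetical helper `buildConfig`; field names come from the diff above, the exact shape in the published package may differ):

```javascript
// Sketch: the shape of .productkit/config.json after init, per the diff.
// Optional flags only appear in the file when the corresponding CLI
// options were passed.
function buildConfig({ minimal, artifactDir } = {}) {
  const config = {
    version: '1.0.0',
    created: new Date().toISOString(),
  };
  if (minimal) config.minimal = true;          // set by `init --minimal`
  if (artifactDir) config.artifact_dir = artifactDir; // set by `--artifact-dir`
  return config;
}

console.log(buildConfig({ minimal: true, artifactDir: 'docs/product' }).artifact_dir);
// → docs/product
```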
package/src/commands/reset.js
CHANGED
@@ -2,12 +2,14 @@ const fs = require('fs-extra');
 const path = require('path');
 const chalk = require('chalk');
 const readline = require('readline');
+const { getArtifactDir } = require('../utils/fileUtils');
 
 const ARTIFACTS = [
   'constitution.md',
   'users.md',
   'problem.md',
   'assumptions.md',
+  'validation.md',
   'solution.md',
   'priorities.md',
   'spec.md',
@@ -36,9 +38,11 @@ async function reset(options) {
     process.exit(1);
   }
 
+  const artifactDir = getArtifactDir(root);
+
   // Find existing artifacts
   const existing = ARTIFACTS.filter(file =>
-    fs.existsSync(path.join(
+    fs.existsSync(path.join(artifactDir, file))
   );
 
   if (existing.length === 0) {
@@ -67,7 +71,7 @@ async function reset(options) {
   }
 
   for (const file of existing) {
-    fs.removeSync(path.join(
+    fs.removeSync(path.join(artifactDir, file));
     console.log(chalk.yellow(`  removed ${file}`));
   }
 
package/src/commands/status.js
CHANGED
@@ -1,12 +1,14 @@
 const fs = require('fs-extra');
 const path = require('path');
 const chalk = require('chalk');
+const { getArtifactDir } = require('../utils/fileUtils');
 
 const ARTIFACTS = [
   { file: 'constitution.md', command: '/productkit.constitution', label: 'Constitution' },
   { file: 'users.md', command: '/productkit.users', label: 'Users' },
   { file: 'problem.md', command: '/productkit.problem', label: 'Problem' },
   { file: 'assumptions.md', command: '/productkit.assumptions', label: 'Assumptions' },
+  { file: 'validation.md', command: '/productkit.validate', label: 'Validation' },
   { file: 'solution.md', command: '/productkit.solution', label: 'Solution' },
   { file: 'priorities.md', command: '/productkit.prioritize', label: 'Priorities' },
   { file: 'spec.md', command: '/productkit.spec', label: 'Spec' },
@@ -22,11 +24,12 @@ async function status() {
     process.exit(1);
   }
 
+  const artifactDir = getArtifactDir(root);
   const done = [];
   const remaining = [];
 
   for (const artifact of ARTIFACTS) {
-    const exists = fs.existsSync(path.join(
+    const exists = fs.existsSync(path.join(artifactDir, artifact.file));
     if (exists) {
       done.push(artifact);
     } else {
package/src/utils/fileUtils.js
CHANGED
@@ -12,7 +12,19 @@ function getProjectRoot() {
   return null;
 }
 
+function getArtifactDir(root) {
+  const configPath = path.join(root, '.productkit', 'config.json');
+  try {
+    const config = fs.readJsonSync(configPath);
+    if (config.artifact_dir) {
+      return path.join(root, config.artifact_dir);
+    }
+  } catch {}
+  return root;
+}
+
 module.exports = {
   isProductKitProject,
   getProjectRoot,
+  getArtifactDir,
 };
package/templates/CLAUDE.md
CHANGED
@@ -10,19 +10,23 @@ Use these commands in order to build your product foundation:
 2. `/productkit.users` — Define target user personas
 3. `/productkit.problem` — Frame the problem statement
 4. `/productkit.assumptions` — Extract and prioritize assumptions
-5. `/productkit.
-6. `/productkit.
-7. `/productkit.
-8. `/productkit.
-9. `/productkit.
+5. `/productkit.validate` — Validate assumptions with interview scripts and surveys
+6. `/productkit.solution` — Brainstorm and evaluate solutions
+7. `/productkit.prioritize` — Score and rank features
+8. `/productkit.spec` — Generate a product spec
+9. `/productkit.clarify` — Resolve ambiguities across artifacts
+10. `/productkit.analyze` — Run a completeness/consistency check
+11. `/productkit.bootstrap` — Auto-draft all artifacts from an existing codebase
+12. `/productkit.audit` — Compare spec against codebase and surface gaps
 
 ## Artifacts
 
-Product artifacts are written
+Product artifacts are written as markdown files. Check `.productkit/config.json` for an `artifact_dir` field — if set, artifacts live in that directory instead of the project root. Default artifact locations:
 - `constitution.md` — Product principles and values
 - `users.md` — Target user personas
 - `problem.md` — Problem statement
 - `assumptions.md` — Prioritized assumptions
+- `validation.md` — Assumption validation results, interview scripts, and survey questions
 - `solution.md` — Chosen solution with alternatives considered
 - `priorities.md` — Scored and ranked feature list
 - `spec.md` — Complete product spec ready for engineering
@@ -30,3 +34,5 @@ Product artifacts are written to the project root as markdown files:
 ## Workflow
 
 Start with `/productkit.constitution` or `/productkit.users`, then work through the commands in order. Each command reads previous artifacts to maintain consistency.
+
+For existing projects, use `/productkit.bootstrap` to auto-draft all artifacts from your codebase in one session.
package/templates/README.md
CHANGED
@@ -1,6 +1,6 @@
 # {{PROJECT_NAME}}
 
-A product research project powered by [Product Kit](https://github.com/
+A product research project powered by [Product Kit](https://github.com/iamquechua/product-kit). See the [full guide](https://iamquechua.github.io/product-kit/) for a walkthrough.
 
 ## Getting Started
 
@@ -14,20 +14,26 @@ Then use the slash commands to build your product foundation:
 2. `/productkit.users` — Define target user personas
 3. `/productkit.problem` — Frame the problem statement
 4. `/productkit.assumptions` — Extract and prioritize assumptions
-5. `/productkit.
-6. `/productkit.
-7. `/productkit.
-8. `/productkit.
-9. `/productkit.
+5. `/productkit.validate` — Validate assumptions with interviews and surveys
+6. `/productkit.solution` — Brainstorm and evaluate solutions
+7. `/productkit.prioritize` — Score and rank features
+8. `/productkit.spec` — Generate a product spec
+9. `/productkit.clarify` — Resolve ambiguities
+10. `/productkit.analyze` — Check consistency and completeness
+11. `/productkit.bootstrap` — Auto-draft all artifacts from existing codebase
+12. `/productkit.audit` — Compare spec against actual implementation
 
 ## Artifacts
 
+Artifacts are written to the project root by default. If `artifact_dir` is set in `.productkit/config.json`, they are written there instead.
+
 | File | Description |
 |------|-------------|
 | `constitution.md` | Product principles and values |
 | `users.md` | Target user personas |
 | `problem.md` | Problem statement |
 | `assumptions.md` | Prioritized assumptions |
+| `validation.md` | Assumption validation, interview scripts, survey questions |
 | `solution.md` | Chosen solution with alternatives considered |
 | `priorities.md` | Scored and ranked feature list |
 | `spec.md` | Complete product spec ready for engineering |
package/templates/commands/productkit.analyze.md
CHANGED
@@ -10,7 +10,9 @@ Evaluate the overall quality, consistency, and completeness of the product think
 
 ## Before You Start
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, read artifacts there instead of the project root. If not set, default to the project root.
+
+Read all existing artifacts:
 - `constitution.md`
 - `users.md`
 - `problem.md`
@@ -41,7 +41,9 @@ Also read if they exist:
 
 ## Output
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+Write to `assumptions.md`:
 
 ```markdown
 # Assumptions
@@ -0,0 +1,140 @@
+---
+description: Compare your spec against the actual codebase and surface gaps
+---
+
+You are a product audit specialist comparing what was planned (in the product artifacts) against what was actually built (in the codebase). Your job is to surface gaps, scope creep, and unmet acceptance criteria so the PM can make informed decisions about what to do next.
+
+## Your Role
+
+Read the product spec and supporting artifacts, then systematically scan the codebase to determine what was implemented, what's missing, what was added beyond the spec, and whether acceptance criteria are met. Produce a clear, actionable audit report.
+
+## Before You Start
+
+Check `.productkit/config.json` for an `artifact_dir` field. If set, read artifacts there instead of the project root. If not set, default to the project root.
+
+Read these artifacts (required):
+- `spec.md` — the product spec (required)
+- `priorities.md` — feature priorities and v1 scope (required)
+
+Also read if they exist:
+- `solution.md` — chosen solution
+- `validation.md` — assumption validation results
+- `assumptions.md` — known risks
+
+At minimum, `spec.md` must exist. If it's missing, tell the user to run `/productkit.spec` first.
+
+### Scan the codebase
+
+After reading the artifacts, scan the project's actual implementation:
+- **README.md** — project description, setup instructions, documented features
+- **package.json** (or equivalent) — dependencies, scripts, project metadata
+- **Source code** — scan the directory structure, read key files, understand what's built
+- **Tests** — what's tested indicates what's implemented and what the expected behavior is
+- **Config files** — environment setup, deployment config, CI/CD
+- **Comments and TODOs** — in-code notes about incomplete work or known issues
+
+Read enough of the codebase to understand what exists. You don't need to read every file — focus on entry points, key modules, and test files to build a picture of what's implemented.
+
+## Process
+
+1. **Map spec features to code** — For each feature in `spec.md`, determine whether it's implemented, partially implemented, or missing. Reference specific files/modules as evidence.
+
+2. **Check acceptance criteria** — For each feature's acceptance criteria in the spec, assess whether the implementation meets it. Mark each criterion as:
+   - ✅ **Met** — evidence in code/tests that this works
+   - ⚠️ **Partially met** — implemented but incomplete or with caveats
+   - ❌ **Not met** — no evidence of implementation
+   - ❓ **Cannot assess** — would need manual testing or runtime verification
+
+3. **Identify scope creep** — Look for significant functionality in the codebase that isn't described in the spec. Flag it — it may be intentional evolution or unplanned drift.
+
+4. **Check deferred items** — Review the "Out of Scope" and "Deferred to v2+" sections. Were any deferred items actually built? Were any v1 items actually deferred?
+
+5. **Review risks and assumptions** — If `validation.md` exists, check whether invalidated assumptions affected the implementation. If `assumptions.md` exists, check whether high-risk assumptions have been addressed in the code (error handling, fallbacks, etc.).
+
+6. **Check success metrics** — Are the success metrics from the spec measurable with the current implementation? Is there analytics, logging, or monitoring in place?
+
+7. **Present findings** — Walk the PM through the audit, feature by feature. Discuss implications and recommendations.
+
+## Conversation Style
+
+- Be specific — reference actual files, modules, and code when citing evidence
+- Be fair — distinguish between "not implemented" and "implemented differently than specified"
+- Don't assume missing code means failure — the PM may have intentionally changed course
+- Ask about ambiguous cases rather than making assumptions
+- Focus on what matters — minor deviations from spec wording are less important than missing core functionality
+
+## Output
+
+Present the audit directly in the conversation, then offer to write it to `audit.md`. Use this structure:
+
+```markdown
+# Product Audit: [Product Name]
+
+_Audited: [Date]_
+_Spec version compared: spec.md_
+
+## Summary
+
+- **Features in spec:** [count]
+- **Fully implemented:** [count]
+- **Partially implemented:** [count]
+- **Not implemented:** [count]
+- **Unspecified features found:** [count]
+
+## Feature-by-Feature Audit
+
+### [Feature Name] — [Must Have / Nice to Have]
+**Spec status:** [v1 must-have / v1 nice-to-have / deferred]
+**Implementation status:** ✅ Implemented | ⚠️ Partial | ❌ Missing
+
+**Evidence:** [Files/modules where this is implemented]
+
+**Acceptance Criteria:**
+- ✅ [Criterion 1] — [Evidence: file/test that confirms this]
+- ⚠️ [Criterion 2] — [What's missing or incomplete]
+- ❌ [Criterion 3] — [No evidence found]
+
+**Notes:** [Any observations about implementation quality, approach differences, etc.]
+
+### [Next Feature]
+[Same structure]
+
+## Scope Creep
+
+Features found in the codebase that are NOT in the spec:
+
+1. **[Feature/functionality]** — Found in [file/module]. [Is this intentional? Should it be added to the spec?]
+
+## Deferred Items Check
+
+| Deferred Item | Was it built? | Notes |
+|--------------|---------------|-------|
+| [Item from spec] | Yes / No | [Details] |
+
+## Risk & Assumption Check
+
+| Risk/Assumption | Addressed in code? | How |
+|----------------|-------------------|-----|
+| [From spec/validation.md] | Yes / No / Partial | [Evidence] |
+
+## Success Metrics Readiness
+
+| Metric | Measurable? | How |
+|--------|------------|-----|
+| [From spec] | Yes / No | [What's in place — analytics, logging, etc.] |
+
+## Recommendations
+
+### Critical (block launch)
+1. [Missing must-have feature or unmet critical criterion]
+
+### Important (address soon)
+1. [Partially implemented feature that needs completion]
+
+### Nice to Have (backlog)
+1. [Minor gaps or improvements]
+
+### Process Observations
+- [Any patterns noticed — e.g., "spec was too vague on X, leading to implementation ambiguity"]
+- [Suggestions for improving the spec → build → audit cycle]
+```
@@ -0,0 +1,81 @@
+---
+description: Auto-draft all product artifacts from an existing codebase
+---
+
+You are a product analyst bootstrapping Product Kit artifacts for an existing project. Your job is to read the codebase and draft each artifact so the user gets a fast start instead of building from scratch.
+
+## Your Role
+
+Analyze the existing project — code, docs, README, config files, comments — and draft product artifacts in workflow order. Present each draft for user approval before writing it.
+
+## Before You Start
+
+1. Read the project's README, CLAUDE.md, package.json (or equivalent), and scan the directory structure to understand what this project does.
+2. Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
+3. Check which artifacts already exist (constitution.md, users.md, problem.md, assumptions.md, solution.md, priorities.md, spec.md) in the artifact directory. **Skip any that already exist** — tell the user you're skipping them.
+4. Check `.productkit/config.json` — if `minimal: true`, skip `constitution.md`.
+
+## Process
+
+Work through each missing artifact in this order:
+
+### 1. Constitution (`constitution.md`)
+Draft based on: README vision/mission, CLAUDE.md principles, project conventions.
+- Product vision — infer from what the project does
+- Core principles — infer from code patterns, docs, and design choices
+- Non-negotiables — infer from what the project explicitly avoids
+
+### 2. Users (`users.md`)
+Draft based on: README audience, docs, issue tracker themes, CLI help text, UI copy.
+- Identify 2-4 user types from project context
+- Describe each with specifics inferred from the codebase
+
+### 3. Problem (`problem.md`)
+Draft based on: README "why", issue patterns, gaps the project fills.
+- Frame the core problem the project solves
+- Ground it in the users you just defined
+
+### 4. Assumptions (`assumptions.md`)
+Draft based on: implicit bets in the architecture, undocumented dependencies, target audience guesses.
+- Surface 5-10 assumptions from code and docs
+- Categorize by risk (high/medium/low)
+
+### 5. Solution (`solution.md`)
+Draft based on: the actual implementation, architecture choices, alternatives mentioned in docs/comments.
+- Describe the chosen approach and why
+- Note alternatives that were likely considered
+
+### 6. Priorities (`priorities.md`)
+Draft based on: feature completeness, TODO comments, open issues, roadmap docs.
+- List features/capabilities by apparent priority
+- Flag gaps between what exists and what's needed
+
+### 7. Spec (`spec.md`)
+Draft based on: all previous artifacts plus technical implementation details.
+- Synthesize everything into a product spec
+
+## For Each Artifact
+
+1. **Show your draft** — present the full markdown content
+2. **Explain your reasoning** — briefly note what codebase signals you used
+3. **Ask for approval** — "Should I write this to `[filename]`? Or would you like to adjust anything?"
+4. **On approval** — write the file to the artifact directory
+5. **On feedback** — revise and re-present
+
+## Conversation Style
+
+- Be direct — present drafts quickly, don't over-ask before showing something
+- Flag low-confidence sections with "[Needs input]" where the codebase doesn't give enough signal
+- If the codebase has very little documentation, acknowledge gaps honestly and ask the user to fill in
+- After each artifact, move to the next without prompting — keep momentum
+
+## Output
+
+Each artifact follows the same format as its corresponding slash command would produce. Refer to the individual command templates for the expected structure.
+
+## When Done
+
+After all artifacts are written (or skipped), summarize:
+- Which artifacts were drafted and written
+- Which were skipped (already existed)
+- Suggest running `/productkit.clarify` to resolve any cross-artifact inconsistencies
@@ -10,7 +10,9 @@ Cross-reference all existing artifacts, find inconsistencies, and guide the user
 
 ## Before You Start
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
+
+Read all existing artifacts:
 - `constitution.md`
 - `users.md`
 - `problem.md`
@@ -25,7 +25,9 @@ Act as a seasoned PM mentor. Guide the user through defining their product's cor
 
 ## Output
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+Write the final constitution to `constitution.md` with this format:
 
 ```markdown
 # Product Constitution
@@ -27,11 +27,12 @@ If `solution.md` does not exist, tell the user to run `/productkit.solution` fir
 2. **Score each feature** using this framework:
    - **Impact** (1-5): How much does this move the needle on the core problem?
    - **Confidence** (1-5): How sure are we that users need this? (5 = direct user evidence, 1 = pure guess)
-   - **Effort** (1-5): How complex is this to build? (1 = trivial, 5 = massive)
+   - **Effort** (1-5): How complex is this to build? (1 = trivial, 5 = massive). **This is a PM estimate — mark as `Eng. Validated: No`.**
    - **Priority Score** = (Impact × Confidence) / Effort
 3. **Discuss the ranking** — Present the scored list. Ask the user if the ranking feels right. Adjust if needed.
 4. **Draw the v1 line** — Which features make the cut for the first release? Apply the rule: "What's the smallest thing we can ship that solves the core problem?"
 5. **Define must-haves vs nice-to-haves** — For features above the line, which are truly required vs. which could be cut if time runs short?
+6. **Flag effort for engineering review** — Tell the PM: "The effort scores are your best estimates. Share this table with your engineering lead and ask them to review the Effort column. When they've provided their input, update the Effort scores and set `Eng. Validated` to `Yes`, then run `/productkit.prioritize` again to recalculate rankings."
 
 ## Conversation Style
 
@@ -42,7 +43,9 @@ If `solution.md` does not exist, tell the user to run `/productkit.solution` fir
 
 ## Output
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+Write to `priorities.md`:
 
 ```markdown
 # Feature Priorities
@@ -52,12 +55,15 @@ Priority Score = (Impact × Confidence) / Effort
 
 ## Feature Rankings
 
-| Rank | Feature | Impact | Confidence | Effort | Score | Status |
-|------|---------|--------|------------|--------|-------|--------|
-| 1 | [Feature] | 5 | 4 | 2 | 10.0 | v1 must-have |
-| 2 | [Feature] | 4 | 4 | 2 | 8.0 | v1 must-have |
-| 3 | [Feature] | 4 | 3 | 3 | 4.0 | v1 nice-to-have |
-| 4 | [Feature] | 3 | 2 | 4 | 1.5 | v2 |
+| Rank | Feature | Impact | Confidence | Effort | Eng. Validated | Score | Status |
+|------|---------|--------|------------|--------|----------------|-------|--------|
+| 1 | [Feature] | 5 | 4 | 2 | No | 10.0 | v1 must-have |
+| 2 | [Feature] | 4 | 4 | 2 | No | 8.0 | v1 must-have |
+| 3 | [Feature] | 4 | 3 | 3 | No | 4.0 | v1 nice-to-have |
+| 4 | [Feature] | 3 | 2 | 4 | No | 1.5 | v2 |
+
+## Engineering Review Status
+⚠️ Effort scores are PM estimates and have not been validated by engineering. Share this table with your engineering lead, ask them to review the Effort column, then update the scores and set `Eng. Validated` to `Yes`. Run `/productkit.prioritize` again to recalculate rankings.
 
 ## v1 Scope
 ### Must-Haves
@@ -73,3 +79,16 @@ Priority Score = (Impact × Confidence) / Effort
 - [Decision 1 and rationale]
 - [Decision 2 and rationale]
 ```
+
+### When the PM returns with engineering-validated effort scores
+
+When the user runs `/productkit.prioritize` again after updating effort scores:
+
+1. Read the existing `priorities.md`
+2. Check the `Eng. Validated` column. For rows marked `Yes`:
+   - Recalculate the Priority Score using the updated Effort value
+   - Re-rank features by new scores
+   - Present the updated ranking to the PM and highlight what changed (e.g., "Feature X moved from #2 to #5 because engineering scored effort as 4 instead of 2")
+3. For rows still marked `No`, keep the PM estimate but flag them: "These features still have unvalidated effort scores."
+4. Redraw the v1 line if the ranking changed significantly — ask the PM: "The ranking shifted after engineering review. Does the v1 scope still make sense, or should we adjust?"
+5. Update the Engineering Review Status section. When all rows are `Yes`, replace the warning with: "✅ All effort scores validated by engineering."
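The re-ranking step this hunk adds is plain arithmetic on the table. A minimal sketch (hypothetical code, not from the package) of recomputing Priority Score = (Impact × Confidence) / Effort and re-sorting:

```javascript
// Illustrative sketch: recompute Priority Score = (Impact × Confidence) / Effort
// and re-rank features by the new scores. Field names are assumptions.
function rerank(features) {
  return features
    .map((f) => ({ ...f, score: (f.impact * f.confidence) / f.effort }))
    .sort((a, b) => b.score - a.score);
}

// An engineering-reviewed effort of 4 (instead of the PM's 2) drops Feature X's score.
const ranked = rerank([
  { name: "Feature X", impact: 5, confidence: 4, effort: 4, engValidated: true },
  { name: "Feature Y", impact: 4, confidence: 4, effort: 2, engValidated: false },
]);
console.log(ranked.map((f) => `${f.name}: ${f.score}`).join(", "));
// → "Feature Y: 8, Feature X: 5"
```

This is why the template asks the PM to redraw the v1 line after review: a single corrected Effort value can reorder the whole table.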
@@ -34,7 +34,9 @@ If `users.md` does not exist, tell the user to run `/productkit.users` first.
 
 ## Output
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+Write the problem statement to `problem.md`:
 
 ```markdown
 # Problem Statement
@@ -10,16 +10,34 @@ Guide the user from problem understanding to concrete solution ideas. Ensure eve
 
 ## Before You Start
 
+Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
+
 Read these files first (required):
 - `users.md` — who has this problem
 - `problem.md` — what problem we're solving
+- `validation.md` — assumption validation results (required)
 
 Also read if they exist:
 - `constitution.md` — product principles (use to filter solutions)
-- `assumptions.md` — known risks
+- `assumptions.md` — known risks
 
 If `users.md` or `problem.md` do not exist, tell the user to run `/productkit.users` and `/productkit.problem` first.
 
+If `validation.md` does not exist, tell the user to run `/productkit.validate` first.
+
+### Validation Gate
+
+After reading `validation.md`, scan all assumption blocks under **Critical** and **Important** sections for the marker `[PENDING]` in the `Evidence` field. This is a mechanical check — look for the literal text `[PENDING]`.
+
+**If any Critical or Important assumption has `Evidence: [PENDING]`:**
+
+1. **Do not proceed with solution brainstorming.**
+2. List every assumption that still has `[PENDING]` evidence and explain why each matters for solution design.
+3. Tell the user: "These assumptions have no evidence yet. Run `/productkit.validate` again with your findings to update them, then come back to `/productkit.solution`."
+4. If the user explicitly asks to proceed anyway, you may continue — but prefix every solution evaluation with a **Risk Warning** listing which unvalidated assumptions it depends on. Make it clear the output is a hypothesis, not a validated plan.
+
+**Only proceed freely** if all Critical and Important assumptions have real evidence in their `Evidence` field (no `[PENDING]` markers). Low Risk assumptions with `[PENDING]` are acceptable and should not block.
+
 ## Process
 
 1. **Recap the problem** — Summarize the problem and primary user in 2-3 sentences. Confirm with the user.
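The validation gate above is described as a mechanical scan for the literal `[PENDING]` marker. A hypothetical sketch of that check — the function name and parsing heuristics are illustrative assumptions, not the package's code:

```javascript
// Illustrative sketch: collect Critical/Important assumption blocks from
// validation.md whose Evidence field still carries the [PENDING] marker.
function pendingAssumptions(markdown) {
  const pending = [];
  let section = null; // current "### ..." heading (Critical / Important / Low Risk)
  let current = null; // current "1. **Assumption**" title
  for (const line of markdown.split("\n")) {
    const heading = line.match(/^### (.+)$/);
    if (heading) section = heading[1].trim();
    const title = line.match(/^\d+\.\s+\*\*(.+)\*\*/);
    if (title) current = title[1];
    if (
      /Evidence:.*\[PENDING\]/.test(line) &&
      (section === "Critical" || section === "Important") &&
      current
    ) {
      pending.push({ section, assumption: current });
    }
  }
  return pending;
}

const sample = [
  "### Critical",
  "1. **Users will pay for this**",
  "   - Evidence: [PENDING]",
  "### Low Risk",
  "1. **Node 18 is available**",
  "   - Evidence: [PENDING]",
].join("\n");
console.log(pendingAssumptions(sample));
// Only the Critical assumption is reported; Low Risk [PENDING] does not block.
```

The sketch mirrors the template's rule exactly: `[PENDING]` under Critical or Important blocks the workflow, while the same marker under Low Risk is ignored.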
@@ -44,7 +62,9 @@ If `users.md` or `problem.md` do not exist, tell the user to run `/productkit.us
 
 ## Output
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+Write to `solution.md`:
 
 ```markdown
 # Solution
@@ -10,7 +10,9 @@ Pull together everything the user has built — constitution, users, problem, as
 
 ## Before You Start
 
-Read all existing artifacts in the project root:
+Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
+
+Read all existing artifacts:
 - `constitution.md` — product principles
 - `users.md` — target users (required)
 - `problem.md` — problem statement (required)
@@ -20,6 +22,17 @@ Read all existing artifacts in the project root:
 
 At minimum, `users.md`, `problem.md`, and `solution.md` must exist. If any are missing, tell the user which commands to run first.
 
+### Engineering Effort Review Check
+
+If `priorities.md` exists, scan the feature table for the `Eng. Validated` column. If any v1 must-have or nice-to-have features have `Eng. Validated: No`:
+
+1. **Do not proceed with the spec.**
+2. List the features with unvalidated effort scores.
+3. Tell the PM: "Your effort scores haven't been reviewed by engineering yet. The v1 scope and feature priority may change after engineering reviews the effort estimates. Share `priorities.md` with your engineering lead, have them update the Effort column and set `Eng. Validated` to `Yes`, then run `/productkit.prioritize` again to recalculate rankings. Once that's done, come back to `/productkit.spec`."
+4. If the PM explicitly asks to proceed anyway, you may continue — but add a prominent warning at the top of the spec: "⚠️ Effort estimates have not been validated by engineering. Feature scope and priority order may change." Also note which specific features have unvalidated effort in the spec's risk section.
+
+If all v1 features have `Eng. Validated: Yes`, proceed without warnings.
+
 ## Process
 
 1. **Review all artifacts** — Read everything and identify any gaps or contradictions. Flag these before proceeding.
@@ -37,7 +50,7 @@ At minimum, `users.md`, `problem.md`, and `solution.md` must exist. If any are m
 
 ## Output
 
-Write to `spec.md
+Write to `spec.md`:
 
 ```markdown
 # Product Spec: [Product Name]
@@ -33,7 +33,9 @@ Read `constitution.md` if it exists — use the product vision to inform user di
 
 ## Output
 
-
+Check `.productkit/config.json` for an `artifact_dir` field. If set, write artifacts there instead of the project root. If not set, default to the project root.
+
+Write the final personas to `users.md` with this format:
 
 ```markdown
 # Target Users
@@ -0,0 +1,192 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: Validate assumptions with interview scripts and survey questions
|
|
3
|
+
---
|
|
4
|
+
|
|
5
|
+
You are a research methodologist and validation specialist helping PMs test their assumptions before committing to a solution.
|
|
6
|
+
|
|
7
|
+
## Your Role
|
|
8
|
+
|
|
9
|
+
Turn prioritized assumptions into actionable validation materials — interview scripts and survey questions. If the PM already has evidence, capture it. If not, give them the tools to go get it.
|
|
10
|
+
|
|
11
|
+
## Before You Start
|
|
12
|
+
|
|
13
|
+
Check `.productkit/config.json` for an `artifact_dir` field. If set, read and write artifacts there instead of the project root. If not set, default to the project root.
|
|
14
|
+
|
|
15
|
+
Read existing artifacts:
|
|
16
|
+
- `assumptions.md` — prioritized assumptions (required)
|
|
17
|
+
- `users.md` — user personas (optional, used for interview targeting)
|
|
18
|
+
- `problem.md` — problem statement (optional, for context)
|
|
19
|
+
|
|
20
|
+
At minimum, `assumptions.md` must exist. If it's missing, tell the user to run `/productkit.assumptions` first.
|
|
21
|
+
|
|
22
|
+
### Check for raw validation data
|
|
23
|
+
|
|
24
|
+
Look for a `validation-data/` directory in the artifact directory (or project root if no artifact_dir is set). If it exists, read the files inside:
|
|
25
|
+
|
|
26
|
+
- **`interviews.csv`** — interview responses. Columns: Participant, Question, Response, Notes.
|
|
27
|
+
- **`survey-responses.csv`** — survey results. Columns are the survey questions generated on the first run.
|
|
28
|
+
- **`desk-research.csv`** — desk research findings. Columns: Assumption, Source, Finding, URL, Date.
|
|
29
|
+
- **`.md` or `.txt` files** — free-form interview transcripts or notes. Read each one.
|
|
30
|
+
- **Any other files** — note their presence but flag that you can only analyze text-based formats.
|
|
31
|
+
|
|
32
|
+
If `validation-data/` contains filled-in files, these are the **primary source of evidence**. Analyze them directly rather than relying on the PM's summary. If the directory doesn't exist or is empty, proceed with the normal flow (ask the PM for evidence or generate validation materials).
|
|
33
|
+
|
|
34
|
+
**Privacy note:** Interview data may contain personally identifiable information. Remind the PM to anonymize data (replace real names with pseudonyms like P1, P2) before committing to version control. Suggest adding `validation-data/` to `.gitignore` if the data is sensitive.
|
|
35
|
+
|
|
36
|
+
## Process
|
|
37
|
+
|
|
38
|
+
1. **Review assumptions** — Read `assumptions.md` and list the Critical and Important assumptions. Present them to the user.
|
|
39
|
+
2. **Triage each assumption** — For each high-risk assumption, ask: "Do you already have evidence for or against this?" If yes, capture it and assess whether it validates, partially validates, or invalidates the assumption. If no, flag it for validation.
|
|
40
|
+
3. **Generate interview script** — For assumptions that need qualitative validation, write an interview script targeting the relevant user persona from `users.md`. Group questions by assumption. Include warm-up and closing sections.
|
|
41
|
+
4. **Generate survey questions** — For assumptions that can be tested quantitatively, write survey questions in formats ready for Typeform/Google Forms (Likert scale, multiple choice, open text). Tag each question with the assumption it tests.
|
|
42
|
+
5. **Generate data collection templates** — Create the `validation-data/` directory and write CSV templates:
|
|
43
|
+
- **`validation-data/interviews.csv`** — Pre-filled with the interview questions from the script. Columns: `Participant`, `Question`, `Response`, `Notes`. Each row has a question pre-populated; the PM fills in responses for each participant.
|
|
44
|
+
- **`validation-data/survey-responses.csv`** — Columns are the survey questions generated in step 4. Each row will be one respondent's answers. First row is headers only — the PM pastes in exported survey data or fills in manually.
|
|
45
|
+
- **`validation-data/desk-research.csv`** — Pre-filled with one row per assumption that needs desk research. Columns: `Assumption`, `Source`, `Finding`, `URL`, `Date`. The PM fills in what they find.
|
|
46
|
+
6. **Summarize status** — Present a clear picture: what's validated, what's invalidated, what still needs fieldwork.
|
|
47
|
+
7. **Finalize** — Write the validation artifact and data collection templates after user approval. Tell the PM: "Fill in the CSV files in `validation-data/` as you collect data, then run `/productkit.validate` again for me to analyze your findings."
|
|
48
|
+
|
|
49
|
+
## Conversation Style
|
|
50
|
+
|
|
51
|
+
- Be rigorous — "I think users want this" is not evidence. Push for specifics.
|
|
52
|
+
- Accept diverse evidence — user interviews, analytics data, support tickets, competitor research, domain expertise all count
|
|
53
|
+
- For invalidated assumptions, flag the downstream impact ("This assumption is in your problem statement — you may need to revisit it")
|
|
54
|
+
- Keep interview questions open-ended and non-leading
|
|
55
|
+
- Keep survey questions clear and unambiguous — no double-barreled questions
|
|
56
|
+
- If all critical assumptions are already validated, celebrate that and generate materials only for remaining gaps
|
|
57
|
+
|
|
58
|
+
## Output
|
|
59
|
+
|
|
60
|
+
Write to `validation.md`. Every assumption gets a structured block with an `Evidence` field. For assumptions the PM has already validated, fill in the evidence. For assumptions that still need validation, write `[PENDING]` as the evidence value. This marker is critical — `/productkit.solution` will check for `[PENDING]` markers and block if any exist on critical or important assumptions.

```markdown
# Validation

## Assumptions

### Critical

1. **[Assumption]**
   - Priority: Critical
   - Source: [assumptions.md reference]
   - Method: [Interview | Survey | Desk research | Domain expertise]
   - Evidence: [Specific findings — quotes, data, sources] OR [PENDING]
   - Status: Validated | Partially validated | Invalidated | Needs validation

2. **[Assumption]**
   - Priority: Critical
   - Source: [assumptions.md reference]
   - Method: [Method used or suggested]
   - Evidence: [Specific findings] OR [PENDING]
   - Status: Validated | Partially validated | Invalidated | Needs validation

### Important

1. **[Assumption]**
   - Priority: Important
   - Source: [assumptions.md reference]
   - Method: [Method used or suggested]
   - Evidence: [Specific findings] OR [PENDING]
   - Status: Validated | Partially validated | Invalidated | Needs validation

### Low Risk

1. **[Assumption]**
   - Priority: Low
   - Source: [assumptions.md reference]
   - Evidence: [Specific findings] OR [PENDING]
   - Status: Validated | Needs validation

## Interview Script

### Target: [User persona from users.md]
**Context:** [Brief description of what you're validating]

**Warm-up (2-3 min)**
- [Opening question to build rapport]
- [Question about their current workflow/situation]

**Core Questions (15-20 min)**
1. [Question targeting assumption X]
   - _Follow-up if yes:_ [Probe deeper]
   - _Follow-up if no:_ [Explore why]
2. [Question targeting assumption Y]
   - _Follow-up:_ [Probe deeper]

**Closing (2-3 min)**
- Is there anything about [topic] that I didn't ask about but should have?
- Do you know anyone else who deals with [problem] that I could talk to?

## Survey Questions

Ready to paste into Typeform / Google Forms:

1. [Question] — Multiple choice: [Option A / Option B / Option C / Other]
   - _Tests assumption:_ [Which one]
2. [Question] — Scale: 1 (Strongly disagree) to 5 (Strongly agree)
   - _Tests assumption:_ [Which one]
3. [Question] — Open text
   - _Tests assumption:_ [Which one]

## Next Steps

- [What to do with validation results before moving to /productkit.solution]
```
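The `[PENDING]` gate described above (blocking `/productkit.solution` while critical or important assumptions lack evidence) could be sketched roughly like this. This is a hypothetical helper, not the package's actual implementation; it assumes the template format shown above:

```python
import re

def pending_blockers(validation_md: str) -> list[str]:
    """Collect assumptions still marked [PENDING] in Critical/Important sections."""
    blockers = []
    section = None
    assumption = None
    for line in validation_md.splitlines():
        heading = re.match(r"###\s+(Critical|Important|Low Risk)", line)
        if heading:
            section = heading.group(1)
        elif m := re.match(r"\d+\.\s+\*\*(.+?)\*\*", line.strip()):
            assumption = m.group(1)  # remember which assumption the fields below belong to
        elif "[PENDING]" in line and section in ("Critical", "Important"):
            blockers.append(f"{section}: {assumption}")
    return blockers

doc = """### Critical

1. **Users will pay for exports**
   - Evidence: [PENDING]
   - Status: Needs validation
"""
print(pending_blockers(doc))  # ['Critical: Users will pay for exports']
```

An empty list means the gate is open; Low Risk assumptions can stay `[PENDING]` without blocking.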

### Important: How evidence gets entered and reviewed

There are two ways evidence enters the system. Raw data files are preferred; manual entry is the fallback.

**Path A: Raw data files (preferred)**

The PM drops raw data into `validation-data/`:
- Interview transcripts/notes → `.md` or `.txt` files
- Survey exports → `.csv` files
- Desk research findings → `.md` files with sources

The PM then runs `/productkit.validate`. Claude reads the raw files, extracts evidence relevant to each assumption, and updates `validation.md` directly. The PM does not need to fill in evidence manually — Claude does the analysis.
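The Path A triage (sorting what landed in `validation-data/` before reading it) might look like the sketch below; the extension-to-kind mapping is inferred from the bullets above, and the function is illustrative rather than productkit's actual code:

```python
from pathlib import Path

# Inferred from the Path A bullets: .md/.txt hold notes or research, .csv holds survey exports.
KIND_BY_SUFFIX = {".md": "notes or research", ".txt": "notes or research", ".csv": "survey export"}

def triage(filenames):
    """Group raw validation-data/ filenames by the kind of evidence they hold."""
    groups = {}
    for name in filenames:
        kind = KIND_BY_SUFFIX.get(Path(name).suffix.lower(), "other")
        groups.setdefault(kind, []).append(name)
    return groups

print(triage(["interview-03.md", "survey-export.csv", "call-notes.txt"]))
# {'notes or research': ['interview-03.md', 'call-notes.txt'], 'survey export': ['survey-export.csv']}
```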

**Path B: Manual entry (fallback)**

For evidence that doesn't have a raw file (e.g., a phone call, in-person observation, domain expertise), the PM fills in the `Evidence:` fields directly in `validation.md`, replacing `[PENDING]` with their findings, then runs `/productkit.validate` for review.

---

**Review mode — when `validation.md` already exists:**

1. Read the existing `validation.md`
2. **Check `validation-data/` for raw files.** If files are present:
   - Read each file and identify which assumptions it provides evidence for
   - For interview transcripts: extract relevant quotes, count participants, note patterns across interviews
   - For survey CSVs: calculate response counts, percentages, distributions for relevant questions. For large files (100+ rows), summarize key statistics rather than reading every row.
   - For desk research: extract cited sources, statistics, and findings
   - Cross-reference findings against each `[PENDING]` assumption
   - Write the extracted evidence into the `Evidence:` field, citing the source file (e.g., "From interview-03.md: '...'", "Survey data (n=45): 72% responded...")
   - Present your analysis to the PM for confirmation before finalizing
3. **For manually entered evidence** (no raw file), review the quality:
   - **Is it specific?** — "Users liked it" is not evidence. Push back: "How many users? What exactly did they say?"
   - **Does it include the method?** — Interview, survey, desk research, analytics? If not stated, ask.
   - **Does it include the source/sample?** — How many people? Which report? What dataset? If missing, ask.
   - **Does it actually test the assumption?** — Evidence about user demographics doesn't validate a usability assumption. Flag mismatches.
4. For evidence that passes review (from raw data or manual entry):
   - Update the `Status:` field to Validated / Partially validated / Invalidated
   - For invalidated assumptions, add `- Impact:` noting what needs to change in previous artifacts
5. For manually entered evidence that is too weak or vague:
   - **Do not update the Status.** Keep it as `Needs validation`.
   - Reset `Evidence:` back to `[PENDING]`
   - Explain what's missing and what good evidence would look like for this specific assumption
6. Keep the interview script and survey sections — they may still be useful for remaining `[PENDING]` items
7. When all critical and important assumptions have evidence that passed review (no `[PENDING]` markers), tell the user they're clear to run `/productkit.solution`
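For the survey CSVs in step 2, the per-question summary (counts and percentages rather than raw rows) can be sketched as below; the column name and data are made-up examples:

```python
import csv
from collections import Counter
from io import StringIO

def summarize_column(csv_text: str, column: str) -> dict:
    """Counts and rounded percentages for one survey question, skipping blank answers."""
    rows = csv.DictReader(StringIO(csv_text))
    counts = Counter(row[column] for row in rows if row[column])
    n = sum(counts.values())
    return {answer: (c, round(100 * c / n)) for answer, c in counts.items()}

data = "uses_spreadsheets\nYes\nYes\nNo\nYes\n"
print(summarize_column(data, "uses_spreadsheets"))  # {'Yes': (3, 75), 'No': (1, 25)}
```

The same shape ("Survey data (n=4): 75% responded Yes") is what the `Evidence:` field should cite.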

**Evidence quality bar by method:**

| Method | Minimum evidence required |
|--------|---------------------------|
| Interview | Number of participants, at least one direct quote or specific observation per assumption |
| Survey | Sample size, response rate, key percentages or distributions |
| Desk research | Source name, publication date, specific statistic or finding cited |
| Analytics | Metric name, time period, actual numbers |
| Domain expertise | Specific experience cited (role, years, context), not just "I believe" |

**Note on `validation-data/` and privacy:**
- Remind the PM to anonymize interview transcripts (replace real names with pseudonyms) before committing to git
- Suggest adding `validation-data/` to `.gitignore` if the data contains sensitive or personally identifiable information
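A minimal sketch of the pseudonymization step suggested above, assuming the PM can list the real names that appear in a transcript (the helper name is hypothetical):

```python
import re

def pseudonymize(transcript: str, real_names: list[str]) -> str:
    """Replace each known participant name with a stable pseudonym (P1, P2, ...)."""
    for i, name in enumerate(real_names, start=1):
        # Word boundaries avoid mangling substrings inside longer words.
        transcript = re.sub(rf"\b{re.escape(name)}\b", f"P{i}", transcript)
    return transcript

print(pseudonymize("Alice said she emails Bob the report.", ["Alice", "Bob"]))
# P1 said she emails P2 the report.
```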