dev-workflow 1.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +59 -0
- package/bin/dev-workflow.js +113 -0
- package/package.json +38 -0
- package/src/init.js +54 -0
- package/src/prompt.js +34 -0
- package/src/utils.js +50 -0
- package/templates/SKILL.md +124 -0
- package/templates/references/phase-1-spec.md +95 -0
- package/templates/references/phase-2-plan.md +120 -0
- package/templates/references/phase-3-implement.md +134 -0
- package/templates/references/phase-4-verify.md +84 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 cfvargas
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,59 @@
+# dev-workflow
+
+CLI tool that installs the **Spec Driven Development (SDD)** workflow skill into [Claude Code](https://docs.anthropic.com/en/docs/claude-code) projects.
+
+SDD is a structured development cycle: define **what** you want before writing code, then implement from specifications. Each phase produces a persistent artifact that the next session consumes.
+
+```
+SPEC → PLAN → IMPLEMENT → VERIFY
+```
+
+## Install
+
+```bash
+npm i -g dev-workflow
+```
+
+Requires Node.js >= 18.
+
+## Usage
+
+### Initialize
+
+Install the SDD skill into your Claude Code configuration:
+
+```bash
+dev-workflow init          # global (default)
+dev-workflow init --local  # current project only
+```
+
+### Update
+
+Update the skill files to the latest version:
+
+```bash
+dev-workflow update
+```
+
+### Status
+
+Check where the skill is installed:
+
+```bash
+dev-workflow status
+```
+
+## How it works
+
+Once installed, the `/dev-workflow` slash command becomes available in Claude Code. It guides you through four phases:
+
+1. **SPEC** — Define what the feature does (functional requirements, edge cases, acceptance criteria)
+2. **PLAN** — Translate the spec into ordered tasks with file maps and architecture decisions
+3. **IMPLEMENT** — Build each task iteratively: RED → GREEN → REFACTOR, with user review between tasks
+4. **VERIFY** — Lint, type-check, commit, and open a PR
+
+Each phase can run in a separate session — the artifacts travel, not the context.
+
+## License
+
+MIT
package/bin/dev-workflow.js
ADDED
@@ -0,0 +1,113 @@
+#!/usr/bin/env node
+
+import { readFile } from "node:fs/promises";
+import { fileURLToPath } from "node:url";
+import path from "node:path";
+import { init, update } from "../src/init.js";
+import { getInstallations, resolveGlobalDir } from "../src/utils.js";
+import { askInstallScope, formatWarning } from "../src/prompt.js";
+
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+
+const args = process.argv.slice(2);
+const command = args[0];
+const flags = new Set(args.slice(1));
+const isLocal = flags.has("--local");
+
+if (command === "--version" || command === "-v") {
+  const pkg = JSON.parse(
+    await readFile(path.resolve(__dirname, "../package.json"), "utf-8")
+  );
+  console.log(pkg.version);
+} else if (command === "--help" || command === "-h" || !command) {
+  console.log(`Usage: dev-workflow <command> [options]
+
+Commands:
+  init      Install the SDD workflow skill (global by default)
+  update    Update the skill files to the latest version
+  status    Show where the skill is installed
+
+Options:
+  --local    Install/update in the current project instead of globally
+  --version  Show version number
+  --help     Show this help message`);
+} else if (command === "init") {
+  await handleInit();
+} else if (command === "update") {
+  await handleUpdate();
+} else if (command === "status") {
+  await handleStatus();
+} else {
+  console.error(`Unknown command: ${command}`);
+  console.error('Run "dev-workflow --help" for usage information.');
+  process.exit(1);
+}
+
+async function handleInit() {
+  const cwd = process.cwd();
+  const installations = await getInstallations(cwd);
+  const scope = isLocal ? "local" : "global";
+
+  if (installations.local && installations.global) {
+    console.log(formatWarning());
+    const chosen = await askInstallScope();
+    if (chosen === "both") {
+      await init(cwd, { scope: "local", confirm: true });
+      await init(cwd, { scope: "global", confirm: true });
+    } else {
+      await init(cwd, { scope: chosen, confirm: true });
+    }
+  } else {
+    await init(cwd, { scope });
+    if (scope === "local" && installations.global) {
+      console.log(formatWarning());
+    }
+  }
+}
+
+async function handleUpdate() {
+  const cwd = process.cwd();
+  const installations = await getInstallations(cwd);
+
+  if (!installations.local && !installations.global) {
+    console.error("No installation found. Run `dev-workflow init` first.");
+    process.exit(1);
+  }
+
+  if (installations.local && installations.global) {
+    console.log(formatWarning());
+    const chosen = await askInstallScope();
+    if (chosen === "both") {
+      await update(cwd, { scope: "local" });
+      await update(cwd, { scope: "global" });
+    } else {
+      await update(cwd, { scope: chosen });
+    }
+  } else if (installations.local) {
+    await update(cwd, { scope: "local" });
+  } else {
+    await update(cwd, { scope: "global" });
+  }
+}
+
+async function handleStatus() {
+  const cwd = process.cwd();
+  const installations = await getInstallations(cwd);
+  const globalDir = resolveGlobalDir();
+  const localDir = path.join(cwd, ".claude/skills/dev-workflow");
+
+  if (!installations.local && !installations.global) {
+    console.log("dev-workflow is not installed.");
+    return;
+  }
+
+  if (installations.local && installations.global) {
+    console.log(`Installations found:`);
+    console.log(`  local:  ${localDir} (active — local takes precedence)`);
+    console.log(`  global: ${globalDir}`);
+  } else if (installations.local) {
+    console.log(`Installed locally (active): ${localDir}`);
+  } else {
+    console.log(`Installed globally (active): ${globalDir}`);
+  }
+}
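The CLI's argument handling is plain `process.argv` slicing, with everything after the command collected into a `Set` of flags. A minimal standalone sketch of the same pattern (the argv array is hard-coded here for illustration):

```javascript
// Mirrors the dispatch shape in bin/dev-workflow.js:
// first element is the command, the rest are flags.
const args = ["init", "--local"]; // stands in for process.argv.slice(2)
const command = args[0];
const flags = new Set(args.slice(1));
const isLocal = flags.has("--local");

console.log(command, isLocal); // "init true"
```

Because flags live in a `Set`, their order after the command does not matter and duplicates are harmless.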
package/package.json
ADDED
@@ -0,0 +1,38 @@
+{
+  "name": "dev-workflow",
+  "version": "1.3.0",
+  "description": "CLI tool that installs the Spec Driven Development (SDD) workflow skill into Claude Code projects",
+  "type": "module",
+  "bin": {
+    "dev-workflow": "./bin/dev-workflow.js"
+  },
+  "files": [
+    "bin/",
+    "src/",
+    "templates/",
+    "LICENSE",
+    "README.md"
+  ],
+  "scripts": {
+    "test": "vitest"
+  },
+  "keywords": [
+    "claude-code",
+    "sdd",
+    "spec-driven-development",
+    "workflow",
+    "cli"
+  ],
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/cfvargas/dev-workflow.git"
+  },
+  "author": "cfvargas",
+  "license": "MIT",
+  "engines": {
+    "node": ">=18"
+  },
+  "devDependencies": {
+    "vitest": "^3.1.1"
+  }
+}
package/src/init.js
ADDED
@@ -0,0 +1,54 @@
+import path from "node:path";
+import fs from "node:fs/promises";
+import { copyTemplates, resolveGlobalDir } from "./utils.js";
+
+const SKILL_DIR = ".claude/skills/dev-workflow";
+
+function resolveTargetDir(projectDir, scope) {
+  if (scope === "global") return resolveGlobalDir();
+  return path.join(projectDir, SKILL_DIR);
+}
+
+async function exists(targetDir) {
+  try {
+    await fs.access(path.join(targetDir, "SKILL.md"));
+    return true;
+  } catch {
+    return false;
+  }
+}
+
+export async function init(projectDir, options = {}) {
+  const scope = options.scope || "local";
+  const targetDir = resolveTargetDir(projectDir, scope);
+  const alreadyExists = await exists(targetDir);
+
+  if (!alreadyExists) {
+    await copyTemplates(targetDir);
+    return;
+  }
+
+  if (options.confirm === true) {
+    await copyTemplates(targetDir);
+  }
+}
+
+export async function update(projectDir, options = {}) {
+  const scope = options?.scope;
+
+  if (scope) {
+    const targetDir = resolveTargetDir(projectDir, scope);
+    const alreadyExists = await exists(targetDir);
+    if (!alreadyExists) {
+      throw new Error(
+        "No installation found. Run `dev-workflow init` first."
+      );
+    }
+    await copyTemplates(targetDir);
+    return;
+  }
+
+  // Legacy behavior: update local
+  const targetDir = path.join(projectDir, SKILL_DIR);
+  await copyTemplates(targetDir);
+}
package/src/prompt.js
ADDED
@@ -0,0 +1,34 @@
+import readline from "node:readline";
+
+const OPTIONS = {
+  1: "local",
+  2: "global",
+  3: "both",
+};
+
+export function askInstallScope({ input, output } = {}) {
+  return new Promise((resolve) => {
+    const rl = readline.createInterface({
+      input: input || process.stdin,
+      output: output || process.stdout,
+    });
+
+    const prompt = [
+      "Both local and global installations found.",
+      "  1) local",
+      "  2) global (default)",
+      "  3) both",
+      "Choose [1/2/3]: ",
+    ].join("\n");
+
+    rl.question(prompt, (answer) => {
+      rl.close();
+      const trimmed = answer.trim();
+      resolve(OPTIONS[trimmed] || "global");
+    });
+  });
+}
+
+export function formatWarning() {
+  return "Warning: local installation takes precedence over global.";
+}
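`askInstallScope` accepts optional `input`/`output` streams, which makes the prompt testable without a TTY. A standalone sketch of that pattern, re-implementing the same answer mapping inline rather than importing the package:

```javascript
import readline from "node:readline";
import { PassThrough } from "node:stream";

// Same answer → scope mapping as src/prompt.js.
const OPTIONS = { 1: "local", 2: "global", 3: "both" };

function askInstallScope({ input, output } = {}) {
  return new Promise((resolve) => {
    const rl = readline.createInterface({
      input: input || process.stdin,
      output: output || process.stdout,
    });
    rl.question("Choose [1/2/3]: ", (answer) => {
      rl.close();
      // Any unrecognized answer falls back to "global".
      resolve(OPTIONS[answer.trim()] || "global");
    });
  });
}

// Drive the prompt with in-memory streams instead of a terminal.
const input = new PassThrough();
const pending = askInstallScope({ input, output: new PassThrough() });
input.write("3\n"); // simulates the user typing "3" and pressing Enter
const scope = await pending;
console.log(scope); // "both"
```

Writing to the `PassThrough` stands in for a user typing at the prompt; this is the same stream-injection hook the package's vitest suite can use.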
package/src/utils.js
ADDED
@@ -0,0 +1,50 @@
+import fs from "node:fs/promises";
+import path from "node:path";
+import os from "node:os";
+import { fileURLToPath } from "node:url";
+
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+const TEMPLATES_DIR = path.resolve(__dirname, "../templates");
+const SKILL_DIR = ".claude/skills/dev-workflow";
+
+const TEMPLATE_FILES = [
+  "SKILL.md",
+  "references/phase-1-spec.md",
+  "references/phase-2-plan.md",
+  "references/phase-3-implement.md",
+  "references/phase-4-verify.md",
+];
+
+export async function copyTemplates(targetDir) {
+  for (const file of TEMPLATE_FILES) {
+    const src = path.join(TEMPLATES_DIR, file);
+    const dest = path.join(targetDir, file);
+    await fs.mkdir(path.dirname(dest), { recursive: true });
+    await fs.copyFile(src, dest);
+  }
+}
+
+export function resolveGlobalDir() {
+  return path.join(os.homedir(), SKILL_DIR);
+}
+
+export async function getInstallations(projectDir) {
+  const localPath = path.join(projectDir, SKILL_DIR, "SKILL.md");
+  const globalPath = path.join(resolveGlobalDir(), "SKILL.md");
+
+  const [local, global] = await Promise.all([
+    fs.access(localPath).then(() => true, () => false),
+    fs.access(globalPath).then(() => true, () => false),
+  ]);
+
+  return { local, global };
+}
+
+export async function skillExists(targetDir) {
+  try {
+    await fs.access(path.join(targetDir, SKILL_DIR, "SKILL.md"));
+    return true;
+  } catch {
+    return false;
+  }
+}
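The `fs.access(...).then(() => true, () => false)` probe in `getInstallations` is the key idiom: it converts a rejected promise into a boolean instead of a thrown error, so both checks can run in parallel under `Promise.all`. A standalone sketch of the same logic; note the `homeDir` parameter is an addition for illustration only (the package calls `os.homedir()` directly):

```javascript
import fs from "node:fs/promises";
import path from "node:path";
import os from "node:os";

// Probe a path without throwing: resolve true/false instead of rejecting.
const probe = (p) => fs.access(p).then(() => true, () => false);

async function getInstallations(projectDir, homeDir = os.homedir()) {
  const rel = ".claude/skills/dev-workflow/SKILL.md";
  const [local, global] = await Promise.all([
    probe(path.join(projectDir, rel)),
    probe(path.join(homeDir, rel)),
  ]);
  return { local, global };
}

// With directories that don't exist, both probes resolve false.
const result = await getInstallations("/nonexistent-project", "/nonexistent-home");
console.log(result); // { local: false, global: false }
```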
package/templates/SKILL.md
ADDED
@@ -0,0 +1,124 @@
+---
+name: dev-workflow
+description: >
+  Use when the user explicitly asks to run the Spec Driven Development workflow (SDD) / structured dev-workflow,
+  e.g. they say "dev-workflow", "SDD", "workflow", or use /dev-workflow. Do NOT use for
+  generic feature requests like "build X" unless they explicitly requested the workflow.
+license: MIT
+compatibility: Any project with tests, git, and CLAUDE.md
+metadata:
+  version: "1.0"
+  tags: workflow, sdd, tdd, development, planning, testing, spec-driven
+---
+
+# Spec Driven Development Workflow
+
+You are orchestrating a structured development cycle based on Spec Driven Development (SDD). The core idea: define WHAT you want before writing code, then implement from structured specifications. Each phase produces a persistent artifact that the next session consumes — the files travel, not the context.
+
+## Project Detection
+
+Before starting, read the project's `CLAUDE.md` (or `AGENTS.md`) to determine:
+
+| Setting | Examples | Default |
+|---------|----------|---------|
+| Base branch | `develop`, `main`, `master` | `main` |
+| Test command | `npm test`, `pytest`, `cargo test` | `npm test` |
+| Lint command | `npm run lint`, `ruff check` | `npm run lint` |
+| Type check | `npm run typescript`, `mypy .` | Skip if N/A |
+| Test runner (watch) | `vitest`, `jest --watch` | `vitest` |
+| Commit format | conventional, project-specific | conventional |
+| Versioning | semver, calver, none | none |
+| Milestones | per-version, per-sprint, none | none |
+| Releases | GitHub releases, tags only, none | none |
+| Project skills | `.claude/skills/` entries | — |
+
+## Complexity Triage
+
+Not every task needs the full 4-phase ceremony. Classify the task before starting:
+
+**Simple** — bugfix, config change, small tweak, single-file change:
+→ Skip Phase 1. Flow: `PLAN → IMPLEMENT → VERIFY`
+→ Phase 2 (PLAN) still creates the feature branch — no code touches the base branch directly.
+
+**Standard** — new feature, multi-file change, domain logic, anything where "obvious" business rules aren't obvious:
+→ Full flow: `SPEC → PLAN → IMPLEMENT → VERIFY`
+
+When in doubt, go Standard. The cost of a spec you didn't need is low. The cost of ambiguity in implementation is high.
+
+## Naming Convention
+
+Phase 1 defines a `<feature-name>` (e.g., `add-ssl-filters`, `fix-pagination`). This name is used consistently:
+
+- **Directory:** `docs/workflow/<feature-name>/`
+- **Branch:** `feature/<feature-name>`
+
+Phase 1 creates the directory. Phase 2 creates the git branch matching it.
+
+## Workflow Overview
+
+```
+Phase 1: SPEC      → docs/workflow/<feature-name>/SPEC.md  → User reviews
+Phase 2: PLAN      → docs/workflow/<feature-name>/PLAN.md  → User reviews + branch
+Phase 3: IMPLEMENT → Per-task loop: RED → GREEN → REFACTOR → User reviews each task
+Phase 4: VERIFY    → Lint + Types + Commit/PR              → Done
+```
+
+Phase 3 is iterative: it cycles through each task in the plan one at a time. For each task, it writes failing tests (RED), implements until they pass (GREEN), refactors, and then presents the results for user review before moving to the next task. This prevents large code dumps and gives the user control at every step.
+
+Each phase can run in a **separate session** with fresh context. The artifact from the previous phase is the only input needed.
+
+## Phase Detection
+
+When starting a session, detect the current phase by checking state in `docs/workflow/<feature-name>/`:
+
+1. **No workflow directory** or directory exists but no `SPEC.md` and no `PLAN.md` → **Start Phase 1** (standard) or **Phase 2** (simple — user confirms)
+2. **`SPEC.md` exists, no `PLAN.md`** → **Start Phase 2**
+3. **`PLAN.md` exists, not all tasks are implemented** → **Start Phase 3** (check which tasks still need tests/implementation)
+4. **All tasks implemented and tests pass** → **Start Phase 4**
+
+**Simple flow without SPEC:** If `PLAN.md` exists and its header shows `Complexity: simple` with no `SPEC.md`, this is expected — skip Phase 1 detection.
+
+**Phase 3 resume:** Phase 3 is iterative — it may span multiple sessions. When resuming, check which tasks from PLAN.md already have passing tests and implementation. Pick up from the next incomplete task.
+
+**Ambiguous states:**
+- Test files exist but have syntax/import errors → still Phase 3 (fix the tests for that task)
+- Test files exist and pass but no implementation code → the tests may be wrong, investigate before advancing
+- Multiple workflow directories exist → ask the user which feature to continue
+
+## Phase Instructions
+
+Each phase has detailed instructions in a reference file. Read ONLY the reference for the current phase — loading all phases wastes context.
+
+| Phase | Reference | Input | Output |
+|-------|-----------|-------|--------|
+| 1. SPEC | `references/phase-1-spec.md` | User request | `SPEC.md` + directory |
+| 2. PLAN | `references/phase-2-plan.md` | `SPEC.md` | `PLAN.md` + git branch |
+| 3. IMPLEMENT | `references/phase-3-implement.md` | `PLAN.md` + `SPEC.md` | Per-task: tests + code + refactor, user-reviewed |
+| 4. VERIFY | `references/phase-4-verify.md` | All tasks passing | Commit/PR |
+
+## Rules
+
+- **Artifacts are the source of truth.** Every decision lives in SPEC.md or PLAN.md, not in conversation context.
+- **Branch before code.** Every task — simple or standard — must have a `feature/<name>` branch created before any code is written. Never commit directly to the base branch.
+- **Tests before code.** Always. Within each task, write failing tests before implementation.
+- **User reviews each task.** In Phase 3, the user reviews after each task's RED → GREEN → REFACTOR cycle — not just at the end of the phase.
+- **Read CLAUDE.md first.** Every project has different commands, conventions, and skills.
+- **One phase per session.** Keep context clean. The user can continue in the same session if they want, but the default is one phase per session.
+
+## Abort Protocol
+
+If the user wants to abandon a workflow at any point:
+
+1. **Ask for confirmation** — "Are you sure? This will clean up the branch and workflow artifacts."
+2. **Clean up the branch** (if created):
+   ```bash
+   git checkout <base-branch>
+   git branch -D feature/<feature-name>
+   ```
+3. **Remove workflow artifacts:**
+   ```bash
+   rm -rf docs/workflow/<feature-name>
+   ```
+4. **Confirm cleanup** — list what was removed so the user knows the state is clean.
+
+If the user wants to **pause** (not abort), just stop. The artifacts persist and Phase Detection will pick up where they left off in a future session.
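The four Phase Detection rules in SKILL.md reduce to a small decision function. This `detectPhase` helper is hypothetical, not part of the package; it only illustrates the ordering of the checks, with the filesystem state collapsed into booleans:

```javascript
// Hypothetical sketch of SKILL.md's Phase Detection rules.
// Inputs stand in for checks against docs/workflow/<feature-name>/.
function detectPhase({ hasSpec, hasPlan, allTasksDone }) {
  if (!hasSpec && !hasPlan) return 1; // no artifacts yet → SPEC (or PLAN for simple tasks)
  if (hasSpec && !hasPlan) return 2;  // approved spec, no plan → PLAN
  if (hasPlan && !allTasksDone) return 3; // plan exists, tasks remain → IMPLEMENT
  return 4; // all tasks implemented and passing → VERIFY
}

const phase = detectPhase({ hasSpec: true, hasPlan: false, allTasksDone: false });
console.log(phase); // 2
```

The simple flow (a `PLAN.md` with no `SPEC.md`) falls through to the third check, matching the "skip Phase 1 detection" note above.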
package/templates/references/phase-1-spec.md
ADDED
@@ -0,0 +1,95 @@
+# Phase 1: SPEC — What, Not How
+
+**Goal:** Define the feature functionally. Technology-agnostic. No implementation decisions.
+
+**Input:** User's request or ticket description.
+
+**Output:** `docs/workflow/<feature-name>/SPEC.md` + directory created
+
+## Why a Separate Spec?
+
+When functional requirements and technical decisions are mixed in the same document, the agent juggles two concerns simultaneously. Technical decisions create branching paths that compound ambiguity. A pure functional spec gives a clear objective without premature implementation choices. It also defines expected behavior for edge cases before anyone writes a line of code.
+
+## Steps
+
+### 1. Explore the Codebase
+
+Investigate the domain area relevant to the task:
+- Identify what already exists vs what's new
+- Understand existing business rules and entities
+- Check for related types, validations, and patterns
+- Load relevant project-specific skills from `.claude/skills/`
+
+### 2. Eliminate Ambiguity
+
+Ask clarifying questions about things you cannot resolve by reading the code. Group questions together. Focus on the "silent decisions" that agents normally guess wrong:
+
+- Authorization — who can do this?
+- Idempotency — what happens if the action is repeated?
+- Edge cases — empty states, errors, limits, concurrent access
+- Scope — which specific system/component is involved?
+- Existing behavior — does this replace or extend something?
+
+### 3. Define Feature Name and Create Directory
+
+Derive a short, descriptive kebab-case name from the task (e.g., `add-ssl-filters`, `fix-pagination-bug`). This name will be used for both the directory and the git branch in Phase 2.
+
+```bash
+mkdir -p docs/workflow/<feature-name>
+```
+
+### 4. Write the Spec
+
+Save to `docs/workflow/<feature-name>/SPEC.md`:
+
+```markdown
+# Spec: <Feature Name>
+
+## Purpose
+One paragraph: what this feature does and why it exists.
+
+## Use Cases
+- As a [role], I want to [action] so that [benefit]
+
+## Requirements
+- [ ] Requirement 1
+- [ ] Requirement 2
+(functional requirements — not technical decisions)
+
+## Edge Cases
+- What happens when [scenario]?
+- What happens if [error condition]?
+(every edge case you can identify — this is where ambiguity hides)
+
+## Acceptance Criteria
+
+Given [precondition]
+When [action]
+Then [expected result]
+
+Given [precondition]
+When [action]
+Then [expected result]
+
+(one block per behavior — these drive the RED step in Phase 3's per-task loop)
+
+## Out of Scope
+- Things this feature explicitly does NOT do
+```
+
+### 5. Present for Review
+
+Show the spec to the user. The spec is not ready until the user approves it.
+
+Common review questions:
+- Are there edge cases missing?
+- Are the acceptance criteria complete?
+- Is anything listed that should be out of scope (or vice versa)?
+
+## Exit Criteria
+
+- Feature name defined (e.g., `add-ssl-filters`)
+- Directory created at `docs/workflow/<feature-name>/`
+- `SPEC.md` written inside that directory
+- User has reviewed and approved the spec
+- The user can close this session and start Phase 2 in a new one
package/templates/references/phase-2-plan.md
ADDED
@@ -0,0 +1,120 @@
+# Phase 2: PLAN — How to Build It
+
+**Goal:** Translate the spec into a technical implementation plan with ordered, self-contained tasks. Create the feature branch.
+
+**Input:** `docs/workflow/<feature-name>/SPEC.md` (read it first). If Phase 1 was skipped (simple task), gather context from the user's request.
+
+**Output:** `docs/workflow/<feature-name>/PLAN.md` + git branch `feature/<feature-name>` created.
+
+## Steps
+
+### 1. Identify the Feature
+
+Find the existing workflow directory in `docs/workflow/`. The directory name IS the feature name (set in Phase 1).
+
+If Phase 1 was skipped (simple task), define the feature name now and create the directory:
+```bash
+mkdir -p docs/workflow/<feature-name>
+```
+
+### 2. Read the Spec
+
+Read `docs/workflow/<feature-name>/SPEC.md`. Understand every requirement, edge case, and acceptance criterion before making technical decisions.
+
+### 3. Create the Feature Branch
+
+The branch name matches the directory name. Check if the branch already exists first:
+
+```bash
+git checkout <base-branch>
+git pull origin <base-branch>
+
+# Check if branch exists locally
+if git show-ref --verify --quiet refs/heads/feature/<feature-name>; then
+  # Branch exists — switch to it and rebase on latest base
+  git checkout feature/<feature-name>
+  git rebase <base-branch>
+else
+  git checkout -b feature/<feature-name>
+fi
+```
+
+If the branch exists from a previous attempt, inform the user and ask whether to continue from the existing branch or start fresh (`git branch -D` + recreate).
+
+### 4. Research the Codebase
+
+Make the architectural decisions:
+- Which layers does this touch? (domain, application, infrastructure, UI)
+- Which existing patterns to follow? (find similar features and use them as reference)
+- What files need to be created vs modified?
+- What are the dependencies and contracts?
+- Load relevant project-specific skills from `.claude/skills/`
+
+### 5. Write the Plan
+
+Save to `docs/workflow/<feature-name>/PLAN.md`:
+
+```markdown
+# Plan: <Feature Name>
+
+> Spec: [SPEC.md](./SPEC.md)
+> Branch: `feature/<name>`
+> Complexity: simple | standard
+
+## Architecture Decisions
+- Which layers this touches and why
+- Patterns to follow (with file references)
+- Dependencies and contracts
+
+## File Map
+Files that will be created or modified, organized by task.
+
+## Tasks
+
+Tasks are ordered: test tasks FIRST, then implementation, then refactoring.
+Each task is self-contained — the agent executing it should not need to guess or search for missing context.
+
+### Task 1: <Descriptive Name>
+**Type:** test | implementation | refactor
+**Files:**
+- Create: `exact/path/to/file.ts`
+- Modify: `exact/path/to/existing.ts`
+- Test: `tests/exact/path/to/file.test.ts`
+
+**Acceptance Criteria** (from spec):
+- Given X, When Y, Then Z
+
+**Steps:**
+- [ ] Step 1 (one action, 2-5 min max)
+- [ ] Step 2
+- [ ] Step 3
+
+### Task 2: <Descriptive Name>
+...
+
+## Testing Strategy
+- Which acceptance criteria map to which tests
+- Mocking strategy (what to mock, what to hit real)
+- Edge cases from spec that need explicit test coverage
+```
+
+### 6. Review the Plan
+
+Before presenting to the user, self-check:
+- Are there too many tasks? 3 tasks is better than 7 if 3 covers it.
+- Is the agent creating unnecessary abstractions?
+- Do test tasks come before implementation tasks?
+- Does each task have all the context it needs embedded?
+- Are acceptance criteria from the spec mapped to specific tasks?
+
+### 7. Present for Review
+
+Show the plan to the user. The plan is the last checkpoint before code gets written — it's worth spending time making sure it's pragmatic.
+
+## Exit Criteria
+
+- Feature branch `feature/<feature-name>` created from base branch
+- `PLAN.md` is written at `docs/workflow/<feature-name>/PLAN.md`
+- Tasks ordered: tests first, then implementation, then refactoring
+- User has reviewed and approved the plan
+- The user can close this session and start Phase 3 in a new one
@@ -0,0 +1,134 @@
# Phase 3: IMPLEMENT — Iterative RED → GREEN → REFACTOR

**Goal:** For each task in the plan, follow the TDD cycle: write failing tests (RED), implement until they pass (GREEN), refactor, and then review with the user before moving on. This keeps changes small, reviewable, and correctable.

**Input:** `docs/workflow/<feature-name>/PLAN.md` and `docs/workflow/<feature-name>/SPEC.md` (read both first).

**Output:** All tasks implemented with passing tests, reviewed incrementally by the user.

## Precondition

The plan must exist and be approved. If `PLAN.md` doesn't exist, go back to Phase 2.

You must be on the feature branch (`feature/<feature-name>`), NOT on the base branch. If you are on the base branch, go back to Phase 2 to create the branch before writing any code.
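A minimal sketch of this precondition as a shell check. The function name, its arguments, and the feature name used below are illustrative; in a live run the branch argument would come from `git branch --show-current`.

```shell
check_precondition() {
  feature="$1"   # e.g. "user-auth" (hypothetical)
  branch="$2"    # normally "$(git branch --show-current)"
  # The plan must exist and be approved...
  if [ ! -f "docs/workflow/$feature/PLAN.md" ]; then
    echo "PLAN.md missing - go back to Phase 2"
    return 1
  fi
  # ...and we must be on the feature branch, not the base branch.
  case "$branch" in
    feature/*) echo "OK: on $branch" ;;
    *) echo "on base branch ($branch) - go back to Phase 2"; return 1 ;;
  esac
}
```

For example, `check_precondition user-auth main` refuses to proceed, while `check_precondition user-auth feature/user-auth` passes once the plan file exists.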
## The Iteration Loop

Work through tasks **one at a time**, in the order defined in PLAN.md. For each task:

```
┌──────────────────────────────────────────────┐
│ Task N                                       │
│                                              │
│ 1. RED — Write tests (they fail)             │
│ 2. GREEN — Implement (tests pass)            │
│ 3. REFACTOR — Clean up code, review quality  │
│ 4. REVIEW — User checks & gives feedback     │
│      ├─ OK → next task                       │
│      └─ Feedback → adjust, re-verify         │
└──────────────────────────────────────────────┘
```

Do NOT jump ahead to the next task until the user approves the current one. This is the core principle: small iterations with feedback between each one.

## Step 1: RED — Write Failing Tests

Write ONLY the test files for the current task. Do NOT write any implementation code. Do NOT write tests for other tasks.

Each Given/When/Then from the acceptance criteria should map to at least one test case. Test names should describe the expected behavior, not the implementation detail.

After writing each test file, run ONLY the tests you wrote:

```bash
<test-command> <path-to-test-file>
```

Confirm they FAIL as expected. Failures must be because the feature doesn't exist — not syntax errors or broken imports. If a test fails for the wrong reason, fix it before continuing.
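One way to sketch the "fails for the right reason" check. Here `run_tests` is a hard-coded stand-in for the real `<test-command>` invocation, and the error patterns are assumptions to adapt per test framework:

```shell
# Stand-in for the real test run; a genuine RED run fails because the
# feature is missing (assertion errors, "not defined", etc.).
run_tests() { echo "AssertionError: validateToken is not defined"; return 1; }

if output="$(run_tests 2>&1)"; then
  echo "unexpected PASS - RED requires a failing run"
elif echo "$output" | grep -qiE "syntax error|cannot find module"; then
  echo "WRONG reason - fix the test file before continuing"
else
  echo "RED confirmed: failure comes from the missing feature"
fi
```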
### Verify RED

- [ ] Tests exist and are syntactically correct
- [ ] Tests FAIL (not pass, not error)
- [ ] Failures are because the feature doesn't exist
- [ ] Test names clearly describe expected behavior
- [ ] Only the current task's acceptance criteria are covered

**Hard gate:** All of the above must be true before moving to the GREEN step.

## Step 2: GREEN — Implement

Write the MINIMUM implementation code to make the current task's tests pass. Do not add features beyond what the tests require. Do not touch code related to other tasks.

After writing implementation code, run ONLY the current task's tests:

```bash
<test-command> <path-to-test-file>
```

Confirm ALL tests pass. Do NOT run lint or type checks — those run in Phase 4.
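A minimal GREEN gate sketch. Both stand-in functions below are hard-coded in place of the real `<test-command>` runs; GREEN means the current task's tests AND the earlier tasks' tests all exit 0:

```shell
# Stand-ins for the real runs; real versions invoke the test command
# on the current task's test file and on earlier tasks' test files.
run_current_tests()  { echo "task 2 tests: 3 passed"; return 0; }
run_previous_tests() { echo "task 1 tests: 5 passed"; return 0; }

run_current_tests  || { echo "still RED - keep implementing"; exit 1; }
run_previous_tests || { echo "regression in earlier tasks"; exit 1; }
echo "GREEN confirmed: proceed to REFACTOR"
```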
### Verify GREEN

- [ ] All current task's tests pass
- [ ] Implementation follows project patterns
- [ ] No features beyond what tests require
- [ ] Previously passing tests from earlier tasks still pass

## Step 3: REFACTOR & REVIEW

Once the current task's tests pass, clean up the code before presenting it to the user.

### 3a. Refactor

Review the current task's files for:
- Code smells and duplication
- Naming clarity
- Patterns that don't match project conventions
- Unnecessary complexity
- Security concerns

If issues are found, fix them. After each change, re-run the tests to confirm they still pass.

### 3b. Review with User

Present the current task's results:

- **What was done:** Brief summary of the task
- **Tests written:** List of test cases and what they verify
- **Code written:** Files created/modified
- **Refactor notes:** What was cleaned up (if anything)
- **Test results:** Passing confirmation

Then ask:

> "Task N is done — tests pass and code is cleaned up. Any feedback or adjustments before moving to the next task?"

**If the user has feedback:**
1. Apply the changes
2. Re-run the task's tests to confirm everything still passes
3. Present the updated results
4. Ask again if they're satisfied

**If the user approves:**
1. **Commit the task** — stage the test and implementation files for the current task and create a commit using the project's commit format (from CLAUDE.md). The commit message should reference the task (e.g., `feat(auth): add token validation - task 2/5`). Do NOT include workflow artifacts (`docs/workflow/`).
2. Move to the next task and repeat from Step 1.
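The per-task commit can be sketched as below, runnable in a throwaway repo. The file paths, feature name, and commit message are hypothetical; a real run stages the project's actual files and uses the commit format from CLAUDE.md.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
mkdir -p src/auth tests/auth docs/workflow/user-auth
echo "export {}"  > src/auth/token.ts
echo "test stub"  > tests/auth/token.test.ts
echo "plan"       > docs/workflow/user-auth/PLAN.md
# Stage only the task's source and test files, never docs/workflow/
git add src/auth/token.ts tests/auth/token.test.ts
git commit -qm "feat(auth): add token validation - task 2/5"
git log -1 --pretty=%s              # prints the task-scoped subject
git ls-files -- docs/workflow/      # prints nothing: artifacts untracked
```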
## Task Progress Tracking

Keep the user oriented by showing progress at each step:

```
Task 2/5: Add validation logic
✓ Tests written (3 test cases)
✓ Implementation complete
✓ Refactored
→ Waiting for your review

Task 1/5: Create user model ✓ Committed (a1b2c3d)
```

## Exit Criteria

- All tasks from the plan have been implemented
- Each task was reviewed, approved, and committed individually
- All tests pass
- Refactor done per task (not deferred to the end)
- Ready for Phase 4 (VERIFY)
@@ -0,0 +1,84 @@
# Phase 4: VERIFY — Quality Gates + Delivery

**Goal:** Pass all quality checks and deliver the work.

**Input:** Code from Phase 3 with all tests passing.

**Output:** Committed code or pull request.

## Steps

### 1. Run Full Verification Suite

Run all quality checks sequentially:

```bash
<test-command> && <lint-command> && <type-check-command>
```

All must pass. If anything fails, fix it and re-run. Do not proceed with failures.
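In an npm-based TypeScript project (an assumption; substitute the project's real commands) the chain might be `npm test && npm run lint && npx tsc --noEmit`. A runnable fail-fast sketch of the gate, with a stand-in in place of the real commands:

```shell
# run_check is a stand-in; the real version would invoke the project's
# test, lint, and type-check commands and propagate their exit codes.
run_check() { echo "$1: ok"; }

for name in tests lint types; do
  run_check "$name" || { echo "gate failed at $name" >&2; exit 1; }
done
echo "all gates passed"
```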
### 2. Present Summary

Show the user:
- What was implemented (link back to the spec's purpose)
- Files created and modified
- Test coverage added (which acceptance criteria are covered)
- Any decisions made during implementation that deviated from the plan

### 3. Deliver

Ask the user how to proceed:

1. **Commit and create PR** — stage source and test files, commit with conventional format, push and create PR
2. **Review changes first** — let the user inspect the diff before committing
3. **Make adjustments** — address any final feedback

**When committing:**
- NEVER include workflow artifacts (`docs/workflow/`)
- Only stage source code and test files
- Use the commit format from the project's CLAUDE.md

### 4. Release & Milestone

Check the project's `CLAUDE.md` for versioning, milestone, and release conventions. If conventions are defined, follow them. If they are NOT defined (or `CLAUDE.md` doesn't exist), ask the user:

> "Would you like to set up a milestone and release for this PR? (yes/no)"

If the user says no, skip to the next step.

**Version bump:**
- Bump the version in the appropriate file (e.g., `package.json`, `Cargo.toml`, `pyproject.toml`) according to the versioning scheme (semver: PATCH for fixes, MINOR for features, MAJOR for breaking changes).
- If the project has no versioning convention, ask the user what version to use.
- This should already be part of the PR from Phase 3. If not, add it now.
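A pure-shell sketch of a MINOR semver bump; in a JavaScript project something like `npm version minor --no-git-tag-version` would normally handle this. The starting version is hypothetical:

```shell
# Hypothetical current version; a MINOR bump increments the middle
# component and resets PATCH to 0.
version="1.2.3"
IFS=. read -r major minor patch <<EOF
$version
EOF
bumped="$major.$((minor + 1)).0"
echo "$bumped"   # prints: 1.3.0
```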
**Milestone:**
- Check if open milestones exist in the repo.
- If one or more exist, show them to the user and ask: assign this PR to an existing milestone, or create a new one?
- If none exist, create a new milestone matching the version (e.g., `v1.2.0`) and assign the PR.

**After merge:**
- If the milestone has remaining open issues/PRs, do NOT create the release yet — the milestone is still in progress. Inform the user.
- If the milestone is now fully closed (all issues/PRs done), create a GitHub release with the version tag (e.g., `v1.2.0`). Write release notes summarizing everything in the milestone, not just this PR.

### 5. Clean Up Workflow Artifacts

After the code is committed/PR is created, offer to clean up the workflow directory:

```bash
rm -rf docs/workflow/<feature-name>
```

If `docs/workflow/` is now empty, remove it too:

```bash
rmdir docs/workflow 2>/dev/null
```

The user may choose to keep artifacts for reference — ask before deleting. If kept, they should NOT be committed (they're already in `.gitignore` or excluded via the "NEVER include workflow artifacts" rule).

## Exit Criteria

- All quality checks pass (tests, lint, types)
- Code is committed or PR is created
- Workflow artifacts cleaned up (or user chose to keep them)
- Workflow complete