forge-workflow 0.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/commands/dev.md +314 -0
- package/.claude/commands/plan.md +389 -0
- package/.claude/commands/premerge.md +179 -0
- package/.claude/commands/research.md +42 -0
- package/.claude/commands/review.md +442 -0
- package/.claude/commands/rollback.md +721 -0
- package/.claude/commands/ship.md +134 -0
- package/.claude/commands/sonarcloud.md +152 -0
- package/.claude/commands/status.md +77 -0
- package/.claude/commands/validate.md +237 -0
- package/.claude/commands/verify.md +221 -0
- package/.claude/rules/greptile-review-process.md +285 -0
- package/.claude/rules/workflow.md +105 -0
- package/.claude/scripts/greptile-resolve.sh +526 -0
- package/.claude/scripts/load-env.sh +32 -0
- package/.forge/hooks/check-tdd.js +240 -0
- package/.github/PLUGIN_TEMPLATE.json +32 -0
- package/.mcp.json.example +12 -0
- package/AGENTS.md +169 -0
- package/CLAUDE.md +99 -0
- package/LICENSE +21 -0
- package/README.md +414 -0
- package/bin/forge-cmd.js +313 -0
- package/bin/forge-validate.js +303 -0
- package/bin/forge.js +4228 -0
- package/docs/AGENT_INSTALL_PROMPT.md +342 -0
- package/docs/ENHANCED_ONBOARDING.md +602 -0
- package/docs/EXAMPLES.md +482 -0
- package/docs/GREPTILE_SETUP.md +400 -0
- package/docs/MANUAL_REVIEW_GUIDE.md +106 -0
- package/docs/ROADMAP.md +359 -0
- package/docs/SETUP.md +632 -0
- package/docs/TOOLCHAIN.md +849 -0
- package/docs/VALIDATION.md +363 -0
- package/docs/WORKFLOW.md +400 -0
- package/docs/planning/PROGRESS.md +396 -0
- package/docs/plans/.gitkeep +0 -0
- package/docs/plans/2026-02-27-forge-test-suite-v2-decisions.md +21 -0
- package/docs/plans/2026-02-27-forge-test-suite-v2-design.md +362 -0
- package/docs/plans/2026-02-27-forge-test-suite-v2-tasks.md +343 -0
- package/docs/plans/2026-03-02-superpowers-gaps-decisions.md +26 -0
- package/docs/plans/2026-03-02-superpowers-gaps-design.md +239 -0
- package/docs/plans/2026-03-02-superpowers-gaps-tasks.md +260 -0
- package/docs/plans/2026-03-04-agent-command-parity-design.md +163 -0
- package/docs/plans/2026-03-04-verify-worktree-cleanup-decisions.md +7 -0
- package/docs/plans/2026-03-04-verify-worktree-cleanup-design.md +165 -0
- package/docs/plans/2026-03-05-forge-uto-decisions.md +6 -0
- package/docs/plans/2026-03-05-forge-uto-design.md +116 -0
- package/docs/plans/2026-03-05-forge-uto-tasks.md +244 -0
- package/docs/plans/2026-03-10-command-creator-and-eval-decisions.md +52 -0
- package/docs/plans/2026-03-10-command-creator-and-eval-design.md +350 -0
- package/docs/plans/2026-03-10-command-creator-and-eval-tasks.md +426 -0
- package/docs/plans/2026-03-10-stale-workflow-refs-decisions.md +8 -0
- package/docs/plans/2026-03-10-stale-workflow-refs-design.md +80 -0
- package/docs/plans/2026-03-10-stale-workflow-refs-tasks.md +90 -0
- package/docs/plans/2026-03-14-beads-plan-context-decisions.md +9 -0
- package/docs/plans/2026-03-14-beads-plan-context-design.md +171 -0
- package/docs/plans/2026-03-14-beads-plan-context-tasks.md +160 -0
- package/docs/plans/2026-03-14-skill-eval-loop-decisions.md +33 -0
- package/docs/plans/2026-03-14-skill-eval-loop-design.md +118 -0
- package/docs/plans/2026-03-14-skill-eval-loop-results.md +78 -0
- package/docs/plans/2026-03-14-skill-eval-loop-tasks.md +160 -0
- package/docs/plans/2026-03-15-agent-command-parity-v2-decisions.md +11 -0
- package/docs/plans/2026-03-15-agent-command-parity-v2-design.md +145 -0
- package/docs/plans/2026-03-15-agent-command-parity-v2-tasks.md +211 -0
- package/docs/research/TEMPLATE.md +292 -0
- package/docs/research/advanced-testing.md +297 -0
- package/docs/research/agent-permissions.md +167 -0
- package/docs/research/dependency-chain.md +328 -0
- package/docs/research/forge-workflow-v2.md +550 -0
- package/docs/research/plugin-architecture.md +772 -0
- package/docs/research/pr4-cli-automation.md +326 -0
- package/docs/research/premerge-verify-restructure.md +205 -0
- package/docs/research/skills-restructure.md +508 -0
- package/docs/research/sonarcloud-perfection-plan.md +166 -0
- package/docs/research/sonarcloud-quality-gate.md +184 -0
- package/docs/research/superpowers-integration.md +403 -0
- package/docs/research/superpowers.md +319 -0
- package/docs/research/test-environment.md +519 -0
- package/install.sh +1062 -0
- package/lefthook.yml +39 -0
- package/lib/agents/README.md +198 -0
- package/lib/agents/claude.plugin.json +28 -0
- package/lib/agents/cline.plugin.json +22 -0
- package/lib/agents/codex.plugin.json +19 -0
- package/lib/agents/copilot.plugin.json +24 -0
- package/lib/agents/cursor.plugin.json +25 -0
- package/lib/agents/kilocode.plugin.json +22 -0
- package/lib/agents/opencode.plugin.json +20 -0
- package/lib/agents/roo.plugin.json +23 -0
- package/lib/agents-config.js +2112 -0
- package/lib/commands/dev.js +513 -0
- package/lib/commands/plan.js +696 -0
- package/lib/commands/recommend.js +119 -0
- package/lib/commands/ship.js +377 -0
- package/lib/commands/status.js +378 -0
- package/lib/commands/validate.js +602 -0
- package/lib/context-merge.js +359 -0
- package/lib/plugin-catalog.js +360 -0
- package/lib/plugin-manager.js +166 -0
- package/lib/plugin-recommender.js +141 -0
- package/lib/project-discovery.js +491 -0
- package/lib/setup.js +118 -0
- package/lib/workflow-profiles.js +203 -0
- package/package.json +115 -0
@@ -0,0 +1,171 @@
# Design Doc: beads-plan-context

**Feature**: beads-plan-context
**Date**: 2026-03-14
**Status**: Phase 3 complete — ready for /dev
**Branch**: feat/beads-plan-context
**Beads**: forge-bmy (open)

---

## Purpose

When resuming work across sessions, agents must read the Beads issue, separately locate and read the design doc, and still have no visibility into task progress. This feature embeds plan context directly in Beads fields so that `bd show <id>` returns enough context to resume without hunting for files.

**Who benefits**: Any agent (Claude Code, Cursor, Cline, Copilot, etc.) resuming a multi-session feature.

---

## Success Criteria

1. `/plan` Phase 3 auto-runs `scripts/beads-context.sh set-design` after task list creation — populates `--design` with task count + file path
2. `/plan` Phase 3 auto-runs `scripts/beads-context.sh set-acceptance` — populates `--acceptance` with success criteria from design doc
3. `/dev` Step E auto-runs `scripts/beads-context.sh update-progress` after each task completion — appends progress line to `--notes`
4. `/status` calls `scripts/beads-context.sh parse-progress` to show compact progress (e.g., "3/7 tasks done | Last: Validation logic (def5678)") with a hint to run `bd show <id>` for details
5. `scripts/beads-context.sh` exists, is agent-agnostic (plain bash), and handles formatting + error checking
6. `bd update` failure in `/dev` Step E is a HARD-GATE — blocks progression to next task
7. All existing tests pass after changes
8. Command sync (`scripts/sync-commands.js`) still works — no adapter changes needed (body-only modifications)
9. Stage transitions are recorded via `scripts/beads-context.sh stage-transition` using `--comment` at each stage exit — enables agents to determine the current workflow stage on resume

---

## Out of Scope

1. **Modifying Beads itself** — we consume existing `bd update` commands, not change the tool
2. **Changing design doc file format** — `docs/plans/` structure stays the same
3. **Retroactively updating old issues** — pre-existing issues won't have design/notes populated
4. **Modifying `scripts/sync-commands.js` or adapter pipeline** — command body changes sync automatically
5. **Agent-specific adapter files** — only canonical `.claude/commands/` files are edited

---

## Approach Selected: Helper script + inline skill updates

**Why a helper script (`scripts/beads-context.sh`)**:
- Forge supports 8+ agents. Each reads the same command body (via the sync pipeline), but natural language formatting instructions can be interpreted differently by different LLMs.
- A shell script enforces a single, consistent format for Beads field content — any agent just calls the script with structured args.
- Parsing logic (for `/status`) lives in one place — not duplicated across agent configs.
- Error handling (exit code checking for the HARD-GATE) is centralized.

**Why not inline-only (Approach 1)**: Format consistency across agents is not guaranteed when relying on natural language instructions alone.

**Why not a convention doc (Approach 3)**: A convention doc can drift from the implementation. The script IS the convention — self-documenting and self-enforcing.

---

## Constraints

- `bd update --design` and `--append-notes` are existing Beads fields — no new fields needed
- The script must work on Windows (Git Bash), macOS, and Linux
- Content in `--design` should be a summary + file path (not the full task list) to avoid duplication
- Content in `--append-notes` should be medium granularity: task title + test count + commit SHA + decision gate count
- `bd update` failure is a HARD-GATE in `/dev` Step E — blocks next task

---

## Edge Cases

1. **`bd update` fails** (locked DB, invalid ID, disk error): HARD-GATE — stop and surface the error. Do not proceed to next task.
2. **Task title contains special characters** (quotes, newlines): Script must sanitize before passing to `bd update`.
3. **Old issues without design/notes fields**: `/status` shows "No progress data" — no crash, no backfill.
4. **Issue ID not found** (typo, wrong worktree): Script validates the exit code and shows a clear error.
5. **Multiple agents working on same issue**: Each agent's `--append-notes` appends — no conflict (Beads appends are additive).
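Edge case 2 can be handled with a small sanitizer before the value reaches `bd update`. A minimal sketch — the function name `sanitize_title` and the exact character set are illustrative assumptions, not part of the design:

```shell
# Hypothetical sketch: normalize a task title before it is passed to
# `bd update`. Quoting "$title" at the call site is still required;
# this only normalizes the content itself.
sanitize_title() {
  local title=$1
  local sq="'"
  title=${title//$'\n'/ }    # newlines -> spaces
  title=${title//$'\r'/}     # drop carriage returns (Git Bash on Windows)
  title=${title//\"/$sq}     # double quotes -> single quotes
  title=${title//|/-}        # '|' is the field separator in progress lines
  printf '%s' "$title"
}

sanitize_title $'Add "parse" step\nto pipeline | phase 2'
# → Add 'parse' step to pipeline - phase 2
```

The replacement set is deliberately conservative: everything that could break the `Task N/M done: <title> | …` note format or the shell argument boundary is mapped to a harmless character.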
---

## Ambiguity Policy

**(B) Pause and ask.** If a spec gap is found mid-dev, the agent stops and asks the user before proceeding.

---

## Decisions Log

| # | Question | Decision | Rationale |
|---|----------|----------|-----------|
| 1 | What Beads fields to use? | `--design` (plan summary), `--acceptance` (success criteria), `--append-notes` (task progress) | These exist today — no Beads modifications needed |
| 2 | Content limits | Summary + file path in `--design`, not full task list | Avoids duplication, lighter, single source of truth stays in the file |
| 3 | Progress granularity | Medium: title + test count + commit + decision gates | Enough to resume, no duplication of review results |
| 4 | `/status` display | Compact summary + `bd show` hint | Preserves `/status` as fast scan, scales to parallel features |
| 5 | `bd update` failure handling | HARD-GATE — blocks next task | User wants strict enforcement |
| 6 | Ambiguity policy | Pause and ask | User preference for control over speed |
| 7 | Agent-agnostic approach | Helper script (`scripts/beads-context.sh`) | Shell script callable from any agent, enforces consistent format |
| 8 | Stage tracking | `--comment` with standardized format at stage exits | Enables agents to determine current workflow stage on resume |

---

## Script Interface

```bash
# Set design summary in Beads (called from /plan Phase 3)
bash scripts/beads-context.sh set-design <issue-id> <task-count> <task-file-path>
# → bd update <id> --design "N tasks | <task-file-path>"

# Set acceptance criteria (called from /plan Phase 3)
bash scripts/beads-context.sh set-acceptance <issue-id> "<criteria-text>"
# → bd update <id> --acceptance "<criteria-text>"

# Append task progress (called from /dev Step E)
bash scripts/beads-context.sh update-progress <issue-id> <task-num> <total> "<title>" <commit-sha> <test-count> <gate-count>
# → bd update <id> --append-notes "Task N/M done: <title> | <test-count> tests | <commit-sha> | <gate-count> gates"

# Parse progress for /status display
bash scripts/beads-context.sh parse-progress <issue-id>
# → "3/7 tasks done | Last: <title> (<commit-sha>)"

# Record stage transition (called at each stage exit)
bash scripts/beads-context.sh stage-transition <issue-id> <completed-stage> <next-stage>
# → bd update <id> --comment "Stage: <completed-stage> complete → ready for <next-stage>"
```
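The interface above could be implemented as a small dispatcher. A minimal sketch, with `parse-progress` omitted and the `bd` CLI stubbed as a shell function so the sketch is self-contained — the real script would call the installed `bd` binary, and the exact usage messages are assumptions:

```shell
# Stub standing in for the real Beads CLI; the actual script calls `bd` directly.
bd() { printf 'bd %s\n' "$*"; }

# Dispatcher: validate args, format the field value, delegate to `bd update`.
# A non-zero return propagates to the caller for the HARD-GATE check.
beads_context() {
  local cmd=$1; shift
  case "$cmd" in
    set-design)
      [ $# -eq 3 ] || { echo "usage: set-design <issue-id> <task-count> <task-file>" >&2; return 2; }
      bd update "$1" --design "$2 tasks | $3" ;;
    set-acceptance)
      [ $# -eq 2 ] || { echo "usage: set-acceptance <issue-id> <criteria>" >&2; return 2; }
      bd update "$1" --acceptance "$2" ;;
    update-progress)
      [ $# -eq 7 ] || { echo "usage: update-progress <id> <n> <total> <title> <sha> <tests> <gates>" >&2; return 2; }
      bd update "$1" --append-notes "Task $2/$3 done: $4 | $6 tests | $5 | $7 gates" ;;
    stage-transition)
      [ $# -eq 3 ] || { echo "usage: stage-transition <id> <completed> <next>" >&2; return 2; }
      bd update "$1" --comment "Stage: $2 complete → ready for $3" ;;
    *)
      echo "unknown command: $cmd" >&2; return 2 ;;
  esac
}

beads_context update-progress forge-bmy 3 7 "Validation logic" def5678 12 1
# → bd update forge-bmy --append-notes Task 3/7 done: Validation logic | 12 tests | def5678 | 1 gates
```

Because the script only ever forwards to `bd update`, a locked DB or invalid ID surfaces as a non-zero exit code, which is exactly what the `/dev` Step E HARD-GATE checks.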
---

## Technical Research

### Beads Field Verification (2026-03-14)

Verified against Beads v0.49.1:

| Flag | Works? | Behavior | Persists in JSONL? |
|------|--------|----------|-------------------|
| `--design "text"` | Yes | Overwrites | Yes — `DESIGN` section in `bd show` |
| `--acceptance "text"` | Yes | Overwrites | Yes — `ACCEPTANCE CRITERIA` section |
| `--append-notes "text"` | Yes | Appends with `\n` separator | Yes — `NOTES` section |
| `--notes "text"` | Yes | Overwrites all notes | Yes |
| `--design ""` | Yes | Clears the field | Yes |

No character/length limits are documented. Real-world issues have multi-paragraph notes with no truncation.

### DRY Check

The codebase has no existing Beads field population infrastructure:
- No scripts format or parse Beads fields
- No progress tracking via `--append-notes` in any command
- `/status` uses only `bd list` — no field inspection
- This is greenfield work — no duplication risk

### OWASP Top 10 Analysis

| Category | Applies? | Mitigation |
|----------|----------|------------|
| A03: Injection | Yes — task titles passed as shell args to `bd update` | Script quotes all variables, sanitizes special chars |
| A01-A02, A04-A10 | No | No auth, network, data exposure, or crypto |

### TDD Test Scenarios

1. **Happy path — update-progress**: Run with valid args → exit 0, `bd show` contains the formatted line
2. **Error path — invalid ID**: Run with a bad issue ID → exit non-zero, clear error message
3. **Edge case — special characters**: Task title with quotes → properly escaped, no injection
4. **Edge case — parse empty notes**: `parse-progress` when no notes → "No progress data"
5. **Happy path — set-design + set-acceptance**: Both populate, `bd show` displays correctly
6. **Happy path — stage-transition**: Records a comment with the standardized format, `bd show` displays it

### Codebase Integration Points

| File | Current Beads usage | Change needed |
|------|-------------------|---------------|
| `.claude/commands/plan.md` L196-197 | `bd create` + `bd update --status` | Add `beads-context.sh set-design` + `set-acceptance` after task list |
| `.claude/commands/dev.md` L251 | `bd update --comment` at completion | Add `beads-context.sh update-progress` in Step E HARD-GATE |
| `.claude/commands/status.md` L28-30 | `bd list --status in_progress` | Add `beads-context.sh parse-progress` for compact display |
| `scripts/` | No Beads scripts | New `beads-context.sh` |
@@ -0,0 +1,160 @@
# Task List: beads-plan-context

**Design doc**: docs/plans/2026-03-14-beads-plan-context-design.md
**Branch**: feat/beads-plan-context
**Beads**: forge-bmy

---

## Task 1: Create `scripts/beads-context.sh` with all 5 commands

File(s): `scripts/beads-context.sh`

What to implement: A bash script with 5 subcommands: `set-design`, `set-acceptance`, `update-progress`, `parse-progress`, `stage-transition`. Each command validates arguments, calls `bd update` with properly quoted values, checks exit codes, and outputs success/error messages. Must work on Windows (Git Bash), macOS, and Linux.

TDD steps:
1. Write test: `scripts/beads-context.test.js` — test `set-design` with valid args → exit 0, `bd show` output contains the formatted design line
2. Run test: confirm it fails (script doesn't exist yet)
3. Implement: `scripts/beads-context.sh` with all 5 commands, argument validation, quoting, error handling
4. Run test: confirm it passes
5. Commit: `feat: add beads-context.sh helper script`

Expected output: All 5 commands work — `set-design`, `set-acceptance`, `update-progress` write to Beads fields; `parse-progress` outputs a formatted progress string; `stage-transition` writes a standardized comment.

Anchors: Success criteria 5, 6; Edge cases 1-5; Decisions 7, 8

---

## Task 2: Add tests for error paths and edge cases

File(s): `scripts/beads-context.test.js`

What to implement: Additional test cases for: invalid issue ID (exit non-zero), special characters in task title (no injection), `parse-progress` with no notes ("No progress data"), missing required args (usage error).

TDD steps:
1. Write test: error path — invalid issue ID → exit non-zero with a clear error message
2. Run test: confirm it fails (script returns 0 or the wrong message)
3. Implement: add argument validation and error messages to `beads-context.sh`
4. Run test: confirm it passes
5. Commit: `test: add error path and edge case tests for beads-context.sh`

Expected output: Script exits non-zero with clear messages for all error cases. Special characters in titles are properly escaped.

Anchors: TDD scenarios 2-4; Edge cases 1-4; OWASP A03

---

## Task 3: Update `/plan` Phase 3 to call `beads-context.sh`

File(s): `.claude/commands/plan.md`

What to implement: After Step 5 (task list saved) and before Step 6 (user review), add instructions to run:
1. `bash scripts/beads-context.sh set-design <id> <task-count> <task-file-path>`
2. `bash scripts/beads-context.sh set-acceptance <id> "<success-criteria>"`

Also add to the `/plan` exit HARD-GATE:
7. `beads-context.sh set-design` ran successfully (exit code 0)
8. `beads-context.sh set-acceptance` ran successfully (exit code 0)

Add a `stage-transition` call after the exit HARD-GATE:
`bash scripts/beads-context.sh stage-transition <id> plan dev`

TDD steps:
1. Write test: `scripts/beads-context.test.js` — integration test: simulate the `/plan` flow, verify `bd show` has design + acceptance populated
2. Run test: confirm it fails (plan.md doesn't call the script yet)
3. Implement: edit plan.md with the new steps and HARD-GATE additions
4. Run test: confirm it passes
5. Commit: `feat: integrate beads-context.sh into /plan Phase 3`

Expected output: After `/plan` completes, `bd show <id>` displays DESIGN and ACCEPTANCE CRITERIA sections.

Anchors: Success criteria 1, 2, 9

---

## Task 4: Update `/dev` Step E to call `beads-context.sh`

File(s): `.claude/commands/dev.md`

What to implement: In the Step E HARD-GATE (after line 193), add:
7. `bash scripts/beads-context.sh update-progress <id> <task-num> <total> "<title>" <commit-sha> <test-count> <gate-count>` ran successfully (exit code 0)

If it fails: STOP. Show the error. Do not proceed to the next task.
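The HARD-GATE check amounts to a guard in the task loop. An illustrative sketch — the failing updater is stubbed so the sketch is self-contained; in the real flow this is the `beads-context.sh update-progress` call:

```shell
# Stub simulating a failing `beads-context.sh update-progress` call
# (e.g., a locked Beads DB or an invalid issue ID in the real script).
update_progress() { echo "bd update failed: issue not found" >&2; return 1; }

exit_code=0
# HARD-GATE: the next task may start only if the progress update succeeded.
if ! update_progress forge-bmy 4 7 "Status display" abc1234 6 0; then
  echo "HARD-GATE: progress update failed — stopping before next task"
  exit_code=1   # in /dev this stops the loop and surfaces the error
fi
echo "gate exit_code=$exit_code"
# → gate exit_code=1
```

The point of the gate is that a silent `bd update` failure would leave the Beads record out of sync with the commits, which defeats the whole resume-from-`bd show` design.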
Also update the `/dev` exit section (line 251) to replace the existing `bd update --comment` with:
`bash scripts/beads-context.sh stage-transition <id> dev validate`

TDD steps:
1. Write test: integration test — simulate task completion, verify `bd show` has the progress note appended
2. Run test: confirm it fails (dev.md doesn't call the script yet)
3. Implement: edit dev.md with the new HARD-GATE item and stage-transition call
4. Run test: confirm it passes
5. Commit: `feat: integrate beads-context.sh into /dev Step E`

Expected output: After each `/dev` task, the `bd show` notes section has a new progress line. After `/dev` completion, a stage transition comment is recorded.

Anchors: Success criteria 3, 6, 9

---

## Task 5: Update `/status` to show compact progress

File(s): `.claude/commands/status.md`

What to implement: In Step 2 (Check Active Work), after `bd list --status in_progress`, add:
- For each in-progress issue, run `bash scripts/beads-context.sh parse-progress <id>`
- Display the compact output (e.g., "3/7 tasks done | Last: Validation logic (def5678)")
- Add hint: "→ bd show <id> for full context"
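The parsing behind the compact display could look like this. A minimal sketch over the `Task N/M done: …` note format — notes are passed in as text here, whereas the real script would read them from `bd show` output:

```shell
# Build the compact /status line from progress notes. Assumes the
# "Task N/M done: <title> | <tests> tests | <sha> | <gates> gates"
# format written by update-progress.
parse_progress() {
  local notes=$1
  local last
  last=$(printf '%s\n' "$notes" | grep '^Task [0-9]*/[0-9]* done:' | tail -n 1)
  if [ -z "$last" ]; then
    echo "No progress data"
    return 0
  fi
  local frac rest title sha
  frac=${last#Task }; frac=${frac%% done:*}   # e.g. "3/7"
  rest=${last#*done: }                        # "<title> | <tests> tests | <sha> | <gates> gates"
  title=${rest%% | *}                         # first field: the task title
  rest=${rest#* | }; rest=${rest#* | }        # drop title and test-count fields
  sha=${rest%% | *}                           # third field: the commit SHA
  echo "${frac} tasks done | Last: ${title} (${sha})"
}

notes=$'Task 1/7 done: Setup | 4 tests | abc1234 | 0 gates\nTask 3/7 done: Validation logic | 12 tests | def5678 | 1 gates'
parse_progress "$notes"
# → 3/7 tasks done | Last: Validation logic (def5678)
parse_progress ""
# → No progress data
```

Only the last matching note line matters for the summary, so the function stays cheap even for issues with long note histories, and edge case 3 (old issues with empty notes) falls out of the empty-match branch.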
Update the Example Output section to show the new format.

TDD steps:
1. Write test: verify `parse-progress` output format matches the expected compact format
2. Run test: confirm the existing format doesn't match (no progress line yet)
3. Implement: edit status.md with the new instructions
4. Run test: confirm it passes
5. Commit: `feat: integrate beads-context.sh into /status`

Expected output: `/status` shows compact progress for in-progress issues with a `bd show` hint.

Anchors: Success criteria 4; Decision 4

---

## Task 6: Add stage-transition calls to remaining stage exits

File(s): `.claude/commands/validate.md`, `.claude/commands/ship.md`, `.claude/commands/review.md`

What to implement: At each stage's exit HARD-GATE, add a `beads-context.sh stage-transition` call:
- `/validate` exit: `bash scripts/beads-context.sh stage-transition <id> validate ship`
- `/ship` exit: `bash scripts/beads-context.sh stage-transition <id> ship review`
- `/review` exit: `bash scripts/beads-context.sh stage-transition <id> review premerge`

TDD steps:
1. Write test: verify stage-transition produces the correct comment format for each stage pair
2. Run test: confirm it fails (commands don't call the script yet)
3. Implement: edit the 3 command files with stage-transition calls
4. Run test: confirm it passes
5. Commit: `feat: add stage-transition calls to validate, ship, review`

Expected output: After each stage completes, the `bd show` comments section shows the transition.

Anchors: Success criteria 9; Decision 8

---

## Task 7: Verify sync compatibility and run full test suite

File(s): (no new files — verification only)

What to implement: Run `node scripts/sync-commands.js --check` to verify the modified command files still sync correctly. Run `bun test` to verify all existing tests pass. Verify `bd show` output looks correct end-to-end.

TDD steps:
1. Run: `node scripts/sync-commands.js --check` → no drift errors
2. Run: `bun test` → all tests pass
3. Run: manual end-to-end check — create a test issue, run each script command, verify `bd show` output
4. Commit: `test: verify sync compatibility and full test suite`

Expected output: Zero sync drift, zero test failures, clean `bd show` output.

Anchors: Success criteria 7, 8
@@ -0,0 +1,33 @@
# Skill Eval Loop — Decisions Log

**Design doc**: [2026-03-14-skill-eval-loop-design.md](2026-03-14-skill-eval-loop-design.md)
**Beads**: forge-1jx

---

## Decision 1
**Date**: 2026-03-14
**Task**: Tasks 7-9 — Run skill-creator eval loops
**Gap**: Skills in the `skills/` directory are not discoverable by `claude -p`
**Score**: 0/14
**Route**: PROCEED
**Choice made**: Recreated `.claude/skills/` symlinks in the worktree (normally created by `bunx skills sync`, but gitignored so not present in worktrees). Also created a Windows-compatible eval script (`scripts/eval_win.py`), since `run_eval.py` uses `select.select()`, which fails on Windows pipes.
**Status**: RESOLVED

## Decision 2
**Date**: 2026-03-14
**Task**: Tasks 7-9 — Run skill-creator eval loops
**Gap**: 4 Parallel AI skills (web-search, web-extract, deep-research, data-enrichment) compete with Claude's built-in tools (WebSearch, WebFetch). No description change can make them auto-trigger because Claude prefers built-in capabilities.
**Score**: 3/14
**Route**: PROCEED
**Choice made**: Accepted that built-in tool competition is a Claude Code architecture limitation, not a description quality issue. These skills must be invoked explicitly via `/parallel-web-search` in workflows (already the case in /plan and /research). Focused improvement efforts on description clarity and documented the finding. Skipped the iterative improvement loop for these 4 skills since the root cause is not addressable via descriptions.
**Status**: RESOLVED

## Decision 3
**Date**: 2026-03-14
**Task**: Task 10 — Cross-skill regression check
**Gap**: Cannot test cross-skill disambiguation for the Parallel AI skills because they don't auto-trigger
**Score**: 1/14
**Route**: PROCEED
**Choice made**: Cross-skill disambiguation is moot for skills that don't auto-trigger. The true-negative rates are 100% for all 6 skills, confirming no false-positive cross-triggering. Documented this as the cross-skill check result.
**Status**: RESOLVED
@@ -0,0 +1,118 @@
# Skill Eval Loop — Design Doc

| Field | Value |
|-------|-------|
| Feature | skill-eval-loop |
| Date | 2026-03-14 |
| Status | Draft |
| Beads | forge-1jx |

## Purpose

Optimize trigger accuracy for all 6 skills in `skills/` using the installed `skill-creator` plugin. Ensure each skill fires for the right user queries and doesn't fire for the wrong ones — especially important for the 4 Parallel AI skills that share similar domains.

## Success Criteria

1. All 6 skills have `evals.json` files with 10-15 queries each (a mix of should-trigger and should-not-trigger)
2. Cross-skill disambiguation queries included for the 4 Parallel AI skills
3. Baseline trigger rates captured (before)
4. `skill-creator` eval loop run on each skill (up to 5 iterations)
5. Final trigger rates captured, with a before/after comparison
6. Improved descriptions committed back to each skill's SKILL.md

## Out of Scope

- Full quality eval (end-to-end execution with output grading) — requires API keys, expensive, separate effort
- Creating new skills
- Modifying skill logic/implementation beyond the description field
- Changes to the skill-creator plugin itself

## Approach Selected

Use the `skill-creator` skill directly. It handles:
- Trigger accuracy measurement via `run_eval.py` (uses a `claude -p` subprocess with stream event detection)
- Train/test split (60% train / 40% test holdout, stratified by should_trigger)
- Description improvement via Claude with extended thinking
- Iterative loop (up to 5 iterations per skill)
- Benchmark generation via `aggregate_benchmark.py`
- Interactive HTML review UI

### Execution batching (Option C selected)
- **Batch 1**: 4 Parallel AI skills (`web-search`, `deep-research`, `web-extract`, `data-enrichment`) — run 2 at a time, shared cross-skill disambiguation context
- **Batch 2**: `citation-standards` + `sonarcloud-analysis` — run together
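The batching plan can be sketched as a small shell runner. This is illustrative only: `run_loop` here is a stub standing in for invoking the plugin's `run_loop.py` on one skill, whose exact CLI is not shown in this doc:

```shell
# Stub standing in for one skill-creator eval loop; the real runner would
# invoke the plugin's run_loop.py for the named skill instead of echoing.
run_loop() { echo "eval loop done: $1"; }

# Run two eval loops concurrently, then wait for both — honoring the
# "max 2-3 concurrent eval loops" constraint.
run_pair() {
  run_loop "$1" &
  run_loop "$2" &
  wait
}

# Batch 1: the four Parallel AI skills, two at a time
run_pair web-search deep-research
run_pair web-extract data-enrichment
# Batch 2: run together
run_pair citation-standards sonarcloud-analysis
```

Pairing within Batch 1 keeps the related skills close in time so cross-skill disambiguation results can be compared while both descriptions are fresh.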
|
|
43
|
+
|
|
44
|
+
### Eval set design
|
|
45
|
+
- 10-15 queries per skill (moderate coverage)
|
|
46
|
+
- Each includes should-trigger and should-NOT-trigger queries
|
|
47
|
+
- Parallel AI skills include cross-skill disambiguation queries (e.g., "scrape this URL" → should trigger `web-extract`, should NOT trigger `web-search`)
|
|
48
|
+
|
|
49
|
+
## Constraints
|
|
50
|
+
|
|
51
|
+
- `claude -p` subprocess calls: 30s timeout per query, 3 runs per query
|
|
52
|
+
- Cross-skill disambiguation is critical for the 4 Parallel AI skills
|
|
53
|
+
- Resource-aware batching: max 2-3 concurrent eval loops
|
|
54
|
+
|
|
55
|
+
## Edge Cases
|
|
56
|
+
|
|
57
|
+
- **Cross-skill overlap**: A query like "find information about X" could legitimately trigger both `web-search` and `deep-research`. Eval sets must have clear intent boundaries.
|
|
58
|
+
- **Description changes causing regressions**: Improving one skill's trigger accuracy may hurt another's if descriptions become too similar. Monitor cross-skill results.
|
|
59
|
+
- **Already-optimal descriptions**: Some skills may already have high trigger accuracy. The loop will exit early if all train queries pass.
|
|
60
|
+
|
|
61
|
+
## Ambiguity Policy
|
|
62
|
+
|
|
63
|
+
**(B) Pause and ask for input** — especially if a description change hurts one skill's trigger rate while improving another. Cross-skill trade-offs require human judgment.
|
|
64
|
+

## Technical Research

### skill-creator capabilities (verified from plugin source)
- `run_eval.py`: Single eval run with trigger rate measurement
- `run_loop.py`: Full optimization loop with train/test split, max 5 iterations
- `aggregate_benchmark.py`: Benchmark stats (mean, stddev, delta)
- `improve_description.py`: Description improvement via Claude with extended thinking
- `eval-viewer/generate_review.py`: Interactive HTML review UI

### evals.json schema (from plugin references/schemas.md)
```json
{
  "skill_name": "skill-name",
  "evals": [
    {
      "id": "unique-id",
      "prompt": "user query text",
      "should_trigger": true,
      "expected_output": "optional expected output",
      "files": [],
      "expectations": []
    }
  ]
}
```
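A small validator for this schema might look like the following sketch; the field names come from the schema above, everything else (the function name, the exact messages) is illustrative:

```python
def validate_evals(doc):
    """Check an evals.json document against the schema above.

    Returns a list of problems; an empty list means the document passed.
    """
    problems = []
    if not isinstance(doc.get("skill_name"), str):
        problems.append("skill_name must be a string")
    evals = doc.get("evals")
    if not isinstance(evals, list) or not evals:
        return problems + ["evals must be a non-empty list"]
    seen_ids = set()
    for i, e in enumerate(evals):
        if not isinstance(e.get("prompt"), str) or not e["prompt"]:
            problems.append(f"evals[{i}]: prompt must be a non-empty string")
        if not isinstance(e.get("should_trigger"), bool):
            problems.append(f"evals[{i}]: should_trigger must be a boolean")
        if e.get("id") in seen_ids:
            problems.append(f"evals[{i}]: duplicate id {e.get('id')!r}")
        seen_ids.add(e.get("id"))
    return problems
```

Running this over every eval set before the loop starts catches malformed files early, instead of mid-way through a multi-hour benchmark.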

### OWASP Top 10 Analysis

| Category | Applies? | Notes |
|----------|----------|-------|
| A01: Broken Access Control | No | No auth/access control involved |
| A02: Cryptographic Failures | No | No crypto operations |
| A03: Injection | Low | `claude -p` subprocess calls use controlled inputs from evals.json |
| A04: Insecure Design | No | Eval-only, no production features |
| A05: Security Misconfiguration | No | Local tool usage |
| A06: Vulnerable Components | No | Using installed plugin as-is |
| A07: Auth Failures | No | No authentication |
| A08: Data Integrity Failures | No | Local file operations |
| A09: Logging Failures | No | Eval results are logged by design |
| A10: SSRF | No | No server-side requests |

Risk surface: Minimal. This is a local dev-time optimization task.

### TDD Test Scenarios

This feature is eval-driven rather than code-driven (we're creating eval sets, not writing application code). The "tests" are the evals.json files themselves. However, we can validate:

1. **Happy path**: Each evals.json is valid against the schema and contains 10-15 queries with correct should_trigger values
2. **Cross-skill disambiguation**: Queries that should trigger skill A explicitly should-NOT-trigger for overlapping skill B
3. **Balanced split**: Each eval set has a reasonable mix of should-trigger (true/false) for stratified train/test split
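Scenarios 1 and 3 reduce to simple assertions over a loaded eval set. A sketch, where the 10-15 range comes from the design above and the minimum-per-polarity threshold is my own assumption for what "reasonable mix" means:

```python
def check_eval_set(doc, min_queries=10, max_queries=15):
    """Validate the size and should_trigger balance of one eval set."""
    evals = doc["evals"]
    assert min_queries <= len(evals) <= max_queries, "query count out of range"
    positives = sum(1 for e in evals if e["should_trigger"])
    negatives = len(evals) - positives
    # Both polarities must be present so a stratified train/test split can
    # place should-trigger and should-NOT-trigger queries in each half.
    assert positives >= 2 and negatives >= 2, "eval set is unbalanced"
    return positives, negatives
```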

### DRY Check

No existing eval sets or benchmark results found in the project. This is greenfield work for eval infrastructure.

@@ -0,0 +1,78 @@

# Skill Eval Loop — Results Summary

**Date**: 2026-03-14
**Beads**: forge-1jx
**Branch**: feat/skill-eval-loop

---

## Before/After Trigger Rates

| Skill | Before (recall) | After (recall) | True-Negative | Score |
|-------|----------------|----------------|---------------|-------|
| citation-standards | 0% (0/6) | **50% (3/6)** | 100% (6/6) | 9/12 |
| sonarcloud-analysis | 0% (0/6) | **50% (3/6)** | 100% (6/6) | 9/12 |
| parallel-data-enrichment | 0% (0/7) | **14% (1/7)** | 100% (8/8) | 9/15 |
| parallel-deep-research | 0% (0/7) | **0% (0/7)** | 100% (8/8) | 8/15 |
| parallel-web-search | 0% (0/8) | **0% (0/8)** | 100% (7/7) | 7/15 |
| parallel-web-extract | 0% (0/8) | **0% (0/8)** | 100% (7/7) | 7/15 |

**Note**: "Before" baselines were all 0% due to two issues:
1. Skills were in `skills/` (not discoverable) — needed to be in `.claude/skills/`
2. Previous eval script used `select.select()` on Windows (broken on pipes)

After fixing discovery + eval script, the "after" results above are the true baselines with improved descriptions.
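The Score column is the sum of the two preceding columns: correct should-trigger results plus correct should-NOT-trigger results, out of all queries. A one-line sketch (the function name is mine, not from the eval scripts):

```python
def composite_score(recall_hits, recall_total, tn_hits, tn_total):
    """Score = correct positives + correct negatives, over all queries."""
    return (recall_hits + tn_hits, recall_total + tn_total)
```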

## What Changed

### Discovery Fix
- Skills must be in `.claude/skills/<name>/SKILL.md` for Claude Code to discover them
- Project uses `skills/` as source of truth (committed), with `.claude/skills/` as gitignored symlinks
- Worktrees need symlinks recreated: `ln -s "../../skills/$name" ".claude/skills/$name"`

### Description Improvements
All 6 skill descriptions were updated to be more "pushy" per skill-creator guidance:
- Added explicit trigger phrases (e.g., "ALWAYS use this when...")
- Added trigger keyword examples (e.g., "Trigger on phrases like 'search for', 'find sources'")
- Added context about when to prefer the skill over alternatives

### Eval Query Improvements
All eval queries rewritten to be:
- More substantive (multi-step, complex tasks)
- More realistic (contextual detail, backstory, file paths)
- Better cross-skill disambiguation (Parallel AI siblings)

### Windows-Compatible Eval Script
Created `scripts/eval_win.py`:
- Uses `subprocess.communicate()` instead of `select.select()` (works on Windows)
- Runs queries sequentially instead of ProcessPoolExecutor (avoids paging file crashes)
- Tests REAL skill triggering (no temp command files) via `.claude/skills/` discovery
- Flexible name matching for skill aliases (e.g., `sonarcloud` matches `sonarcloud-analysis`)
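The core portability pattern is a sketch like the following, assuming the shape described above (`run_once` and `skill_matches` are hypothetical names, not the actual functions in `eval_win.py`): `communicate(timeout=...)` replaces the `select.select()` readiness loop, with kill-then-drain for hung runs:

```python
import subprocess

def run_once(argv, timeout=30):
    """Run one eval query; portable to Windows, where select.select()
    does not work on pipe handles."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    try:
        out, err = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()            # reap the hung child, then drain its pipes
        out, err = proc.communicate()
    return proc.returncode, out, err

def skill_matches(alias, triggered):
    """Flexible alias matching, e.g. 'sonarcloud' ~ 'sonarcloud-analysis'."""
    return (alias == triggered
            or triggered.startswith(alias + "-")
            or alias.startswith(triggered + "-"))
```

Requiring the `-` separator in the prefix match keeps sibling skills like `parallel-web-search` and `parallel-web-extract` from matching each other.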

## Key Finding: Built-in Tool Competition

**Skills that compete with Claude's built-in tools get 0% recall regardless of description quality.**

| Skill | Competing Built-in Tool | Auto-trigger? |
|-------|------------------------|---------------|
| parallel-web-search | WebSearch | No |
| parallel-web-extract | WebFetch | No |
| parallel-deep-research | WebSearch + reasoning | No |
| parallel-data-enrichment | WebSearch + JSON output | Rarely (14%) |
| citation-standards | None | Yes (50%) |
| sonarcloud-analysis | None (specialized) | Yes (50%) |

From the skill-creator docs: *"Claude only consults skills for tasks it can't easily handle on its own. Simple queries won't trigger a skill even if the description matches perfectly."*

**Implication**: The 4 Parallel AI skills must be invoked explicitly via `/parallel-web-search`, `/parallel-deep-research`, etc. in workflows like `/plan` and `/research`. They cannot auto-trigger from user queries because Claude handles those tasks natively.

## Cross-Skill Regression Check

All 6 skills achieved **100% true-negative rate** — no false-positive cross-triggering between skills. The cross-skill disambiguation queries worked perfectly (skills that shouldn't trigger never do).

## Recommendations

1. **Keep explicit invocation** for Parallel AI skills in /plan and /research workflows
2. **Consider `run_loop.py` improvement** for citation-standards and sonarcloud-analysis (50% → higher recall possible via description optimization)
3. **Fix Windows compatibility** in upstream `run_eval.py` if running full skill-creator loops is needed
4. **Document skill directory requirement** in project setup (`.claude/skills/` symlinks needed for worktrees)
|