codeforge-dev 1.10.0 → 1.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (90)
  1. package/.devcontainer/.env +7 -1
  2. package/.devcontainer/.gitignore +1 -0
  3. package/.devcontainer/CHANGELOG.md +138 -0
  4. package/.devcontainer/CLAUDE.md +87 -8
  5. package/.devcontainer/README.md +55 -18
  6. package/.devcontainer/config/defaults/main-system-prompt.md +132 -152
  7. package/.devcontainer/config/defaults/rules/session-search.md +66 -0
  8. package/.devcontainer/config/defaults/rules/spec-workflow.md +39 -12
  9. package/.devcontainer/config/defaults/settings.json +2 -1
  10. package/.devcontainer/config/defaults/writing-system-prompt.md +185 -0
  11. package/.devcontainer/config/file-manifest.json +12 -0
  12. package/.devcontainer/connect-external-terminal.ps1 +1 -1
  13. package/.devcontainer/devcontainer.json +40 -10
  14. package/.devcontainer/docs/configuration-reference.md +3 -0
  15. package/.devcontainer/docs/plugins.md +9 -2
  16. package/.devcontainer/docs/troubleshooting.md +2 -2
  17. package/.devcontainer/features/README.md +8 -9
  18. package/.devcontainer/features/agent-browser/devcontainer-feature.json +21 -21
  19. package/.devcontainer/features/agent-browser/install.sh +0 -7
  20. package/.devcontainer/features/ast-grep/devcontainer-feature.json +22 -22
  21. package/.devcontainer/features/biome/devcontainer-feature.json +12 -14
  22. package/.devcontainer/features/ccms/README.md +50 -0
  23. package/.devcontainer/features/ccms/devcontainer-feature.json +21 -0
  24. package/.devcontainer/features/ccms/install.sh +122 -0
  25. package/.devcontainer/features/ccstatusline/install.sh +24 -2
  26. package/.devcontainer/features/lsp-servers/devcontainer-feature.json +43 -43
  27. package/.devcontainer/features/mcp-qdrant/poststart-hook.sh +2 -1
  28. package/.devcontainer/features/ruff/devcontainer-feature.json +17 -19
  29. package/.devcontainer/features/tmux/install.sh +2 -2
  30. package/.devcontainer/plugins/devs-marketplace/.claude-plugin/marketplace.json +8 -1
  31. package/.devcontainer/plugins/devs-marketplace/plugins/auto-formatter/README.md +81 -0
  32. package/.devcontainer/plugins/devs-marketplace/plugins/auto-linter/README.md +92 -0
  33. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/README.md +250 -0
  34. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/architect.md +1 -0
  35. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/claude-guide.md +2 -2
  36. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/debug-logs.md +1 -1
  37. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/dependency-analyst.md +1 -1
  38. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/doc-writer.md +4 -4
  39. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/explorer.md +1 -1
  40. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/generalist.md +2 -1
  41. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/git-archaeologist.md +2 -2
  42. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/researcher.md +1 -1
  43. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/security-auditor.md +1 -1
  44. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/spec-writer.md +8 -8
  45. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/hooks/hooks.json +10 -0
  46. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/skill-suggester.cpython-314.pyc +0 -0
  47. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/git-state-injector.py +15 -4
  48. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/inject-cwd.py +37 -0
  49. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/skill-suggester.py +24 -0
  50. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/spec-reminder.py +3 -2
  51. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-build/SKILL.md +353 -0
  52. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-build/references/review-checklist.md +175 -0
  53. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-check/SKILL.md +15 -14
  54. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-init/SKILL.md +12 -11
  55. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-init/references/backlog-template.md +1 -1
  56. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-init/references/milestones-template.md +32 -0
  57. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-new/SKILL.md +17 -18
  58. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-new/references/template.md +12 -2
  59. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-review/SKILL.md +229 -0
  60. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-update/SKILL.md +6 -2
  61. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/specification-writing/SKILL.md +1 -1
  62. package/.devcontainer/plugins/devs-marketplace/plugins/codeforge-lsp/.claude-plugin/plugin.json +38 -5
  63. package/.devcontainer/plugins/devs-marketplace/plugins/codeforge-lsp/README.md +41 -0
  64. package/.devcontainer/plugins/devs-marketplace/plugins/dangerous-command-blocker/README.md +72 -0
  65. package/.devcontainer/plugins/devs-marketplace/plugins/dangerous-command-blocker/scripts/block-dangerous.py +73 -47
  66. package/.devcontainer/plugins/devs-marketplace/plugins/notify-hook/README.md +42 -0
  67. package/.devcontainer/plugins/devs-marketplace/plugins/protected-files-guard/README.md +86 -0
  68. package/.devcontainer/plugins/devs-marketplace/plugins/protected-files-guard/hooks/hooks.json +25 -15
  69. package/.devcontainer/plugins/devs-marketplace/plugins/protected-files-guard/scripts/guard-protected-bash.py +122 -0
  70. package/.devcontainer/plugins/devs-marketplace/plugins/protected-files-guard/scripts/guard-protected.py +3 -3
  71. package/.devcontainer/plugins/devs-marketplace/plugins/ticket-workflow/README.md +96 -0
  72. package/.devcontainer/plugins/devs-marketplace/plugins/workspace-scope-guard/.claude-plugin/plugin.json +7 -0
  73. package/.devcontainer/plugins/devs-marketplace/plugins/workspace-scope-guard/README.md +94 -0
  74. package/.devcontainer/plugins/devs-marketplace/plugins/workspace-scope-guard/hooks/hooks.json +17 -0
  75. package/.devcontainer/plugins/devs-marketplace/plugins/workspace-scope-guard/scripts/__pycache__/guard-workspace-scope.cpython-314.pyc +0 -0
  76. package/.devcontainer/plugins/devs-marketplace/plugins/workspace-scope-guard/scripts/guard-workspace-scope.py +132 -0
  77. package/.devcontainer/scripts/check-setup.sh +1 -1
  78. package/.devcontainer/scripts/setup-aliases.sh +68 -75
  79. package/.devcontainer/scripts/setup-projects.sh +23 -16
  80. package/.devcontainer/scripts/setup.sh +48 -5
  81. package/README.md +17 -8
  82. package/package.json +1 -2
  83. package/.devcontainer/features/mcp-reasoner/README.md +0 -177
  84. package/.devcontainer/features/mcp-reasoner/devcontainer-feature.json +0 -25
  85. package/.devcontainer/features/mcp-reasoner/install.sh +0 -184
  86. package/.devcontainer/features/mcp-reasoner/poststart-hook.sh +0 -67
  87. package/.devcontainer/features/splitrail/README.md +0 -140
  88. package/.devcontainer/features/splitrail/devcontainer-feature.json +0 -39
  89. package/.devcontainer/features/splitrail/install.sh +0 -136
  90. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/spec-init/references/roadmap-template.md +0 -33
@@ -0,0 +1,353 @@
+ ---
+ name: spec-build
+ description: >-
+ This skill should be used when the user asks to "implement the spec",
+ "build from spec", "start building the feature", "spec-build",
+ "implement this feature from the spec", "build what the spec describes",
+ or needs to orchestrate full implementation of an approved specification
+ through a phased workflow of planning, building, reviewing, and closing.
+ version: 0.1.0
+ ---
+
+ # Spec-Driven Implementation
+
+ ## Mental Model
+
+ An approved spec is a contract — it defines exactly what to build, what to skip, and how to verify success. This skill takes a `user-approved` spec and orchestrates the full implementation lifecycle: plan the work, build it, review everything against the spec, and close the loop. No separate `/spec-update` run is needed afterward — Phase 5 performs full as-built closure.
+
+ The workflow is five phases executed in strict order. Each phase has a clear gate before the next can begin.
+
+ ```
+ /spec-new -> /spec-refine -> /spec-build
+ |
+ +-> Phase 1: Discovery & Gate Check
+ +-> Phase 2: Implementation Planning
+ +-> Phase 3: Implementation
+ +-> Phase 4: Comprehensive Review
+ +-> Phase 5: Spec Closure
+ ```
+
+ > **Note:** Phase 4's review functionality is also available standalone via `/spec-review` for features implemented outside of `/spec-build`.
+
+ ---
+
+ ## Acceptance Criteria Markers
+
+ During implementation, acceptance criteria use three states:
+
+ | Marker | Meaning |
+ |--------|---------|
+ | `[ ]` | Not started |
+ | `[~]` | Implemented, not yet verified — code written, tests not confirmed |
+ | `[x]` | Verified — tests pass, behavior confirmed |
+
+ Phase 3 flips `[ ]` to `[~]` as criteria are addressed in code. Phase 4 upgrades `[~]` to `[x]` after verification. This convention is the only spec edit during active implementation.
+
+ ---
+
+ ## CRITICAL: Planning Before Implementation
+
+ Phase 2 generates an implementation plan. This plan MUST be created and approved before any code changes begin in Phase 3. Use `EnterPlanMode` to create the plan. The plan MUST include Phases 3, 4, and 5 instructions verbatim — these phases run after plan approval, and the instructions must be preserved so they execute correctly even across context boundaries.
+
+ Do NOT skip planning. Do NOT begin writing code during Phase 2. The plan is a contract with the user — get approval first.
+
+ ---
+
+ ## Complexity Assessment
+
+ Before planning, assess the spec's complexity to determine whether team spawning would benefit the implementation.
+
+ **Complexity indicators** — if two or more apply, the spec is complex:
+ - 8+ functional requirements (FR-*)
+ - Cross-layer work (backend + frontend + tests spanning different frameworks)
+ - 3+ independent workstreams that could run in parallel
+ - Multiple services or modules affected
+
+ ### When Complexity is High: Recommend Team Spawning
+
+ Decompose work into parallel workstreams and recommend team composition using the project's existing custom agents. These agents carry frontloaded skills, safety hooks, and tailored instructions — always prefer them over generalist agents.
+
+ **Recommended compositions by spec type:**
+
+ | Spec Type | Teammates |
+ |-----------|-----------|
+ | Full-stack feature | researcher + test-writer + doc-writer |
+ | Backend-heavy | researcher + test-writer |
+ | Security-sensitive | security-auditor + test-writer |
+ | Refactoring work | refactorer + test-writer |
+ | Multi-service | researcher per service + test-writer |
+
+ **Available specialist agents:** `architect`, `bash-exec`, `claude-guide`, `debug-logs`, `dependency-analyst`, `doc-writer`, `explorer`, `generalist`, `git-archaeologist`, `migrator`, `perf-profiler`, `refactorer`, `researcher`, `security-auditor`, `spec-writer`, `statusline-config`, `test-writer`
+
+ Use `generalist` only when no specialist matches the workstream. Hard limit: 3-5 active teammates maximum.
+
+ **When complexity is low** (< 8 requirements, single layer, sequential work): skip team spawning, implement directly in the main thread. Still follow all 5 phases.
+
+ The user can override the team recommendation in either direction.
+
+ ---
+
+ ## Phase 1: Discovery & Gate Check
+
+ ### Step 1: Find the Spec
+
+ ```
+ Glob: .specs/**/*.md
+ ```
+
+ Match by `$ARGUMENTS` — the user provides a feature name or path. If ambiguous, list matching specs and ask which one to implement.
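As an illustration of this matching step, the lookup could be sketched as follows. This is a hypothetical helper for this document only — `find_specs` is not part of the package — and it assumes a simple case-insensitive substring match against the globbed paths:

```python
from pathlib import Path

def find_specs(query: str, root: str = ".specs") -> list[Path]:
    """Return spec files whose path matches the user's query (case-insensitive)."""
    candidates = sorted(Path(root).glob("**/*.md"))
    q = query.lower()
    return [p for p in candidates if q in str(p).lower()]

# Exactly one match -> proceed; zero or several -> list candidates and ask.
```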
+
+ ### Step 2: Read the Full Spec
+
+ Read every line. Extract structured data:
+
+ - **All `[user-approved]` requirements** — every FR-* and NFR-* with their EARS-format text
+ - **All acceptance criteria** — every `[ ]` checkbox item
+ - **Key Files** — existing files to read for implementation context
+ - **Dependencies** — prerequisite features, systems, or libraries
+ - **Out of Scope** — explicit exclusions that define boundaries to respect
+
+ ### Step 3: Gate Check
+
+ **Hard gate**: Verify the spec has `**Approval:** user-approved`.
+
+ - If `user-approved` -> proceed to Step 4
+ - If `draft` or missing -> **STOP**. Print: "This spec is not approved for implementation. Run `/spec-refine <feature>` first to validate assumptions and get user approval." Do not continue.
+
+ This gate is non-negotiable. Draft specs contain unvalidated assumptions — building against them risks wasted work.
+
+ ### Step 4: Build Context
+
+ Read every file listed in the spec's `## Key Files` section. These are the files the spec author identified as most relevant to implementation. Understanding them is prerequisite to planning.
+
+ After reading, note:
+ - Which key files exist vs. which are new (to be created)
+ - Patterns, conventions, and interfaces in existing files
+ - Any dependencies or constraints discovered in the code
+
+ ### Step 5: Assess Complexity
+
+ Apply the complexity indicators from the assessment section above. Note the result for Phase 2 — it determines whether to recommend team spawning.
+
+ ---
+
+ ## Phase 2: Implementation Planning
+
+ **Do NOT write any code in this phase.** This phase produces a plan only.
+
+ Use `EnterPlanMode` to enter plan mode. Create a structured implementation plan covering:
+
+ ### Plan Structure
+
+ 1. **Spec Reference** — path to the spec file, domain, feature name
+ 2. **Complexity Assessment** — indicators found, team recommendation (if applicable)
+ 3. **Requirement-to-File Mapping** — each FR-*/NFR-* mapped to specific file changes
+ 4. **Implementation Steps** — ordered by dependency, grouped by related requirements:
+ - For each step: files to create/modify, requirements addressed, acceptance criteria to verify
+ - Mark which steps depend on others completing first
+ 5. **Out-of-Scope Boundaries** — items from the spec's Out of Scope section, noted as "do not touch"
+ 6. **Verification Checkpoints** — acceptance criteria listed as checkpoints after each logical group of steps
+
+ ### Preserving Phase Instructions
+
+ The plan MUST include the following phases verbatim so they survive across context boundaries during the implementation session. Include them as a "Post-Implementation Phases" section in the plan:
+
+ **Phase 3 instructions**: Execute steps, flip `[ ]` to `[~]` after addressing each criterion in code.
+
+ **Phase 4 instructions**: Run comprehensive review using the Spec Implementation Review Checklist at `skills/spec-build/references/review-checklist.md`. Walk every requirement, verify every criterion, audit code quality, check spec consistency. Produce a summary report.
+
+ **Phase 5 instructions**: Update spec status, add Implementation Notes, update Key Files, add Discrepancies, set Last Updated date.
+
+ ### Team Plan (if applicable)
+
+ If complexity assessment recommends team spawning, the plan should additionally include:
+ - Workstream decomposition with clear boundaries
+ - Teammate assignments by specialist type
+ - Task dependencies between workstreams
+ - Integration points where workstreams converge
+
+ Present the plan via `ExitPlanMode` and wait for explicit user approval before proceeding.
+
+ ---
+
+ ## Phase 3: Implementation
+
+ Execute the approved plan step by step. This is where code gets written.
+
+ ### Execution Rules
+
+ 1. **Follow the plan order** — implement steps in the sequence approved by the user
+ 2. **Live spec updates** — after completing work on an acceptance criterion, immediately edit the spec file:
+ - Flip `[ ]` to `[~]` for criteria addressed in code
+ - This is the ONLY spec edit during Phase 3 — no structural changes to the spec
+ 3. **Track requirement coverage** — mentally track which FR-*/NFR-* requirements have been addressed as you work through the steps
+ 4. **Note deviations** — if the implementation must deviate from the plan (unexpected constraint, better approach discovered, missing dependency), note the deviation for Phase 4. Do not silently diverge.
+ 5. **Respect boundaries** — do not implement anything listed in the spec's Out of Scope section
+
+ ### If Using a Team
+
+ If team spawning was approved in Phase 2:
+
+ 1. Create the team using `TeamCreate`
+ 2. Create tasks in the team task list mapped to spec requirements
+ 3. Spawn teammates using the recommended specialist agent types
+ 4. Assign tasks by domain match
+ 5. Coordinate integration points as workstreams converge
+ 6. Collect results and ensure all `[ ]` criteria are flipped to `[~]`
+
+ ### Progress Tracking
+
+ The spec file itself is the progress tracker. At any point during Phase 3:
+ - `[ ]` criteria = not yet addressed
+ - `[~]` criteria = addressed in code, awaiting verification
+ - Count of `[~]` vs total criteria shows implementation progress
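The counting described above lends itself to a mechanical check. As an illustration only — this helper is hypothetical, not part of the package — tallying the three marker states from a spec file's text might look like:

```python
import re

# The three acceptance-criteria states used by the skill:
# "[ ]" not started, "[~]" implemented but unverified, "[x]" verified.
MARKER_RE = re.compile(r"^\s*-\s*\[( |~|x)\]", re.MULTILINE)

def marker_counts(spec_text: str) -> dict:
    """Tally acceptance-criteria markers in a spec file's text."""
    counts = {"not_started": 0, "in_progress": 0, "verified": 0}
    labels = {" ": "not_started", "~": "in_progress", "x": "verified"}
    for match in MARKER_RE.finditer(spec_text):
        counts[labels[match.group(1)]] += 1
    return counts
```

The ratio of `verified` to the total of all three then gives implementation progress at a glance.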
+
+ ---
+
+ ## Phase 4: Comprehensive Review
+
+ The most critical phase. Audit everything built against the spec. Use the Spec Implementation Review Checklist at `skills/spec-build/references/review-checklist.md` as the authoritative guide.
+
+ ### 4A: Requirement Coverage Audit
+
+ Walk through every FR-* and NFR-* requirement from the spec:
+
+ 1. For each requirement: identify the specific files and functions that address it
+ 2. Verify the implementation matches the EARS-format requirement text
+ 3. Flag requirements that were missed entirely
+ 4. Flag requirements only partially addressed
+ 5. Flag code written outside the spec's scope (scope creep)
+
+ ### 4B: Acceptance Criteria Verification
+
+ For each `[~]` criterion in the spec:
+
+ 1. Find or write the corresponding test
+ 2. Run the test and confirm it passes
+ 3. If the test passes -> upgrade `[~]` to `[x]` in the spec
+ 4. If the test fails -> note the failure, do not upgrade
+ 5. For criteria without tests: write the test, run it, then decide
+
+ Report any criteria that cannot be verified and explain why.
+
+ ### 4C: Code Quality Review
+
+ Check the implementation against code quality standards:
+
+ - Error handling at appropriate boundaries
+ - No hardcoded values that should be configurable
+ - Function sizes within limits (short, single-purpose)
+ - Nesting depth within limits
+ - Test coverage for new code paths
+ - No regressions in existing tests
+
+ ### 4D: Spec Consistency Check
+
+ Compare implemented behavior against each EARS requirement:
+
+ - Does the code actually do what each requirement says?
+ - Are there behavioral differences between spec intent and actual implementation?
+ - Are Key Files in the spec still accurate? Any new files missing from the list?
+ - Are there files created during implementation that should be added?
+
+ ### 4E: Summary Report
+
+ Present a structured summary to the user:
+
+ ```
+ ## Implementation Review Summary
+
+ **Requirements:** N/M addressed (list any gaps)
+ **Acceptance Criteria:** N verified [x], M in progress [~], K not started [ ]
+ **Deviations from Plan:** (list any, or "None")
+ **Discrepancies Found:** (spec vs reality gaps, or "None")
+ **Code Quality Issues:** (list any, or "None")
+
+ **Recommendation:** Proceed to Phase 5 / Fix issues first (with specific list)
+ ```
+
+ If issues are found, address them before moving to Phase 5. If issues require user input, present them and wait for direction.
+
+ ---
+
+ ## Phase 5: Spec Closure
+
+ The final phase. Update the spec to reflect what was actually built. This replaces the need for a separate `/spec-update` run.
+
+ ### Step 1: Update Status
+
+ Set `**Status:**` to:
+ - `implemented` — if all acceptance criteria are `[x]`
+ - `partial` — if any criteria remain `[ ]` or `[~]`
+
+ ### Step 2: Update Metadata
+
+ - Set `**Last Updated:**` to today's date (YYYY-MM-DD)
+ - Preserve `**Approval:** user-approved` — never downgrade
+
+ ### Step 3: Add Implementation Notes
+
+ In the `## Implementation Notes` section, document:
+
+ - **Deviations from the original spec** — what changed and why
+ - **Key design decisions** — choices made during implementation not in the original spec
+ - **Trade-offs accepted** — what was sacrificed and the reasoning
+ - **Surprising findings** — edge cases, performance characteristics, limitations discovered
+
+ Reference file paths, not code. Keep notes concise.
+
+ ### Step 4: Update Key Files
+
+ In `## Key Files`:
+ - Add files created during implementation
+ - Remove files that no longer exist
+ - Update paths that changed
+ - Verify every path listed actually exists
+
+ ### Step 5: Add Discrepancies
+
+ In `## Discrepancies`, document any gaps between spec intent and actual build:
+ - Requirements that were met differently than specified
+ - Behavioral differences from the original EARS requirements
+ - Scope adjustments that happened during implementation
+
+ If no discrepancies exist, leave the section empty or note "None."
+
+ ### Step 6: Final Message
+
+ Print: "Implementation complete. Spec updated to `[status]`. Run `/spec-check` to verify spec health."
+
+ ---
+
+ ## Persistence Policy
+
+ Complete all five phases. Stop only when:
+ - Gate check fails in Phase 1 (spec not approved) — hard stop
+ - User explicitly requests stop
+ - A genuine blocker requires user input that cannot be resolved
+
+ If interrupted mid-phase, resume from the last completed step. Phase 3 progress is tracked via acceptance criteria markers in the spec — `[~]` markers show exactly where implementation left off.
+
+ Do not skip phases. Do not combine phases. Each phase exists because it surfaces different types of issues. Phase 4 in particular catches problems that are invisible during Phase 3.
+
+ ---
+
+ ## Ambiguity Policy
+
+ - If `$ARGUMENTS` matches multiple specs, list them and ask the user which to implement.
+ - If a spec has no acceptance criteria, warn the user and suggest adding criteria before implementation. Offer to proceed anyway if the user confirms.
+ - If Key Files reference paths that don't exist, note this in Phase 1 and proceed — they may be files to create.
+ - If the spec has both `[assumed]` and `[user-approved]` requirements, the gate check still fails — all requirements must be `[user-approved]` before implementation begins.
+ - If Phase 4 reveals significant gaps, do not silently proceed to Phase 5. Present the gaps and get user direction on whether to fix them first or document them as discrepancies.
+ - If the spec is already `implemented`, ask: is this a re-implementation, an update, or an error?
+
+ ---
+
+ ## Anti-Patterns
+
+ - **Skipping the plan**: Jumping from Phase 1 to Phase 3 without a plan leads to unstructured work and missed requirements. Always plan first.
+ - **Optimistic verification**: Marking `[~]` as `[x]` without running the actual test. Every `[x]` must be backed by a passing test or confirmed behavior.
+ - **Scope creep during implementation**: Building features not in the spec because they "seem useful." Respect Out of Scope boundaries.
+ - **Deferring Phase 4**: "I'll review later" means "I won't review." Phase 4 runs immediately after Phase 3.
+ - **Silent deviations**: Changing the implementation approach without noting it. Every deviation gets documented in Phase 4/5.
+ - **Skipping Phase 5**: The spec-reminder hook will catch this, but it's better to close the loop immediately. Phase 5 is not optional.
@@ -0,0 +1,175 @@
1
+ # Spec Implementation Review Checklist
2
+
3
+ Comprehensive checklist for spec implementation reviews. Used by `/spec-build` Phase 4 and `/spec-review`. Walk through every section methodically. Do not skip sections — each catches different categories of issues.
4
+
5
+ ---
6
+
7
+ ## 4A: Requirement Coverage Audit
8
+
9
+ For each FR-* requirement in the spec:
10
+
11
+ - [ ] Identify the file(s) and function(s) that implement this requirement
12
+ - [ ] Verify the implementation matches the EARS-format requirement text
13
+ - [ ] Confirm the requirement is fully addressed (not partially)
14
+ - [ ] Note if the requirement was met through a different approach than planned
15
+
16
+ For each NFR-* requirement in the spec:
17
+
18
+ - [ ] Identify how the non-functional requirement is enforced (e.g., timeout config, index, validation)
19
+ - [ ] Verify measurable NFRs have been tested or measured (response time, throughput, size limits)
20
+ - [ ] Confirm the NFR is met under expected conditions, not just ideal conditions
21
+
22
+ Cross-checks:
23
+
24
+ - [ ] Every FR-* has corresponding code — no requirements were skipped
25
+ - [ ] Every NFR-* has corresponding enforcement — no hand-waving
26
+ - [ ] No code was written that doesn't map to a requirement (scope creep check)
27
+ - [ ] Out of Scope items from the spec were NOT implemented
28
+
29
+ ---
30
+
31
+ ## 4B: Acceptance Criteria Verification
32
+
33
+ For each criterion currently marked `[~]` (implemented, not yet verified):
34
+
35
+ - [ ] Locate the corresponding test (unit, integration, or manual verification)
36
+ - [ ] If no test exists: write one
37
+ - [ ] Run the test
38
+ - [ ] If test passes: upgrade `[~]` to `[x]` in the spec
39
+ - [ ] If test fails: note the failure, keep as `[~]`, document the issue
40
+
41
+ Summary checks:
42
+
43
+ - [ ] Count total criteria vs. verified `[x]` — report the ratio
44
+ - [ ] Any criteria still `[ ]` (not started)? Flag as missed
45
+ - [ ] Any criteria that cannot be tested? Document why and note as discrepancy
46
+ - [ ] Do the tests actually verify the criterion, or just exercise the code path?
47
+
48
+ ---
49
+
50
+ ## 4C: Code Quality Review
51
+
52
+ ### Error Handling
53
+
54
+ - [ ] Errors are caught at appropriate boundaries (not swallowed, not over-caught)
55
+ - [ ] Error messages are informative (include context, not just "error occurred")
56
+ - [ ] External call failures (I/O, network, subprocess) have explicit handling
57
+ - [ ] No bare except/catch-all that hides real errors
58
+
59
+ ### Code Structure
60
+
61
+ - [ ] Functions are short and single-purpose
62
+ - [ ] Nesting depth is within limits (2-3 for Python, 3-4 for other languages)
63
+ - [ ] No duplicated logic that should be extracted
64
+ - [ ] Names are descriptive (functions, variables, parameters)
65
+
66
+ ### Hardcoded Values
67
+
68
+ - [ ] No magic numbers without explanation
69
+ - [ ] Configuration values that may change are externalized (not inline)
70
+ - [ ] File paths, URLs, and credentials are not hardcoded
71
+
72
+ ### Test Quality
73
+
74
+ - [ ] New code has corresponding tests
75
+ - [ ] Tests verify behavior, not implementation details
76
+ - [ ] Tests cover happy path, error cases, and key edge cases
77
+ - [ ] No over-mocking that makes tests trivially pass
78
+ - [ ] Existing tests still pass (no regressions introduced)
79
+
80
+ ### Dependencies
81
+
82
+ - [ ] New imports/dependencies are necessary (no unused imports)
83
+ - [ ] No circular dependencies introduced
84
+ - [ ] Third-party dependencies are justified (not added for trivial functionality)
85
+
86
+ ---
87
+
88
+ ## 4D: Spec Consistency Check
89
+
90
+ ### Requirement-to-Implementation Fidelity
91
+
92
+ - [ ] Re-read each EARS requirement and compare against the actual implementation
93
+ - [ ] For "When [event], the system shall [action]" — does the code handle that event and perform that action?
94
+ - [ ] For "If [unwanted condition], the system shall [action]" — is the unwanted condition detected and handled?
95
+ - [ ] For ubiquitous requirements ("The system shall...") — is the behavior always active?
96
+
97
+ ### Key Files Accuracy
98
+
99
+ - [ ] Every file in the spec's Key Files section still exists at that path
100
+ - [ ] New files created during implementation are listed in Key Files
101
+ - [ ] Deleted or moved files have been removed/updated in Key Files
102
+ - [ ] File descriptions in Key Files are still accurate
103
+
+ ### Schema and API Consistency
+
+ - [ ] If the spec has a Schema/Data Model section, verify referenced files are current
+ - [ ] If the spec has API Endpoints, verify routes match the implementation
+ - [ ] Any new endpoints or schema changes are reflected in the spec
+
+ ### Behavioral Alignment
+
+ - [ ] Edge cases discovered during implementation are documented
+ - [ ] Performance characteristics match NFR expectations
+ - [ ] Integration points work as the spec describes
+ - [ ] Default values and fallback behaviors match spec intent
+
+ ---
+
+ ## 4E: Summary Report Template
+
+ After completing sections 4A through 4D, compile findings into this format:
+
+ ```
+ ## Implementation Review Summary
+
+ **Spec:** [feature name] ([spec file path])
+ **Date:** YYYY-MM-DD
+
+ ### Requirement Coverage
+ - Functional: N/M addressed
+ - Non-Functional: N/M addressed
+ - Gaps: [list or "None"]
+
+ ### Acceptance Criteria
+ - [x] Verified: N
+ - [~] Implemented, pending verification: N
+ - [ ] Not started: N
+ - Failures: [list or "None"]
+
+ ### Code Quality
+ - Issues found: [list or "None"]
+ - Regressions: [list or "None"]
+
+ ### Spec Consistency
+ - Key Files updates needed: [list or "None"]
+ - Discrepancies: [list or "None"]
+
+ ### Deviations from Plan
+ [list or "None"]
+
+ ### Recommendation
+ [ ] Proceed to Phase 5 — all clear
+ [ ] Fix issues first: [specific list]
+ [ ] Requires user input: [specific questions]
+ ```
+
+ ---
+
+ ## When to Fail the Review
+
+ The review should recommend "fix issues first" when:
+
+ - Any FR-* requirement has no corresponding implementation
+ - Any acceptance criterion test fails
+ - Existing tests regress (new code broke something)
+ - Code was written outside the spec's scope without user approval
+ - Critical error handling is missing (crashes on expected error conditions)
+
+ The review should recommend "proceed to Phase 5" when:
+
+ - All requirements have corresponding implementations
+ - All acceptance criteria are `[x]` (or `[~]` with documented reason)
+ - No test regressions
+ - Code quality is acceptable (no critical issues)
+ - Discrepancies are documented, not hidden
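The pass/fail gates above amount to a simple decision rule. A hypothetical sketch (all field names are invented for illustration and not part of the review format):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewFindings:
    # Illustrative field names only — not defined by the skill itself.
    unimplemented_requirements: list = field(default_factory=list)  # FR-* with no code
    failing_criteria: list = field(default_factory=list)            # acceptance tests that fail
    regressions: list = field(default_factory=list)                 # previously passing tests now broken
    out_of_scope_changes: list = field(default_factory=list)        # unapproved work outside the spec
    open_questions: list = field(default_factory=list)              # items needing user input

def recommendation(f: ReviewFindings) -> str:
    """Map findings to one of the three Recommendation checkboxes."""
    if f.open_questions:
        return "requires user input"
    blockers = (f.unimplemented_requirements + f.failing_criteria
                + f.regressions + f.out_of_scope_changes)
    if blockers:
        return "fix issues first"
    return "proceed to Phase 5"
```

Open questions are checked first because user input can change whether the other findings are blockers at all.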
@@ -25,23 +25,22 @@ Glob: .specs/**/*.md
  If `.specs/` does not exist, report: "No specification directory found. Use `/spec-new` to create your first spec."
 
  Exclude non-spec files:
- - `ROADMAP.md`
+ - `MILESTONES.md`
  - `BACKLOG.md`
  - `LESSONS_LEARNED.md`
  - Files in `archive/`
- - `_overview.md` files (report them separately as parent specs)
 
  ### Step 2: Read Each Spec
 
  For each spec file, extract:
  - **Feature name** from the `# Feature: [Name]` header
- - **Version** from the `**Version:**` field
+ - **Domain** from the `**Domain:**` field
  - **Status** from the `**Status:**` field
  - **Last Updated** from the `**Last Updated:**` field
  - **Approval** from the `**Approval:**` field (default `draft` if missing)
  - **Line count** (wc -l)
  - **Sections present** — check for each required section header
- - **Acceptance criteria** — count total, count checked `[x]`
+ - **Acceptance criteria** — count total, count checked `[x]`, count in-progress `[~]`
  - **Requirements** — count total, count `[assumed]`, count `[user-approved]`
  - **Discrepancies** — check if section has content
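The extraction step can be sketched as a small parser. The regexes assume the `**Field:**` and `# Feature:` formats shown above, and the checkbox counts naively match every `- [ ]`-style item in the file rather than only the Acceptance Criteria section:

```python
import re

def extract_spec_fields(text: str) -> dict:
    """Pull the metadata fields listed above out of a spec's markdown."""
    def field(name):
        m = re.search(rf"\*\*{name}:\*\*\s*(.+)", text)
        return m.group(1).strip() if m else None
    title = re.search(r"^# Feature:\s*(.+)$", text, re.M)
    return {
        "feature": title.group(1).strip() if title else None,
        "domain": field("Domain"),
        "status": field("Status"),
        "last_updated": field("Last Updated"),
        "approval": field("Approval") or "draft",  # default when missing
        "lines": len(text.splitlines()),           # equivalent of wc -l
        "criteria_total": len(re.findall(r"- \[[ x~]\]", text)),
        "criteria_done": len(re.findall(r"- \[x\]", text)),
        "criteria_in_progress": len(re.findall(r"- \[~\]", text)),
    }
```

A real implementation would also count `[assumed]` and `[user-approved]` requirement tags and scope the checkbox regexes to their sections.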
 
@@ -62,6 +61,7 @@ For each spec, check these conditions:
  | **Stale paths** | Key Files references paths that don't exist | Low |
  | **Draft + implemented** | Status is `implemented` but Approval is `draft` — approval gate was bypassed | High |
  | **Inconsistent approval** | Approval is `user-approved` but spec has `[assumed]` requirements | High |
+ | **In-progress criteria** | Has acceptance criteria marked `[~]` (implemented, not yet verified) | Info |
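Several of these conditions reduce to simple predicates over the fields extracted in Step 2. A sketch with assumed field names (`assumed_requirements` is hypothetical):

```python
def flag_issues(spec: dict) -> list[tuple[str, str]]:
    """Return (issue, priority) pairs for the metadata-only checks above."""
    issues = []
    # Approval gate bypassed: implemented without user approval
    if spec.get("status") == "implemented" and spec.get("approval") == "draft":
        issues.append(("Draft + implemented", "High"))
    # Approval claims user sign-off but unvalidated assumptions remain
    if spec.get("approval") == "user-approved" and spec.get("assumed_requirements", 0) > 0:
        issues.append(("Inconsistent approval", "High"))
    # Informational: work implemented but not yet verified
    if spec.get("criteria_in_progress", 0) > 0:
        issues.append(("In-progress criteria", "Info"))
    return issues
```

The stale-path check needs filesystem access, so it does not fit this purely field-based sketch.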
 
  ### Step 4: Report
 
@@ -70,24 +70,25 @@ Output a summary table:
  ```
  ## Spec Health Report
 
- | Feature | Version | Status | Approval | Updated | Lines | Issues |
- |---------|---------|--------|----------|---------|-------|--------|
- | Session History | v0.2.0 | implemented | user-approved | 2026-02-08 | 74 | None |
- | Auth Flow | v0.3.0 | planned | draft | 2026-01-15 | 45 | Unapproved, Stale (26 days) |
- | Settings Page | v0.2.0 | partial | draft | 2026-02-05 | 210 | Unapproved, Long spec |
+ | Feature | Domain | Status | Approval | Updated | Lines | Issues |
+ |---------|--------|--------|----------|---------|-------|--------|
+ | Session History | sessions | implemented | user-approved | 2026-02-08 | 74 | None |
+ | Auth Flow | auth | planned | draft | 2026-01-15 | 45 | Unapproved, Stale (26 days) |
+ | Settings Page | ui | partial | draft | 2026-02-05 | 210 | Unapproved, Long spec |
 
  ## Issues Found
 
  ### High Priority
- - **Auth Flow** (`.specs/v0.3.0/auth-flow.md`): Status is `planned` but last updated 26 days ago. Either implementation is stalled or the spec needs an as-built update.
+ - **Auth Flow** (`.specs/auth/auth-flow.md`): Status is `planned` but last updated 26 days ago. Either implementation is stalled or the spec needs an as-built update.
 
  ### Medium Priority
- - **Settings Page** (`.specs/v0.2.0/settings-page.md`): 210 lines — consider splitting into sub-specs for easier consumption.
+ - **Settings Page** (`.specs/ui/settings-page.md`): 210 lines — consider splitting into separate specs in the domain folder.
 
  ### Suggested Actions
  1. Run `/spec-refine auth-flow` to validate assumptions and get user approval
- 2. Run `/spec-update auth-flow` to update the auth flow spec
- 3. Split settings-page.md into sub-specs
+ 2. Run `/spec-review auth-flow` to verify implementation against the spec
+ 3. Run `/spec-update auth-flow` to update the auth flow spec
+ 4. Split settings-page.md into separate specs in the domain folder
 
  ### Approval Summary
  - **User-approved:** 1 spec
@@ -95,4 +96,4 @@ Output a summary table:
  - **Assumed requirements across all specs:** 8
  ```
 
- If no issues are found, report: "All specs healthy. N specs across M versions. All user-approved."
+ If no issues are found, report: "All specs healthy. N specs across M domains. All user-approved."
@@ -5,14 +5,14 @@ description: >-
  "set up specs", "bootstrap specs", "start using specs", "create spec
  directory", "init specs for this project", or needs to set up the
  .specs/ directory structure for a project that doesn't have one yet.
- version: 0.1.0
+ version: 0.2.0
  ---
 
  # Initialize Specification Directory
 
  ## Mental Model
 
- Before any spec can be created, the project needs a `.specs/` directory with its supporting files: a ROADMAP (what each version delivers) and a BACKLOG (deferred items). This skill bootstraps that structure so `/spec-new` has a home.
+ Before any spec can be created, the project needs a `.specs/` directory with its supporting files: a MILESTONES tracker (what each milestone delivers) and a BACKLOG (deferred items). This skill bootstraps that structure so `/spec-new` has a home.
 
  ---
 
@@ -25,7 +25,7 @@ Glob: .specs/**/*.md
  ```
 
  **If `.specs/` already exists:**
- - Report current state: how many specs, versions, whether ROADMAP.md and BACKLOG.md exist
+ - Report current state: how many specs, domains, whether MILESTONES.md and BACKLOG.md exist
  - Suggest `/spec-check` to audit health instead
  - Do NOT recreate or overwrite anything
  - Stop here
@@ -36,9 +36,9 @@ Glob: .specs/**/*.md
 
  Create the `.specs/` directory at the project root.
 
- ### Step 3: Create ROADMAP.md
+ ### Step 3: Create MILESTONES.md
 
- Write `.specs/ROADMAP.md` using the template from `references/roadmap-template.md`.
+ Write `.specs/MILESTONES.md` using the template from `references/milestones-template.md`.
 
  ### Step 4: Create BACKLOG.md
 
@@ -63,15 +63,16 @@ Summarize what was created:
 
  Created:
  - `.specs/` directory
- - `.specs/ROADMAP.md` — version tracking table
+ - `.specs/MILESTONES.md` — milestone tracker
  - `.specs/BACKLOG.md` — deferred items list
 
  Next steps:
  - Add features to `BACKLOG.md` with priority grades (P0–P3)
- - Pull features into a version in `ROADMAP.md` when ready to scope
- - Use `/spec-new <feature-name> <version>` to create a spec
+ - Pull features into a milestone in `MILESTONES.md` when ready to scope
+ - Use `/spec-new <feature-name>` to create a spec (domain is inferred)
  - Use `/spec-refine <feature-name>` to validate before implementation
- - After implementing, use `/spec-update` to close the loop
+ - After implementing, use `/spec-review <feature-name>` to verify against the spec
+ - Then use `/spec-update` to close the loop
  - Use `/spec-check` to audit spec health at any time
  ```
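The create-if-absent behavior described in Steps 1–4 can be sketched as follows (the placeholder file contents stand in for the real `references/` templates):

```python
from pathlib import Path

def init_specs(root: Path) -> list[str]:
    """Create .specs/ with MILESTONES.md and BACKLOG.md, never overwriting."""
    created = []
    specs = root / ".specs"
    specs.mkdir(exist_ok=True)
    for name, placeholder in [
        ("MILESTONES.md", "# Milestones\n"),  # real content: references/milestones-template.md
        ("BACKLOG.md", "# Backlog\n"),        # real content: references/backlog-template.md
    ]:
        target = specs / name
        if not target.exists():               # create only the missing files
            target.write_text(placeholder)
            created.append(name)
    return created
```

Running it twice is safe: the second call finds both files present and creates nothing, matching the "Do NOT recreate or overwrite anything" rule.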
 
@@ -87,7 +88,7 @@ Next steps:
  ## Ambiguity Policy
 
  - If the user runs this in a workspace root with multiple projects, ask which project to initialize.
- - If `.specs/` exists but is missing ROADMAP.md or BACKLOG.md, offer to create only the missing files.
+ - If `.specs/` exists but is missing MILESTONES.md or BACKLOG.md, offer to create only the missing files.
 
  ---
 
@@ -95,5 +96,5 @@ Next steps:
 
  | File | Contents |
  |------|----------|
- | `references/roadmap-template.md` | Starter ROADMAP with version table format |
+ | `references/milestones-template.md` | Starter MILESTONES with milestone table format |
  | `references/backlog-template.md` | Starter BACKLOG with item format |
@@ -1,6 +1,6 @@
  # Backlog
 
- Priority-graded feature and infrastructure backlog. Items are pulled into versions when ready to scope and spec. See `ROADMAP.md` for the versioning workflow.
+ Priority-graded feature and infrastructure backlog. Items are pulled into milestones when ready to scope and spec. See `MILESTONES.md` for the milestone workflow.
 
  ## P0 — High Priority