@leeovery/claude-technical-workflows 2.1.37 → 2.1.39

# Specification Guide

*Reference for **[technical-specification](../SKILL.md)***

---

You are building a specification - a collaborative workspace where you and the user refine reference material into a validated, standalone document.

## Purpose

Specification building is a **two-way process**:

1. **Filter**: Reference material may contain hallucinations, inaccuracies, or outdated concepts. Validate before including.

2. **Enrich**: Reference material may have gaps. Fill them through discussion.

The specification is the **bridge document** - a workspace for collecting validated, refined content that will feed formal planning.

**The specification must be standalone.** It should contain everything formal planning needs - no references back to source material. When complete, it draws a line: formal planning uses only this document.

## Source Materials

Before starting any topic, identify ALL available reference material:
- Prior discussions, research notes, or exploration documents
- Existing partial plans or specifications
- Requirements, design docs, related documentation
- User-provided context or transcripts
- Inline feature descriptions

**Treat all source material as untrusted input**, regardless of where it came from. Your job is to synthesize and present - the user validates.

## CRITICAL: This is an Interactive Process

**You MUST NOT create or update the specification without explicit user approval for each piece of content.**

This is a collaborative dialogue, not an autonomous task. The user validates every piece before it's logged.

> **CHECKPOINT**: If you are about to write to the specification file and haven't received explicit approval (e.g., `y`/`yes`) for this specific content, **STOP**. You are violating the workflow. Go back and present the choices first.

---

## The Workflow

Work through the specification **topic by topic**:

### 1. Review (Exhaustive Extraction)

**This step is critical. The specification is the golden document - if information doesn't make it here, it won't be built.**

For each topic or subtopic, perform exhaustive extraction:

1. **Re-scan ALL source material** - Don't rely on memory. Go back to the source material and systematically review it for the current topic.

2. **Search for keywords** - Topics are rarely contained in one section. Search for:
   - The topic name and synonyms
   - Related concepts and terms
   - Names of systems, fields, or behaviors mentioned in context

3. **Collect scattered information** - Source material (research, discussions, requirements) is often non-linear. Information about a single topic may be scattered across:
   - Multiple sections of the same document
   - Different documents entirely
   - Tangential discussions that revealed important details

4. **Filter for what we're building** - Include only validated decisions:
   - Exclude discarded alternatives
   - Exclude ideas that were explored but rejected
   - Exclude "maybes" that weren't confirmed
   - Include only what the user has decided to build

**Why this matters:** The specification is the single source of truth for planning. Planning will not reference prior source material - only this document. Missing a detail here means that detail doesn't get implemented.

### 2. Synthesize and Present

Present your understanding to the user **in the format it would appear in the specification**:

> *Output the next fenced block as markdown (not a code block):*

```
Here's what I understand about [topic] based on the reference material. This is exactly what I'll write into the specification:

[content as rendered markdown]
```

Then, **separately from the content above** (clear visual break):

> *Output the next fenced block as markdown (not a code block):*

```
· · · · · · · · · · · ·
**To proceed:**
- **`y`/`yes`** — Approved. I'll add the above to the specification **verbatim** (exactly as shown, no modifications).
- **Or tell me what to change.**
· · · · · · · · · · · ·
```

Content and choices must be visually distinct (not run together).

> **CHECKPOINT**: After presenting, you MUST STOP and wait for the user's response. Do NOT proceed to logging. Do NOT present the next topic. WAIT.

### 3. Discuss and Refine

Work through the content together:
- Validate what's accurate
- Remove what's wrong, outdated, or hallucinated
- Add what's missing through brief discussion
- **Course correct** based on knowledge from subsequent project work
- Refine wording and structure

This is a **human-level conversation**, not form-filling. The user brings context from across the project that may not be in the reference material - decisions from other topics, implications from later work, or knowledge that can't all fit in context.

### 4. STOP - Wait for Explicit Approval

**DO NOT PROCEED TO LOGGING WITHOUT EXPLICIT USER APPROVAL.**

**What counts as approval:**
- **`y`/`yes`** - the standard confirmation you present as a choice
- Or equivalent explicit confirmation: "Approved", "Add it", "That's good"

**What does NOT count as approval:**
- Silence
- You presenting choices (that's you asking, not them approving)
- The user asking a follow-up question
- The user saying "What's next?" or "Continue"
- The user making a minor comment without explicit approval
- ANY response that isn't explicit confirmation

**If you are uncertain, ASK:** "Ready to log it, or do you want to change something?"
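
As an illustrative sketch only - real approval is judged in conversation, not by string matching - the distinction these rules draw could be approximated as a predicate. The function name and phrase list here are hypothetical:

```python
# Hypothetical sketch of the approval rules. Actual approval is a
# conversational judgment; this only illustrates the distinction:
# explicit confirmation counts, follow-ups and "continue" never do.
APPROVALS = {"y", "yes", "approved", "add it", "that's good"}

def is_explicit_approval(message: str) -> bool:
    """True only for explicit confirmation - never silence or questions."""
    text = message.strip().lower().rstrip(".!")
    return text in APPROVALS
```

Note that "Continue" and "What's next?" fail this check by design - they are requests to keep talking, not approval to write.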

> **CHECKPOINT**: If you are about to write to the specification and the user's last message was not explicit approval, **STOP**. You are violating the workflow. Present the choices again.

### 5. Log When Approved

Only after receiving explicit approval do you write to the specification - **verbatim** as presented and approved. No silent modifications.

### 6. Repeat

Move to the next topic.

## Context Resurfacing

When you discover information that affects **already-logged topics**, resurface them. Even mid-discussion - interrupt, flag what you found, and discuss whether it changes anything.

If it does: summarize what's changing in the chat, then re-present the full updated topic. The summary is for discussion only - the specification just gets the clean replacement. **Standard workflow applies: user approves before you update.**

> **CHECKPOINT**: Even when resurfacing content, you MUST NOT update the specification until the user explicitly approves the change. Present the updated version, wait for approval, then update.

This is encouraged. Better to resurface and confirm "already covered" than let something slip past.

## The Specification Document

> **CHECKPOINT**: You should NOT be creating or writing to this file unless you have explicit user approval for specific content. If you're about to create this file with content you haven't presented and had approved, **STOP**. That violates the workflow.

Create `docs/workflow/specification/{topic}/specification.md`

This is a single file per topic. Structure is **flexible** - organize around phases and subject matter, not rigid sections. This is a working document.

Suggested skeleton:

```markdown
---
topic: {topic-name}
status: in-progress
type: feature
date: YYYY-MM-DD # Use today's actual date
review_cycle: 0
finding_gate_mode: gated
sources:
  - name: discussion-one
    status: incorporated
  - name: discussion-two
    status: pending
---

# Specification: [Topic Name]

## Specification

[Validated content accumulates here, organized by topic/phase]

---

## Working Notes

[Optional - capture in-progress discussion if needed]
```

### Frontmatter Fields

- **topic**: Kebab-case identifier matching the directory name
- **status**: `in-progress` (building) or `concluded` (complete)
- **type**: `feature` (something to build) or `cross-cutting` (patterns/policies)
- **date**: Last updated date
- **review_cycle**: Current review cycle number (starts at 0, incremented each review cycle). A missing field is treated as 0.
- **sources**: Array of source discussions with incorporation status (see below)

### Sources and Incorporation Status

**All specifications must track their sources**, even when built from a single discussion. This enables proper tracking when additional discussions are later added to the same grouping.

When a specification is built from discussion(s), track each source with its incorporation status:

```yaml
sources:
  - name: auth-flow
    status: incorporated
  - name: api-design
    status: pending
```

**Status values:**
- `pending` - Source has been selected for this specification but content extraction is not complete
- `incorporated` - Source content has been fully extracted and woven into the specification

**When to update source status:**

1. **When creating the specification**: All sources start as `pending`
2. **After completing exhaustive extraction from a source**: Mark that source as `incorporated`
3. **When adding a new source to an existing spec**: Add it with `status: pending`

**How to determine if a source is incorporated:**

A source is `incorporated` when you have:
- Performed exhaustive extraction (reviewed ALL content in the source for relevant material)
- Presented and logged all relevant content from that source
- No more content from that source needs to be extracted

**Important**: The specification's overall `status: concluded` should only be set when:
- All sources are marked as `incorporated`
- Both review phases are complete
- User has signed off

If a new source is added to a concluded specification (via grouping analysis), the specification effectively needs updating - even if the file still says `status: concluded`, the presence of `pending` sources indicates work remains.
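
That last check can be sketched as a simple predicate (illustrative only - the helper name is hypothetical; the frontmatter shape matches the YAML above):

```python
# Illustrative sketch: a concluded spec with pending sources still needs work.
def needs_more_extraction(frontmatter: dict) -> bool:
    """True when any tracked source has not been fully incorporated."""
    sources = frontmatter.get("sources", [])
    return any(s.get("status") == "pending" for s in sources)

spec = {
    "status": "concluded",
    "sources": [
        {"name": "auth-flow", "status": "incorporated"},
        {"name": "api-design", "status": "pending"},  # added via grouping
    ],
}
# Despite status: concluded, the pending source means work remains.
```

The point: `pending` sources override `status: concluded` when deciding whether extraction work remains.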

## Specification Types

The `type` field distinguishes specifications that result in standalone implementation work from those that inform how other work is done.

### Feature Specifications (`type: feature`)

Feature specifications describe something to **build** - a concrete piece of functionality with its own implementation plan.

**Examples:**
- User authentication system
- Order processing pipeline
- Notification service
- Dashboard analytics

**Characteristics:**
- Results in a dedicated implementation plan
- Has concrete deliverables (code, APIs, UI)
- Can be planned with phases, tasks, and acceptance criteria
- Progress is measurable ("the feature is done")

**This is the default type.** If not specified, assume `feature`.

### Cross-Cutting Specifications (`type: cross-cutting`)

Cross-cutting specifications describe **patterns, policies, or architectural decisions** that inform how features are built. They don't result in standalone implementation - instead, they're referenced by feature specifications and plans.

**Examples:**
- Caching strategy
- Rate limiting policy
- Error handling conventions
- Logging and observability standards
- API versioning approach
- Security patterns

**Characteristics:**
- Does NOT result in a dedicated implementation plan
- Defines "how to do things" rather than "what to build"
- Referenced by multiple feature specifications
- Implementation happens within features that apply these patterns
- No standalone "done" state - the patterns are applied across features

### Why This Matters

Cross-cutting specifications go through the same Research → Discussion → Specification phases. The decisions are just as important to validate and document. The difference is what happens after:

- **Feature specs** → Planning → Implementation → Review
- **Cross-cutting specs** → Referenced by feature plans → Applied during feature implementation

When planning a feature, the planning process surfaces relevant cross-cutting specifications as context. This ensures that a "user authentication" plan incorporates the validated caching strategy and error handling conventions.

### Determining the Type

Ask: **"Is there a standalone thing to build, or does this inform how we build other things?"**

| Question | Feature | Cross-Cutting |
|----------|---------|---------------|
| Can you demo it when done? | Yes - "here's the login page" | No - it's invisible infrastructure |
| Does it have its own UI/API/data? | Yes | No - lives within other features |
| Can you plan phases and tasks for it? | Yes | Tasks would be "apply X to feature Y" |
| Is it used by one feature or many? | Usually one | By definition, multiple |

**Edge cases:**
- A "caching service" that provides shared caching infrastructure → **Feature** (you're building something)
- "How we use caching across the app" → **Cross-cutting** (policy/pattern)
- Authentication system → **Feature**
- Authentication patterns and security requirements → **Cross-cutting**

## Critical Rules

**EXPLICIT APPROVAL REQUIRED FOR EVERY WRITE**: You MUST NOT write to the specification until the user has explicitly approved. "Presenting" is not approval. "Asking a question" is not approval. You need explicit confirmation. If uncertain, ASK. This rule is non-negotiable.

> **CHECKPOINT**: Before ANY write operation, ask yourself: "Did the user explicitly approve this specific content?" If the answer is no or uncertain, STOP and ask.

**Exhaustive extraction is non-negotiable**: Before presenting any topic, re-scan source material. Search for keywords. Collect scattered information. The specification is the golden document - planning uses only this. If you miss something, it doesn't get built.

**Log verbatim**: When approved, write exactly what was presented - no silent modifications.

**Commit frequently**: Commit at natural breaks and before any context refresh. Context refresh = lost work.

**Trust nothing without validation**: Synthesize and present, but never assume source material is correct.

## Dependencies Section

At the end of every specification, add a **Dependencies** section that identifies **prerequisites** - systems that must exist before this feature can be built.

The same workflow applies: present the dependencies section for approval, then log verbatim when approved.

### What Dependencies Are

Dependencies are **blockers** - things that must exist before implementation can begin.

Think of it like building a house: if you're specifying the roof, the walls are a dependency. You cannot build a roof without walls to support it. The walls must exist first.

**The test**: "If system X doesn't exist, can we still build this feature?"
- If **no** → X is a dependency
- If **yes** → X is not a dependency (even if the systems work together)

### What Dependencies Are NOT

**Do not list systems just because they:**
- Work together with this feature
- Share data or communicate with this feature
- Are related or in the same domain
- Would be nice to have alongside this feature

Two systems that cooperate are not necessarily dependent. A notification system and a user preferences system might work together (preferences control notification settings), but if you can build the notification system with hardcoded defaults and add preference integration later, then preferences are not a dependency.

### How to Identify Dependencies

Review the specification for cases where implementation is **literally blocked** without another system:

- **Data that must exist first** (e.g., "FK to users" → User model must exist, you can't create the FK otherwise)
- **Events you consume** (e.g., "listens for payment.completed" → Payment system must emit this event)
- **APIs you call** (e.g., "fetches inventory levels" → Inventory API must exist)
- **Infrastructure requirements** (e.g., "stores files in S3" → S3 bucket configuration must exist)

**Do not include** systems where you merely reference their concepts or where integration could be deferred.
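
The test above is essentially a filter. A minimal sketch, with hypothetical data shapes and system names:

```python
# Hypothetical sketch of the dependency test: a system is a dependency
# only if the feature cannot be built without it.
candidates = [
    {"system": "User model", "buildable_without_it": False},  # FK to users
    {"system": "Preferences", "buildable_without_it": True},  # defaults suffice
]

# Only true blockers survive; cooperating-but-deferrable systems do not.
dependencies = [c["system"] for c in candidates if not c["buildable_without_it"]]
```

Here only the User model survives the filter - the preferences system cooperates with the feature but does not block it.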

### Categorization

**Required**: Implementation cannot start without this. The code literally cannot be written.

**Partial Requirement**: Only specific elements are needed, not the full system. Note the minimum scope that unblocks implementation.

### Format

## Dependencies

Prerequisites that must exist before implementation can begin:

### Required

| Dependency | Why Blocked | What's Unblocked When It Exists |
|------------|-------------|---------------------------------|
| **[System Name]** | [Why implementation literally cannot proceed] | [What parts of this spec can then be built] |

### Partial Requirement

| Dependency | Why Blocked | Minimum Scope Needed |
|------------|-------------|----------------------|
| **[System Name]** | [Why implementation cannot proceed] | [Specific subset that unblocks us] |

### Notes

- [What can be built independently, without waiting]
- [Workarounds if dependencies don't exist yet]

### Purpose

This section feeds into the planning phase, where dependencies become blocking relationships between epics/phases. It helps sequence implementation correctly.

**Key distinction**: This is about sequencing what must come first, not mapping out what works together. A feature may integrate with many systems - only list the ones that block you from starting.

## Final Specification Review

After documenting dependencies, perform a **final comprehensive review** in two phases:

1. **Phase 1 - Input Review**: Compare the specification against all source material to catch anything missed from discussions, research, and requirements
2. **Phase 2 - Gap Analysis**: Review the specification as a standalone document for gaps, ambiguity, and completeness

**Why this matters**: The specification is the golden document. Plans are built from it, and those plans inform implementation. If a detail isn't in the specification, it won't make it to the plan, and therefore won't be built. Worse, the implementation agent may hallucinate to fill gaps, potentially getting it wrong. The goal is a specification robust enough that an agent or human could pick it up, create plans, break it into tasks, and write the code.

### Review Tracking Files

To ensure analysis isn't lost during context refresh, create tracking files that capture your findings. These files persist your analysis so work can continue across sessions.

**Location**: Store tracking files in the specification topic directory (`docs/workflow/specification/{topic}/`), cycle-numbered:
- `review-input-tracking-c{N}.md` — Phase 1 findings for cycle N
- `review-gap-analysis-tracking-c{N}.md` — Phase 2 findings for cycle N

Tracking files are **never deleted**. After all findings are processed, mark `status: complete`. Previous cycles' files persist as analysis history.
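
A minimal sketch of the naming scheme (the helper function is hypothetical; only the filename pattern comes from this guide):

```python
# Hypothetical helper illustrating the cycle-numbered naming scheme above.
def tracking_filename(phase: str, cycle: int) -> str:
    """phase is 'input' or 'gap-analysis'; cycle is the current review cycle."""
    return f"review-{phase}-tracking-c{cycle}.md"

# Each review cycle produces a fresh pair; earlier cycles' files are kept.
```

Because files are cycle-numbered rather than overwritten, cycle 1's analysis survives as history when cycle 2 begins.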

**Format**:
```markdown
---
status: in-progress | complete
created: YYYY-MM-DD
cycle: {N}
phase: Input Review | Gap Analysis
topic: [Topic Name]
---

# Review Tracking: [Topic Name] - [Phase]

## Findings

### 1. [Brief Title]

**Source**: [Where this came from - file/section reference, or "Specification analysis" for Phase 2]
**Category**: Enhancement to existing topic | New topic | Gap/Ambiguity
**Affects**: [Which section(s) of the specification]

**Details**:
[Explanation of what was found and why it matters]

**Proposed Addition**:
[What you would add to the specification - leave blank until discussed]

**Resolution**: Pending | Approved | Adjusted | Skipped
**Notes**: [Any discussion notes or adjustments made]

---

### 2. [Next Finding]
...
```

**Workflow with Tracking Files**:
1. Complete your analysis and create the tracking file with all findings
2. Present the summary to the user (from the tracking file)
3. Work through items one at a time:
   - Present the item
   - Discuss and refine
   - Get approval
   - Log to specification
   - Update the tracking file: mark resolution, add notes
4. After all items are resolved, mark the tracking file `status: complete`
5. Proceed to the next phase (or re-loop prompt)

**Why tracking files**: If context refreshes mid-review, you can read the tracking file and continue where you left off. The tracking file shows which items are resolved and which remain. This is especially important when reviews surface 10-20 items that need individual discussion.

---

### Review Cycle Gate

Each review cycle runs Phase 1 (Input Review) + Phase 2 (Gap Analysis) as a pair. Always start here.

Increment `review_cycle` in the specification frontmatter and commit.

→ If `review_cycle <= 3`, proceed directly to **Phase 1: Input Review**.

If `review_cycle > 3`:

**Do NOT skip review autonomously.** This gate is an escape hatch for the user — not a signal to stop. The expected default is to continue running review until no issues are found. Present the choice and let the user decide.

**Review cycle {N}**

Review has run {N-1} times so far. You can continue (recommended if issues were still found last cycle) or skip to completion.

> *Output the next fenced block as markdown (not a code block):*

```
· · · · · · · · · · · ·
- **`p`/`proceed`** — Continue review *(default)*
- **`s`/`skip`** — Skip review, proceed to completion
· · · · · · · · · · · ·
```

**STOP.** Wait for user choice. You MUST NOT choose on the user's behalf.

- **`proceed`**: → Continue to **Phase 1: Input Review**.
- **`skip`**: → Jump to **Completion**.
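
The gate logic reduces to a small decision function. An illustrative sketch only - the function name and return values are hypothetical, and the "ask the user" branch must end in a real STOP:

```python
# Hypothetical sketch of the review cycle gate. Cycles 1-3 proceed
# automatically; beyond that, the USER decides - never the agent.
def review_gate(review_cycle: int) -> str:
    cycle = review_cycle + 1  # increment the frontmatter value (and commit)
    if cycle <= 3:
        return "phase-1-input-review"
    return "ask-user"  # present proceed/skip, then STOP for the user's choice
```

Note the asymmetry: the gate never returns "skip" on its own - skipping is only reachable through the user's explicit choice.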

---

### Phase 1: Input Review

Compare the specification against all source material to catch anything that was missed from discussions, research, and requirements.

#### The Review Process

1. **Re-read ALL source material** - Go back to every source document, discussion, research note, and reference. Don't rely on memory.

2. **Compare systematically** - For each piece of source material:
   - What topics does it cover?
   - Are those topics fully captured in the specification?
   - Are there details, edge cases, or decisions that didn't make it?

3. **Search for the forgotten** - Look specifically for:
   - Edge cases mentioned in passing
   - Constraints or requirements buried in tangential discussions
   - Technical details that seemed minor at the time
   - Decisions made early that may have been overshadowed
   - Error handling, validation rules, or boundary conditions
   - Integration points or data flows mentioned but not elaborated

4. **Collect what you find** - When you discover potentially missed content, note it for your summary. You'll present all findings together after the review is complete (see "Presenting Review Findings" below).

   Categorize each finding:

   **Enhancing an existing topic** - Details that belong in an already-documented section. Note which section it would enhance.

   **An entirely missed topic** - Something that warrants its own section but was glossed over. New topics get added at the end.

5. **Never fabricate** - Every item you flag must trace back to specific source material. If you can't point to where it came from, don't suggest it. The goal is to catch missed content, not invent new requirements.

6. **User confirms before inclusion** - Standard workflow applies: present proposed additions, get approval, then log verbatim.

7. **Surface potential gaps** - After reviewing source material, consider whether the specification has gaps that the sources simply didn't address. These might be:
   - Edge cases that weren't discussed
   - Error scenarios not covered
   - Integration points that seem implicit but aren't specified
   - Behaviors that are ambiguous without clarification

   Collect these alongside the missed content from step 4. They'll be presented together in the summary (see below).

   This should be infrequent - most gaps will be caught from source material. But occasionally the sources themselves have blind spots worth surfacing.

#### Presenting Review Findings

After completing your review (steps 1-7):

1. **Create the tracking file** - Write all findings to `review-input-tracking-c{N}.md` in the specification topic directory (where N is the current review cycle)
2. **Commit the tracking file** - This ensures it survives context refresh
3. **Present findings** to the user in two stages:

**Stage 1: Summary of All Findings**

Present a numbered summary of everything you found (from your tracking file):

> *Output the next fenced block as markdown (not a code block):*

```
I've completed my final review against all source material. I found [N] items:

1. **[Brief title]**
   [2-4 line explanation: what was missed, where it came from, what it affects]

2. **[Brief title]**
   [2-4 line explanation]

3. **[Brief title]**
   [2-4 line explanation]

Let's work through these one at a time, starting with #1.
```

Each item should have enough context that the user understands what they're about to discuss - not just a label, but clarity on what was missed and why it matters.

**Stage 2: Process One Item at a Time**

For each item, present what you found, where it came from (source reference), and what you propose to add.

> *Output the next fenced block as markdown (not a code block):*

```
{proposed content for this review item}
```

Check `finding_gate_mode` in the specification frontmatter.

#### If `finding_gate_mode: auto`

Auto-approve: log verbatim, update tracking file (Resolution: Approved), commit.

> *Output the next fenced block as a code block:*

```
Item {N} of {total}: {Brief Title} — approved. Added to specification.
```

→ Proceed to the next item. After all items processed, continue to **Completing Phase 1**.

#### If `finding_gate_mode: gated`

1. **Discuss** if needed - clarify ambiguities, answer questions, refine the content
2. **Present for approval** - show as rendered markdown (not a code block) exactly what will be written to the specification. Then, separately, show the choices:

   > *Output the next fenced block as markdown (not a code block):*

   ```
   · · · · · · · · · · · ·
   **To proceed:**
   - **`y`/`yes`** — Approved. I'll add the above to the specification **verbatim**.
   - **`a`/`auto`** — Approve this and all remaining findings automatically
   - **Or tell me what to change.**
   · · · · · · · · · · · ·
   ```

   Content and choices must be visually distinct.

3. **Wait for explicit approval** - same rules as always: `y`/`yes` or equivalent before writing
4. **Log verbatim** when approved
5. **Update tracking file** - Mark the item's resolution (Approved/Adjusted/Skipped) and add any notes
6. **If user chose `auto`**: update `finding_gate_mode: auto` in the spec frontmatter, then process all remaining items using the auto-mode flow above → After all processed, continue to **Completing Phase 1**.
7. **Move to the next item**: "Moving to #2: [Brief title]..."

> **CHECKPOINT**: Each review item requires the full present → approve → log cycle (unless `finding_gate_mode: auto`). Do not batch multiple items together. Do not proceed to the next item until the current one is resolved (approved, adjusted, or explicitly skipped by the user).

For potential gaps (items not in source material), you're asking questions rather than proposing content. If the user wants to address a gap, discuss it, then present what you'd add for approval.

#### What You're NOT Doing in Phase 1

- **Not inventing requirements** - When surfacing gaps not in sources, you're asking questions, not proposing answers
- **Not assuming gaps need filling** - If something isn't in the sources, it may have been intentionally omitted
- **Not padding the spec** - Only add what's genuinely missing and relevant
- **Not re-litigating decisions** - If something was discussed and rejected, it stays rejected

#### Completing Phase 1

When you've:
- Systematically reviewed all source material for missed content
- Addressed any discovered gaps with the user
- Surfaced any potential gaps not covered by sources (and resolved them)
- Updated the tracking file with all resolutions

626
- **Mark the Phase 1 tracking file as complete** — Set `status: complete` in `review-input-tracking-c{N}.md`. Do not delete it; it persists as analysis history.
627
-
628
- Inform the user Phase 1 is complete and proceed to Phase 2: Gap Analysis.
629
-
630
- ---
631
-
632
- ### Phase 2: Gap Analysis
-
- At this point, you've captured everything from your source materials. Phase 2 reviews the **specification as a standalone document** - looking *inward* at what's been specified, not outward at what else the product might need.
-
- **Purpose**: Ensure that *within the defined scope*, the specification flows correctly, has sufficient detail, and leaves nothing open to interpretation or assumption. This might be a full product spec or a single feature - the scope is whatever the inputs defined. Your job is to verify that within those boundaries, an agent or human could create plans, break them into tasks, and write code without having to guess.
-
- **Key distinction**: You're not asking "what features are missing from this product?" You're asking "within what we've decided to build, is everything clear and complete?"
-
- #### What to Look For
-
- Review the specification systematically for gaps *within what's specified*:
-
- 1. **Internal Completeness**
-    - Workflows that start but don't show how they end
-    - States or transitions mentioned but not fully defined
-    - Behaviors referenced elsewhere but never specified
-    - Default values or fallback behaviors left unstated
-
- 2. **Insufficient Detail**
-    - Areas where an implementer would have to guess
-    - Sections that are too high-level to act on
-    - Missing error handling for scenarios the spec introduces
-    - Validation rules implied but not defined
-    - Boundary conditions for limits the spec mentions
-
- 3. **Ambiguity**
-    - Vague language that could be interpreted multiple ways
-    - Terms used inconsistently across sections
-    - "It should" without defining what "it" is
-    - Implicit assumptions that aren't stated
-
- 4. **Contradictions**
-    - Requirements that conflict with each other
-    - Behaviors defined differently in different sections
-    - Constraints that make other requirements impossible
-
- 5. **Edge Cases Within Scope**
-    - For the behaviors specified, what happens at boundaries?
-    - For the inputs defined, what happens when they're empty or malformed?
-    - For the integrations described, what happens when they're unavailable?
-
- 6. **Planning Readiness**
-    - Could you break this into clear tasks?
-    - Would an implementer know what to build?
-    - Are acceptance criteria implicit or explicit?
-    - Are there sections that would force an implementer to make design decisions?
-
- #### The Review Process
-
- 1. **Read the specification end-to-end** - Not scanning, but carefully reading as if you were about to implement it
-
- 2. **For each section, ask**:
-    - Is this internally complete? Does it define everything it references?
-    - Is this clear? Would an implementer know exactly what to build?
-    - Is this consistent? Does it contradict anything else in the spec?
-    - Are there areas left open to interpretation or assumption?
-
- 3. **Collect findings** - Note each gap, ambiguity, or area needing clarification
-
- 4. **Prioritize** - Focus on issues that would block or confuse implementation of what's specified:
-    - **Critical**: Would prevent implementation or cause incorrect behavior
-    - **Important**: Would require implementer to guess or make design decisions
-    - **Minor**: Polish or clarification that improves understanding
-
- 5. **Create the tracking file** - Write findings to `review-gap-analysis-tracking-c{N}.md` in the specification topic directory (where N is the current review cycle)
-
- 6. **Commit the tracking file** - Ensures it survives context refresh
-
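- The internal layout of the tracking file is not prescribed by this guide. One possible skeleton (the headings and fields other than `status` are illustrative assumptions):
-
- ```markdown
- ---
- status: in-progress
- ---
-
- ## 1. {Brief title} (Critical)
- - Gap: {what's missing or unclear}
- - Why it matters: {implementation impact}
- - Resolution: {Approved / Adjusted / Skipped, filled in as the item is processed}
- ```
-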
- #### Presenting Gap Analysis Findings
-
- Follow the same two-stage presentation as Phase 1:
-
- **Stage 1: Summary**
-
- > *Output the next fenced block as markdown (not a code block):*
-
- ```
- I've completed the gap analysis of the specification. I found [N] items:
-
- 1. **[Brief title]** (Critical/Important/Minor)
-    [2-4 line explanation: what the gap is, why it matters for implementation]
-
- 2. **[Brief title]** (Critical/Important/Minor)
-    [2-4 line explanation]
-
- Let's work through these one at a time, starting with #1.
- ```
-
- **Stage 2: Process One Item at a Time**
-
- For each item, present what's missing or unclear, what questions an implementer would have, and what you propose to add.
-
- > *Output the next fenced block as markdown (not a code block):*
-
- ```
- {proposed content for this review item}
- ```
-
- Check `finding_gate_mode` in the specification frontmatter.
-
- #### If `finding_gate_mode: auto`
-
- Auto-approve: log verbatim, update tracking file (Resolution: Approved), commit.
-
- > *Output the next fenced block as a code block:*
-
- ```
- Item {N} of {total}: {Brief Title} — approved. Added to specification.
- ```
-
- → Proceed to the next item. After all items processed, continue to **Completing Phase 2**.
-
- #### If `finding_gate_mode: gated`
-
- 1. **Discuss** - work with the user to determine the correct specification content
- 2. **Present for approval** - show as rendered markdown (not a code block) exactly what will be written. Then, separately, show the choices:
-
- > *Output the next fenced block as markdown (not a code block):*
-
- ```
- · · · · · · · · · · · ·
- **To proceed:**
- - **`y`/`yes`** — Approved. I'll add the above to the specification **verbatim**.
- - **`a`/`auto`** — Approve this and all remaining findings automatically
- - **Or tell me what to change.**
- · · · · · · · · · · · ·
- ```
-
- Content and choices must be visually distinct.
-
- 3. **Wait for explicit approval**
- 4. **Log verbatim** when approved
- 5. **Update tracking file** - Mark resolution and add notes
- 6. **If user chose `auto`**: update `finding_gate_mode: auto` in the spec frontmatter, then process all remaining items using the auto-mode flow above. After all are processed, continue to **Completing Phase 2**.
- 7. **Move to next item**
-
- > **CHECKPOINT**: Same rules apply - each item requires explicit approval before logging (unless `finding_gate_mode: auto`). No batching.
-
- #### What You're NOT Doing in Phase 2
-
- - **Not expanding scope** - You're looking for gaps *within* what's specified, not suggesting features the product should have. A feature spec for "user login" doesn't need you to ask about password reset if it wasn't in scope.
- - **Not gold-plating** - Only flag gaps that would actually impact implementation of what's specified
- - **Not second-guessing decisions** - The spec reflects validated decisions; you're checking for clarity and completeness, not re-opening debates
- - **Not being exhaustive for its own sake** - Focus on what matters for implementing *this* specification
-
- #### Completing Phase 2
-
- When you've:
- - Reviewed the specification for completeness, clarity, and implementation readiness
- - Addressed all critical and important gaps with the user
- - Updated the tracking file with all resolutions
-
- **Mark the Phase 2 tracking file as complete** — Set `status: complete` in `review-gap-analysis-tracking-c{N}.md`. Do not delete it; it persists as analysis history.
-
- Both review phases for this cycle are now complete.
-
- ---
-
- ### Re-Loop Prompt
-
- After Phase 2 completes, check whether either phase surfaced findings in this cycle.
-
- #### If no findings were surfaced in either phase of this cycle
-
- → Skip the re-loop prompt and proceed directly to **Completion** (nothing to re-analyse).
-
- #### If findings were surfaced
-
- Do not skip review autonomously — present the choice and let the user decide.
-
- > *Output the next fenced block as a code block:*
-
- ```
- Review cycle {N}
-
- Review has run {N-1} times so far.
- @if(finding_gate_mode = auto and review_cycle >= 5)
- Auto-review has not converged after 5 cycles — escalating for human review.
- @endif
- ```
-
- Check `finding_gate_mode` and `review_cycle` in the specification frontmatter.
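-
- For example, mid-review the relevant frontmatter fields might read as follows (a hypothetical state; the field names match those set in Step 4 of Completion, while the `status` value here is an assumption):
-
- ```markdown
- ---
- topic: {topic-name}
- status: in-progress
- review_cycle: 3
- finding_gate_mode: auto
- ---
- ```
-
- Here `review_cycle` is 3 and the mode is `auto`, so the auto branch below applies and another cycle runs without prompting.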
-
- #### If `finding_gate_mode: auto` and `review_cycle < 5`
-
- > *Output the next fenced block as a code block:*
-
- ```
- Review cycle {N} complete — findings applied. Running follow-up cycle.
- ```
-
- → Return to the **Review Cycle Gate**.
-
- #### If `finding_gate_mode: auto` and `review_cycle >= 5`
-
- → Present the re-loop prompt below.
-
- #### If `finding_gate_mode: gated`
-
- > *Output the next fenced block as markdown (not a code block):*
-
- ```
- · · · · · · · · · · · ·
- - **`r`/`reanalyse`** — Run another review cycle (Phase 1 + Phase 2)
- - **`p`/`proceed`** — Proceed to completion
- · · · · · · · · · · · ·
- ```
-
- **STOP.** Wait for user response.
-
- #### If reanalyse
-
- → Return to the **Review Cycle Gate** to begin a fresh cycle.
-
- #### If proceed
-
- → Continue to **Completion**.
-
- ---
-
- ## Completion
-
- ### Step 1: Determine Specification Type
-
- Before asking for sign-off, assess whether this is a **feature** or **cross-cutting** specification:
-
- **Feature specification** - Something to build:
- - Has concrete deliverables (code, APIs, UI)
- - Can be planned with phases, tasks, acceptance criteria
- - Results in a standalone implementation
-
- **Cross-cutting specification** - Patterns/policies that inform other work:
- - Defines "how to do things" rather than "what to build"
- - Will be referenced by multiple feature specifications
- - Implementation happens within features that apply these patterns
-
- Present your assessment to the user:
-
- > *Output the next fenced block as markdown (not a code block):*
-
- ```
- This specification appears to be a **[feature/cross-cutting]** specification.
-
- [Brief rationale - e.g., "It defines a caching strategy that will inform how multiple features handle data retrieval, rather than being a standalone piece of functionality to build."]
-
- - **Feature specs** proceed to planning and implementation
- - **Cross-cutting specs** are referenced by feature plans but don't have their own implementation plan
-
- Does this assessment seem correct?
- ```
-
- Wait for user confirmation before proceeding.
-
- ### Step 2: Verify Tracking Files Complete
-
- Before proceeding to sign-off, confirm that all review tracking files across all cycles have `status: complete`:
-
- - `review-input-tracking-c{N}.md` — should be marked complete after each Phase 1
- - `review-gap-analysis-tracking-c{N}.md` — should be marked complete after each Phase 2
-
- If any tracking file still shows `status: in-progress`, verify its review work is actually finished, then mark it complete.
-
- > **CHECKPOINT**: Do not proceed to sign-off if any tracking files still show `status: in-progress`. They indicate incomplete review work.
-
- ### Step 3: Sign-Off
-
- Once the type is confirmed and tracking files are complete:
-
- > *Output the next fenced block as markdown (not a code block):*
-
- ```
- · · · · · · · · · · · ·
- - **`y`/`yes`** — Mark the specification as concluded
- - **Comment** — Add context before concluding
- · · · · · · · · · · · ·
- ```
-
- **STOP.** Wait for user response.
-
- #### If comment
-
- Discuss the user's context, apply any changes, then re-present the sign-off prompt above.
-
- #### If yes
-
- → Proceed to **Step 4**.
-
- ### Step 4: Update Frontmatter
-
- After user confirms, update the specification frontmatter:
-
- ```markdown
- ---
- topic: {topic-name}
- status: concluded
- type: feature # or cross-cutting
- date: YYYY-MM-DD # Use today's actual date
- review_cycle: {N}
- finding_gate_mode: gated
- ---
- ```
-
- The specification is complete when:
- - All topics/phases have validated content
- - At least one review cycle completed with no findings, OR user explicitly chose to proceed past the re-loop prompt
- - All review tracking files marked `status: complete`
- - Type has been determined and confirmed
- - User confirms the specification is complete
- - No blocking gaps remain
-
- ---
-
- ## Self-Check: Have You Followed the Rules?
-
- Before ANY write operation to the specification file, verify:
-
- | Question | If No... |
- |----------|----------|
- | Did I present this specific content to the user? | **STOP**. Present it first. |
- | Did the user explicitly approve? (e.g., `y`/`yes`) | **STOP**. Wait for approval or ask. |
- | Am I writing exactly what was approved, with no additions? | **STOP**. Present any changes first. |
-
- > **FINAL CHECK**: If you have written to the specification file and cannot answer "yes" to all three questions above for that content, you have violated the workflow. Every piece of content requires explicit user approval before logging.