sdlc-framework 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +321 -0
- package/bin/install.js +193 -0
- package/package.json +39 -0
- package/src/commands/close.md +200 -0
- package/src/commands/debug.md +124 -0
- package/src/commands/fast.md +149 -0
- package/src/commands/fix.md +104 -0
- package/src/commands/help.md +144 -0
- package/src/commands/hotfix.md +99 -0
- package/src/commands/impl.md +142 -0
- package/src/commands/init.md +93 -0
- package/src/commands/milestone.md +136 -0
- package/src/commands/pause.md +115 -0
- package/src/commands/research.md +136 -0
- package/src/commands/resume.md +103 -0
- package/src/commands/review.md +195 -0
- package/src/commands/spec.md +164 -0
- package/src/commands/status.md +118 -0
- package/src/commands/verify.md +153 -0
- package/src/references/clarification-strategy.md +352 -0
- package/src/references/engineering-laws.md +374 -0
- package/src/references/loop-phases.md +331 -0
- package/src/references/playwright-testing.md +298 -0
- package/src/references/prompt-detection.md +264 -0
- package/src/references/sub-agent-strategy.md +260 -0
- package/src/rules/commands.md +180 -0
- package/src/rules/style.md +354 -0
- package/src/rules/templates.md +238 -0
- package/src/rules/workflows.md +314 -0
- package/src/templates/HANDOFF.md +121 -0
- package/src/templates/LAWS.md +521 -0
- package/src/templates/PROJECT.md +112 -0
- package/src/templates/REVIEW.md +145 -0
- package/src/templates/ROADMAP.md +101 -0
- package/src/templates/SPEC.md +231 -0
- package/src/templates/STATE.md +106 -0
- package/src/templates/SUMMARY.md +126 -0
- package/src/workflows/close-phase.md +189 -0
- package/src/workflows/debug-flow.md +302 -0
- package/src/workflows/fast-forward.md +340 -0
- package/src/workflows/fix-findings.md +235 -0
- package/src/workflows/hotfix-flow.md +190 -0
- package/src/workflows/impl-phase.md +229 -0
- package/src/workflows/init-project.md +249 -0
- package/src/workflows/milestone-management.md +169 -0
- package/src/workflows/pause-work.md +153 -0
- package/src/workflows/research.md +219 -0
- package/src/workflows/resume-project.md +159 -0
- package/src/workflows/review-phase.md +337 -0
- package/src/workflows/spec-phase.md +379 -0
- package/src/workflows/transition-phase.md +203 -0
- package/src/workflows/verify-phase.md +280 -0
@@ -0,0 +1,352 @@

# Clarification Strategy Reference

This document explains how the spec phase gathers requirements through targeted questions instead of assumptions. It covers why guessing is dangerous, what categories of questions to ask, how to give recommendations, when to stop asking, and common anti-patterns.

---

## Why No Guessing

AI code generators default to assumptions when requirements are unclear. This is the single biggest source of wasted work in AI-assisted development. The SDLC framework enforces a clarification-first approach: when something is ambiguous, ask — do not guess.

### Concrete Examples of Assumption Failures

**Example 1: "Add authentication"**

Without clarification, the AI assumes:
- JWT-based auth (could have been session-based)
- Email + password login (could have been OAuth only)
- Bcrypt for password hashing (could have been argon2)
- Access and refresh tokens (could have been single token)
- 1-hour token expiry (could have been 15 minutes)

The developer spends 3 hours implementing JWT with refresh tokens. The stakeholder wanted OAuth with Google only. 3 hours wasted.

**Example 2: "Fix the search"**

Without clarification, the AI assumes:
- The bug is in the search results (could be in the search input)
- It affects all searches (could be only empty searches)
- The fix is in the backend (could be a frontend display issue)
- Performance is the issue (could be incorrect results)

The developer optimizes database queries for 2 hours. The actual bug was a CSS issue hiding the search results on mobile. 2 hours wasted.

**Example 3: "Add a dashboard"**

Without clarification, the AI assumes:
- A web dashboard (could be a CLI dashboard)
- Real-time data (could be static daily reports)
- Charts and graphs (could be just numbers in a table)
- All users see the same dashboard (could be role-based)

The developer builds a real-time React dashboard with Chart.js. The stakeholder wanted a simple table of daily stats in the admin panel. A week wasted.

### The Cost Formula

```
Cost of asking = 5 minutes per question
Cost of wrong assumption = 1-8 hours of rework
Break-even = asking is cheaper if there is even a 10% chance the assumption is wrong
```

When in doubt, ask. The math always favors it.
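The break-even claim can be sanity-checked with a small expected-value calculation. This is a sketch using the document's own estimates (5 minutes per question, 1 hour of rework at the low end); the function name is illustrative:

```python
# Expected-value check for "ask vs. assume", using the document's estimates.
ASK_COST_MIN = 5        # minutes spent asking one question
REWORK_COST_MIN = 60    # minutes of rework if the assumption is wrong (low end)

def expected_assumption_cost(p_wrong: float) -> float:
    """Expected minutes lost by guessing, given the probability the guess is wrong."""
    return p_wrong * REWORK_COST_MIN

# At a 10% chance of being wrong, guessing already costs more than asking:
assert expected_assumption_cost(0.10) > ASK_COST_MIN   # 6 minutes > 5 minutes

# Break-even probability: asking pays off once p_wrong exceeds this.
break_even = ASK_COST_MIN / REWORK_COST_MIN
print(f"break-even at p_wrong = {break_even:.1%}")
```

With the 1-hour rework figure, the true break-even is about 8%; with the 8-hour figure it drops to about 1%, which is why the document rounds to "even a 10% chance."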
---

## Question Categories

Questions fall into five categories. The spec phase should cover all categories that are relevant to the work.

### 1. Scope Questions

**What they clarify:** The boundaries of the work. What is included and what is not.

**When to ask:** Always. Every task has scope ambiguity.

**Examples:**
- "The request mentions user search. Should this include search by name only, or also by email and role?"
- "Should the API support pagination, or is the dataset small enough to return all results?"
- "Does 'fix the login' mean the login form, the authentication logic, or both?"
- "Should this work on mobile devices, or is desktop-only acceptable for now?"

### 2. Approach Questions

**What they clarify:** How the work should be done. Technical choices.

**When to ask:** When there are multiple valid approaches and the choice affects architecture, performance, or maintainability.

**Examples:**
- "For storing user sessions, should we use JWT tokens (stateless, scalable) or database sessions (revocable, more control)?"
- "Should the search be implemented with a simple SQL LIKE query or a full-text search engine like Elasticsearch?"
- "Should we create a new service for this, or add methods to the existing UserService?"
- "Should the form validation happen client-side, server-side, or both?"

### 3. Edge Case Questions

**What they clarify:** What happens when things go wrong or inputs are unexpected.

**When to ask:** When the happy path is clear but failure modes are not specified.

**Examples:**
- "What should happen if a user tries to register with an email that already exists?"
- "If the external API is down, should we show cached data, an error message, or a fallback?"
- "What is the maximum number of items a user can add to their cart?"
- "Should deleted users be hard-deleted or soft-deleted?"

### 4. Integration Questions

**What they clarify:** How this work connects to existing systems.

**When to ask:** When the work interacts with other services, APIs, or systems.

**Examples:**
- "This feature sends emails. Should it use the existing NotificationService, or is there a different email system?"
- "The dashboard needs user data. Should it call the UserService directly, or go through the API?"
- "Should the new endpoint follow the existing API versioning pattern (/v1/users) or use a different scheme?"
- "Does this need to integrate with the existing authentication middleware?"

### 5. Testing Questions

**What they clarify:** How to verify the work is correct.

**When to ask:** When acceptance criteria are not obvious from the requirement.

**Examples:**
- "How should we verify the email was sent? Mock the email service and check it was called, or use a test email server?"
- "Should the e2e tests run against a real database or an in-memory database?"
- "What is the expected response time for the search API? Do we need performance tests?"
- "Are there specific browsers or devices we need to test against?"

---
## How to Give Recommendations

Asking questions without offering recommendations puts the burden entirely on the stakeholder. The SDLC framework requires that every question includes a recommendation.

### The Recommendation Pattern

```
Question: [What are the options?]

Options:
A) [First option] — [trade-off summary]
B) [Second option] — [trade-off summary]
C) [Third option if applicable] — [trade-off summary]

Recommendation: [Option X] because [concrete reason].
```

### Good Recommendation Example

```
Question: How should we handle user session storage?

Options:
A) JWT tokens — Stateless, scales horizontally without shared storage.
   Token revocation requires a blocklist (adds complexity).

B) Database sessions — Simple revocation (delete the row).
   Requires shared database for multi-server setups.
   Adds a database query on every request.

C) Redis sessions — Fast lookups, built-in TTL for expiry.
   Requires Redis infrastructure.
   Good middle ground between stateless and database.

Recommendation: Option A (JWT) because the project uses a stateless
architecture and does not require immediate token revocation. If
revocation becomes a requirement later, we can add a Redis blocklist.
```

### Bad Recommendation Example

```
Question: How should we handle sessions?

Options:
A) JWT
B) Database sessions

It depends on your needs.
```

This is useless. "It depends" is not a recommendation. Explain the trade-offs and pick one.

### When You Genuinely Cannot Recommend

Sometimes the decision depends on information only the stakeholder has (budget, timeline, team preference). In this case:

```
Question: Should we build this in-house or use a third-party service?

Options:
A) Build in-house — Full control, no vendor dependency.
   Estimated 2 weeks of development.

B) Use [Service X] — Ready immediately. $50/month.
   Limited customization.

I cannot recommend one because it depends on your budget and timeline
constraints. If budget is flexible, Option B saves 2 weeks. If vendor
dependency is a concern, Option A gives full control.
```

---
## When to Stop Asking

Clarification has diminishing returns. At some point, additional questions delay progress without reducing risk.

### Signals to Stop Asking

**You have enough to write acceptance criteria.** If you can write specific, testable Given/When/Then criteria, you have enough clarity to proceed.

**Questions are getting granular.** "Should the button be blue or green?" is a question that does not affect architecture or correctness. Make a reasonable choice and note it.

**The stakeholder is repeating themselves.** If the answers to new questions are "like I said before," you have already covered the scope.

**Questions are about implementation, not requirements.** "Should I use a for loop or map?" is not a clarification question — it is a code decision. Make it during implementation.

### The Rule of Three

Ask a maximum of three rounds of questions. Each round should have 2-5 questions. After three rounds (6-15 questions total), you should have enough to write the spec. If you still do not, the requirement itself may need to be split into smaller pieces.

**Round 1:** Scope and approach (the big questions).
**Round 2:** Edge cases and integration (the detailed questions).
**Round 3:** Testing and remaining ambiguities (cleanup).

### Diminishing Returns Heuristic

```
Round 1: Resolves ~60% of ambiguity
Round 2: Resolves ~25% of remaining ambiguity
Round 3: Resolves ~10% of remaining ambiguity
Round 4+: Resolves <5% — diminishing returns territory
```

After three rounds, the remaining ambiguity is small enough to handle with reasonable defaults and a note in the spec.
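The heuristic above can be traced numerically. This sketch reads each round's figure as a share of the *total* ambiguity, which is one interpretation (it makes the three rounds sum to ~95%, consistent with the "<5%" left for round 4+); the heuristic itself is a rule of thumb, not a measurement:

```python
# Trace the diminishing-returns heuristic, treating each round's figure
# as a share of the total ambiguity (an interpretation, not a measurement).
round_share = {1: 0.60, 2: 0.25, 3: 0.10}

remaining = 1.0
for rnd, share in round_share.items():
    remaining -= share
    print(f"After round {rnd}: ~{remaining:.0%} of the ambiguity remains")

# Past round 3, what's left is "reasonable defaults and a note" territory.
assert remaining < 0.06
```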
---

## Anti-Patterns

### Anti-Pattern 1: Asking Too Many Questions

**Symptom:** 15+ questions in a single round. The stakeholder feels interrogated.

**Why it happens:** The AI tries to eliminate all ambiguity at once.

**Fix:** Prioritize. Ask the 3-5 most impactful questions first. The answers will often resolve other questions.

**Example:**
```
BAD:  12 questions about the login page including font size,
      button color, error animation duration, and input padding.

GOOD: 3 questions: What authentication method? What error handling
      for invalid credentials? Any existing UI components to reuse?
```

### Anti-Pattern 2: Asking Obvious Questions

**Symptom:** Questions whose answers are already in the codebase or project context.

**Why it happens:** The AI did not read the existing code before asking.

**Fix:** Search the codebase first. If there is an existing auth system using JWT, do not ask "should we use JWT or sessions?" Use JWT.

**Example:**
```
BAD:  "What testing framework should we use?"
      (package.json clearly shows vitest)

GOOD: "The project uses vitest. Should the new tests follow the
      existing mocking pattern in tests/user.test.ts, or is there
      a reason to use a different approach?"
```

### Anti-Pattern 3: Not Offering Recommendations

**Symptom:** Every question ends with "what do you think?" without the AI expressing any opinion.

**Why it happens:** The AI tries to be neutral and not impose preferences.

**Fix:** Always recommend. "I recommend Option A because..." Stakeholders can disagree, but they should not have to do all the thinking.

**Example:**
```
BAD:  "We could use PostgreSQL or MongoDB. Which do you prefer?"

GOOD: "I recommend PostgreSQL because the existing project uses it,
      the data is relational, and adding MongoDB would introduce a
      second database technology without clear benefit."
```

### Anti-Pattern 4: Asking Questions That Do Not Affect the Code

**Symptom:** Questions about business strategy, user demographics, or market positioning.

**Why it happens:** The AI is trained on business analysis data and conflates business questions with technical requirements.

**Fix:** Only ask questions whose answers change the code. "Who is the target user?" does not change the code. "Should the UI support screen readers?" does.

**Example:**
```
BAD:  "What is the long-term vision for this feature?"

GOOD: "Should this API support versioning now, or is it internal-only
      with no backward compatibility requirement?"
```

### Anti-Pattern 5: Re-Asking Answered Questions

**Symptom:** A question that was answered in a previous round is asked again with different wording.

**Why it happens:** Context loss between clarification rounds.

**Fix:** Maintain a clarification log (the `<clarifications>` section of the spec). Check it before each new round of questions.

**Example:**
```
BAD:  Round 1: "Should users be able to reset their password?"
      Answer: "Yes, via email link."
      Round 2: "How should password recovery work?"

GOOD: Round 2 skips this question because it was answered in Round 1.
```
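The clarification-log check can be mechanized. This is a minimal sketch, not part of the framework: the `ClarificationLog` class, the fuzzy-match approach, and the 0.6 threshold are all illustrative (a naive similarity check catches rewordings like the password example only when they stay close to the original phrasing):

```python
# Sketch of a clarification log with a duplicate check before each new round.
from difflib import SequenceMatcher

class ClarificationLog:
    def __init__(self) -> None:
        self.entries: list[tuple[str, str]] = []  # (question, answer) pairs

    def record(self, question: str, answer: str) -> None:
        self.entries.append((question, answer))

    def already_answered(self, question: str, threshold: float = 0.6) -> bool:
        """True if a sufficiently similar question was asked in an earlier round."""
        return any(
            SequenceMatcher(None, question.lower(), prior.lower()).ratio() >= threshold
            for prior, _ in self.entries
        )

log = ClarificationLog()
log.record("Should users be able to reset their password?", "Yes, via email link.")

# Round 2 checks the log before asking a near-duplicate:
print(log.already_answered("Should users be able to reset a password?"))  # True
```

In practice the check in the workflow is done by reading the `<clarifications>` section of the spec, not by string matching; the sketch only illustrates the dedup step.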
---

## Putting It All Together: A Spec Phase Example

**Request:** "Add user notifications"

**Round 1: Scope and Approach**

1. "What types of notifications? Email only, in-app only, or both?"
   - Recommendation: Start with in-app only. Email adds infrastructure complexity. We can add email in a later iteration.

2. "What events trigger notifications? User actions (someone liked your post) or system events (your subscription expires) or both?"
   - Recommendation: Start with user actions. They are higher value and simpler to implement.

3. "Should notifications be real-time (WebSocket push) or poll-based (check on page load)?"
   - Recommendation: Poll on page load. The project does not have WebSocket infrastructure. Real-time can be added later without changing the data model.

**Stakeholder answers:** In-app only. User actions only. Poll on page load is fine.

**Round 2: Edge Cases and Integration**

4. "What happens when a user has 1000+ notifications? Should we paginate, auto-archive, or limit?"
   - Recommendation: Show the 50 most recent. Auto-archive after 30 days. This keeps the UI fast and the database clean.

5. "Should the notification bell show an unread count?"
   - Recommendation: Yes. It is standard UX and adds minimal implementation effort.

**Stakeholder answers:** 50 most recent with auto-archive. Yes to unread count.

**Round 3: Not needed.** Enough clarity to write acceptance criteria:
- AC-1: Given a user action, When the action completes, Then a notification is created.
- AC-2: Given a user with unread notifications, When they load the page, Then the bell shows the unread count.
- AC-3: Given more than 50 notifications, When the user opens notifications, Then only the 50 most recent are shown.

Two rounds, five questions, clear spec. No assumptions.