specpulse 1.3.4-py3-none-any.whl → 1.4.0-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- specpulse/__init__.py +1 -1
- specpulse/core/specpulse.py +22 -88
- specpulse/core/validator.py +38 -34
- specpulse/resources/commands/claude/sp-decompose.md +17 -17
- specpulse/resources/commands/claude/sp-plan.md +47 -47
- specpulse/resources/commands/claude/sp-pulse.md +10 -10
- specpulse/resources/commands/claude/sp-spec.md +15 -10
- specpulse/resources/commands/claude/sp-task.md +15 -15
- specpulse/resources/commands/gemini/sp-plan.toml +17 -17
- specpulse/resources/memory/constitution.md +237 -128
- specpulse/resources/scripts/sp-pulse-plan.ps1 +146 -142
- specpulse/resources/scripts/sp-pulse-plan.sh +131 -127
- specpulse/resources/scripts/sp-pulse-task.ps1 +165 -165
- specpulse/resources/scripts/sp-pulse-task.sh +6 -6
- specpulse/resources/templates/decomposition/integration-plan.md +6 -5
- specpulse/resources/templates/decomposition/microservices.md +6 -5
- specpulse/resources/templates/decomposition/service-plan.md +6 -5
- specpulse/resources/templates/plan.md +229 -205
- specpulse/resources/templates/task.md +165 -165
- {specpulse-1.3.4.dist-info → specpulse-1.4.0.dist-info}/METADATA +69 -30
- {specpulse-1.3.4.dist-info → specpulse-1.4.0.dist-info}/RECORD +25 -25
- {specpulse-1.3.4.dist-info → specpulse-1.4.0.dist-info}/WHEEL +0 -0
- {specpulse-1.3.4.dist-info → specpulse-1.4.0.dist-info}/entry_points.txt +0 -0
- {specpulse-1.3.4.dist-info → specpulse-1.4.0.dist-info}/licenses/LICENSE +0 -0
- {specpulse-1.3.4.dist-info → specpulse-1.4.0.dist-info}/top_level.txt +0 -0
````diff
@@ -81,7 +81,7 @@ When called with `/sp-task $ARGUMENTS`, I will:
 * Layer-based tasks (data, business, API)
 * Module-specific tasks
 - **Common categories**:
- * …
+ * SDD Gates Compliance
 * Critical Path identification
 * Parallel vs Sequential grouping
 * Progress Tracking configuration
````
````diff
@@ -120,7 +120,7 @@ When called with `/sp-task $ARGUMENTS`, I will:
 d. **Parse current tasks** from selected file with comprehensive status:
 - Total tasks, completed, pending, blocked
 - Parallel tasks identification
- - …
+ - SDD gates status
 - Completion percentage calculation
 e. **Interactive task updates**:
 - Mark tasks as completed/in-progress/blocked
````
````diff
@@ -137,20 +137,20 @@ When called with `/sp-task $ARGUMENTS`, I will:
 TOTAL_TASKS=25
 COMPLETED_TASKS=10
 COMPLETION_PERCENTAGE=40%
-
+ SDD_GATES_PENDING=2
 ```
 d. **Display comprehensive progress**:
 - Overall completion percentage
 - Phase-by-phase progress
 - Blocker identification and resolution
 - Velocity metrics and estimates
- - …
+ - SDD gates compliance status

 6. **For `/sp-task execute`:**
 a. **Show existing task files**: List all task-XXX.md files in current feature directory
 b. **Ask user to select**: Which task file to execute from
 c. **Ask user to specify**: Which specific task ID to execute
- d. **Validate task readiness** using …
+ d. **Validate task readiness** using SDD gates
 e. **Execute task** using AI assistant capabilities:
 ```markdown
 ## Task Execution: {{ TASK_ID }}
````
````diff
@@ -215,13 +215,13 @@ metrics:
 completion_percentage: 40%
 ```

- ## …
+ ## SDD Gates Integration

- Each task breakdown includes …
- - ** …
- - ** …
- - ** …
- - ** …
+ Each task breakdown includes SDD compliance validation:
+ - **Specification First**: Tasks trace to specifications
+ - **Task Decomposition**: Concrete, actionable tasks
+ - **Quality Assurance**: Appropriate testing tasks included
+ - **Traceable Implementation**: Clear linkage to requirements

 ## Examples

````
````diff
@@ -264,14 +264,14 @@ I will update task status and recalculate progress metrics.
 ```
 User: /sp-task status
 ```
- I will display detailed progress with …
+ I will display detailed progress with SDD gates compliance.

 ### Execute specific task
 ```
 User: /sp-task execute T001
 ```
 I will:
- - Validate: …
+ - Validate: SDD gates compliance and task readiness
 - Execute: Cross-platform task execution
 ```bash
 bash scripts/sp-pulse-task.sh "$FEATURE_DIR" "execute:$TASK_ID"
````
````diff
@@ -283,7 +283,7 @@ I will:
 - **Script execution** with Bash
 - **AI-optimized templates** with Jinja2-style variables
 - **Script integration** for validation and execution
- - ** …
+ - **SDD gates compliance** tracking
 - **Parallel task identification** and execution
 - **Comprehensive progress tracking** with YAML configuration
 - **Automatic percentage calculation** and velocity metrics
````
````diff
@@ -294,7 +294,7 @@ I will:
 ## Error Handling

 - Plan existence validation before task generation
- - …
+ - SDD gates compliance checking
 - Template structure validation
 - Dependency conflict detection
 - Task execution error handling with rollback
````
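The progress block shown in the hunks above (TOTAL_TASKS, COMPLETED_TASKS, COMPLETION_PERCENTAGE) can be derived from a task file with a few lines of shell. The sketch below is illustrative only: it is not the package's `sp-pulse-task.sh`, and it assumes tasks are tracked as markdown checkboxes (`- [ ]` pending, `- [x]` done), which may not match the shipped template.

```bash
#!/usr/bin/env bash
# Illustrative only -- not the packaged sp-pulse-task.sh.
# Assumes tasks are markdown checkboxes: "- [ ]" pending, "- [x]" completed.
TASK_FILE="${1:?usage: progress.sh <task-file>}"

TOTAL_TASKS=$(grep -c '^- \[[ x]\]' "$TASK_FILE")
COMPLETED_TASKS=$(grep -c '^- \[x\]' "$TASK_FILE")

# Integer percentage; avoid division by zero for an empty task file.
if [ "$TOTAL_TASKS" -gt 0 ]; then
  COMPLETION_PERCENTAGE=$(( COMPLETED_TASKS * 100 / TOTAL_TASKS ))
else
  COMPLETION_PERCENTAGE=0
fi

echo "TOTAL_TASKS=$TOTAL_TASKS"
echo "COMPLETED_TASKS=$COMPLETED_TASKS"
echo "COMPLETION_PERCENTAGE=${COMPLETION_PERCENTAGE}%"
```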
````diff
@@ -9,7 +9,7 @@ First, detect current feature context:
 - If no context found, ask user to specify feature or run /sp-pulse first

 Parse arguments to determine action:
- - If "validate": Check plan against …
+ - If "validate": Check plan against SDD principles
 - Otherwise: Generate new plan

 ## For /sp-plan generate or /sp-plan:
````
````diff
@@ -18,11 +18,11 @@ Parse arguments to determine action:
 3. Read selected specification from @{specs/*/spec-XXX.md}

 4. Run Phase Gates checks:
- - …
- - …
- - …
- - …
- - …
+ - SDD compliance verification
+ - Architecture documentation check
+ - Quality assurance strategy defined
+ - Stakeholder alignment confirmed
+ - Incremental planning validated

 5. Generate plan sections:
 - Technology stack
````
````diff
@@ -32,9 +32,9 @@ Parse arguments to determine action:
 - Data models
 - Testing strategy

- 6. Track …
- - …
- - …
+ 6. Track architectural decisions:
+ - Document all technology choices
+ - Record trade-offs and rationale

 7. Check existing plan files and create next version (plan-001.md, plan-002.md, etc.)
 8. Write plan to plans/XXX-feature/plan-XXX.md
````
````diff
@@ -48,17 +48,17 @@ Parse arguments to determine action:
 4. Run validation:
 - !{bash scripts/sp-pulse-plan.sh "XXX-feature"}
 5. Verify Phase Gates compliance
- 6. Check …
- 7. Ensure …
+ 6. Check architecture documentation
+ 7. Ensure quality assurance strategy
 8. Validate framework choices
 9. Report validation results

-
- - ✅ …
- - ✅ …
- - ✅ …
- - ✅ …
- - ✅ …
+ SDD Compliance Gates must pass before implementation:
+ - ✅ Specification clearly defined
+ - ✅ Incremental phases planned
+ - ✅ Tasks properly decomposed
+ - ✅ Quality strategy appropriate
+ - ✅ Architecture documented

 Examples:
 - /sp-plan generate
````
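For the validation flow above, the `!{bash scripts/sp-pulse-plan.sh "XXX-feature"}` call could be wrapped in a CI step roughly as follows. This is a hedged sketch, not part of the package: it assumes the script exits non-zero when an SDD compliance gate fails, which the shipped script may or may not do.

```bash
#!/usr/bin/env bash
# Hypothetical CI wrapper around the plan validation call shown above.
# Assumes scripts/sp-pulse-plan.sh exits non-zero on a failed SDD gate;
# check the shipped script before relying on this behaviour.
set -u
FEATURE_DIR="${1:?usage: validate-plan.sh <feature-dir> (e.g. 001-user-auth)}"

if bash scripts/sp-pulse-plan.sh "$FEATURE_DIR"; then
  echo "SDD compliance gates passed for $FEATURE_DIR"
else
  echo "SDD compliance gates failed for $FEATURE_DIR" >&2
  exit 1
fi
```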
````diff
@@ -1,134 +1,242 @@
 # Project Constitution
- * …
+ *Universal principles that enable Specification-Driven Development (SDD) for any software project*

- ## The Nine …
+ ## The Nine Universal SDD Principles

- ### …
- Every feature …
+ ### 1. SPECIFICATION FIRST
+ **Rule:** Every feature starts with a clear specification.

- ** …
+ **Requirements:**
+ - Define what you're building before how
+ - Include user stories and acceptance criteria
+ - Use [NEEDS CLARIFICATION] markers for unknowns
+ - Document functional and non-functional requirements

-
- Every library MUST expose its core functionality through a command-line interface.
+ **Validation:** Can someone else understand what to build from this spec?

-
-
-
-
- - …
- - …
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Clear specification
+ ## User Authentication
+ - Users can register with email and password
+ - [NEEDS CLARIFICATION]: OAuth2 provider list
+ - Password reset via email required

-
+ # ❌ BAD: Vague specification
+ "Make a login system that works"
+ ```
+
+ ### 2. INCREMENTAL PLANNING
+ **Rule:** Break specifications into manageable, phased plans.
+
+ **Requirements:**
+ - Create phase-based implementation plans
+ - Define clear milestones and checkpoints
+ - Prioritize features by business value
+ - Each phase should deliver working software
+
+ **Validation:** Is each phase independently valuable and deployable?
+
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Phased delivery
+ Phase 1: Core authentication (Week 1)
+ Phase 2: User profiles (Week 2)
+ Phase 3: Role management (Week 3)
+
+ # ❌ BAD: Everything at once
+ "Complete user management system in one sprint"
+ ```

- ### …
-
+ ### 3. TASK DECOMPOSITION
+ **Rule:** Break plans into concrete, executable tasks.

-
-
-
-
-
- 5. Code is refactored while maintaining passing tests (Refactor phase)
+ **Requirements:**
+ - Create specific, actionable tasks with clear outcomes
+ - Estimate effort in hours or days
+ - Define "Definition of Done" for each task
+ - Include acceptance criteria

- ** …
+ **Validation:** Could a developer pick this up and start immediately?

-
-
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Actionable task
+ T001: Implement user registration endpoint
+ - Effort: 4 hours
+ - Done: POST /api/users accepts and validates data
+ - Test: Registration creates user in database

-
-
-
+ # ❌ BAD: Vague task
+ "Do user stuff"
+ ```
+
+ ### 4. TRACEABLE IMPLEMENTATION
+ **Rule:** Every piece of code should trace back to a specification.
+
+ **Requirements:**
+ - Reference spec requirements in code comments
+ - Link commits to tasks and specs
+ - Update specs when requirements change
 - Maintain bidirectional traceability

- ** …
+ **Validation:** Can you trace this code to a specific requirement?
+
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Traceable implementation
+ - Commit message: "Implement user auth (SPEC-001, T003)"
+ - Code comment: "// Implements REQ-SEC-03: Password validation"
+ - PR description: Links to spec-001.md#security
+ - Task reference: T003 from task-001.md
+
+ # ❌ BAD: No traceability
+ - Commit: "Fixed stuff"
+ - No spec references in code
+ - No task linkage
+ ```
+
+ ### 5. CONTINUOUS VALIDATION
+ **Rule:** Validate implementation against specifications continuously.

-
-
+ **Requirements:**
+ - Check implementation matches spec after each task
+ - Run acceptance tests regularly
+ - Update specs if reality differs from plan
+ - Maintain spec-code synchronization

-
- - Analyzed for ambiguity and contradictions
- - Marked with [NEEDS CLARIFICATION] for uncertainties
- - Validated against the constitution
- - Refined based on implementation feedback
+ **Validation:** Does the current implementation match the specification?

- ** …
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Regular validation
+ - After each task: Check against spec
+ - Daily: Run acceptance tests
+ - Weekly: Full spec review

-
-
+ # ❌ BAD: No validation
+ "We'll check at the end of the project"
+ ```
+
+ ### 6. QUALITY ASSURANCE
+ **Rule:** Ensure quality through appropriate testing and review.
+
+ **Requirements:**
+ - Test based on acceptance criteria
+ - Choose appropriate test types for your project
+ - Automate testing where valuable
+ - Conduct code reviews for critical features
+
+ **Validation:** Are all acceptance criteria verifiable and tested?
+
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Appropriate testing
+ - Unit tests for business logic
+ - Integration tests for APIs
+ - E2E tests for critical user flows
+
+ # ❌ BAD: No testing strategy
+ "We'll test manually"
+ ```

-
-
- - Performance implications are documented
- - Security considerations are analyzed
- - Organizational constraints are identified
+ ### 7. ARCHITECTURE DOCUMENTATION
+ **Rule:** Document key architectural decisions and patterns.

- ** …
+ **Requirements:**
+ - Record technology choices and rationale
+ - Document integration points and APIs
+ - Track technical debt and trade-offs
+ - Maintain architecture decision records (ADRs)

-
- Start simple, add complexity only when proven necessary.
+ **Validation:** Will someone understand these decisions in 6 months?

-
-
-
- - …
- - …
- - …
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Documented decisions
+ ADR-001: Choose PostgreSQL over MongoDB
+ - Date: 2025-01-15
+ - Rationale: Need ACID transactions
+ - Trade-offs: Less flexible schema

-
+ # ❌ BAD: No documentation
+ "We just picked what we knew"
+ ```
+
+ ### 8. ITERATIVE REFINEMENT
+ **Rule:** Specifications and implementations evolve based on learnings.

-
-
+ **Requirements:**
+ - Update specs based on user feedback
+ - Refine based on implementation discoveries
+ - Version specifications for traceability
+ - Document lessons learned

-
- 1. Contract tests (API boundaries)
- 2. Integration tests (component interaction)
- 3. End-to-end tests (user workflows)
- 4. Unit tests (isolated logic)
+ **Validation:** Do specs reflect current reality and learnings?

-
-
-
-
-
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Learning from implementation
+ Spec v1: "Users login with email"
+ Spec v2: "Added: Support for username login (user feedback)"
+ Spec v3: "Added: MFA option (security review)"

-
+ # ❌ BAD: Never updating specs
+ "Original spec from 6 months ago"
+ ```

- ### …
-
+ ### 9. STAKEHOLDER ALIGNMENT
+ **Rule:** Keep all stakeholders aligned through specifications.

-
- - …
- - …
- - …
- - …
+ **Requirements:**
+ - Share specs with team and clients
+ - Get approval before major phases
+ - Communicate changes clearly
+ - Maintain shared understanding

- ** …
+ **Validation:** Does everyone understand what's being built and why?

-
+ **Example:**
+ ```markdown
+ # ✅ GOOD: Clear communication
+ - Weekly spec reviews with team
+ - Client approval before each phase
+ - Documented change requests
+
+ # ❌ BAD: Working in isolation
+ "We'll show them when it's done"
+ ```
+
+ ## SDD Methodology Enforcement

 ### Phase Gates
- Every implementation plan MUST pass through …
+ Every implementation plan MUST pass through SDD compliance gates:

 #### Phase -1: Pre-Implementation Gates
- - [ ] …
- - [ ] …
- - [ ] …
- - [ ] …
- - [ ] …
-
-
-
+ - [ ] Specification First: Requirements clear and documented?
+ - [ ] Incremental Planning: Work broken into valuable phases?
+ - [ ] Task Decomposition: Tasks concrete and actionable?
+ - [ ] Traceable Implementation: Code-to-spec mapping planned?
+ - [ ] Continuous Validation: Validation checkpoints defined?
+ - [ ] Quality Assurance: Test strategy appropriate?
+ - [ ] Architecture Documentation: Decision tracking planned?
+ - [ ] Iterative Refinement: Feedback loops established?
+ - [ ] Stakeholder Alignment: Communication plan in place?
+
+ ### Decision Tracking
+ All significant architectural decisions and trade-offs MUST be documented:
 ```yaml
-
- - …
-
-
+ architectural_decisions:
+ - decision: "Microservices for payment processing"
+ rationale: "PCI compliance requires isolation"
+ trade_offs: "Increased operational complexity"
 approved_by: "Team Lead"
 date: "2025-09-11"
+ review_date: "2025-Q2"
 ```

 ### Amendment Process
- While principles …
+ While principles guide development, their application can evolve:

 1. Proposed amendments require:
 - Explicit documentation of rationale
````
````diff
@@ -143,46 +251,47 @@ While principles are immutable, their application can evolve:
 ## Principles in Practice

 ### When Starting a New Feature
- 1. Write specification first ( …
- 2. …
- 3. …
- 4. …
- 5. …
- 6. …
- 7. …
+ 1. Write specification first (Principle 1)
+ 2. Create phased plan (Principle 2)
+ 3. Break into tasks (Principle 3)
+ 4. Ensure traceability (Principle 4)
+ 5. Set up validation (Principle 5)
+ 6. Define quality strategy (Principle 6)
+ 7. Document architecture (Principle 7)

 ### When Reviewing Code
- - Does it trace to a specification? ( …
- - …
- - …
- - …
- - …
-
- ### When …
- 1. Document …
- 2. Get …
- 3. Track in …
- 4. Plan for future …
+ - Does it trace to a specification? (Principle 4)
+ - Is testing appropriate? (Principle 6)
+ - Are decisions documented? (Principle 7)
+ - Has it been validated? (Principle 5)
+ - Are stakeholders aligned? (Principle 9)
+
+ ### When Making Architectural Decisions
+ 1. Document the decision and rationale
+ 2. Get stakeholder approval for significant changes
+ 3. Track in architecture decision records
+ 4. Plan for future improvements

 ## Technical Standards

- ### Code Style …
- - …
- - …
- - …
- - …
+ ### Code Style Guidelines
+ - Follow language-specific best practices
+ - Use consistent formatting (Prettier, Black, etc.)
+ - Write self-documenting code
+ - Add meaningful comments where needed

 ### Testing Requirements
- - …
- - …
- - Critical …
- - …
-
- ### …
- - …
- - …
- - …
- - …
+ - Choose coverage appropriate for project risk
+ - Test based on acceptance criteria
+ - Critical features need comprehensive testing
+ - Use testing approach suitable for project type
+
+ ### Project-Specific Standards
+ - Define standards appropriate for your project type
+ - Web apps: Response times, Core Web Vitals
+ - Mobile apps: Battery usage, offline capability
+ - Games: FPS targets, load times
+ - APIs: Throughput, latency percentiles

 ## Development Workflow

````
````diff
@@ -191,8 +300,8 @@ While principles are immutable, their application can evolve:
 2. `/sp-spec` - Create specification following template guidelines
 3. `/sp-plan` - Generate implementation plan with Phase Gates
 4. `/sp-task` - Break down into executable tasks
- 5. Execute with …
- 6. `/sp-validate` - Validate against …
+ 5. Execute with Quality Assurance
+ 6. `/sp-validate` - Validate against SDD principles and specification
 7. Update specifications based on learnings

 ### Version Control
````
````diff
@@ -203,9 +312,9 @@ While principles are immutable, their application can evolve:

 ### Continuous Integration
 - Specification validation on every commit
- - …
- - …
- - …
+ - SDD compliance checks in CI
+ - Appropriate test coverage for project type
+ - Architecture decision tracking

 ## Specification Quality Standards

````
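The "SDD compliance checks in CI" bullet above, together with the traceability examples earlier in the constitution diff, suggests one concrete check. The snippet below is a hypothetical illustration, not something shipped in specpulse; it only assumes the SPEC-XXX and TXXX identifier formats shown in the constitution's own examples.

```bash
#!/usr/bin/env bash
# Hypothetical traceability check: list recent commits that reference neither
# a SPEC-XXX nor a TXXX identifier (formats taken from the examples above).
if git log --oneline -n 20 | grep -vE 'SPEC-[0-9]+|T[0-9]{3}'; then
  echo "Commits above lack spec/task references" >&2
  exit 1
else
  echo "All recent commits reference a spec or task"
fi
```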