sdd-mcp-server 1.8.1 → 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +97 -41
- package/dist/cli/install-skills.d.ts +53 -0
- package/dist/cli/install-skills.js +171 -0
- package/dist/cli/install-skills.js.map +1 -0
- package/dist/index.js +140 -122
- package/dist/index.js.map +1 -1
- package/dist/skills/SkillManager.d.ts +69 -0
- package/dist/skills/SkillManager.js +138 -0
- package/dist/skills/SkillManager.js.map +1 -0
- package/package.json +4 -2
- package/skills/sdd-commit/SKILL.md +285 -0
- package/skills/sdd-design/SKILL.md +262 -0
- package/skills/sdd-implement/SKILL.md +281 -0
- package/skills/sdd-requirements/SKILL.md +140 -0
- package/skills/sdd-steering/SKILL.md +225 -0
- package/skills/sdd-steering-custom/SKILL.md +211 -0
- package/skills/sdd-tasks/SKILL.md +233 -0
- package/skills/simple-task/SKILL.md +148 -0
package/skills/sdd-steering/SKILL.md
@@ -0,0 +1,225 @@
+---
+name: sdd-steering
+description: Create project-specific steering documents for SDD workflow. Use when setting up project context, documenting technology stack, or establishing project conventions. Invoked via /sdd-steering.
+---
+
+# SDD Steering Document Generation
+
+Generate project-specific steering documents that provide context and guidance for AI-assisted development.
+
+## What Are Steering Documents?
+
+Steering documents are markdown files in `.spec/steering/` that provide project-specific context to guide AI interactions. They describe YOUR project's unique characteristics.
+
+## Document Types
+
+| Document | Purpose | Content |
+|----------|---------|---------|
+| `product.md` | Product context | Vision, users, features, goals |
+| `tech.md` | Technology stack | Languages, frameworks, tools, architecture |
+| `structure.md` | Project conventions | Directory structure, naming, patterns |
+| `custom-*.md` | Custom guidance | Specialized rules for specific contexts |
+
+## Workflow
+
+### Step 1: Analyze Project
+
+Gather information from:
+1. **Project manifest** - Dependencies and metadata (e.g., `package.json`, `Cargo.toml`, `pyproject.toml`, `pom.xml`, `go.mod`)
+2. **Directory structure** - Folder organization
+3. **Existing code patterns** - Naming conventions, architecture
+4. **Documentation** - README, existing docs
+5. **Build configuration** - Build tools, scripts, CI/CD
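For illustration only (this sketch is not part of the packaged skill): a minimal TypeScript example of the manifest analysis described in Step 1, assuming a Node.js project. The `collectManifestInfo` helper and its return shape are hypothetical; other manifests such as `Cargo.toml` or `pyproject.toml` would need their own readers.

```typescript
import { readFile } from 'node:fs/promises';

interface ManifestInfo {
  name: string;
  description?: string;
  dependencies: Record<string, string>;
  scripts: Record<string, string>;
}

// Read package.json and collect the facts a tech.md draft would need.
async function collectManifestInfo(projectRoot: string): Promise<ManifestInfo> {
  const raw = await readFile(`${projectRoot}/package.json`, 'utf8');
  const pkg = JSON.parse(raw);
  return {
    name: pkg.name ?? 'unknown',
    description: pkg.description,
    dependencies: { ...pkg.dependencies, ...pkg.devDependencies },
    scripts: pkg.scripts ?? {},
  };
}
```

The collected `dependencies` and `scripts` maps would feed the Key Dependencies and Common Commands sections of the `tech.md` template shown in Step 3.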
+
+### Step 2: Generate Product Steering
+
+Create `.spec/steering/product.md`:
+
+```markdown
+# Product Overview
+
+## Description
+{Project description from manifest or analysis}
+
+## Vision
+{Long-term product vision}
+
+## Target Users
+- **Primary:** {Main user persona}
+- **Secondary:** {Other user types}
+
+## Core Features
+1. {Feature 1} - {Brief description}
+2. {Feature 2} - {Brief description}
+
+## Key Value Propositions
+- {Value 1}
+- {Value 2}
+
+## Success Metrics
+- {Metric 1}: {Target}
+- {Metric 2}: {Target}
+```
+
+### Step 3: Generate Tech Steering
+
+Create `.spec/steering/tech.md`:
+
+```markdown
+# Technology Overview
+
+## Stack
+
+### Language
+- **Primary:** {e.g., TypeScript, Python, Go, Rust, Java}
+- **Version:** {e.g., 5.x, 3.11, 1.21}
+- **Runtime:** {if applicable, e.g., Node.js, JVM, .NET}
+
+### Frameworks
+- {Framework 1}: {Purpose}
+- {Framework 2}: {Purpose}
+
+### Key Dependencies
+| Package | Version | Purpose |
+|---------|---------|---------|
+| {dep1} | {version} | {why used} |
+
+## Architecture
+
+### Pattern
+{e.g., Clean Architecture, MVC, Microservices, Hexagonal}
+
+### Layers
+```
+┌─────────────────┐
+│ Presentation    │  Controllers, CLI, UI
+├─────────────────┤
+│ Application     │  Services, Use Cases
+├─────────────────┤
+│ Domain          │  Entities, Business Logic
+├─────────────────┤
+│ Infrastructure  │  Database, External APIs
+└─────────────────┘
+```
+
+## Development Environment
+
+### Prerequisites
+- {Language runtime and version}
+- {Package manager} (e.g., npm, pip, cargo, go mod, maven)
+
+### Setup
+```bash
+# Clone and install dependencies
+{package manager install command}
+
+# Run development server
+{dev server command}
+```
+
+### Common Commands
+| Command | Purpose |
+|---------|---------|
+| `{install}` | Install dependencies |
+| `{dev}` | Development server |
+| `{build}` | Production build |
+| `{test}` | Run tests |
+| `{lint}` | Code linting |
+
+## Quality Standards
+- Test coverage: >= 80%
+- Linting: {language-appropriate linter}
+- Type checking: {if applicable}
+```
+
+### Step 4: Generate Structure Steering
+
+Create `.spec/steering/structure.md`:
+
+```markdown
+# Project Structure
+
+## Directory Layout
+
+```
+project-root/
+├── src/                 # Source code
+│   ├── domain/          # Business logic
+│   ├── application/     # Use cases
+│   ├── infrastructure/  # External adapters
+│   └── {entry point}    # Main entry point
+├── tests/               # Test files
+├── docs/                # Documentation
+└── {project manifest}   # Dependencies/config
+```
+
+## Naming Conventions
+
+### Files
+| Type | Convention | Example |
+|------|------------|---------|
+| Components | {project convention} | `UserProfile.{ext}` |
+| Services | {project convention} | `AuthService.{ext}` |
+| Utilities | {project convention} | `format_date.{ext}` |
+| Tests | {test convention} | `auth_test.{ext}` |
+| Types/Interfaces | {project convention} | `user_types.{ext}` |
+
+### Code
+| Element | Convention | Example |
+|---------|------------|---------|
+| Classes/Types | PascalCase | `UserService` |
+| Functions/Methods | {language convention} | `get_user_by_id` or `getUserById` |
+| Constants | UPPER_SNAKE | `MAX_RETRY_COUNT` |
+| Variables | {language convention} | `user_name` or `userName` |
+| Interfaces | {project convention} | `UserRepository` or `IUserRepository` |
+
+## Module Organization
+
+### Import Order
+1. Standard library / built-ins
+2. External packages / dependencies
+3. Internal modules (absolute paths)
+4. Relative imports
+5. Type/interface imports (if language separates them)
+
+### Export Pattern
+- Use barrel exports / index files where appropriate
+- Named exports preferred over default (language-dependent)
+- Public API exposed from domain layer
+
+## Patterns
+
+### Dependency Injection
+{DI approach used, e.g., Inversify, manual}
+
+### Error Handling
+{Error handling pattern, e.g., Result types, exceptions}
+
+### Logging
+{Logging approach, e.g., structured logging}
+```
+
+### Step 5: Create Directory
+
+Ensure `.spec/steering/` directory exists and save documents.
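A minimal sketch of this step using Node built-ins (illustrative only; `saveSteeringDoc` is a hypothetical name, and the package's own implementation may differ):

```typescript
import { mkdir, writeFile } from 'node:fs/promises';
import { join } from 'node:path';

// Create .spec/steering/ if it does not exist, then write one steering document into it.
async function saveSteeringDoc(projectRoot: string, filename: string, content: string): Promise<string> {
  const dir = join(projectRoot, '.spec', 'steering');
  await mkdir(dir, { recursive: true }); // no-op if the directory already exists
  const path = join(dir, filename);
  await writeFile(path, content, 'utf8');
  return path;
}
```

Usage would look like `await saveSteeringDoc(process.cwd(), 'product.md', productMarkdown)` for each generated document.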
+
+## MCP Tool Integration
+
+This skill generates documents manually. For automated analysis, the `sdd-init` MCP tool creates basic steering during project initialization.
+
+## Quality Checklist
+
+- [ ] Product description is clear and specific
+- [ ] Target users are well-defined
+- [ ] Technology stack is accurately documented
+- [ ] Directory structure matches actual project
+- [ ] Naming conventions are consistent
+- [ ] Development setup instructions are complete
+- [ ] Key patterns are documented
+
+## Notes
+
+- Steering documents are **project-specific** - they describe YOUR project
+- Keep them updated as the project evolves
+- Use custom steering for domain-specific rules
+- Reference from AGENTS.md or CLAUDE.md in project root
package/skills/sdd-steering-custom/SKILL.md
@@ -0,0 +1,211 @@
+---
+name: sdd-steering-custom
+description: Create custom steering documents for specialized contexts. Use when you need domain-specific guidance for particular file types, modules, or workflows. Invoked via /sdd-steering-custom.
+---
+
+# SDD Custom Steering Document Creation
+
+Create specialized steering documents that provide context-specific guidance beyond the standard product/tech/structure documents.
+
+## When to Use Custom Steering
+
+Custom steering is useful for:
+- **Domain-specific rules**: API design, database conventions
+- **File-type guidance**: Test patterns, component standards
+- **Workflow processes**: PR reviews, deployment procedures
+- **Team conventions**: Code review standards, documentation rules
+
+## Inclusion Modes
+
+Custom steering documents can be loaded in three ways:
+
+| Mode | Behavior | Use Case |
+|------|----------|----------|
+| **ALWAYS** | Loaded in every AI interaction | Core conventions, critical rules |
+| **CONDITIONAL** | Loaded when file patterns match | Test-specific, API-specific rules |
+| **MANUAL** | Referenced with `@filename.md` | Rarely needed, specialized contexts |
+
+## Workflow
+
+### Step 1: Identify the Need
+
+Ask yourself:
+- What specialized context is missing?
+- When should this guidance apply?
+- Is this project-wide or context-specific?
+
+### Step 2: Choose Inclusion Mode
+
+```
+Is this guidance ALWAYS relevant?
+├── YES → Use ALWAYS mode
+│
+└── NO → Is it relevant for specific file types?
+    ├── YES → Use CONDITIONAL mode with patterns
+    │         Examples: *.test.ts, src/api/**/*
+    │
+    └── NO → Use MANUAL mode
+              Reference with @filename.md when needed
+```
+
+### Step 3: Create Document
+
+Save to `.spec/steering/{filename}.md`:
+
+```markdown
+# {Topic Name}
+
+## Purpose
+{Why this steering document exists}
+
+## Scope
+{When this guidance applies}
+
+## Guidelines
+
+### Guideline 1: {Name}
+{Detailed guidance}
+
+**Do:**
+- {Good practice}
+
+**Don't:**
+- {Anti-pattern}
+
+### Guideline 2: {Name}
+{Detailed guidance}
+
+## Examples
+
+### Good Example
+```{language}
+{Code showing good practice}
+```
+
+### Bad Example
+```{language}
+// DON'T: {explanation}
+{Code showing anti-pattern}
+```
+
+## Checklist
+- [ ] {Verification item 1}
+- [ ] {Verification item 2}
+
+---
+<!-- Steering Metadata -->
+Inclusion Mode: {ALWAYS | CONDITIONAL | MANUAL}
+File Patterns: {patterns for CONDITIONAL mode}
+Created: {date}
+```
+
+## Common Custom Steering Documents
+
+### API Design Standards
+```markdown
+# API Design Standards
+
+## Inclusion
+Mode: CONDITIONAL
+Patterns: src/api/**/*.ts, src/routes/**/*.ts
+
+## Guidelines
+
+### RESTful Conventions
+- Use plural nouns for resources: `/users`, not `/user`
+- Use HTTP methods correctly: GET (read), POST (create), PUT (update), DELETE (remove)
+- Return appropriate status codes
+
+### Request/Response Format
+- Use JSON for request/response bodies
+- Include `Content-Type: application/json` header
+- Wrap responses in consistent envelope
+```
+
+### Test Patterns
+```markdown
+# Test Patterns
+
+## Inclusion
+Mode: CONDITIONAL
+Patterns: **/*.test.ts, **/*.spec.ts
+
+## Guidelines
+
+### Arrange-Act-Assert
+Every test should follow:
+1. **Arrange**: Set up test data and mocks
+2. **Act**: Execute the code under test
+3. **Assert**: Verify the results
+
+### Naming Convention
+`describe('{Class/Function}', () => {`
+`  it('should {expected behavior} when {condition}', () => {`
+```
+
+### Component Standards
+```markdown
+# Component Standards
+
+## Inclusion
+Mode: CONDITIONAL
+Patterns: src/components/**/*.tsx
+
+## Guidelines
+
+### Component Structure
+1. Imports (external, internal, types)
+2. Type definitions
+3. Component function
+4. Helper functions
+5. Exports
+```
+
+### Database Conventions
+```markdown
+# Database Conventions
+
+## Inclusion
+Mode: CONDITIONAL
+Patterns: src/db/**/*.ts, **/migrations/**/*
+
+## Guidelines
+
+### Table Naming
+- Use snake_case: `user_accounts`
+- Use plural: `orders`, not `order`
+- Prefix with domain: `auth_sessions`
+
+### Column Naming
+- Use snake_case: `created_at`
+- Foreign keys: `{table}_id`
+- Booleans: `is_active`, `has_access`
+```
+
+## File Pattern Syntax
+
+Patterns use glob syntax:
+
+| Pattern | Matches |
+|---------|---------|
+| `*.test.ts` | All test files in current dir |
+| `**/*.test.ts` | All test files recursively |
+| `src/api/**/*` | All files in api directory tree |
+| `*.{ts,tsx}` | TypeScript and TSX files |
+| `!node_modules/**` | Exclude node_modules |
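To illustrate how CONDITIONAL patterns of this kind might be evaluated (not part of the packaged skill; the sketch assumes the `minimatch` glob library, which may not be what the server itself uses):

```typescript
import { minimatch } from 'minimatch';

// Decide whether a CONDITIONAL steering document applies to a given file.
// Negated patterns (leading "!") act as exclusions.
function steeringApplies(filePath: string, patterns: string[]): boolean {
  const includes = patterns.filter((p) => !p.startsWith('!'));
  const excludes = patterns.filter((p) => p.startsWith('!')).map((p) => p.slice(1));
  return (
    includes.some((p) => minimatch(filePath, p)) &&
    !excludes.some((p) => minimatch(filePath, p))
  );
}

// steeringApplies('src/api/users.ts', ['src/api/**/*.ts', '!node_modules/**']) === true
```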
+
+## MCP Tool Integration
+
+Custom steering documents are managed manually. After creating:
+1. Save to `.spec/steering/{name}.md`
+2. Verify file patterns work as expected
+3. Reference in AGENTS.md/CLAUDE.md if needed
+
+## Quality Checklist
+
+- [ ] Purpose is clearly stated
+- [ ] Inclusion mode is appropriate
+- [ ] File patterns are specific (for CONDITIONAL)
+- [ ] Guidelines are actionable
+- [ ] Examples show good and bad practices
+- [ ] Checklist for verification included
package/skills/sdd-tasks/SKILL.md
@@ -0,0 +1,233 @@
+---
+name: sdd-tasks
+description: Generate TDD task breakdown for SDD workflow. Use when breaking down design into implementable tasks with test-first approach. Invoked via /sdd-tasks <feature-name>.
+---
+
+# SDD Task Breakdown Generation
+
+Generate comprehensive TDD-based task breakdowns that translate approved designs into implementable work items.
+
+## Prerequisites
+
+Before generating tasks:
+1. Design must be generated using `/sdd-design`
+2. Design phase should be approved (use `sdd-approve design` MCP tool)
+3. Review the design document in `.spec/specs/{feature}/design.md`
+
+## Workflow
+
+### Step 1: Verify Prerequisites
+
+Use `sdd-status` MCP tool to verify:
+- `design.generated: true`
+- `design.approved: true` (recommended before tasks)
+
+### Step 2: Review Design
+
+1. Read `.spec/specs/{feature}/design.md`
+2. Identify all components to implement
+3. Note interfaces and data models
+4. Understand dependencies between components
+
+### Step 3: Apply TDD Workflow
+
+For each task, follow the Red-Green-Refactor cycle:
+
+```
+┌─────────────────────────────────────────────────────────────┐
+│                          TDD CYCLE                           │
+├─────────────────────────────────────────────────────────────┤
+│                                                              │
+│  1. RED      ──────> Write failing test first                │
+│                      (Test describes expected behavior)      │
+│                                                              │
+│  2. GREEN    ──────> Write minimal code to pass              │
+│                      (Just enough to make test green)        │
+│                                                              │
+│  3. REFACTOR ────>   Clean up, maintain tests passing        │
+│                      (Improve design without breaking)       │
+│                                                              │
+│  ─────────────────────────────────────────────────────       │
+│                          REPEAT                               │
+└─────────────────────────────────────────────────────────────┘
+```
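To make the cycle concrete, here is an illustrative example that is not taken from the package: `slugify` is a hypothetical function, and the test assumes a Jest/Vitest-style runner, matching the `describe`/`it`/`expect` templates used later in this skill.

```typescript
// RED: written first, fails because slugify does not exist yet.
describe('slugify', () => {
  it('should lower-case and hyphenate words', () => {
    expect(slugify('Hello World')).toBe('hello-world');
  });
});

// GREEN: the minimal implementation that makes the test pass.
function slugify(input: string): string {
  return input.trim().toLowerCase().replace(/\s+/g, '-');
}

// REFACTOR: e.g. strip punctuation or collapse repeated hyphens,
// re-running the test after each change to keep it green.
```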
+
+### Step 4: Apply Test Pyramid
+
+Structure tests following the 70/20/10 ratio:
+
+```
+          ╱╲
+         ╱  ╲
+        ╱ E2E╲           10% - Critical user journeys
+       ╱──────╲
+      ╱        ╲
+     ╱Integration╲       20% - Component interactions
+    ╱────────────╲
+   ╱              ╲
+  ╱   Unit Tests   ╲     70% - Individual functions
+ ╱──────────────────╲
+```
+
+| Level | Coverage | Scope | Speed |
+|-------|----------|-------|-------|
+| **Unit** | 70% | Single function/class | Fast (ms) |
+| **Integration** | 20% | Component interactions | Medium (s) |
+| **E2E** | 10% | Full user journeys | Slow (min) |
+
+### Step 5: Generate Task Breakdown
+
+Structure tasks hierarchically:
+
+```markdown
+# Tasks: {Feature Name}
+
+## Overview
+{Summary of implementation approach}
+
+## Task Groups
+
+### 1. {Component/Layer Name}
+
+#### 1.1 {Task Name}
+**Type:** Unit | Integration | E2E
+**Estimated Effort:** S | M | L | XL
+**Dependencies:** {Task IDs}
+
+**TDD Steps:**
+1. RED: Write test for {specific behavior}
+   ```typescript
+   describe('{Component}', () => {
+     it('should {expected behavior}', () => {
+       // Arrange
+       // Act
+       // Assert
+     });
+   });
+   ```
+2. GREEN: Implement {minimal solution}
+3. REFACTOR: {Specific improvements}
+
+**Acceptance Criteria:**
+- [ ] Test passes
+- [ ] Code coverage >= 80%
+- [ ] No lint errors
+
+#### 1.2 {Next Task}
+...
+
+### 2. {Next Component}
+...
+
+## Implementation Order
+
+```
+[1.1] ──> [1.2] ──> [2.1]
+  │
+  └──> [1.3] ──> [2.2]
+```
+
+## Definition of Done
+- [ ] All tests pass
+- [ ] Code coverage >= 80%
+- [ ] No lint/type errors
+- [ ] Code reviewed
+- [ ] Documentation updated
+```
+
+### Step 6: Task Sizing Guidelines
+
+| Size | Description | Test Count | Time |
+|------|-------------|------------|------|
+| **S** | Single function, 1-2 tests | 1-2 | < 1 hour |
+| **M** | Multiple functions, 3-5 tests | 3-5 | 1-4 hours |
+| **L** | Component with integration | 5-10 | 4-8 hours |
+| **XL** | Complex component, many edge cases | 10+ | 1-2 days |
+
+### Step 7: Test-First Task Template
+
+For each implementation task:
+
+```markdown
+#### Task {X.Y}: {Task Name}
+
+**Component:** {ComponentName}
+**Type:** Unit Test → Implementation
+
+**Test Scenarios:**
+1. Happy path: {Expected behavior when inputs are valid}
+2. Edge case: {Boundary conditions}
+3. Error case: {Invalid inputs, failures}
+
+**Test Code (RED):**
+```typescript
+import { {Component} } from './{component}';
+
+describe('{Component}', () => {
+  describe('{method}', () => {
+    it('should {happy path behavior}', async () => {
+      // Arrange
+      const input = { /* valid input */ };
+
+      // Act
+      const result = await component.method(input);
+
+      // Assert
+      expect(result).toEqual({ /* expected */ });
+    });
+
+    it('should throw when {error condition}', async () => {
+      // Arrange
+      const invalidInput = { /* invalid */ };
+
+      // Act & Assert
+      await expect(component.method(invalidInput))
+        .rejects.toThrow('{ErrorType}');
+    });
+  });
+});
+```
+
+**Implementation (GREEN):**
+{Brief description of minimal implementation}
+
+**Refactor:**
+- Extract {helper function} if needed
+- Apply {specific pattern}
+```
+
+### Step 8: Save and Execute
+
+1. Save tasks to `.spec/specs/{feature}/tasks.md`
+2. Use `sdd-approve tasks` MCP tool to mark phase complete
+3. Use `sdd-spec-impl` MCP tool to execute tasks with TDD
+
+## MCP Tool Integration
+
+| Tool | When to Use |
+|------|-------------|
+| `sdd-status` | Verify design phase complete |
+| `sdd-approve` | Mark tasks phase as approved |
+| `sdd-spec-impl` | Execute tasks using TDD methodology |
+| `sdd-quality-check` | Validate code quality during implementation |
+
+## Quality Checklist
+
+- [ ] All design components have corresponding tasks
+- [ ] Tasks follow TDD (test first)
+- [ ] Test pyramid ratio maintained (70/20/10)
+- [ ] Dependencies between tasks are clear
+- [ ] Each task has specific acceptance criteria
+- [ ] Tasks are sized appropriately (avoid XL when possible)
+- [ ] Implementation order respects dependencies
+- [ ] Definition of Done is clear
+
+## Common Anti-Patterns to Avoid
+
+| Anti-Pattern | Problem | Solution |
+|--------------|---------|----------|
+| **Test After** | Missing edge cases | Always write test first |
+| **Ice Cream Cone** | Too many E2E tests | Follow pyramid (70/20/10) |
+| **Big Tasks** | Hard to track progress | Break into S/M sizes |
+| **No Dependencies** | Blocked work | Map dependencies explicitly |
+| **Vague Criteria** | Unclear completion | Specific, measurable criteria |