fraim-framework 1.0.12 → 2.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.ai-agents/agent-guardrails.md +58 -0
- package/.ai-agents/mcp-template.jsonc +34 -0
- package/.ai-agents/rules/agent-testing-guidelines.md +545 -0
- package/.ai-agents/rules/architecture.md +52 -0
- package/.ai-agents/rules/communication.md +122 -0
- package/.ai-agents/rules/continuous-learning.md +55 -0
- package/.ai-agents/rules/git-safe-commands.md +34 -0
- package/.ai-agents/rules/integrity-and-test-ethics.md +223 -0
- package/.ai-agents/rules/local-development.md +252 -0
- package/.ai-agents/rules/merge-requirements.md +231 -0
- package/.ai-agents/rules/pr-workflow-completeness.md +191 -0
- package/.ai-agents/rules/simplicity.md +112 -0
- package/.ai-agents/rules/software-development-lifecycle.md +276 -0
- package/.ai-agents/rules/spike-first-development.md +199 -0
- package/.ai-agents/rules/successful-debugging-patterns.md +313 -0
- package/.ai-agents/scripts/cleanup-branch.ts +278 -0
- package/.ai-agents/scripts/exec-with-timeout.ts +122 -0
- package/.ai-agents/scripts/prep-issue.sh +162 -0
- package/.ai-agents/templates/evidence/Design-Evidence.md +30 -0
- package/.ai-agents/templates/evidence/Implementation-BugEvidence.md +48 -0
- package/.ai-agents/templates/evidence/Implementation-FeatureEvidence.md +54 -0
- package/.ai-agents/templates/evidence/Spec-Evidence.md +19 -0
- package/.ai-agents/templates/help/HelpNeeded.md +14 -0
- package/.ai-agents/templates/retrospective/RETROSPECTIVE-TEMPLATE.md +55 -0
- package/.ai-agents/templates/specs/BUGSPEC-TEMPLATE.md +37 -0
- package/.ai-agents/templates/specs/FEATURESPEC-TEMPLATE.md +29 -0
- package/.ai-agents/templates/specs/TECHSPEC-TEMPLATE.md +39 -0
- package/.ai-agents/workflows/design.md +121 -0
- package/.ai-agents/workflows/implement.md +170 -0
- package/.ai-agents/workflows/resolve.md +152 -0
- package/.ai-agents/workflows/retrospect.md +84 -0
- package/.ai-agents/workflows/spec.md +103 -0
- package/.ai-agents/workflows/test.md +90 -0
- package/.cursor/rules/cursor-rules.mdc +8 -0
- package/.cursor/rules/design.mdc +4 -0
- package/.cursor/rules/implement.mdc +6 -0
- package/.cursor/rules/resolve.mdc +5 -0
- package/.cursor/rules/retrospect.mdc +4 -0
- package/.cursor/rules/spec.mdc +4 -0
- package/.cursor/rules/test.mdc +5 -0
- package/.windsurf/rules/windsurf-rules.md +7 -0
- package/.windsurf/workflows/resolve-issue.md +6 -0
- package/.windsurf/workflows/retrospect.md +6 -0
- package/.windsurf/workflows/start-design.md +6 -0
- package/.windsurf/workflows/start-impl.md +6 -0
- package/.windsurf/workflows/start-spec.md +6 -0
- package/.windsurf/workflows/start-tests.md +6 -0
- package/CHANGELOG.md +66 -0
- package/CODEOWNERS +24 -0
- package/DISTRIBUTION.md +6 -6
- package/PUBLISH_INSTRUCTIONS.md +93 -0
- package/README.md +330 -104
- package/bin/fraim.js +49 -3
- package/index.js +30 -3
- package/install.sh +58 -58
- package/labels.json +52 -0
- package/linkedin-post.md +23 -0
- package/package.json +12 -7
- package/sample_package.json +18 -0
- package/setup.js +733 -384
- package/test-utils.ts +118 -0
- package/tsconfig.json +22 -0
- package/agents/claude/CLAUDE.md +0 -42
- package/agents/cursor/rules/architecture.mdc +0 -49
- package/agents/cursor/rules/continuous-learning.mdc +0 -48
- package/agents/cursor/rules/cursor-workflow.mdc +0 -29
- package/agents/cursor/rules/design.mdc +0 -25
- package/agents/cursor/rules/implement.mdc +0 -26
- package/agents/cursor/rules/local-development.mdc +0 -104
- package/agents/cursor/rules/prep.mdc +0 -15
- package/agents/cursor/rules/resolve.mdc +0 -46
- package/agents/cursor/rules/simplicity.mdc +0 -18
- package/agents/cursor/rules/software-development-lifecycle.mdc +0 -41
- package/agents/cursor/rules/test.mdc +0 -25
- package/agents/windsurf/rules/architecture.md +0 -49
- package/agents/windsurf/rules/continuous-learning.md +0 -47
- package/agents/windsurf/rules/local-development.md +0 -103
- package/agents/windsurf/rules/remote-development.md +0 -22
- package/agents/windsurf/rules/simplicity.md +0 -17
- package/agents/windsurf/rules/windsurf-workflow.md +0 -28
- package/agents/windsurf/workflows/prep.md +0 -20
- package/agents/windsurf/workflows/resolve-issue.md +0 -47
- package/agents/windsurf/workflows/start-design.md +0 -26
- package/agents/windsurf/workflows/start-impl.md +0 -27
- package/agents/windsurf/workflows/start-tests.md +0 -26
- package/github/phase-change.yml +0 -218
- package/github/status-change.yml +0 -68
- package/github/sync-on-pr-review.yml +0 -66
- package/scripts/__init__.py +0 -10
- package/scripts/cli.py +0 -141
- package/setup.py +0 -0
- package/test-config.json +0 -32
- package/workflows/setup-fraim.yml +0 -147
package/.ai-agents/rules/software-development-lifecycle.md
@@ -0,0 +1,276 @@

# Software Development Lifecycle

## INTENT
To establish a systematic, phase-based development process that ensures quality, maintainability, and proper documentation throughout the software development lifecycle.

## PRINCIPLES
- **Phase-Based Development**: Clear phases with defined deliverables
- **Quality Gates**: Each phase has completion criteria
- **Documentation**: Comprehensive documentation at each phase
- **Review Process**: Peer review and approval at key milestones
- **Traceability**: Clear links between requirements, design, and implementation

## DEVELOPMENT WORKFLOW

### Branch Management
Always work on the feature branch for the current issue: `feature/<issue#>-<kebab-title>`. Never push to master.

### Workflow Steps
1. **Clone Setup**: Work in your own cloned repository folder, named `{PROJECT_NAME} - Issue {issue_number}`
2. **Branch Management**: Create or check out the feature branch for your issue
3. **Local Development**: Make changes and run tests locally
4. **Check Before Commit**: Commit only after approval from the user
5. **Push Changes**: Push to the feature branch, never to master
6. **PR Creation**: Let GitHub Actions create and update PRs automatically
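The branch convention above can be sketched in shell; the issue values here are hypothetical stand-ins for whatever your tracker provides:

```shell
# Hypothetical issue metadata; in practice these come from your tracker.
ISSUE_NUMBER=123
ISSUE_TITLE="Fix Login Redirect"

# Kebab-case the title: lowercase, squeeze non-alphanumerics to hyphens,
# and trim any leading/trailing hyphen.
KEBAB_TITLE=$(printf '%s' "$ISSUE_TITLE" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')

BRANCH="feature/${ISSUE_NUMBER}-${KEBAB_TITLE}"
echo "$BRANCH"   # → feature/123-fix-login-redirect
```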
## PHASE-BASED DEVELOPMENT

### Phase 1: Specification (phase:spec)
**Objective**: Define what needs to be built

**Activities**:
- Gather requirements from stakeholders
- Define functional and non-functional requirements
- Create user stories and acceptance criteria
- Document constraints and assumptions
- Assess risks and plan mitigations

**Deliverables**:
- Requirements specification document
- User stories with acceptance criteria
- Risk assessment
- Success metrics definition

**Completion Criteria**:
- All requirements documented and approved
- Stakeholder sign-off obtained
- Technical feasibility confirmed
- Success criteria defined and measurable

### Phase 2: Design (phase:design)
**Objective**: Define how it will be built

**Activities**:
- System architecture design
- Component design and interfaces
- Database schema design
- API specification
- Security design considerations
- Performance requirements analysis

**Deliverables**:
- Architecture design document
- Component diagrams
- Database schema
- API specifications
- Security design
- Performance benchmarks

**Completion Criteria**:
- Design document complete and approved
- Architecture review passed
- Security review completed
- Performance requirements validated

### Phase 3: Test Planning (phase:tests)
**Objective**: Define how it will be tested

**Activities**:
- Test strategy development
- Test case creation
- Test data preparation
- Test environment setup
- Test automation planning
- Performance test planning

**Deliverables**:
- Test plan document
- Test cases and scenarios
- Test data sets
- Automated test scripts
- Performance test plan

**Completion Criteria**:
- Comprehensive test plan approved
- Test cases cover all requirements
- Test automation framework ready
- Test environment configured

### Phase 4: Implementation (phase:impl)
**Objective**: Build the solution

**Activities**:
- Code implementation
- Unit testing
- Integration testing
- Code review
- Documentation updates
- Performance optimization

**Deliverables**:
- Working software
- Unit tests
- Integration tests
- Code documentation
- Updated system documentation

**Completion Criteria**:
- All features implemented per design
- All tests passing
- Code review completed
- Documentation updated
- Performance requirements met
## QUALITY GATES

### Specification Gate
- [ ] Requirements are clear and testable
- [ ] Stakeholder approval obtained
- [ ] Technical feasibility confirmed
- [ ] Success criteria defined

### Design Gate
- [ ] Architecture is sound and scalable
- [ ] Security considerations addressed
- [ ] Performance requirements feasible
- [ ] Design review approved

### Test Gate
- [ ] Test coverage is comprehensive
- [ ] Test automation is in place
- [ ] Test environment is ready
- [ ] Test plan approved

### Implementation Gate
- [ ] All functionality implemented
- [ ] All tests passing
- [ ] Code review completed
- [ ] Documentation complete
- [ ] Performance validated
## DOCUMENTATION REQUIREMENTS

### Phase Documentation
Each phase must produce:
- Phase-specific deliverables
- Decision rationale
- Assumptions and constraints
- Risks and mitigation strategies
- Review and approval records

### Code Documentation
- Inline code comments for complex logic
- API documentation
- Configuration documentation
- Deployment documentation
- Troubleshooting guides

### Process Documentation
- Development setup instructions
- Testing procedures
- Deployment procedures
- Maintenance procedures
## REVIEW PROCESS

### Design Reviews
- Architecture review by senior developers
- Security review by security team
- Performance review by performance team
- Stakeholder review for business alignment

### Code Reviews
- Peer review of all code changes
- Security review for sensitive changes
- Performance review for critical paths
- Documentation review

### Testing Reviews
- Test plan review
- Test case review
- Test results review
- Performance test review
## BRANCH AND MERGE STRATEGY

### Branch Naming
- Feature branches: `feature/{issue-number}-{description}`
- Hotfix branches: `hotfix/{issue-number}-{description}`
- Release branches: `release/{version}`

### Merge Requirements
- All tests must pass
- Code review approved
- Documentation updated
- No merge conflicts
- Branch up to date with master

### Merge Process
1. Create pull request from feature branch
2. Automated tests run
3. Code review conducted
4. Approval obtained
5. Merge to master
6. Deploy to staging
7. Validation testing
8. Deploy to production
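As a sketch, the merge requirements can be treated as a single all-or-nothing gate; the `yes`/`no` arguments below are illustrative stand-ins for real signals (CI status, review state, conflict check, branch freshness):

```shell
# Every merge requirement must hold; one failing gate blocks the merge.
ready_to_merge() {
  for gate in "$@"; do
    [ "$gate" = "yes" ] || return 1
  done
}

# Order: tests pass, review approved, docs updated, no conflicts, up to date
ready_to_merge yes yes yes yes yes && echo "merge allowed"
ready_to_merge yes no  yes yes yes || echo "merge blocked"
```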
## CLEANUP PROCESS

### End of Development
When work is complete, clean up your environment:

```bash
# Navigate out of the local clone
cd ..

# Remove your local clone folder
rm -rf "{PROJECT_NAME} - Issue {issue_number}"
```

### Branch Cleanup
- Delete feature branch after merge
- Clean up local branches
- Archive old release branches
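A runnable version of the cleanup above, with hypothetical `PROJECT_NAME` and `ISSUE_NUMBER` values and a throwaway temp directory so it is safe to execute anywhere; note the quoting, since the folder name contains spaces:

```shell
# Hypothetical values; in practice these match your actual clone folder.
PROJECT_NAME="fraim"
ISSUE_NUMBER=42
WORKROOT=$(mktemp -d)
CLONE_DIR="$WORKROOT/$PROJECT_NAME - Issue $ISSUE_NUMBER"
mkdir -p "$CLONE_DIR"

# The folder name contains spaces, so the path must stay quoted;
# unquoted expansion would split it into three arguments.
cd "$WORKROOT"
rm -rf "$PROJECT_NAME - Issue $ISSUE_NUMBER"

test -d "$CLONE_DIR" || echo "clone folder removed"
```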
## CONTINUOUS IMPROVEMENT

### Retrospectives
- Conduct retrospectives after each major release
- Document lessons learned
- Update processes based on feedback
- Share learnings across teams

### Metrics Collection
- Track development velocity
- Monitor defect rates
- Measure test coverage
- Analyze performance metrics

### Process Updates
- Regular review of development processes
- Update based on industry best practices
- Incorporate team feedback
- Align with organizational standards
## COMPLIANCE AND GOVERNANCE

### Code Standards
- Follow established coding standards
- Use automated code formatting
- Enforce code quality metrics
- Regular code quality audits

### Security Requirements
- Security code review for all changes
- Vulnerability scanning
- Dependency security checks
- Security testing

### Documentation Standards
- Consistent documentation format
- Regular documentation updates
- Version control for documentation
- Accessibility compliance

Respect CODEOWNERS; don't modify auth/CI without approval.
package/.ai-agents/rules/spike-first-development.md
@@ -0,0 +1,199 @@

# Spike-First Development Pattern

## INTENT
Prevent the "Build First, Integrate Later" anti-pattern that leads to wasted work, technical debt, and incomplete implementations. Ensure agents validate technology compatibility and requirements understanding before building complex solutions.

## CORE PRINCIPLE
**"Validate Early, Validate Often"** - Always prove that your approach works with the smallest possible test before building anything complex.

## THE ANTI-PATTERN: "Build First, Integrate Later" ❌

### What It Looks Like
1. **Build Infrastructure First**: Create complex modular systems, frameworks, or architectures
2. **Assume Technology Support**: Assume unfamiliar technologies will support your approach
3. **Attempt Integration**: Discover incompatibilities or limitations late in the process
4. **Panic Implementation**: Rush to salvage work, often missing original requirements

### Why It's Dangerous
- **Wasted Work**: Incompatible solutions must be thrown away
- **Rushed Integration**: Leads to incomplete implementations and missed requirements
- **Technical Debt**: Creates bloat and confusion in the codebase
- **False Progress**: Appears productive while actually going backwards
- **Missed Requirements**: Focus lands on infrastructure instead of actual goals
## THE CORRECT PATTERN: "Spike, Analyze, Implement Incrementally" ✅

### 1. SPIKE/PROOF-OF-CONCEPT FIRST (5-15 minutes)
**Goal**: Validate the basic technology works with minimal effort

**Examples**:
- Testing Jinja in BAML: Add `{% if true %}Hello{% endif %}` to a prompt
- Testing API integration: Make one simple API call
- Testing database connection: Execute one basic query
- Testing new library: Import and call one function

**Questions to Answer**:
- Does the technology support what I need?
- What are the syntax requirements?
- What are the limitations?
- Does it integrate with existing systems?
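A minimal spike harness along these lines can make the pass/fail result explicit; the two sample spikes are illustrative stand-ins, not checks prescribed by this rule:

```shell
# Run one tiny command per spike and report pass/fail fast.
spike() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "SPIKE OK: $desc"
  else
    echo "SPIKE FAILED: $desc"
  fi
}

spike "shell arithmetic works" test $((1 + 1)) -eq 2
spike "flag the tool does not support" ls --no-such-flag
```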
### 2. ANALYZE DATA STRUCTURES (10-20 minutes)
**Goal**: Understand what data is available for your implementation

**Examples**:
- Examine input/output classes and their fields
- Review existing data flows and transformations
- Identify what fields are available for conditional logic
- Map data relationships and dependencies

**Questions to Answer**:
- What fields can I use for conditionals?
- What data is available at runtime?
- How does data flow through the system?
- What are the data constraints?

### 3. IDENTIFY OPPORTUNITIES (15-30 minutes)
**Goal**: Map requirements to implementation opportunities

**Examples**:
- Identify large code sections that should be conditional
- Find repetitive patterns that can be optimized
- Locate areas where data-driven logic would help
- Spot opportunities for code reduction or simplification

**Questions to Answer**:
- Where should conditional logic be applied?
- What sections are candidates for optimization?
- How can I reduce complexity or bloat?
- What are the highest-impact changes?
### 4. IMPLEMENT INCREMENTALLY (Variable time)
**Goal**: Build one small piece at a time with continuous validation

**Process**:
- Add ONE conditional/feature at a time
- Test after each change
- Ensure existing functionality is preserved
- Validate requirements are being met
- Only proceed to the next change after the current one works

**Examples**:
- Add one `{% if %}` conditional, test, then add the next
- Implement one API endpoint, test, then add the next
- Add one database operation, test, then add the next

### 5. VALIDATE CONTINUOUSLY (Throughout)
**Goal**: Ensure each step works before proceeding

**Validation Steps**:
- Run tests after each change
- Verify compilation/generation works
- Check that existing functionality is preserved
- Confirm requirements are being addressed
- Get feedback early and often
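The increment loop can be sketched as follows; `run_tests` is a hypothetical stand-in for your real test command (e.g. `npm test`), and the change names are illustrative:

```shell
# Stand-in for the project's real test command.
run_tests() { true; }

# Apply one change at a time, validate after each, stop at the first failure.
applied=0
for change in first-conditional second-conditional third-conditional; do
  echo "applying $change"
  if run_tests; then
    applied=$((applied + 1))
  else
    echo "tests failed after $change; stopping"
    break
  fi
done
echo "applied $applied changes"
```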
## GOOD vs BAD EXAMPLES

### ❌ BAD: Jinja Templating Implementation
```
1. Create 15 modular Jinja template files
2. Build complex include system
3. Assume BAML supports {% include %}
4. Discover BAML doesn't support includes
5. Panic and rush minimal implementation
6. Miss obvious conditional opportunities
7. Break existing functionality
```

### ✅ GOOD: Jinja Templating Implementation
```
1. SPIKE: Test {% if true %}Hello{% endif %} in BAML (5 min)
2. ANALYZE: Examine UserIntent/UserInfo classes (10 min)
3. IDENTIFY: Map prompt sections to conditional opportunities (15 min)
4. IMPLEMENT: Add {% if user.role == "admin" %} around admin logic (20 min)
5. VALIDATE: Run tests, ensure functionality preserved (10 min)
6. REPEAT: Add next conditional incrementally
```

### ❌ BAD: API Integration
```
1. Build complex API client framework
2. Create elaborate error handling system
3. Design sophisticated caching layer
4. Discover API has rate limits that break the design
5. Rush to add rate limiting as afterthought
6. End up with over-engineered, fragile system
```

### ✅ GOOD: API Integration
```
1. SPIKE: Make one simple API call (5 min)
2. ANALYZE: Read API documentation for limits/constraints (15 min)
3. IDENTIFY: Determine what endpoints are needed (10 min)
4. IMPLEMENT: Add one endpoint call with basic error handling (30 min)
5. VALIDATE: Test the call works reliably (10 min)
6. REPEAT: Add next endpoint incrementally
```

### ❌ BAD: Database Schema Changes
```
1. Design complete new schema
2. Write migration scripts for all tables
3. Update all model classes
4. Discover performance issues with new design
5. Rush to add indexes and optimize queries
6. Break existing functionality in multiple places
```

### ✅ GOOD: Database Schema Changes
```
1. SPIKE: Test schema change on one small table (10 min)
2. ANALYZE: Review existing queries and performance (20 min)
3. IDENTIFY: Plan migration strategy and rollback plan (15 min)
4. IMPLEMENT: Change one table with migration (45 min)
5. VALIDATE: Test performance and functionality (15 min)
6. REPEAT: Migrate next table incrementally
```
## ENFORCEMENT RULES

### MANDATORY SPIKE REQUIREMENTS
- **Any unfamiliar technology**: Must spike basic functionality first
- **Any complex integration**: Must test simplest case first
- **Any architectural changes**: Must validate approach with minimal example
- **Any new libraries/frameworks**: Must test basic usage first

### VALIDATION CHECKPOINTS
- After spike: Technology compatibility confirmed
- After analysis: Data structures and constraints understood
- After identification: Implementation plan is clear and achievable
- After each increment: Functionality works and tests pass
- Before completion: All requirements met and validated

### RED FLAGS (Stop and Spike)
- Building complex systems without testing basic functionality
- Making assumptions about unfamiliar technology capabilities
- Creating elaborate architectures before validating core concepts
- Spending significant time on infrastructure before proving it works
- Claiming progress without demonstrable working functionality

## BENEFITS OF SPIKE-FIRST DEVELOPMENT

1. **Reduced Risk**: Discover incompatibilities early, when they're cheap to fix
2. **Faster Delivery**: Avoid wasted work on incompatible approaches
3. **Better Quality**: Continuous validation ensures functionality is preserved
4. **Clearer Requirements**: Understanding constraints leads to better solutions
5. **Increased Confidence**: Each step is validated before proceeding
6. **Easier Debugging**: Problems are isolated to small, recent changes

## SUMMARY

The spike-first development pattern prevents catastrophic failures by ensuring agents:
- Validate technology compatibility before building
- Understand data structures and constraints upfront
- Implement incrementally with continuous validation
- Focus on requirements rather than infrastructure
- Avoid the dangerous "Build First, Integrate Later" anti-pattern

**Remember**: It's always faster to spike first than to rebuild later.