fraim-framework 2.0.55 → 2.0.57
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +10 -0
- package/dist/src/cli/commands/init-project.js +10 -4
- package/dist/src/cli/setup/mcp-config-generator.js +23 -15
- package/dist/src/local-mcp-server/stdio-server.js +207 -0
- package/dist/src/utils/validate-workflows.js +101 -0
- package/dist/src/utils/workflow-parser.js +81 -0
- package/package.json +16 -11
- package/registry/scripts/pdf-styles.css +172 -0
- package/registry/scripts/prep-issue.sh +46 -4
- package/registry/scripts/profile-server.ts +131 -130
- package/registry/stubs/workflows/customer-development/user-survey-dispatch.md +1 -1
- package/registry/stubs/workflows/customer-development/users-to-target.md +1 -1
- package/registry/stubs/workflows/product-building/design.md +1 -1
- package/registry/stubs/workflows/product-building/implement.md +1 -1
- package/Claude.md +0 -1
- package/dist/registry/ai-manager-rules/design-phases/design-completeness-review.md +0 -73
- package/dist/registry/ai-manager-rules/design-phases/design-design.md +0 -145
- package/dist/registry/ai-manager-rules/implement-phases/implement-code.md +0 -283
- package/dist/registry/ai-manager-rules/implement-phases/implement-completeness-review.md +0 -120
- package/dist/registry/ai-manager-rules/implement-phases/implement-regression.md +0 -173
- package/dist/registry/ai-manager-rules/implement-phases/implement-repro.md +0 -104
- package/dist/registry/ai-manager-rules/implement-phases/implement-scoping.md +0 -100
- package/dist/registry/ai-manager-rules/implement-phases/implement-smoke.md +0 -237
- package/dist/registry/ai-manager-rules/implement-phases/implement-spike.md +0 -121
- package/dist/registry/ai-manager-rules/implement-phases/implement-validate.md +0 -375
- package/dist/registry/ai-manager-rules/retrospective.md +0 -116
- package/dist/registry/ai-manager-rules/shared-phases/address-pr-feedback.md +0 -188
- package/dist/registry/ai-manager-rules/shared-phases/submit-pr.md +0 -202
- package/dist/registry/ai-manager-rules/shared-phases/wait-for-pr-review.md +0 -170
- package/dist/registry/ai-manager-rules/spec-phases/spec-competitor-analysis.md +0 -105
- package/dist/registry/ai-manager-rules/spec-phases/spec-completeness-review.md +0 -66
- package/dist/registry/ai-manager-rules/spec-phases/spec-spec.md +0 -139
- package/dist/registry/providers/ado.json +0 -19
- package/dist/registry/providers/github.json +0 -19
- package/dist/registry/scripts/cleanup-branch.js +0 -287
- package/dist/registry/scripts/evaluate-code-quality.js +0 -66
- package/dist/registry/scripts/exec-with-timeout.js +0 -142
- package/dist/registry/scripts/generate-engagement-emails.js +0 -705
- package/dist/registry/scripts/newsletter-helpers.js +0 -671
- package/dist/registry/scripts/profile-server.js +0 -388
- package/dist/registry/scripts/run-thank-you-workflow.js +0 -92
- package/dist/registry/scripts/send-newsletter-simple.js +0 -85
- package/dist/registry/scripts/send-thank-you-emails.js +0 -54
- package/dist/registry/scripts/validate-openapi-limits.js +0 -311
- package/dist/registry/scripts/validate-test-coverage.js +0 -262
- package/dist/registry/scripts/verify-test-coverage.js +0 -66
- package/dist/scripts/build-stub-registry.js +0 -108
- package/dist/src/ai-manager/ai-manager.js +0 -482
- package/dist/src/ai-manager/phase-flow.js +0 -357
- package/dist/src/ai-manager/types.js +0 -5
- package/dist/src/fraim-mcp-server.js +0 -1885
- package/dist/tests/debug-tools.js +0 -80
- package/dist/tests/shared-server-utils.js +0 -57
- package/dist/tests/test-add-ide.js +0 -283
- package/dist/tests/test-ai-coach-edge-cases.js +0 -420
- package/dist/tests/test-ai-coach-mcp-integration.js +0 -450
- package/dist/tests/test-ai-coach-performance.js +0 -328
- package/dist/tests/test-ai-coach-phase-content.js +0 -264
- package/dist/tests/test-ai-coach-workflows.js +0 -514
- package/dist/tests/test-cli.js +0 -228
- package/dist/tests/test-client-scripts-validation.js +0 -167
- package/dist/tests/test-complete-setup-flow.js +0 -110
- package/dist/tests/test-config-system.js +0 -279
- package/dist/tests/test-debug-session.js +0 -134
- package/dist/tests/test-end-to-end-hybrid-validation.js +0 -328
- package/dist/tests/test-enhanced-session-init.js +0 -188
- package/dist/tests/test-first-run-journey.js +0 -368
- package/dist/tests/test-fraim-issues.js +0 -59
- package/dist/tests/test-genericization.js +0 -44
- package/dist/tests/test-hybrid-script-execution.js +0 -340
- package/dist/tests/test-ide-detector.js +0 -46
- package/dist/tests/test-improved-setup.js +0 -121
- package/dist/tests/test-mcp-config-generator.js +0 -99
- package/dist/tests/test-mcp-connection.js +0 -107
- package/dist/tests/test-mcp-issue-integration.js +0 -156
- package/dist/tests/test-mcp-lifecycle-methods.js +0 -240
- package/dist/tests/test-mcp-shared-server.js +0 -308
- package/dist/tests/test-mcp-template-processing.js +0 -160
- package/dist/tests/test-modular-issue-tracking.js +0 -165
- package/dist/tests/test-node-compatibility.js +0 -95
- package/dist/tests/test-npm-install.js +0 -68
- package/dist/tests/test-package-size.js +0 -108
- package/dist/tests/test-pr-review-workflow.js +0 -307
- package/dist/tests/test-prep-issue.js +0 -129
- package/dist/tests/test-productivity-integration.js +0 -157
- package/dist/tests/test-script-location-independence.js +0 -198
- package/dist/tests/test-script-sync.js +0 -557
- package/dist/tests/test-server-utils.js +0 -32
- package/dist/tests/test-session-rehydration.js +0 -148
- package/dist/tests/test-setup-integration.js +0 -98
- package/dist/tests/test-setup-scenarios.js +0 -322
- package/dist/tests/test-standalone.js +0 -143
- package/dist/tests/test-stub-registry.js +0 -136
- package/dist/tests/test-sync-stubs.js +0 -143
- package/dist/tests/test-sync-version-update.js +0 -93
- package/dist/tests/test-telemetry.js +0 -193
- package/dist/tests/test-token-validator.js +0 -30
- package/dist/tests/test-user-journey.js +0 -236
- package/dist/tests/test-users-to-target-workflow.js +0 -253
- package/dist/tests/test-utils.js +0 -109
- package/dist/tests/test-wizard.js +0 -71
- package/dist/tests/test-workflow-discovery.js +0 -242
- package/labels.json +0 -52
- package/registry/agent-guardrails.md +0 -63
- package/registry/fraim.md +0 -48
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase1-customer-profiling.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase1-survey-scoping.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase2-platform-discovery.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase2-survey-build-linkedin.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase3-prospect-qualification.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase3-survey-build-reddit.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase4-inventory-compilation.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase4-survey-build-x.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase5-survey-build-facebook.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase6-survey-build-custom.md +0 -11
- package/registry/stubs/workflows/customer-development/ai-coach-phases/phase7-survey-dispatch.md +0 -11
- package/registry/stubs/workflows/customer-development/templates/customer-persona-template.md +0 -11
- package/registry/stubs/workflows/customer-development/templates/search-strategy-template.md +0 -11
- package/setup.js +0 -171
- package/tsconfig.json +0 -23
package/dist/registry/ai-manager-rules/implement-phases/implement-spike.md
@@ -1,121 +0,0 @@
-# Phase: Implement-Spike (Features Only)
-
-## INTENT
-To build a proof-of-concept that validates the technical approach, tests uncertain or risky aspects, and confirms the design is viable before full implementation.
-
-## OUTCOME
-A working POC that:
-- Validates key technical assumptions
-- Tests risky or uncertain aspects
-- Gets user approval
-- Informs any necessary design updates
-
-## PRINCIPLES
-- **Spike-First Development**: Follow the spike-first rule for unfamiliar technology
-- **Risk Reduction**: Focus on highest-risk or most uncertain aspects
-- **Quick Validation**: Build minimal POC, not production code
-- **User Approval**: Get explicit approval before proceeding
-- **Design Updates**: Update design doc with findings
-
-## WORKFLOW
-
-### Step 1: Review Design Document
-- Read the technical design/RFC thoroughly
-- Identify technical assumptions
-- Note any unfamiliar technologies or patterns
-- List risky or uncertain aspects
-
-### Step 2: Identify Spike Scope
-**Focus on:**
-- Unfamiliar technology or libraries
-- Integration points with external systems
-- Performance-critical components
-- Complex algorithms or logic
-- Uncertain technical feasibility
-
-**Do NOT spike:**
-- Well-understood patterns
-- Standard CRUD operations
-- Simple UI changes
-- Routine implementations
-
-### Step 3: Build Proof-of-Concept
-- Create minimal code to test assumptions
-- Focus on proving/disproving technical approach
-- Don't worry about production quality
-- Document what you're testing
-
-### Step 4: Test Key Assumptions
-- Run the POC
-- Verify it works as expected
-- Test edge cases if relevant
-- Document findings
-
-### Step 5: Get User Approval
-- Present POC to user
-- Explain what was validated
-- Share any findings or concerns
-- **Wait for explicit approval** before proceeding
-
-### Step 6: Update Design Document
-If POC revealed:
-- Better approaches
-- Technical constraints
-- Performance considerations
-- Integration challenges
-
-**Update the design document** with findings
-
-## VALIDATION
-
-### Phase Complete When:
-- ✅ POC built and tested
-- ✅ Key technical assumptions validated
-- ✅ User has approved the approach
-- ✅ Design document updated (if needed)
-- ✅ Ready to proceed with full implementation
-
-### Phase Incomplete If:
-- ❌ POC doesn't work as expected
-- ❌ Technical approach not viable
-- ❌ User hasn't approved
-- ❌ Significant findings not documented
-
-## RULES FOR THIS PHASE
-
-### Spike-First Development
-Read `registry/rules/spike-first-development.md` via `get_fraim_file` for complete spike methodology.
-
-### Architecture Compliance
-Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then read the local architecture document to understand the full architecture guidelines.
-
-### Git Operations (if needed)
-When using git commands directly (if MCP tools unavailable), read `rules/git-safe-commands.md` via `get_fraim_file` to avoid interactive commands that hang agents.
-
-## SCRIPTS
-Run POC:
-```bash
-npm run spike  # or appropriate command
-node spike/your-poc.js
-```
-
-### Report Back:
-When you complete this phase, call:
-
-```javascript
-seekCoachingOnNextStep({
-  workflowType: "implement",
-  issueNumber: "{issue_number}",
-  currentPhase: "implement-spike",
-  status: "complete",
-  findings: {
-    issueType: "feature" // Required for phase validation
-  },
-  evidence: {
-    pocLocation: "spike/auth-flow.ts",
-    validationResults: "OAuth2 integration works as expected",
-    userApproval: "User approved approach on [date]",
-    designUpdates: "Updated design doc with token refresh findings"
-  }
-})
-```
package/dist/registry/ai-manager-rules/implement-phases/implement-validate.md
@@ -1,375 +0,0 @@
-# Phase: Implement-Validate
-
-## INTENT
-To verify that the implementation works correctly by testing it in appropriate ways: browser testing for UI changes, API testing for backend changes, and command-line testing for CLI changes.
-
-## OUTCOME
-Confirmation that:
-- Implementation works as expected
-- All acceptance criteria are met
-- Edge cases are handled
-- No obvious bugs or issues
-
-## 🎯 VALIDATION MINDSET
-
-This is THE critical phase where you prove your implementation actually works. No shortcuts, no assumptions.
-
-**Your Mission**: Prove beyond doubt that a real user can successfully use this feature.
-
-## RULES FOR THIS PHASE
-
-### Config-Aware Testing
-**CRITICAL**: Always check `.fraim/config.json` first for project-specific test commands:
-- `customizations.validation.testSuiteCommand` - Use this for comprehensive testing (default: `npm test`)
-
-### Success Criteria
-Read `registry/rules/agent-success-criteria.md` via `get_fraim_file` for the complete framework. Focus on:
-- **Integrity** (honest reporting of validation results - never claim something works if you didn't test it)
-- **Correctness** (implementation actually works as expected)
-- **Completeness** (all acceptance criteria validated, edge cases tested)
-
-### Simplicity Principles
-Read `registry/rules/simplicity.md` via `get_fraim_file` for complete guidelines. Critical for validation:
-- **Manual Validation Required** - This is THE manual validation phase
-- **Prototype-First** - Validate the working solution before engineering tests
-- **Resource Waste Prevention** - Efficient testing approach
-
-### Architecture Compliance
-Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then read the local architecture document to understand the full architecture guidelines. Ensure implementation follows architectural patterns.
-
-### Debugging and Git Operations
-- Use Successful Debugging Patterns from `rules/successful-debugging-patterns.md` via `get_fraim_file`
-- When using git commands directly (if MCP tools unavailable), read `rules/git-safe-commands.md` via `get_fraim_file` to avoid interactive commands that hang agents.
-
-### Failure Handling
-- **If validation fails**: Return to implement-code phase
-- **Do not proceed to smoke tests** if validation fails
-- Fix the implementation first, then re-validate
-
-## PRINCIPLES
-- **Real Testing**: Test actual functionality, not mocks
-- **Appropriate Tools**: Use browser for UI, curl for APIs, CLI for commands
-- **Complete Coverage**: Test all acceptance criteria
-- **Edge Cases**: Test boundary conditions and error scenarios
-- **Evidence**: Document validation results
-
-## 📋 MANDATORY VALIDATION STEPS
-
-### Step 1: Build and Compilation Check ✅
-
-**Requirements**:
-- TypeScript compiles without errors
-- Build process completes successfully
-- No type errors or warnings
-
-**Commands to Run**:
-```bash
-# Check TypeScript compilation
-npx tsc --noEmit --skipLibCheck
-
-# Run full build (includes validation)
-npm run build
-
-# Check for any 'as any' type bypassing
-grep -r "as any" src/ || echo "✅ No type bypassing found"
-```
-
-**What to Look For**:
-- Exit code 0 (success) for all commands
-- No error messages in output
-- Clean compilation without warnings
-
-### Step 2: Test Suite Validation 📊
-
-**Requirements**:
-- ALL existing tests pass (100% success rate) - not just tests you created
-- No timeouts or hanging tests
-- Comprehensive test coverage validation
-
-**CRITICAL**: You must run the COMPLETE test suite, including all existing tests, smoke tests, and any tests you created. Do not run only the tests you wrote.
-
-**Commands to Run** (check .fraim/config.json for project-specific commands first):
-```bash
-# 1. FIRST: Check for custom test commands
-cat .fraim/config.json | grep -A 5 "validation"
-
-# 2. Run comprehensive test suite for current work
-# Use customizations.validation.testSuiteCommand if available, otherwise:
-npm test
-
-# 3. Run targeted tests for your specific changes (if applicable)
-# Example: If you modified user authentication, run auth-related tests
-# npm run test -- --grep "auth"
-# OR: npm run test test-{issue-number}-*.ts
-```
-
-**What to Look For**:
-- "All tests passed" or similar success message for ALL commands
-- No "FAILED" or "ERROR" in output from any test run
-- Test count should include existing tests + any new tests you added
-- No timeout messages
-- Verify that existing functionality tests are included in the run
-
-**Example Expected Output**:
-```
-✅ Running 45 tests (including existing + new tests)
-✅ All tests passed
-✅ Test suite completed successfully
-```
-
-### Step 3: Manual Functionality Testing 🧪
-
-**Requirements**:
-- Test actual user workflows end-to-end
-- Verify all acceptance criteria manually
-- Test both happy path and error scenarios
-
-**For UI Changes**:
-```bash
-# Start development server
-npm run dev
-# Then manually test in browser at http://localhost:3000
-```
-
-**For API Changes**:
-```bash
-# Test API endpoints
-curl -X GET http://localhost:3000/health
-curl -X POST http://localhost:3000/api/test -H "Content-Type: application/json" -d '{"test":"data"}'
-```
-
-**For CLI Changes**:
-```bash
-# Test CLI commands
-node bin/fraim.js --help
-node bin/fraim.js sync --dry-run
-```
-
-**What to Document**:
-- Specific steps you performed
-- What you clicked/typed/tested
-- Results you observed
-- Any errors encountered and how they were handled
-
-### Step 4: Git and Code Quality Check 🔍
-
-**Requirements**:
-- Git status is clean
-- No debugging code remains
-- Code follows quality standards
-
-**Commands to Run**:
-```bash
-# Check git status
-git status
-
-# Look for debugging code
-grep -r "console.log" src/ | grep -v "logger" | grep -v "// OK" || echo "✅ No debug logs found"
-
-# Check for task placeholder comments in core functionality
-grep -r "task-placeholder" src/ || echo "✅ No task placeholders found"
-
-# Validate registry paths (if applicable)
-npm run validate:registry
-```
-
-**What to Look For**:
-- Only intended files are modified
-- No console.log statements (except intentional logging)
-- No task placeholder comments in critical code
-- Clean working directory
-
-### Step 5: Bug-Specific Validation (For Bug Fixes) 🐛
-
-**Requirements**:
-- Original reproduction test now passes
-- Bug symptoms are gone
-- No regressions introduced
-
-**Commands to Run**:
-```bash
-# Run the specific repro test
-npm test -- path/to/repro/test.test.ts
-
-# Test the original bug scenario manually
-# (Follow the original reproduction steps)
-```
-
-**What to Verify**:
-- Repro test that was failing now passes
-- Original bug behavior is fixed
-- Related functionality still works
-
-### Step 6: Feature-Specific Validation (For Features) ✨
-
-**Requirements**:
-- New functionality works as specified
-- Integration with existing features works
-- All acceptance criteria met
-
-**What to Test**:
-- Main user scenarios from the spec
-- Edge cases and error conditions
-- Integration points with existing code
-- Performance (if applicable)
-
-## 🛠️ VALIDATION COMMANDS REFERENCE
-
-### Quick Health Check
-```bash
-# One-liner to check basic health
-npx tsc --noEmit --skipLibCheck && npm test && git status
-```
-
-### Comprehensive Validation
-```bash
-# Full validation suite
-npm run build && npm run test-all-ci && npm run validate:registry
-```
-
-### API Testing
-```bash
-# Test API health and basic functionality
-curl -f http://localhost:3000/health && echo "✅ API healthy"
-```
-
-### Build Verification
-```bash
-# Verify build artifacts are created correctly
-npm run build && ls -la dist/ && echo "✅ Build artifacts present"
-```
-
-## WORKFLOW
-
-### Step 1: Identify Validation Method
-
-**For UI Changes:**
-- Use browser to navigate and test
-- Verify visual correctness
-- Test user interactions
-- Check responsive behavior
-
-**For API Changes:**
-- Use curl or similar tools
-- Test all endpoints
-- Verify request/response formats
-- Test error handling
-
-**For Backend/CLI Changes:**
-- Run relevant commands
-- Test with various inputs
-- Verify output correctness
-- Test error conditions
-
-### Step 2: Test All Acceptance Criteria
-- Review acceptance criteria from implement-scoping phase
-- Test each criterion systematically
-- Document results for each
-- Verify all are met
-
-### Step 3: Test Edge Cases
-**Common edge cases:**
-- Empty inputs
-- Invalid inputs
-- Boundary values
-- Error conditions
-- Concurrent operations (if applicable)
-
-### Step 4: Document Results
-In evidence, include:
-- What was tested
-- How it was tested
-- Results of each test
-- Any issues found and fixed
-- Screenshots/output (if applicable)
-
-## VALIDATION
-
-### Phase Complete When:
-- ✅ All acceptance criteria tested and passing
-- ✅ Edge cases handled correctly
-- ✅ For bugs: repro test now passes
-- ✅ For features: new functionality works
-- ✅ No obvious bugs or issues
-- ✅ Validation results documented with evidence
-- ✅ All servers running without errors
-- ✅ 100% test success rate
-- ✅ Error scenarios tested and handled
-
-### Phase Incomplete If:
-- ❌ Acceptance criteria not met
-- ❌ For bugs: repro test still fails
-- ❌ For features: functionality doesn't work
-- ❌ Edge cases not handled
-- ❌ Obvious bugs found
-- ❌ Missing evidence for claims
-- ❌ Server errors present
-- ❌ Any tests failing
-
-### Report Back:
-When you complete this phase, call:
-
-```javascript
-seekCoachingOnNextStep({
-  workflowType: "implement",
-  issueNumber: "{issue_number}",
-  currentPhase: "implement-validate",
-  status: "complete",
-  findings: {
-    issueType: "bug" // or "feature" - Required for phase validation
-  },
-  evidence: {
-    buildPassed: "YES", // Did `npm run build` succeed?
-    testsPassed: "YES", // Did `npm test` show 100% pass rate?
-    manualValidationDone: "YES", // Did you manually test the functionality?
-    serversStarted: "YES", // Did servers start without errors?
-    reproTestPassed: "YES", // (For bugs) Does the repro test now pass?
-    featureWorking: "YES", // (For features) Does the new functionality work?
-    commandsRun: "npm run build && npm test", // What validation commands did you run?
-    summary: "All validation steps completed successfully. Build passes, all 15 tests pass, manual testing confirmed functionality works as expected."
-  }
-})
-```
-
-If validation reveals issues, report back with status "incomplete":
-```javascript
-seekCoachingOnNextStep({
-  workflowType: "implement",
-  issueNumber: "{issue_number}",
-  currentPhase: "implement-validate",
-  status: "incomplete",
-  findings: {
-    issueType: "bug" // or "feature" - Required for phase validation
-  },
-  evidence: {
-    buildPassed: "NO", // Specify what failed
-    testsPassed: "NO", // Be specific about failures
-    issues: ["Build failed with TypeScript errors", "3 tests failing", "Manual testing revealed login button not working"],
-    commandsRun: "npm run build && npm test",
-    nextAction: "Need to return to implement phase to fix these issues"
-  }
-})
-```
-
-## SCRIPTS
-
-**Browser testing:**
-```bash
-npm run dev  # Start dev server
-# Then manually test in browser
-```
-
-**API testing:**
-```bash
-curl -X POST http://localhost:3000/api/endpoint -d '{"data": "value"}'
-```
-
-**CLI testing:**
-```bash
-npm run cli -- command --flag value
-```
-
-**Run all tests:**
-```bash
-npm test
-```
package/dist/registry/ai-manager-rules/retrospective.md
@@ -1,116 +0,0 @@
-# Phase: Retrospective
-
-## INTENT
-To conduct a comprehensive retrospective of the completed work while full context is available, capturing learnings and insights before branch cleanup and resolution. This phase applies to all workflow types (spec, design, implement, test).
-
-## TRIGGER
-This phase is triggered after PR approval (`wait-for-pr-review` completion) and before `resolve` phase for all workflow types.
-
-## CONTEXT ADVANTAGES
-- **Full Work Context**: All changes, decisions, and challenges are fresh in memory
-- **Branch Available**: Can reference specific commits, diffs, and work artifacts
-- **Problem-Solution Mapping**: Clear connection between original problem and completed solution
-- **Fresh Insights**: Recent experience provides accurate reflection on what worked/didn't work
-- **PR Feedback Available**: Can analyze and learn from user feedback received during review
-- **Complete Journey**: Can analyze entire workflow from start to finish
-
-## WORKFLOW
-
-### Step 1: Retrospective Preparation
-- Ensure PR is approved and ready for merge
-- Gather work artifacts (commits, documents, test results, etc.)
-- Review original issue requirements and acceptance criteria
-- **Collect PR feedback**: Review all comments, suggestions, and feedback received during PR review
-- Analyze user feedback patterns and themes
-- Identify key decision points and challenges encountered
-
-### Step 2: Create Retrospective Document
-- Create (or use existing) retrospective file: `retrospectives/issue-{issue-number}-{kebab-title}-postmortem.md`
-- Use retrospective template: `get_fraim_file({ path: "templates/retrospective/RETROSPECTIVE-TEMPLATE.md" })`
-- Document while work context is fresh and complete
-
-### Step 3: Work Analysis
-- **What Went Well**: Successful approaches, tools, and techniques used
-- **What Went Poorly**: Challenges, obstacles, and inefficiencies encountered
-- **Root Cause Analysis**: Why problems occurred, not just what happened
-- **Decision Analysis**: Review key decisions made during the work
-- **PR Feedback Analysis**: What user feedback revealed about the work quality
-- **Process Effectiveness**: How well the workflow supported the work
-
-### Step 4: Learning Extraction
-- **Key Learnings**: Specific insights that can help future work of this type
-- **Process Improvements**: Workflow or rule changes that would help
-- **Tool/Technique Discoveries**: New approaches or tools that proved valuable
-- **Anti-Patterns Identified**: Approaches to avoid in future work
-- **User Feedback Patterns**: Common themes in user feedback to address
-- **Quality Insights**: What contributed to or detracted from work quality
-
-### Step 5: Prevention Measures
-- **Specific Actions**: Concrete steps to prevent similar issues in future work
-- **Rule Updates**: Suggestions for updating FRAIM rules or workflows
-- **Knowledge Sharing**: Insights that should be shared with other agents
-- **Process Enhancements**: Workflow improvements based on experience
-- **Feedback Integration**: How to better incorporate user feedback in future work
-- **Quality Improvements**: Changes to improve work quality
-
-### Step 6: Validation and Completion
-- Ensure retrospective meets quality criteria
-- Validate all template sections are complete
-- Confirm learnings are actionable and specific
-- Verify PR feedback has been analyzed and learned from
-- Check that prevention measures are concrete and implementable
-- Mark phase complete and proceed to resolve
-
-## VALIDATION CRITERIA
-- ✅ Retrospective document created using template
-- ✅ Root cause analysis completed (not just symptoms)
-- ✅ Prevention measures documented and actionable
-- ✅ Key learnings extracted and documented
-- ✅ Process improvements identified
-- ✅ PR feedback analyzed and lessons captured
-- ✅ All template sections completed with substantial content
-- ✅ Retrospective quality meets standards
-- ✅ Learnings are specific to the workflow type and transferable
-
-## PHASE COMPLETION
-This phase is complete when:
-1. Comprehensive retrospective document is created
-2. All validation criteria are met
-3. Learnings are documented for future reference
-4. PR feedback has been systematically analyzed
-5. Prevention measures are specific and actionable
-6. Process improvements are identified
-7. Agent is ready to proceed to resolve phase
-
-## WORKFLOW-SPECIFIC FOCUS AREAS
-
-### Spec Retrospective Focus
-- Requirements gathering effectiveness
-- Stakeholder engagement quality
-- Competitive analysis thoroughness
-- User experience definition clarity
-- Validation plan completeness
-
-### Design Retrospective Focus
-- Technical decision quality
-- Architecture choice rationale
-- Design review feedback incorporation
-- Technical feasibility assessment
-- Implementation guidance clarity
-
-### Implement Retrospective Focus
-- Code quality and maintainability
-- Technical approach effectiveness
-- Testing strategy success
-- Performance considerations
-- Integration challenges
-
-### Test Retrospective Focus
-- Test coverage adequacy
-- Quality assurance approach effectiveness
-- Testing strategy validation
-- Bug detection capability
-- Test maintenance considerations
-
-## NEXT PHASE
-After completing retrospective, proceed to `resolve` phase for final merge and cleanup.