fraim-framework 2.0.44 → 2.0.46
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/fraim.js +1 -1
- package/dist/registry/ai-manager-rules/design-phases/design-completeness-review.md +73 -0
- package/dist/registry/ai-manager-rules/design-phases/design-design.md +145 -0
- package/dist/registry/ai-manager-rules/design-phases/design.md +108 -0
- package/dist/registry/ai-manager-rules/design-phases/finalize.md +60 -0
- package/dist/registry/ai-manager-rules/design-phases/validate.md +125 -0
- package/dist/registry/ai-manager-rules/implement-phases/code.md +323 -0
- package/dist/registry/ai-manager-rules/implement-phases/completeness-review.md +94 -0
- package/dist/registry/ai-manager-rules/implement-phases/finalize.md +177 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-code.md +286 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-completeness-review.md +120 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-regression.md +173 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-repro.md +104 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-scoping.md +100 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-smoke.md +230 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-spike.md +121 -0
- package/dist/registry/ai-manager-rules/implement-phases/implement-validate.md +371 -0
- package/dist/registry/ai-manager-rules/implement-phases/quality-review.md +304 -0
- package/dist/registry/ai-manager-rules/implement-phases/regression.md +159 -0
- package/dist/registry/ai-manager-rules/implement-phases/repro.md +101 -0
- package/dist/registry/ai-manager-rules/implement-phases/scoping.md +93 -0
- package/dist/registry/ai-manager-rules/implement-phases/smoke.md +225 -0
- package/dist/registry/ai-manager-rules/implement-phases/spike.md +118 -0
- package/dist/registry/ai-manager-rules/implement-phases/validate.md +347 -0
- package/dist/registry/ai-manager-rules/shared-phases/finalize.md +169 -0
- package/dist/registry/ai-manager-rules/shared-phases/submit-pr.md +202 -0
- package/dist/registry/ai-manager-rules/shared-phases/wait-for-pr-review.md +170 -0
- package/dist/registry/ai-manager-rules/spec-phases/finalize.md +60 -0
- package/dist/registry/ai-manager-rules/spec-phases/spec-completeness-review.md +66 -0
- package/dist/registry/ai-manager-rules/spec-phases/spec-spec.md +139 -0
- package/dist/registry/ai-manager-rules/spec-phases/spec.md +102 -0
- package/dist/registry/ai-manager-rules/spec-phases/validate.md +118 -0
- package/dist/src/ai-manager/ai-manager.js +380 -119
- package/dist/src/ai-manager/evidence-validator.js +309 -0
- package/dist/src/ai-manager/phase-flow.js +244 -0
- package/dist/src/ai-manager/types.js +5 -0
- package/dist/src/cli/commands/sync.js +81 -0
- package/dist/src/fraim-mcp-server.js +45 -153
- package/dist/src/static-website-middleware.js +75 -0
- package/dist/tests/test-ai-coach-edge-cases.js +415 -0
- package/dist/tests/test-ai-coach-mcp-integration.js +432 -0
- package/dist/tests/test-ai-coach-performance.js +328 -0
- package/dist/tests/test-ai-coach-phase-content.js +264 -0
- package/dist/tests/test-ai-coach-workflows.js +487 -0
- package/dist/tests/test-ai-manager-phase-protocol.js +147 -0
- package/dist/tests/test-ai-manager.js +60 -71
- package/dist/tests/test-evidence-validation.js +221 -0
- package/dist/tests/test-mcp-lifecycle-methods.js +18 -23
- package/dist/tests/test-pr-review-integration.js +1 -0
- package/dist/tests/test-pr-review-workflow.js +299 -0
- package/dist/website/.nojekyll +0 -0
- package/dist/website/404.html +101 -0
- package/dist/website/CNAME +1 -0
- package/dist/website/README.md +22 -0
- package/dist/website/demo.html +604 -0
- package/dist/website/images/.gitkeep +1 -0
- package/dist/website/images/fraim-logo.png +0 -0
- package/dist/website/index.html +290 -0
- package/dist/website/pricing.html +414 -0
- package/dist/website/script.js +55 -0
- package/dist/website/styles.css +2647 -0
- package/package.json +2 -2
- package/registry/agent-guardrails.md +1 -1
- package/registry/stubs/workflows/brainstorming/blue-sky-brainstorming.md +11 -0
- package/registry/stubs/workflows/brainstorming/codebase-brainstorming.md +11 -0
- package/registry/stubs/workflows/compliance/detect-compliance-requirements.md +11 -0
- package/registry/stubs/workflows/compliance/generate-audit-evidence.md +11 -0
- package/registry/stubs/workflows/learning/synthesize-learnings.md +11 -0
- package/registry/stubs/workflows/legal/nda.md +11 -0
- package/registry/stubs/workflows/legal/patent-filing.md +11 -0
- package/registry/stubs/workflows/legal/trademark-filing.md +11 -0
- package/registry/stubs/workflows/product-building/design.md +1 -1
- package/registry/stubs/workflows/product-building/implement.md +1 -2
@@ -0,0 +1,159 @@
# Phase: Regression

## INTENT
For bugs: Verify the repro test now passes. For features: Write comprehensive regression tests to prevent future breakage.

## OUTCOME
**For Bugs:**
- Repro test from repro phase now PASSES
- Bug is confirmed fixed
- Test serves as regression protection

**For Features:**
- New regression tests written
- All key functionality covered
- Tests verify acceptance criteria
- All tests pass

## PRINCIPLES
- **Test Quality**: Write meaningful tests, not checkbox tests
- **Coverage**: Test main scenarios and edge cases
- **Real Testing**: Test actual functionality, not mocks
- **Maintainable**: Tests should be clear and maintainable

## WORKFLOW

### For Bug Fixes:

#### Step 1: Run Repro Test
```bash
npm test -- path/to/repro/test.test.ts
```

#### Step 2: Verify Test Passes
- Test should now PASS (was failing in repro phase)
- Confirms bug is fixed
- Test serves as regression protection

#### Step 3: If Test Still Fails
**Return to implement phase:**
- Bug is not actually fixed
- Review the fix
- Debug why test still fails
- Fix and re-validate

### For Features:

#### Step 1: Identify Test Coverage Needed
Based on acceptance criteria:
- Main user scenarios
- Edge cases
- Error conditions
- Integration points

#### Step 2: Write Regression Tests
- Create tests for all key functionality
- Test main user workflows
- Cover edge cases
- Test error handling

#### Step 3: Follow Test Architecture
Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then use `get_fraim_file` to read the full architecture guidelines. Key requirements:
- Use shared-server-utils.ts
- Unique API keys per test
- Proper cleanup in finally blocks
- No individual server instances
- Proper timeouts (5-10s for network calls)

#### Step 4: Run All New Tests
```bash
npm test -- path/to/new/tests.test.ts
```

Verify all tests pass.

#### Step 5: Tag Tests Appropriately
- `smoke`: If testing core functionality
- `integration`: If testing multiple components
- `unit`: If testing individual functions
- `e2e`: If testing complete workflows

## VALIDATION

### Phase Complete When (Bugs):
- ✅ Repro test now PASSES
- ✅ Bug confirmed fixed
- ✅ Test output documented

### Phase Complete When (Features):
- ✅ Regression tests written
- ✅ All acceptance criteria covered
- ✅ Tests follow architecture standards
- ✅ All tests pass
- ✅ Tests appropriately tagged

### Phase Incomplete If:
- ❌ (Bugs) Repro test still fails
- ❌ (Features) No regression tests written
- ❌ (Features) Tests don't cover acceptance criteria
- ❌ Any tests fail
- ❌ Tests violate architecture standards

## RULES FOR THIS PHASE
- Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then use `get_fraim_file` to read the full test architecture guidelines
- Follow test requirements from the implement workflow (use `get_fraim_file({ path: "workflows/product-building/implement.md" })`)
- Use Successful Debugging Patterns from `rules/successful-debugging-patterns.md` via `get_fraim_file`
- Follow Git Safe Commands from `rules/git-safe-commands.md` via `get_fraim_file`

## TEST QUALITY REQUIREMENTS
- Tests must validate real functionality, not mocked behavior
- No tests that default to passing without validation
- Use flow testing, not just static analysis
- Tests should fail if code is broken

## SCRIPTS

**Run tests:**
```bash
npm test -- path/to/test.test.ts
```

**Run all tests:**
```bash
npm test
```

### Report Back (Bugs):
When you complete this phase for a bug fix, call:

```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "regression",
  status: "complete",
  evidence: {
    reproTestResult: "Repro test at test/sync.test.ts:45 now PASSES",
    bugStatus: "Bug confirmed fixed",
    regressionProtection: "Test serves as regression protection"
  }
})
```

### Report Back (Features):
When you complete this phase for a feature, call:

```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "regression",
  status: "complete",
  evidence: {
    testLocation: "test/auth.test.ts",
    testCoverage: "All acceptance criteria covered",
    testResults: "8/8 tests passing",
    testTags: "Tagged as integration"
  }
})
```
@@ -0,0 +1,101 @@
# Phase: Repro (Bug Fixes Only)

## INTENT
To create a failing test that reproduces the reported bug, demonstrating that the issue exists and providing a clear success criterion for the fix.

## OUTCOME
A test that:
- **FAILS** when run, reproducing the bug
- Clearly demonstrates the incorrect behavior
- Will pass once the bug is fixed
- Serves as a regression test

## PRINCIPLES
- **Test-First**: Create the failing test before fixing
- **Actual Reproduction**: Use real reproduction, not hypothetical scenarios
- **Clear Failure**: Test should fail obviously and consistently
- **No Assumptions**: If reproduction steps are unclear, escalate to user
- **Minimal Test**: Focus on reproducing the specific bug

## WORKFLOW

### Step 1: Review Bug Spec
- Read the bug description thoroughly
- Note the reproduction steps
- Understand the expected vs actual behavior
- Identify the failure conditions

### Step 2: Check Reproduction Steps
**If reproduction steps are clear:**
- Proceed to create test

**If reproduction steps are unclear or missing:**
- **DO NOT** hypothesize about the issue
- **ESCALATE** to user with specific questions:
  - What exact steps trigger the bug?
  - What is the expected behavior?
  - What is the actual (incorrect) behavior?
  - Are there specific inputs or conditions?

### Step 3: Create Failing Test
- Choose appropriate test file location
- Write test that follows reproduction steps
- Test should demonstrate the bug clearly
- Use descriptive test name: `test: reproduces bug #{issue_number} - {description}`

### Step 4: Run the Test
- Execute the test
- **Verify it FAILS** as expected
- Capture the failure output
- Confirm it fails for the right reason (reproducing the bug, not test errors)

### Step 5: Document Test Location
In your evidence, include:
- Path to the test file
- Test name/description
- Failure output showing the bug
- Confirmation that test reproduces the issue

## VALIDATION

### Phase Complete When:
- ✅ Test created that reproduces the bug
- ✅ Test FAILS when run
- ✅ Failure clearly demonstrates the bug
- ✅ Test will pass once bug is fixed
- ✅ Test location documented

### Phase Incomplete If:
- ❌ Test passes (not reproducing the bug)
- ❌ Test fails for wrong reason (test error, not bug)
- ❌ Reproduction steps were unclear and not escalated
- ❌ Test is hypothetical, not based on actual reproduction

## RULES FOR THIS PHASE
- Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then use `get_fraim_file` to read the full test architecture standards
- Use Successful Debugging Patterns from `rules/successful-debugging-patterns.md` via `get_fraim_file`
- Follow Git Safe Commands from `rules/git-safe-commands.md` via `get_fraim_file`

## SCRIPTS
Run test:
```bash
npm test -- path/to/test/file.test.ts
```

### Report Back:
When you complete this phase, call:

```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "repro",
  status: "complete",
  evidence: {
    testLocation: "test/sync.test.ts:45",
    testResult: "FAILS with timeout after 30s",
    reproductionConfirmed: true,
    testOutput: "Include actual test failure output here"
  }
})
```
@@ -0,0 +1,93 @@
# Phase: Scoping

## INTENT
To understand the issue requirements and determine whether it's a bug fix or feature implementation.

## OUTCOME
Clear understanding of:
- Issue type (bug or feature)
- All requirements and acceptance criteria
- Ready to proceed to next phase (repro for bugs, spike for features)

## PRINCIPLES
- **Thorough Understanding**: Read all linked documents and context
- **Clear Classification**: Accurately determine bug vs feature
- **Question Early**: Escalate if requirements are unclear

## RULES FOR THIS PHASE

### Success Criteria
Read `registry/rules/agent-success-criteria.md` via `get_fraim_file` for the complete 5-criteria success framework. Focus especially on **Integrity** (truthfulness in reporting) and **Independence** (smart decision making).

### Simplicity Principles
Read `registry/rules/simplicity.md` via `get_fraim_file` for complete guidelines. Key principles for scoping:
- **Focus on the assigned issue only** - don't scope other issues
- **Don't over-think it** - understand the specific need being addressed

### Architecture Compliance
Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then use `get_fraim_file` to read the full architecture guidelines. Understanding the architecture helps with proper scoping.

## WORKFLOW

### Step 1: Read Issue Description
- Read the full issue description in GitHub (use MCP where available)
- Note the issue title, labels, and any linked documents
- Identify if there's a spec, RFC, or design document

### Step 2: Review Linked Documents
- If spec/RFC exists: Read it thoroughly
- If design document exists: Review the technical approach
- Note all acceptance criteria and requirements

### Step 3: Determine Issue Type
**Bug indicators:**
- Issue describes broken/incorrect behavior
- References a regression or failure
- Has reproduction steps
- Reports unexpected behavior

**Feature indicators:**
- Issue describes new functionality
- Adds capabilities that don't exist
- Enhances existing features
- Has acceptance criteria for new behavior

### Step 4: Understand Requirements
- List all acceptance criteria
- Identify edge cases mentioned
- Note any constraints or limitations
- Understand success criteria

### Step 5: Identify Uncertainties
If anything is unclear:
- **DO NOT** make assumptions
- **DO NOT** hypothesize requirements
- **ESCALATE** to user with specific questions
- Wait for clarification before proceeding

## VALIDATION

### Phase Complete When:
- ✅ Issue type determined (bug or feature)
- ✅ All requirements understood
- ✅ No blocking uncertainties remain
- ✅ Ready to proceed to next phase

### Report Back:
When you complete scoping, call:

```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "scoping",
  status: "complete",
  findings: {
    issueType: "bug", // or "feature"
    requirements: "Brief summary of what you understood",
    uncertainties: [] // List any unclear aspects, or empty array if none
  }
})
```

The AI Coach will provide instructions for your next phase based on the issue type you determined.
@@ -0,0 +1,225 @@
# Phase: Smoke

## INTENT
To run technical health checks ensuring the system is stable: build passes, all tests pass, git status is clean, and core functionality hasn't been broken.

## OUTCOME
All technical checks pass, confirming:
- Build compiles successfully
- All tests pass (100% success rate)
- Git status is clean
- Core system functionality intact
- No regressions in critical paths

## PRINCIPLES
- **Zero Tolerance**: All checks must pass
- **Technical Focus**: Automated verification only
- **Fix Immediately**: If any check fails, return to implement phase
- **Fast Feedback**: Quick automated verification

## 📋 MANDATORY TECHNICAL CHECKS

### Step 1: Build Compilation Check ✅

**Requirements**:
- TypeScript compiles without errors
- Build process completes successfully
- No type errors or warnings

**Commands to Run**:
```bash
# Check TypeScript compilation
npx tsc --noEmit --skipLibCheck

# Run full build (includes validation)
npm run build
```

**What to Look For**:
- Exit code 0 (success) for all commands
- No error messages in output
- Clean compilation without warnings

### Step 2: Complete Test Suite ✅

**Requirements**:
- All tests pass (100% success rate)
- No timeouts or hanging tests
- Reasonable test count

**Commands to Run**:
```bash
# Run all tests
npm test

# Run smoke tests specifically
npm run test-smoke-ci

# Run all tests in CI mode
npm run test-all-ci
```

**What to Look For**:
- "All tests passed" or similar success message
- No "FAILED" or "ERROR" in output
- No timeout messages
- Reasonable execution time

### Step 3: Git Status Check ✅

**Requirements**:
- Git status is clean
- Only intended changes present
- No untracked files (except evidence docs)

**Commands to Run**:
```bash
# Check git status
git status

# Check recent commits
git log --oneline -3
```

**What to Look For**:
- Only intended files are modified
- No accidentally modified files
- Clean working directory
- Meaningful commit messages

### Step 4: Code Quality Verification ✅

**Requirements**:
- No type bypassing
- No debugging code remains
- No TODO comments in core functionality

**Commands to Run**:
```bash
# Check for type bypassing
grep -r "as any" src/ || echo "✅ No type bypassing found"

# Look for debugging code
grep -r "console\.log" src/ | grep -v "logger" | grep -v "// OK" || echo "✅ No debug logs found"

# Check for TODO comments
grep -r "TODO" src/ || echo "✅ No TODOs found"
```

**What to Look For**:
- Zero instances of "as any" (or only justified ones with comments)
- No console.log statements (except intentional logging)
- No TODO comments in critical code
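Note that `grep ... || echo` commands report findings but still exit 0 when problems exist, so nothing downstream can fail on them. If a script or CI step should gate on these checks, one possible inversion is the sketch below (the `src/` layout is an assumption, and this is not the project's official script):

```shell
# check_quality: returns nonzero when a code-quality check fails.
check_quality() {
  fail=0
  # Exit status 0 from grep means matches were found, i.e. a violation.
  if grep -rn "as any" src/ 2>/dev/null; then
    echo "FAIL: type bypassing found"
    fail=1
  fi
  if grep -rn "console\.log" src/ 2>/dev/null | grep -v logger | grep -q .; then
    echo "FAIL: debug logging found"
    fail=1
  fi
  [ "$fail" -eq 0 ] && echo "All code-quality checks passed"
  return "$fail"
}

check_quality
```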

### Step 5: Registry Validation (If Applicable) ✅

**Commands to Run**:
```bash
# Validate registry paths
npm run validate:registry
```

**What to Look For**:
- No validation errors
- All registry paths valid

## 🛠️ TECHNICAL COMMANDS REFERENCE

### Quick Health Check
```bash
# One-liner to check basic health
npx tsc --noEmit --skipLibCheck && npm test && git status
```

### Comprehensive Technical Check
```bash
# Full technical validation suite
npm run build && npm run test-all-ci && git status
```

### Build Verification
```bash
# Verify build artifacts are created correctly
npm run build && ls -la dist/ && echo "✅ Build artifacts present"
```

## VALIDATION

### Phase Complete When:
- ✅ TypeScript compiles (exit code 0)
- ✅ Build succeeds without errors
- ✅ All tests pass (100% success rate)
- ✅ Git status clean (only intended changes)
- ✅ No "as any" type bypassing
- ✅ No debugging code remains
- ✅ No TODO comments in core functionality
- ✅ Registry validation passes (if applicable)
- ✅ All technical checks documented

### Phase Incomplete If:
- ❌ TypeScript compilation errors
- ❌ Build fails
- ❌ ANY test fails
- ❌ Git status shows unintended changes
- ❌ Code quality issues found
- ❌ Registry validation fails

**If ANY check fails, return to implement phase immediately.**

## RULES FOR THIS PHASE

### Technical Standards
- Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then use `get_fraim_file` to read the full technical standards
- Use Successful Debugging Patterns from `rules/successful-debugging-patterns.md` via `get_fraim_file`
- When using git commands directly (if MCP tools unavailable), read `registry/rules/git-safe-commands.md` via `get_fraim_file`

## SCRIPTS

**Run all technical checks:**
```bash
npm run build && npm test && git status
```

**Run smoke tests specifically:**
```bash
npm run test-smoke-ci
```

### Report Back:
When you complete this phase, call:

```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "smoke",
  status: "complete",
  evidence: {
    buildPassed: "YES", // Did `npm run build` succeed?
    allTestsPassed: "YES", // Did `npm test` show 100% pass rate?
    gitStatusClean: "YES", // Does `git status` show only intended changes?
    noTypeBypass: "YES", // Did `grep -r "as any" src/` find zero instances?
    noDebugCode: "YES", // Did you remove console.log statements?
    registryValid: "YES", // Did `npm run validate:registry` pass?
    commandsRun: "npm run build && npm test && git status",
    summary: "All technical checks passed. Build succeeds, all tests pass, git status clean."
  }
})
```

If any technical check fails:
```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "smoke",
  status: "incomplete",
  evidence: {
    buildPassed: "NO", // Specify what failed
    allTestsPassed: "NO", // Be specific about failures
    issues: ["Build failed with TypeScript errors", "3 tests failing"],
    commandsRun: "npm run build && npm test",
    nextAction: "Returning to implement phase to fix technical issues"
  }
})
```
@@ -0,0 +1,118 @@
# Phase: Spike (Features Only)

## INTENT
To build a proof-of-concept that validates the technical approach, tests uncertain or risky aspects, and confirms the design is viable before full implementation.

## OUTCOME
A working POC that:
- Validates key technical assumptions
- Tests risky or uncertain aspects
- Gets user approval
- Informs any necessary design updates

## PRINCIPLES
- **Spike-First Development**: Follow the spike-first rule for unfamiliar technology
- **Risk Reduction**: Focus on highest-risk or most uncertain aspects
- **Quick Validation**: Build minimal POC, not production code
- **User Approval**: Get explicit approval before proceeding
- **Design Updates**: Update design doc with findings

## WORKFLOW

### Step 1: Review Design Document
- Read the technical design/RFC thoroughly
- Identify technical assumptions
- Note any unfamiliar technologies or patterns
- List risky or uncertain aspects

### Step 2: Identify Spike Scope
**Focus on:**
- Unfamiliar technology or libraries
- Integration points with external systems
- Performance-critical components
- Complex algorithms or logic
- Uncertain technical feasibility

**Do NOT spike:**
- Well-understood patterns
- Standard CRUD operations
- Simple UI changes
- Routine implementations

### Step 3: Build Proof-of-Concept
- Create minimal code to test assumptions
- Focus on proving/disproving technical approach
- Don't worry about production quality
- Document what you're testing
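For instance, a throwaway spike script in this spirit might test a single assumption, such as whether JSON round-tripping a typical payload stays within a latency budget. The file name, payload shape, and 50ms budget below are all invented for the sketch:

```javascript
// spike/json-roundtrip-poc.js - throwaway POC, not production code.
const payload = {
  items: Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `item-${i}` })),
};

const start = process.hrtime.bigint();
const encoded = JSON.stringify(payload);
const decoded = JSON.parse(encoded);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`round-trip: ${elapsedMs.toFixed(2)}ms for ${encoded.length} bytes`);
console.log('assumption holds (<50ms budget):', elapsedMs < 50);
console.log('data preserved:', decoded.items.length === payload.items.length);
```

Whatever the spike tests, record the numbers it produced; they become the evidence you report back.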

### Step 4: Test Key Assumptions
- Run the POC
- Verify it works as expected
- Test edge cases if relevant
- Document findings

### Step 5: Get User Approval
- Present POC to user
- Explain what was validated
- Share any findings or concerns
- **Wait for explicit approval** before proceeding

### Step 6: Update Design Document
If POC revealed:
- Better approaches
- Technical constraints
- Performance considerations
- Integration challenges

**Update the design document** with findings

## VALIDATION

### Phase Complete When:
- ✅ POC built and tested
- ✅ Key technical assumptions validated
- ✅ User has approved the approach
- ✅ Design document updated (if needed)
- ✅ Ready to proceed with full implementation

### Phase Incomplete If:
- ❌ POC doesn't work as expected
- ❌ Technical approach not viable
- ❌ User hasn't approved
- ❌ Significant findings not documented

## RULES FOR THIS PHASE

### Spike-First Development
Read `registry/rules/spike-first-development.md` via `get_fraim_file` for complete spike methodology.

### Architecture Compliance
Read `.fraim/config.json` for the architecture document path (`customizations.architectureDoc`), then use `get_fraim_file` to read the full architecture guidelines.

### Git Operations (if needed)
When using git commands directly (if MCP tools unavailable), read `registry/rules/git-safe-commands.md` via `get_fraim_file` to avoid interactive commands that hang agents.

## SCRIPTS
Run POC:
```bash
npm run spike # or appropriate command
node spike/your-poc.js
```

### Report Back:
When you complete this phase, call:

```javascript
seekCoachingOnNextStep({
  workflowType: "implement",
  issueNumber: "{issue_number}",
  currentPhase: "spike",
  status: "complete",
  evidence: {
    pocLocation: "spike/auth-flow.ts",
    validationResults: "OAuth2 integration works as expected",
    userApproval: "User approved approach on [date]",
    designUpdates: "Updated design doc with token refresh findings"
  }
})
```