midas-mcp 5.44.4 → 5.44.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (61)
  1. package/.claude/PLUGIN.md +124 -0
  2. package/.claude/agents/midas-coach.md +108 -0
  3. package/.claude/agents/midas-reviewer.md +93 -0
  4. package/.claude/agents/midas-verifier.md +81 -0
  5. package/.claude/hooks/hooks.json +67 -0
  6. package/.claude/settings.json +41 -0
  7. package/.claude/skills/build-debug/SKILL.md +106 -0
  8. package/.claude/skills/build-implement/SKILL.md +81 -0
  9. package/.claude/skills/build-rules/SKILL.md +80 -0
  10. package/.claude/skills/build-test/SKILL.md +89 -0
  11. package/.claude/skills/horizon-expand/SKILL.md +103 -0
  12. package/.claude/skills/oneshot-retry/SKILL.md +96 -0
  13. package/.claude/skills/plan-brainlift/SKILL.md +65 -0
  14. package/.claude/skills/plan-gameplan/SKILL.md +71 -0
  15. package/.claude/skills/plan-idea/SKILL.md +51 -0
  16. package/.claude/skills/plan-prd/SKILL.md +79 -0
  17. package/.claude/skills/plan-research/SKILL.md +63 -0
  18. package/.claude/skills/tornado-debug/SKILL.md +128 -0
  19. package/README.md +51 -1
  20. package/dist/.claude/PLUGIN.md +124 -0
  21. package/dist/.claude/agents/midas-coach.md +108 -0
  22. package/dist/.claude/agents/midas-reviewer.md +93 -0
  23. package/dist/.claude/agents/midas-verifier.md +81 -0
  24. package/dist/.claude/hooks/hooks.json +67 -0
  25. package/dist/.claude/settings.json +41 -0
  26. package/dist/.claude/skills/build-debug/SKILL.md +106 -0
  27. package/dist/.claude/skills/build-implement/SKILL.md +81 -0
  28. package/dist/.claude/skills/build-rules/SKILL.md +80 -0
  29. package/dist/.claude/skills/build-test/SKILL.md +89 -0
  30. package/dist/.claude/skills/horizon-expand/SKILL.md +103 -0
  31. package/dist/.claude/skills/oneshot-retry/SKILL.md +96 -0
  32. package/dist/.claude/skills/plan-brainlift/SKILL.md +65 -0
  33. package/dist/.claude/skills/plan-gameplan/SKILL.md +71 -0
  34. package/dist/.claude/skills/plan-idea/SKILL.md +51 -0
  35. package/dist/.claude/skills/plan-prd/SKILL.md +79 -0
  36. package/dist/.claude/skills/plan-research/SKILL.md +63 -0
  37. package/dist/.claude/skills/tornado-debug/SKILL.md +128 -0
  38. package/dist/cli.d.ts.map +1 -1
  39. package/dist/cli.js +56 -2
  40. package/dist/cli.js.map +1 -1
  41. package/dist/github-integration.d.ts +37 -0
  42. package/dist/github-integration.d.ts.map +1 -0
  43. package/dist/github-integration.js +219 -0
  44. package/dist/github-integration.js.map +1 -0
  45. package/dist/phase-detector.d.ts +46 -0
  46. package/dist/phase-detector.d.ts.map +1 -0
  47. package/dist/phase-detector.js +251 -0
  48. package/dist/phase-detector.js.map +1 -0
  49. package/dist/pilot.d.ts +51 -0
  50. package/dist/pilot.d.ts.map +1 -1
  51. package/dist/pilot.js +50 -6
  52. package/dist/pilot.js.map +1 -1
  53. package/dist/tools/journal.d.ts +10 -1
  54. package/dist/tools/journal.d.ts.map +1 -1
  55. package/dist/tools/journal.js +38 -2
  56. package/dist/tools/journal.js.map +1 -1
  57. package/dist/tui-lite.d.ts +23 -0
  58. package/dist/tui-lite.d.ts.map +1 -0
  59. package/dist/tui-lite.js +188 -0
  60. package/dist/tui-lite.js.map +1 -0
  61. package/package.json +3 -2
@@ -0,0 +1,106 @@
+ ---
+ name: build-debug
+ description: Debug systematically using the Tornado cycle
+ auto_trigger:
+ - "debug"
+ - "stuck on error"
+ - "can't figure out"
+ ---
+
+ # BUILD: DEBUG Step
+
+ When you're stuck, random changes make things worse. The Tornado cycle systematically narrows the possibilities.
+
+ ## The Tornado Cycle
+
+ ```
+      RESEARCH
+     /        \
+    /          \
+  LOGS  ←→  TESTS
+    \          /
+     \        /
+      REPEAT
+ ```
+
+ 1. **RESEARCH**: Search for the exact error message
+ 2. **LOGS**: Add logging to see actual runtime values
+ 3. **TESTS**: Write a minimal test that reproduces the bug
+ 4. **REPEAT**: Until you understand the root cause
+
+ ## When to Use Tornado
+
+ - Same error after 2+ fix attempts
+ - Error message doesn't make sense
+ - A fix in one place breaks another
+ - "It works on my machine" situations
+
+ ## Research Phase
+
+ ```bash
+ # Search GitHub for the exact error
+ gh search issues "[exact error message]"
+
+ # Search Stack Overflow
+ # Copy the error and search directly
+
+ # Check library issues
+ open https://github.com/[lib]/[repo]/issues
+ ```
+
+ **What to look for:**
+ - Is this a known issue?
+ - What versions are affected?
+ - What workarounds exist?
+
+ ## Logs Phase
+
+ ```typescript
+ // Before the failing line
+ console.log('=== DEBUG ===');
+ console.log('input:', JSON.stringify(input, null, 2));
+ console.log('config:', config);
+ console.log('typeof value:', typeof value);
+
+ // Around the error
+ try {
+   result = problematicFunction(input);
+   console.log('success:', result);
+ } catch (e) {
+   console.log('error:', e);
+   console.log('stack:', e.stack);
+   throw e;
+ }
+ ```
+
+ ## Tests Phase
+
+ Write the smallest possible test that fails:
+
+ ```typescript
+ it('reproduces the bug', () => {
+   // Minimal setup
+   const input = { /* exact values that cause failure */ };
+
+   // This should fail the same way as production
+   expect(() => buggyFunction(input)).toThrow('Expected error');
+ });
+ ```
+
+ **Why this works:**
+ - Small test = fast iteration
+ - Reproducible = you can fix it
+ - When the test passes, the bug is fixed
+
+ ## Common Bug Patterns
+
+ | Symptom | Likely Cause | Fix |
+ |---------|--------------|-----|
+ | "undefined is not a function" | Wrong import, typo | Check import statement |
+ | "Cannot read property of null" | Missing null check | Add `?.` or early return |
+ | Works sometimes, fails sometimes | Race condition | Add proper async/await |
+ | Works locally, fails in CI | Environment difference | Check env vars, paths |
+
+ ## Next Step
+
+ Once the bug is fixed and tests pass, return to BUILD:IMPLEMENT for the next task.
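The race-condition row in the table above is the subtlest of these. A minimal sketch of what it usually looks like in practice - an un-awaited promise - is shown below; every name here is hypothetical and not part of the package:

```typescript
// Hypothetical illustration of the "works sometimes, fails sometimes" row.
async function save(store: string[], value: string): Promise<void> {
  // Simulate async I/O with a small delay
  await new Promise((resolve) => setTimeout(resolve, 5));
  store.push(value);
}

async function flaky(store: string[]): Promise<void> {
  save(store, 'a'); // BUG: not awaited - callers can read before the write lands
}

async function fixed(store: string[]): Promise<void> {
  await save(store, 'a'); // awaiting makes completion deterministic
}

async function main(): Promise<void> {
  const s1: string[] = [];
  await flaky(s1);
  console.log('flaky sees', s1.length, 'items'); // 0 - the write hasn't landed yet

  const s2: string[] = [];
  await fixed(s2);
  console.log('fixed sees', s2.length, 'items'); // 1
}

main();
```

The fix in the table ("add proper async/await") is exactly the one-character-class difference between `flaky` and `fixed`.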
@@ -0,0 +1,81 @@
+ ---
+ name: build-implement
+ description: Write code with tests using test-first methodology
+ auto_trigger:
+ - "implement feature"
+ - "write code"
+ - "build the"
+ ---
+
+ # BUILD: IMPLEMENT Step
+
+ Write code that works. Test-first defines "working" before you code.
+
+ ## The Golden Code Implementation Cycle
+
+ ```
+ 1. Write test first (defines success)
+ 2. Run test (should fail)
+ 3. Write minimal code to pass
+ 4. Run test (should pass)
+ 5. Refactor if needed
+ 6. Commit
+ ```
+
+ ## What to Do
+
+ 1. **Pick ONE task** from gameplan.md
+ 2. **Write a failing test** that describes what "done" looks like
+ 3. **Implement** just enough to pass the test
+ 4. **Run all tests** to catch regressions
+ 5. **Commit** with a descriptive message
+
+ ## Why This Matters
+
+ Test-first catches misunderstandings early. Writing the test first forces you to think about the interface before the implementation. It's faster than debugging later.
+
+ ## Test-First Example
+
+ ```typescript
+ // 1. Write the test FIRST
+ describe('calculateTotal', () => {
+   it('should sum prices with tax', () => {
+     const items = [{ price: 10 }, { price: 20 }];
+     const result = calculateTotal(items, 0.1);
+     expect(result).toBe(33); // 30 + 10% tax
+   });
+
+   it('should handle empty array', () => {
+     expect(calculateTotal([], 0.1)).toBe(0);
+   });
+ });
+
+ // 2. Run test - it fails (function doesn't exist)
+
+ // 3. Write minimal implementation
+ function calculateTotal(items: Item[], taxRate: number): number {
+   const subtotal = items.reduce((sum, item) => sum + item.price, 0);
+   return subtotal * (1 + taxRate);
+ }
+
+ // 4. Run test - it passes
+
+ // 5. Commit: "feat: add calculateTotal with tax support"
+ ```
+
+ ## Commit Discipline
+
+ ```bash
+ # Before starting
+ git status   # Check you're clean
+ git add -A && git commit -m "checkpoint: before implementing X"
+
+ # After completing
+ npm test       # Ensure all pass
+ npm run build  # Ensure it compiles
+ git add -A && git commit -m "feat: implement X with tests"
+ ```
+
+ ## Next Step
+
+ After implementing, advance to BUILD:TEST to verify with the full test suite.
@@ -0,0 +1,80 @@
+ ---
+ name: build-rules
+ description: Read and understand project conventions before coding
+ auto_trigger:
+ - "start building"
+ - "read cursorrules"
+ - "understand conventions"
+ ---
+
+ # BUILD: RULES Step
+
+ Every project has conventions. Reading first prevents "works but doesn't fit" code.
+
+ ## What to Do
+
+ 1. **Read .cursorrules**: Understand project conventions
+ 2. **Check package.json scripts**: Know how to build, test, lint
+ 3. **Review existing patterns**: Look at existing code style
+ 4. **Note constraints**: What tools, versions, patterns are required?
+
+ ## Why This Matters
+
+ Building without knowing the rules wastes time. You'll write code that technically works but violates project conventions, requiring rewrites.
+
+ ## Create .cursorrules
+
+ If `.cursorrules` doesn't exist, create it:
+
+ ```markdown
+ # Project Rules
+
+ ## Language & Runtime
+ - TypeScript with strict mode
+ - Node.js 20+
+ - ES Modules (type: "module")
+
+ ## Code Style
+ - Use explicit return types on exported functions
+ - Prefer `type` imports: `import type { X } from 'y'`
+ - No `any` - use `unknown` and narrow
+
+ ## File Naming
+ - kebab-case for files (my-component.ts)
+ - PascalCase for types/interfaces
+ - camelCase for functions/variables
+
+ ## Testing
+ - Co-locate tests: `foo.ts` → `foo.test.ts`
+ - Use descriptive test names
+ - Test behavior, not implementation
+
+ ## Git
+ - Conventional commits: feat:, fix:, docs:, refactor:
+ - Commit before AND after significant changes
+
+ ## What NOT to Do
+ - No console.log in production code (use logger)
+ - No hardcoded secrets
+ - No synchronous file operations in async contexts
+ ```
+
+ ## Commands to Run
+
+ ```bash
+ # Check what scripts exist
+ cat package.json | jq '.scripts'
+
+ # Run build to ensure it works
+ npm run build
+
+ # Run tests to see current state
+ npm test
+
+ # Run lint to see style requirements
+ npm run lint
+ ```
+
+ ## Next Step
+
+ Once you understand the rules and `.cursorrules` exists, advance to BUILD:INDEX.
@@ -0,0 +1,89 @@
+ ---
+ name: build-test
+ description: Run and fix tests, add edge cases
+ auto_trigger:
+ - "run tests"
+ - "fix failing tests"
+ - "test coverage"
+ ---
+
+ # BUILD: TEST Step
+
+ Your change might break something unrelated. The full suite catches regressions.
+
+ ## What to Do
+
+ 1. **Run all tests**: Not just the ones you wrote
+ 2. **Fix any failures**: Your change broke something
+ 3. **Add edge cases**: What inputs could break this?
+ 4. **Check coverage**: Are critical paths tested?
+
+ ## Why This Matters
+
+ A failing test is a gift - it caught a bug before users did. The longer you wait to run tests, the harder it is to find what broke.
+
+ ## Commands
+
+ ```bash
+ # Run all tests
+ npm test
+
+ # Run with coverage
+ npm test -- --coverage
+
+ # Run specific test file
+ npm test -- path/to/file.test.ts
+
+ # Run tests in watch mode (during development)
+ npm test -- --watch
+ ```
+
+ ## Edge Cases Checklist
+
+ For every function, consider:
+
+ - [ ] **Empty input**: `[]`, `""`, `null`, `undefined`
+ - [ ] **Single item**: Array with one element
+ - [ ] **Large input**: Performance with 1000+ items
+ - [ ] **Invalid types**: What if someone passes the wrong type?
+ - [ ] **Boundary values**: 0, -1, MAX_INT, empty string
+ - [ ] **Concurrent access**: Race conditions?
+ - [ ] **Error states**: Network failure, disk full, permission denied
+
+ ## Example Edge Case Tests
+
+ ```typescript
+ describe('parseConfig', () => {
+   it('should handle valid config', () => {
+     expect(parseConfig({ port: 3000 })).toEqual({ port: 3000 });
+   });
+
+   // Edge cases
+   it('should handle empty object', () => {
+     expect(parseConfig({})).toEqual(DEFAULT_CONFIG);
+   });
+
+   it('should handle null', () => {
+     expect(parseConfig(null)).toEqual(DEFAULT_CONFIG);
+   });
+
+   it('should reject invalid port', () => {
+     expect(() => parseConfig({ port: -1 })).toThrow('Invalid port');
+   });
+
+   it('should reject port > 65535', () => {
+     expect(() => parseConfig({ port: 99999 })).toThrow('Invalid port');
+   });
+ });
+ ```
+
+ ## When Tests Fail
+
+ 1. **Read the error carefully** - What's expected vs actual?
+ 2. **Don't change the test first** - The test might be right
+ 3. **Add a console.log** to see actual values
+ 4. **Check recent changes** - `git diff` shows what you touched
+
+ ## Next Step
+
+ When all tests pass, move on to SHIP:REVIEW. If issues remain, go to BUILD:DEBUG first.
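For concreteness, a minimal `parseConfig` that would satisfy the edge-case tests above could look like the sketch below. The shape of `DEFAULT_CONFIG` is an assumption made for illustration; the real function is whatever your project defines:

```typescript
// Hypothetical implementation passing the edge-case tests above.
// DEFAULT_CONFIG's shape is assumed for this example.
type Config = { port: number };

const DEFAULT_CONFIG: Config = { port: 3000 };

function parseConfig(raw: unknown): Config {
  // Empty/missing input falls back to defaults
  if (raw === null || raw === undefined || typeof raw !== 'object') {
    return DEFAULT_CONFIG;
  }
  const candidate = raw as Partial<Config>;
  if (candidate.port === undefined) {
    return DEFAULT_CONFIG;
  }
  // Boundary values: ports are integers in [1, 65535]
  if (!Number.isInteger(candidate.port) || candidate.port < 1 || candidate.port > 65535) {
    throw new Error('Invalid port');
  }
  return { port: candidate.port };
}

console.log(parseConfig({ port: 8080 })); // { port: 8080 }
console.log(parseConfig(null));           // { port: 3000 }
```

Note how each branch maps directly to a checklist item: empty input, invalid types, and boundary values.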
@@ -0,0 +1,103 @@
+ ---
+ name: horizon-expand
+ description: Widen context when AI output doesn't fit the codebase
+ auto_trigger:
+ - "horizon"
+ - "doesn't fit"
+ - "wrong pattern"
+ - "not how we do it"
+ ---
+
+ # Horizon Expansion
+
+ AI output technically works but doesn't fit your codebase? Widen the context.
+
+ ## The Horizon Formula
+
+ ```
+ Wrong Output? → Widen Context → Correct Output
+ ```
+
+ ## When to Use Horizon
+
+ - AI wrote code that works but uses wrong patterns
+ - AI missed existing utilities you already have
+ - AI created duplicate functionality
+ - AI used different naming conventions
+ - AI didn't follow your architecture
+
+ ## How to Expand
+
+ ### 1. Show Related Files
+
+ ```
+ Before implementing [feature], read these files to understand our patterns:
+
+ - src/utils/helpers.ts (reusable utilities we already have)
+ - src/types/index.ts (type conventions)
+ - src/services/auth.ts (example of how we structure services)
+ ```
+
+ ### 2. Show Examples
+
+ ```
+ Here's how we implement similar features in this codebase:
+
+ Example 1 (from src/services/user.ts):
+ [paste relevant code section]
+
+ Example 2 (from src/services/product.ts):
+ [paste relevant code section]
+
+ Please follow these patterns for the new implementation.
+ ```
+
+ ### 3. Specify Constraints
+
+ ```
+ When implementing this:
+ - Use our existing logger at src/utils/logger.ts, not console.log
+ - Follow the Repository pattern we use in src/repositories/
+ - Use the existing ErrorHandler class, don't throw raw errors
+ - Name files in kebab-case, types in PascalCase
+ ```
+
+ ## Horizon Template
+
+ ```
+ ## Context Files to Read First
+ - [file1] - for [pattern/utility]
+ - [file2] - for [pattern/utility]
+
+ ## Existing Patterns to Follow
+ [paste or describe 1-2 examples from your codebase]
+
+ ## Constraints
+ - Use existing [X], don't create new
+ - Follow [pattern] as seen in [file]
+ - Match naming: [conventions]
+
+ ## Now Implement
+ [your actual request]
+ ```
+
+ ## Common Horizon Fixes
+
+ | Problem | Expand With |
+ |---------|-------------|
+ | Wrong import style | Show tsconfig.json, existing imports |
+ | Duplicate utility | Link to existing util file |
+ | Wrong architecture | Show 2 similar services as examples |
+ | Wrong naming | Show naming conventions doc or examples |
+ | Missed types | Show types directory |
+
+ ## Prevention
+
+ To avoid needing Horizon in the future:
+ 1. Keep `.cursorrules` updated with conventions
+ 2. Use `@codebase` or `@folder` references liberally
+ 3. Start prompts with "Following our existing patterns in [file]..."
+
+ ## Next Step
+
+ After expanding context, retry your request. The output should now fit your codebase naturally.
@@ -0,0 +1,96 @@
+ ---
+ name: oneshot-retry
+ description: Construct a perfect retry prompt after an error
+ auto_trigger:
+ - "oneshot"
+ - "retry prompt"
+ - "try again with"
+ - "that didn't work"
+ ---
+
+ # Oneshot Retry
+
+ First attempt failed? Don't just retry - construct a better prompt.
+
+ ## The Oneshot Formula
+
+ ```
+ ONESHOT = ORIGINAL + ERROR + AVOID = WORKS
+ ```
+
+ ## Template
+
+ Copy this exact structure for your retry:
+
+ ```
+ ## Original Request
+ [Paste what you originally asked for]
+
+ ## What Happened
+ [Paste the exact error or unexpected result]
+
+ ## What to Avoid
+ - Don't [specific approach that failed]
+ - Don't [another thing that didn't work]
+ - Don't [assumption that was wrong]
+
+ ## Additional Context
+ [New information you discovered]
+
+ ## Try This Instead
+ [Specific alternative approach if you have one]
+ ```
+
+ ## Example
+
+ ### Bad Retry (Vague)
+ > "That didn't work, try again"
+
+ ### Good Retry (Oneshot)
+ ```
+ ## Original Request
+ Create a function to parse CSV files
+
+ ## What Happened
+ Error: Cannot read property 'split' of undefined at line 15
+ The function failed when the CSV had empty lines at the end.
+
+ ## What to Avoid
+ - Don't assume all lines have content
+ - Don't use .split() without null checking
+ - Don't process the file line-by-line (memory issues with large files)
+
+ ## Additional Context
+ - CSV files can be up to 100MB
+ - Empty lines are valid in our format
+ - Need to handle quoted fields with commas
+
+ ## Try This Instead
+ Use a streaming CSV parser like 'csv-parse' that handles edge cases.
+ ```
+
+ ## Why Oneshot Works
+
+ 1. **Context**: AI sees the full picture
+ 2. **Constraints**: AI knows what NOT to do
+ 3. **Direction**: AI has a starting point
+ 4. **Efficiency**: One well-constructed prompt > 5 vague retries
+
+ ## When to Use Oneshot
+
+ - ❌ First attempt failed
+ - ❌ Same error twice
+ - ❌ AI made wrong assumptions
+ - ❌ Output was close but not quite right
+
+ ## Quick Oneshot Checklist
+
+ Before sending your retry, ensure you have:
+ - [ ] Original request (what you wanted)
+ - [ ] Error/result (what happened)
+ - [ ] At least 2 "don't do X" constraints
+ - [ ] Any new context you discovered
+
+ ## Next Step
+
+ After successful Oneshot, commit your working code and continue with the next task.
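As a sketch of what the example retry above is asking for, here is a tiny quoted-field CSV parser handling the two named edge cases (trailing empty lines, commas inside quotes). It is illustrative only: it splits on newlines first, so quoted line breaks are not handled, and for 100MB files the retry's advice to use a streaming parser still stands. All names are hypothetical:

```typescript
// Illustrative sketch, not a production CSV parser: tolerates trailing
// empty lines and handles commas inside double-quoted fields.
function parseCsv(text: string): string[][] {
  const rows: string[][] = [];
  for (const line of text.split(/\r?\n/)) {
    if (line.trim() === '') continue; // empty lines are valid - skip, don't crash
    const fields: string[] = [];
    let field = '';
    let inQuotes = false;
    for (let i = 0; i < line.length; i++) {
      const ch = line[i];
      if (inQuotes) {
        if (ch === '"' && line[i + 1] === '"') { field += '"'; i++; } // escaped quote
        else if (ch === '"') inQuotes = false;
        else field += ch;
      } else if (ch === '"') inQuotes = true;
      else if (ch === ',') { fields.push(field); field = ''; }
      else field += ch;
    }
    fields.push(field);
    rows.push(fields);
  }
  return rows;
}

console.log(parseCsv('a,"b,c"\n\n')); // [ [ 'a', 'b,c' ] ]
```

Note how each "What to Avoid" constraint from the retry maps to a line of the sketch: no bare `.split(',')`, and empty lines are skipped rather than assumed to have content.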
@@ -0,0 +1,65 @@
+ ---
+ name: plan-brainlift
+ description: Document your unique insights and mental models
+ auto_trigger:
+ - "create brainlift"
+ - "document context"
+ - "what do I know"
+ ---
+
+ # PLAN: BRAINLIFT Step
+
+ Extract what YOU know that the AI doesn't. This is your competitive advantage.
+
+ ## What to Do
+
+ 1. **Domain Knowledge**: What do you know about this problem space?
+ 2. **User Insights**: What have you learned from real users?
+ 3. **Technical Constraints**: What limitations affect the solution?
+ 4. **Past Failures**: What approaches have you tried that didn't work?
+ 5. **Mental Models**: How do you think about this problem?
+
+ ## Why This Matters
+
+ The AI has read the internet; you have specific context it doesn't. The brainlift document becomes context for every future AI session, making suggestions more relevant.
+
+ ## Template
+
+ Create `docs/brainlift.md`:
+
+ ```markdown
+ # Brainlift: [Project Name]
+
+ ## Problem Statement
+ [Clear, specific description of the problem]
+
+ ## Target User
+ [Specific persona - not "developers" but "solo developers building MVPs"]
+
+ ## Why Now
+ [What changed - new API, market shift, personal itch]
+
+ ## Domain Knowledge
+ [Industry-specific insights, jargon, constraints]
+
+ ## Technical Context
+ - Language: [e.g., TypeScript]
+ - Environment: [e.g., Node.js, browser, both]
+ - Key Dependencies: [e.g., React, Express]
+ - Constraints: [e.g., must work offline, <100ms response]
+
+ ## What I Know That Others Don't
+ [Your unique insights - THIS IS THE KEY SECTION]
+ - [Insight 1]
+ - [Insight 2]
+
+ ## Anti-Patterns to Avoid
+ [What NOT to do based on your experience]
+
+ ## Success Criteria
+ [How will you know this is working?]
+ ```
+
+ ## Next Step
+
+ Once brainlift.md is complete, advance to PLAN:PRD to define requirements.
@@ -0,0 +1,71 @@
+ ---
+ name: plan-gameplan
+ description: Break the build into ordered, actionable tasks
+ auto_trigger:
+ - "create gameplan"
+ - "plan the build"
+ - "break down tasks"
+ ---
+
+ # PLAN: GAMEPLAN Step
+
+ Sequence work so you're never blocked waiting for yourself.
+
+ ## What to Do
+
+ 1. **Identify Dependencies**: What must be built first?
+ 2. **Order Tasks**: Sequence by dependency, not preference
+ 3. **Size Tasks**: Each task completable in one session (2-4 hours max)
+ 4. **Define Done**: Clear completion criteria for each task
+
+ ## Why This Matters
+
+ Tasks depend on other tasks; ordering by dependency keeps you unblocked. A gameplan also prevents the "where was I?" problem and lets you measure progress objectively.
+
+ ## Template
+
+ Create `docs/gameplan.md`:
+
+ ```markdown
+ # Gameplan
+
+ ## Tech Stack
+ - Runtime: [e.g., Node.js 20]
+ - Language: [e.g., TypeScript 5.x]
+ - Framework: [e.g., Express, React]
+ - Database: [e.g., SQLite, Postgres]
+ - Testing: [e.g., Vitest, Jest]
+
+ ## Phase 1: Foundation
+ - [ ] Task 1.1: [Description] - **Done when**: [criteria]
+ - [ ] Task 1.2: [Description] - **Done when**: [criteria]
+
+ ## Phase 2: Core Features
+ - [ ] Task 2.1: [Description] - **Done when**: [criteria]
+ - [ ] Task 2.2: [Description] - **Done when**: [criteria]
+
+ ## Phase 3: Polish
+ - [ ] Task 3.1: [Description] - **Done when**: [criteria]
+
+ ## Risks & Mitigations
+ | Risk | Impact | Mitigation |
+ |------|--------|------------|
+ | [Risk] | [High/Med/Low] | [Plan B] |
+
+ ## Open Questions
+ - [ ] [Question that needs answering]
+ ```
+
+ ## Task Sizing Guidelines
+
+ - **Too big**: "Build the auth system" (multi-day)
+ - **Just right**: "Add JWT token generation with tests" (2-4 hours)
+ - **Too small**: "Add a comment" (minutes)
+
+ ## Next Step
+
+ Once gameplan.md exists with ordered tasks, advance to BUILD:RULES.
+
+ ---
+
+ **Tip**: The first task should be "Set up project with build/test/lint scripts" - always start with a green CI.