mindforge-cc 5.0.0 → 5.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.agent/CLAUDE.md +14 -12
- package/.agent/hooks/mindforge-session-init_extended.js +42 -0
- package/.agent/settings.json +4 -0
- package/.agent/skills/mindforge-brainstorming/SKILL.md +164 -0
- package/.agent/skills/mindforge-brainstorming/scripts/frame-template.html +214 -0
- package/.agent/skills/mindforge-brainstorming/scripts/helper.js +88 -0
- package/.agent/skills/mindforge-brainstorming/scripts/server.cjs +354 -0
- package/.agent/skills/mindforge-brainstorming/scripts/start-server.sh +148 -0
- package/.agent/skills/mindforge-brainstorming/scripts/stop-server.sh +56 -0
- package/.agent/skills/mindforge-brainstorming/spec-document-reviewer-prompt.md +49 -0
- package/.agent/skills/mindforge-brainstorming/visual-companion.md +287 -0
- package/.agent/skills/mindforge-debug_extended/CREATION-LOG.md +119 -0
- package/.agent/skills/mindforge-debug_extended/SKILL.md +296 -0
- package/.agent/skills/mindforge-debug_extended/condition-based-waiting-example.ts +158 -0
- package/.agent/skills/mindforge-debug_extended/condition-based-waiting.md +115 -0
- package/.agent/skills/mindforge-debug_extended/defense-in-depth.md +122 -0
- package/.agent/skills/mindforge-debug_extended/find-polluter.sh +63 -0
- package/.agent/skills/mindforge-debug_extended/root-cause-tracing.md +169 -0
- package/.agent/skills/mindforge-debug_extended/test-academic.md +14 -0
- package/.agent/skills/mindforge-debug_extended/test-pressure-1.md +58 -0
- package/.agent/skills/mindforge-debug_extended/test-pressure-2.md +68 -0
- package/.agent/skills/mindforge-debug_extended/test-pressure-3.md +69 -0
- package/.agent/skills/mindforge-execute-phase_extended/SKILL.md +70 -0
- package/.agent/skills/mindforge-neural-orchestrator/SKILL.md +115 -0
- package/.agent/skills/mindforge-neural-orchestrator/references/codex-tools.md +100 -0
- package/.agent/skills/mindforge-neural-orchestrator/references/gemini-tools.md +33 -0
- package/.agent/skills/mindforge-parallel-mesh_extended/SKILL.md +182 -0
- package/.agent/skills/mindforge-plan-phase_extended/SKILL.md +152 -0
- package/.agent/skills/mindforge-plan-phase_extended/plan-document-reviewer-prompt.md +49 -0
- package/.agent/skills/mindforge-review-inbound/SKILL.md +213 -0
- package/.agent/skills/mindforge-review-request/SKILL.md +105 -0
- package/.agent/skills/mindforge-review-request/code-reviewer.md +146 -0
- package/.agent/skills/mindforge-ship_extended/SKILL.md +200 -0
- package/.agent/skills/mindforge-skill-creation/SKILL.md +655 -0
- package/.agent/skills/mindforge-skill-creation/anthropic-best-practices.md +1150 -0
- package/.agent/skills/mindforge-skill-creation/examples/CLAUDE_MD_TESTING.md +189 -0
- package/.agent/skills/mindforge-skill-creation/graphviz-conventions.dot +172 -0
- package/.agent/skills/mindforge-skill-creation/persuasion-principles.md +187 -0
- package/.agent/skills/mindforge-skill-creation/render-graphs.js +168 -0
- package/.agent/skills/mindforge-skill-creation/testing-skills-with-subagents.md +384 -0
- package/.agent/skills/mindforge-swarm-execution/SKILL.md +277 -0
- package/.agent/skills/mindforge-swarm-execution/code-quality-reviewer-prompt.md +26 -0
- package/.agent/skills/mindforge-swarm-execution/implementer-prompt.md +113 -0
- package/.agent/skills/mindforge-swarm-execution/spec-reviewer-prompt.md +61 -0
- package/.agent/skills/mindforge-tdd_extended/SKILL.md +371 -0
- package/.agent/skills/mindforge-tdd_extended/testing-anti-patterns.md +299 -0
- package/.agent/skills/mindforge-verify-work_extended/SKILL.md +139 -0
- package/.agent/skills/mindforge-workspace-isolated/SKILL.md +218 -0
- package/.agent/workflows/mindforge-verify-work.md +5 -0
- package/.agent/workflows/mindforge:brainstorming.md +16 -0
- package/.agent/workflows/mindforge:debug.md +4 -2
- package/.agent/workflows/mindforge:execute-phase.md +12 -0
- package/.agent/workflows/mindforge:plan-phase.md +11 -0
- package/.agent/workflows/mindforge:ship.md +6 -1
- package/.agent/workflows/mindforge:tdd.md +7 -2
- package/CHANGELOG.md +243 -115
- package/MINDFORGE.md +17 -9
- package/README.md +2 -2
- package/RELEASENOTES.md +16 -7
- package/bin/memory/federated-sync.js +82 -2
- package/docs/INTELLIGENCE-MESH.md +7 -3
- package/docs/PERSONAS.md +150 -2
- package/docs/architecture/V5-ENTERPRISE.md +8 -7
- package/docs/commands-reference.md +20 -1
- package/docs/governance-guide.md +13 -7
- package/docs/troubleshooting.md +24 -4
- package/docs/user-guide.md +37 -19
- package/package.json +1 -1
@@ -0,0 +1,100 @@ package/.agent/skills/mindforge-neural-orchestrator/references/codex-tools.md

# Codex Tool Mapping

Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:

| Skill references | Codex equivalent |
|-----------------|------------------|
| `Task` tool (dispatch subagent) | `spawn_agent` (see [Named agent dispatch](#named-agent-dispatch)) |
| Multiple `Task` calls (parallel) | Multiple `spawn_agent` calls |
| Task returns result | `wait` |
| Task completes automatically | `close_agent` to free slot |
| `TodoWrite` (task tracking) | `update_plan` |
| `Skill` tool (invoke a skill) | Skills load natively — just follow the instructions |
| `Read`, `Write`, `Edit` (files) | Use your native file tools |
| `Bash` (run commands) | Use your native shell tools |

## Subagent dispatch requires multi-agent support

Add to your Codex config (`~/.codex/config.toml`):

```toml
[features]
multi_agent = true
```

This enables `spawn_agent`, `wait`, and `close_agent` for skills like `dispatching-parallel-agents` and `mindforge-swarm-execution`.

## Named agent dispatch

Claude Code skills reference named agent types like `mindforge:code-reviewer`.
Codex does not have a named agent registry — `spawn_agent` creates generic agents
from built-in roles (`default`, `explorer`, `worker`).

When a skill says to dispatch a named agent type:

1. Find the agent's prompt file (e.g., `agents/code-reviewer.md` or the skill's
   local prompt template like `code-quality-reviewer-prompt.md`)
2. Read the prompt content
3. Fill any template placeholders (`{BASE_SHA}`, `{WHAT_WAS_IMPLEMENTED}`, etc.)
4. Spawn a `worker` agent with the filled content as the `message`

| Skill instruction | Codex equivalent |
|-------------------|------------------|
| `Task tool (mindforge:code-reviewer)` | `spawn_agent(agent_type="worker", message=...)` with `code-reviewer.md` content |
| `Task tool (general-purpose)` with inline prompt | `spawn_agent(message=...)` with the same prompt |
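
A minimal shell sketch of steps 1-4, assuming a `code-quality-reviewer-prompt.md` template in the current directory (the inline fallback string and the placeholder values are illustrative, not real project data):

```bash
# Steps 1-2: locate and read the prompt template. The fallback string keeps
# this sketch self-contained when the file is absent.
TEMPLATE=$(cat code-quality-reviewer-prompt.md 2>/dev/null ||
  echo 'Review commits since {BASE_SHA}. Implemented: {WHAT_WAS_IMPLEMENTED}')

# Step 3: fill the placeholders (values here are hypothetical).
FILLED=$(printf '%s' "$TEMPLATE" |
  sed -e 's/{BASE_SHA}/abc1234/' \
      -e 's/{WHAT_WAS_IMPLEMENTED}/Task 3: abort handling/')

# Step 4: pass $FILLED as the `message` of spawn_agent(agent_type="worker", ...).
echo "$FILLED"
```

The actual hand-off to `spawn_agent` happens as a tool call, not a shell command; the shell here only prepares the message text.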

### Message framing

The `message` parameter is user-level input, not a system prompt. Structure it
for maximum instruction adherence:

```
Your task is to perform the following. Follow the instructions below exactly.

<agent-instructions>
[filled prompt content from the agent's .md file]
</agent-instructions>

Execute this now. Output ONLY the structured response following the format
specified in the instructions above.
```

- Use task-delegation framing ("Your task is...") rather than persona framing ("You are...")
- Wrap instructions in XML tags — the model treats tagged blocks as authoritative
- End with an explicit execution directive to prevent summarization of the instructions
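
One way to assemble that framing in shell; `$FILLED` is an illustrative variable standing in for the filled prompt content:

```bash
# Assemble the framed message around the filled prompt content.
FILLED="[filled prompt content from the agent's .md file]"
MESSAGE=$(cat <<EOF
Your task is to perform the following. Follow the instructions below exactly.

<agent-instructions>
$FILLED
</agent-instructions>

Execute this now. Output ONLY the structured response following the format
specified in the instructions above.
EOF
)
printf '%s\n' "$MESSAGE"
```

The heredoc interpolates `$FILLED` inside the XML tags, so the framing text itself never needs escaping.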

### When this workaround can be removed

This approach compensates for Codex's plugin system not yet supporting an `agents`
field in `plugin.json`. When `RawPluginManifest` gains an `agents` field, the
plugin can symlink to `agents/` (mirroring the existing `skills/` symlink) and
skills can dispatch named agent types directly.

## Environment Detection

Skills that create worktrees or finish branches should detect their
environment with read-only git commands before proceeding:

```bash
GIT_DIR=$(cd "$(git rev-parse --git-dir)" 2>/dev/null && pwd -P)
GIT_COMMON=$(cd "$(git rev-parse --git-common-dir)" 2>/dev/null && pwd -P)
BRANCH=$(git branch --show-current)
```

- `GIT_DIR != GIT_COMMON` → already in a linked worktree (skip creation)
- `BRANCH` empty → detached HEAD (cannot branch/push/PR from sandbox)
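
Combined, the two signals yield a simple classification. A sketch — `detect_env` is a hypothetical helper, not a function these skills define:

```bash
# Classify the environment from the three signals above.
# Usage: detect_env "$GIT_DIR" "$GIT_COMMON" "$BRANCH"
detect_env() {
  if [ -n "$1" ] && [ "$1" != "$2" ]; then
    echo "linked-worktree"   # skip worktree creation
  elif [ -z "$3" ]; then
    echo "detached-head"     # cannot branch/push/PR from sandbox
  else
    echo "attached"          # normal checkout; proceed
  fi
}

detect_env "/repo/.git/worktrees/feat" "/repo/.git" "feat"
```

For the example arguments (a worktree-specific git dir that differs from the common dir) this prints `linked-worktree`.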

See `using-git-worktrees` Step 0 and `mindforge-ship_extended`
Step 1 for how each skill uses these signals.

## Codex App Finishing

When the sandbox blocks branch/push operations (detached HEAD in an
externally managed worktree), the agent commits all work and informs
the user to use the App's native controls:

- **"Create branch"** — names the branch, then commit/push/PR via App UI
- **"Hand off to local"** — transfers work to the user's local checkout

The agent can still run tests, stage files, and output suggested branch
names, commit messages, and PR descriptions for the user to copy.

@@ -0,0 +1,33 @@ package/.agent/skills/mindforge-neural-orchestrator/references/gemini-tools.md

# Gemini CLI Tool Mapping

Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:

| Skill references | Gemini CLI equivalent |
|-----------------|----------------------|
| `Read` (file reading) | `read_file` |
| `Write` (file creation) | `write_file` |
| `Edit` (file editing) | `replace` |
| `Bash` (run commands) | `run_shell_command` |
| `Grep` (search file content) | `grep_search` |
| `Glob` (search files by name) | `glob` |
| `TodoWrite` (task tracking) | `write_todos` |
| `Skill` tool (invoke a skill) | `activate_skill` |
| `WebSearch` | `google_web_search` |
| `WebFetch` | `web_fetch` |
| `Task` tool (dispatch subagent) | No equivalent — Gemini CLI does not support subagents |

## No subagent support

Gemini CLI has no equivalent to Claude Code's `Task` tool. Skills that rely on subagent dispatch (`mindforge-swarm-execution`, `dispatching-parallel-agents`) will fall back to single-session execution via `mindforge-execute-phase_extended`.

## Additional Gemini CLI tools

These tools are available in Gemini CLI but have no Claude Code equivalent:

| Tool | Purpose |
|------|---------|
| `list_directory` | List files and subdirectories |
| `save_memory` | Persist facts to GEMINI.md across sessions |
| `ask_user` | Request structured input from the user |
| `tracker_create_task` | Rich task management (create, update, list, visualize) |
| `enter_plan_mode` / `exit_plan_mode` | Switch to read-only research mode before making changes |

@@ -0,0 +1,182 @@ package/.agent/skills/mindforge-parallel-mesh_extended/SKILL.md

---
name: dispatching-parallel-agents
description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
---

# Dispatching Parallel Agents

## Overview

You delegate tasks to specialized agents with isolated context. By precisely crafting their instructions and context, you ensure they stay focused and succeed at their task. They should never inherit your session's context or history — you construct exactly what they need. This also preserves your own context for coordination work.

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.

**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.

## When to Use

```dot
digraph when_to_use {
    "Multiple failures?" [shape=diamond];
    "Are they independent?" [shape=diamond];
    "Single agent investigates all" [shape=box];
    "One agent per problem domain" [shape=box];
    "Can they work in parallel?" [shape=diamond];
    "Sequential agents" [shape=box];
    "Parallel dispatch" [shape=box];

    "Multiple failures?" -> "Are they independent?" [label="yes"];
    "Are they independent?" -> "Single agent investigates all" [label="no - related"];
    "Are they independent?" -> "Can they work in parallel?" [label="yes"];
    "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
    "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```

**Use when:**
- 3+ test files failing with different root causes
- Multiple subsystems broken independently
- Each problem can be understood without context from others
- No shared state between investigations

**Don't use when:**
- Failures are related (fixing one might fix others)
- You need to understand full system state
- Agents would interfere with each other

## The Pattern

### 1. Identify Independent Domains

Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality

Each domain is independent - fixing tool approval doesn't affect abort tests.

### 2. Create Focused Agent Tasks

Each agent gets:
- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass
- **Constraints:** Don't change other code
- **Expected output:** Summary of what you found and fixed

### 3. Dispatch in Parallel

```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```

### 4. Review and Integrate

When agents return:
- Read each summary
- Verify fixes don't conflict
- Run full test suite
- Integrate all changes

## Agent Prompt Structure

Good agent prompts are:
1. **Focused** - One clear problem domain
2. **Self-contained** - All context needed to understand the problem
3. **Specific about output** - What should the agent return?

```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0

These are timing/race condition issues. Your task:

1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
   - Replacing arbitrary timeouts with event-based waiting
   - Fixing bugs in abort implementation if found
   - Adjusting test expectations if testing changed behavior

Do NOT just increase timeouts - find the real issue.

Return: Summary of what you found and what you fixed.
```

## Common Mistakes

**❌ Too broad:** "Fix all the tests" - agent gets lost
**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope

**❌ No context:** "Fix the race condition" - agent doesn't know where
**✅ Context:** Paste the error messages and test names

**❌ No constraints:** Agent might refactor everything
**✅ Constraints:** "Do NOT change production code" or "Fix tests only"

**❌ Vague output:** "Fix it" - you don't know what changed
**✅ Specific:** "Return summary of root cause and changes"

## When NOT to Use

**Related failures:** Fixing one might fix others - investigate together first
**Need full context:** Understanding requires seeing the entire system
**Exploratory debugging:** You don't know what's broken yet
**Shared state:** Agents would interfere (editing same files, using same resources)

## Real Example from Session

**Scenario:** 6 test failures across 3 files after major refactoring

**Failures:**
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)

**Decision:** Independent domains - abort logic separate from batch completion separate from race conditions

**Dispatch:**
```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```

**Results:**
- Agent 1: Replaced timeouts with event-based waiting
- Agent 2: Fixed event structure bug (threadId in wrong place)
- Agent 3: Added wait for async tool execution to complete

**Integration:** All fixes independent, no conflicts, full suite green

**Time saved:** 3 problems solved in parallel vs sequentially

## Key Benefits

1. **Parallelization** - Multiple investigations happen simultaneously
2. **Focus** - Each agent has narrow scope, less context to track
3. **Independence** - Agents don't interfere with each other
4. **Speed** - 3 problems solved in the time of 1

## Verification

After agents return:
1. **Review each summary** - Understand what changed
2. **Check for conflicts** - Did agents edit same code?
3. **Run full suite** - Verify all fixes work together
4. **Spot check** - Agents can make systematic errors
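
Step 2 of the checklist can be partly mechanized when each agent's summary lists the files it touched. A sketch — the file lists and temp paths here are illustrative:

```bash
# Did two agents touch the same file? Intersect their touched-file lists.
printf '%s\n' src/agents/abort.ts tests/abort.test.ts | sort > /tmp/agent1-files.txt
printf '%s\n' src/batch/complete.ts tests/batch.test.ts | sort > /tmp/agent2-files.txt
OVERLAP=$(comm -12 /tmp/agent1-files.txt /tmp/agent2-files.txt)
if [ -z "$OVERLAP" ]; then
  echo "no conflicts"
else
  echo "conflicts:" "$OVERLAP"
fi
```

`comm -12` prints only lines common to both sorted lists, so an empty result means no shared files — the full test suite is still the real arbiter.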

## Real-World Impact

From a debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes

@@ -0,0 +1,152 @@ package/.agent/skills/mindforge-plan-phase_extended/SKILL.md

---
name: mindforge-plan-phase_extended
description: Use when you have a spec or requirements for a multi-step task, before touching code
---

# Writing Plans

## Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, the docs they might need to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and that they don't know good test design very well.

**Announce at start:** "I'm using the mindforge-plan-phase_extended skill to create the implementation plan."

**Context:** This should be run in a dedicated worktree (created by the brainstorming skill).

**Save plans to:** `docs/mindforge/plans/YYYY-MM-DD-<feature-name>.md`
- (User preferences for plan location override this default)
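
A sketch of computing the default path; `feature-name` is a placeholder slug:

```bash
# Build the default plan path from today's date and a feature slug.
FEATURE="feature-name"   # placeholder slug
PLAN_PATH="docs/mindforge/plans/$(date +%F)-${FEATURE}.md"
echo "$PLAN_PATH"
```

`date +%F` emits `YYYY-MM-DD`, matching the naming convention above.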

## Scope Check

If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.

## File Structure

Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.

- Design units with clear boundaries and well-defined interfaces. Each file should have one clear responsibility.
- You reason best about code you can hold in context at once, and your edits are more reliable when files are focused. Prefer smaller, focused files over large ones that do too much.
- Files that change together should live together. Split by responsibility, not by technical layer.
- In existing codebases, follow established patterns. If the codebase uses large files, don't unilaterally restructure - but if a file you're modifying has grown unwieldy, including a split in the plan is reasonable.

This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.

## Bite-Sized Task Granularity

**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step

## Plan Document Header

**Every plan MUST start with this header:**

```markdown
# [Feature Name] Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use mindforge:swarm-execution (recommended) or mindforge:execute-phase_extended to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```

## Task Structure

````markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

- [ ] **Step 1: Write the failing test**

  ```python
  def test_specific_behavior():
      result = function(input)
      assert result == expected
  ```

- [ ] **Step 2: Run test to verify it fails**

  Run: `pytest tests/path/test.py::test_name -v`
  Expected: FAIL with "function not defined"

- [ ] **Step 3: Write minimal implementation**

  ```python
  def function(input):
      return expected
  ```

- [ ] **Step 4: Run test to verify it passes**

  Run: `pytest tests/path/test.py::test_name -v`
  Expected: PASS

- [ ] **Step 5: Commit**

  ```bash
  git add tests/path/test.py src/path/file.py
  git commit -m "feat: add specific feature"
  ```
````

## No Placeholders

Every step must contain the actual content an engineer needs. These are **plan failures** — never write them:
- "TBD", "TODO", "implement later", "fill in details"
- "Add appropriate error handling" / "add validation" / "handle edge cases"
- "Write tests for the above" (without actual test code)
- "Similar to Task N" (repeat the code — the engineer may be reading tasks out of order)
- Steps that describe what to do without showing how (code blocks required for code steps)
- References to types, functions, or methods not defined in any task

## Remember
- Exact file paths always
- Complete code in every step — if a step changes code, show the code
- Exact commands with expected output
- DRY, YAGNI, TDD, frequent commits

## Self-Review

After writing the complete plan, look at the spec with fresh eyes and check the plan against it. This is a checklist you run yourself — not a subagent dispatch.

**1. Spec coverage:** Skim each section/requirement in the spec. Can you point to a task that implements it? List any gaps.

**2. Placeholder scan:** Search your plan for red flags — any of the patterns from the "No Placeholders" section above. Fix them.
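
The placeholder scan can be mechanized with a grep over the plan file. A sketch — the throwaway sample plan below only makes the sketch self-contained; point the same pattern at your real plan:

```bash
# Grep a plan for the "No Placeholders" red flags.
PLAN=$(mktemp)
printf '%s\n' '- [ ] **Step 1: Write the failing test**' 'TODO: fill in details' > "$PLAN"
grep -nE 'TBD|TODO|implement later|fill in details|Similar to Task' "$PLAN" \
  || echo "no placeholders found"
```

Here the grep flags line 2 of the sample; a clean plan produces no matches.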

**3. Type consistency:** Do the types, method signatures, and property names you used in later tasks match what you defined in earlier tasks? A function called `clearLayers()` in Task 3 but `clearFullLayers()` in Task 7 is a bug.

If you find issues, fix them inline. No need to re-review — just fix and move on. If you find a spec requirement with no task, add the task.

## Execution Handoff

After saving the plan, offer the execution choice:

**"Plan complete and saved to `docs/mindforge/plans/<filename>.md`. Two execution options:**

**1. Subagent-Driven (recommended)** - I dispatch a fresh subagent per task, review between tasks, fast iteration

**2. Inline Execution** - Execute tasks in this session using mindforge-execute-phase_extended, batch execution with checkpoints

**Which approach?"**

**If Subagent-Driven chosen:**
- **REQUIRED SUB-SKILL:** Use mindforge:swarm-execution
- Fresh subagent per task + two-stage review

**If Inline Execution chosen:**
- **REQUIRED SUB-SKILL:** Use mindforge:execute-phase_extended
- Batch execution with checkpoints for review

@@ -0,0 +1,49 @@ package/.agent/skills/mindforge-plan-phase_extended/plan-document-reviewer-prompt.md

# Plan Document Reviewer Prompt Template

Use this template when dispatching a plan document reviewer subagent.

**Purpose:** Verify the plan is complete, matches the spec, and has proper task decomposition.

**Dispatch after:** The complete plan is written.

```
Task tool (general-purpose):
  description: "Review plan document"
  prompt: |
    You are a plan document reviewer. Verify this plan is complete and ready for implementation.

    **Plan to review:** [PLAN_FILE_PATH]
    **Spec for reference:** [SPEC_FILE_PATH]

    ## What to Check

    | Category | What to Look For |
    |----------|------------------|
    | Completeness | TODOs, placeholders, incomplete tasks, missing steps |
    | Spec Alignment | Plan covers spec requirements, no major scope creep |
    | Task Decomposition | Tasks have clear boundaries, steps are actionable |
    | Buildability | Could an engineer follow this plan without getting stuck? |

    ## Calibration

    **Only flag issues that would cause real problems during implementation.**
    An implementer building the wrong thing or getting stuck is an issue.
    Minor wording, stylistic preferences, and "nice to have" suggestions are not.

    Approve unless there are serious gaps — missing requirements from the spec,
    contradictory steps, placeholder content, or tasks so vague they can't be acted on.

    ## Output Format

    ## Plan Review

    **Status:** Approved | Issues Found

    **Issues (if any):**
    - [Task X, Step Y]: [specific issue] - [why it matters for implementation]

    **Recommendations (advisory, do not block approval):**
    - [suggestions for improvement]
```

**Reviewer returns:** Status, Issues (if any), Recommendations