@titan-design/brain 0.6.1 → 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
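A report like this can be reproduced locally with the npm CLI (`npm diff` is available in npm 7 and later); the package name and versions below are taken from this report's header:

```shell
# Print the unified diff between the two published versions
# of @titan-design/brain, fetched from the configured registry.
npm diff --diff=@titan-design/brain@0.6.1 --diff=@titan-design/brain@0.7.0
```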
@@ -92,7 +92,7 @@ async function resolveUnknownCommand(input, db, embedder, fusionWeights) {
  }
  if (db && embedder) {
  try {
- const { search } = await import("./search-AKSAQJOR.js");
+ const { search } = await import("./search-NPTRJV4W.js");
  const results = await search(
  db,
  embedder,
@@ -3,8 +3,8 @@ import {
  computeFacets,
  search,
  searchMemories
- } from "./chunk-KSJZ7CMP.js";
- import "./chunk-AJKFX2TM.js";
+ } from "./chunk-4GDSQB2E.js";
+ import "./chunk-LLAHWRO4.js";
  import "./chunk-PO3GJPIC.js";
  export {
  checkAndPromote,
@@ -0,0 +1,30 @@
+ # Brainstorm: Present Design
+
+ Assisted-mode phase. Present the design section by section, getting approval after each.
+
+ ---
+
+ ## Context
+
+ - Topic: `{{TASK_DESCRIPTION}}`
+ - Project: `{{PROJECT_PREFIX}}`
+ - Chosen approach: (from propose phase)
+
+ ## Instructions
+
+ Present the design incrementally. Cover these sections, scaled to complexity:
+
+ 1. **Architecture** — Components, layers, data flow
+ 2. **API / Interface** — Public contracts, types, signatures
+ 3. **Data model** — State shape, storage, migrations
+ 4. **Error handling** — Failure modes, recovery, user feedback
+ 5. **Testing strategy** — What to test, how, coverage targets
+
+ After each section, ask: "Does this look right so far?"
+
+ If the user has concerns, revise before moving on. If something needs clarification, the workflow can loop back to interview.
+
+ ## Completion
+
+ This phase is complete when all sections are approved.
+ Advance to write-doc.
@@ -0,0 +1,30 @@
+ # Brainstorm: Explore Context
+
+ Explore-type agent gathering project context before a brainstorming session.
+
+ ---
+
+ ## Setup
+
+ - Topic: `{{TASK_DESCRIPTION}}`
+ - Project: `{{PROJECT_PREFIX}}`
+ - Location: `{{REPO_PATH}}`
+
+ ## Instructions
+
+ Survey the codebase to understand:
+
+ 1. **Current state** — What exists today? Recent commits, active branches, open PRs.
+ 2. **Related code** — Files, modules, and patterns relevant to "{{TASK_DESCRIPTION}}".
+ 3. **Constraints** — Existing architecture decisions, dependencies, or conventions that apply.
+ 4. **Prior art** — Has something similar been attempted? Check docs, plans, design notes.
+
+ ## Output
+
+ Write a structured context brief covering:
+ - Key files and their roles
+ - Architectural patterns in use
+ - Potential constraints or risks
+ - Open questions for the interview phase
+
+ Keep it under 200 lines. Facts only — no design recommendations yet.
@@ -0,0 +1,26 @@
+ # Brainstorm: Interview
+
+ Assisted-mode phase. The orchestrator asks the user clarifying questions one at a time.
+
+ ---
+
+ ## Context
+
+ - Topic: `{{TASK_DESCRIPTION}}`
+ - Project: `{{PROJECT_PREFIX}}`
+
+ ## Interview Guidelines
+
+ Ask questions **one at a time**. Prefer multiple choice when possible.
+
+ Focus areas:
+ 1. **Purpose** — What problem does this solve? Who benefits?
+ 2. **Scope** — What's in v1 vs future? What explicitly isn't included?
+ 3. **Constraints** — Performance, compatibility, timeline, dependencies?
+ 4. **Success criteria** — How do we know it's done? What does "working" look like?
+ 5. **Prior context** — Has the user researched this? Are there examples to follow?
+
+ ## Completion
+
+ This phase is complete when the orchestrator has enough context to propose 2-3 approaches.
+ Mark the task done and advance the workflow.
@@ -0,0 +1,28 @@
+ # Brainstorm: Propose Approaches
+
+ Assisted-mode phase. Present 2-3 approaches with trade-offs and a recommendation.
+
+ ---
+
+ ## Context
+
+ - Topic: `{{TASK_DESCRIPTION}}`
+ - Project: `{{PROJECT_PREFIX}}`
+
+ ## Instructions
+
+ Based on the interview findings, propose 2-3 approaches:
+
+ For each approach:
+ 1. **Name** — Short label
+ 2. **Summary** — 2-3 sentences
+ 3. **Pros** — What it does well
+ 4. **Cons** — What it does poorly or risks
+ 5. **Effort** — Relative complexity (low/medium/high)
+
+ Lead with your **recommended** approach and explain why.
+
+ ## Completion
+
+ This phase is complete when the user selects an approach.
+ Record the chosen approach and advance the workflow.
@@ -0,0 +1,30 @@
+ # Brainstorm: Write Design Doc
+
+ Agent-mode phase. Write the validated design to a markdown file and commit.
+
+ ---
+
+ ## Context
+
+ - Topic: `{{TASK_DESCRIPTION}}`
+ - Project: `{{PROJECT_PREFIX}}`
+ - Location: `{{REPO_PATH}}`
+
+ ## Instructions
+
+ Write the approved design to `docs/plans/{{PLAN_ID}}-design.md` in the target repo.
+
+ Structure:
+ - Title and one-line goal
+ - Architecture overview
+ - Component breakdown
+ - Data model
+ - Error handling
+ - Testing strategy
+ - Open questions (if any remain)
+
+ Commit with message: "Add design doc: {{TASK_DESCRIPTION}}"
+
+ ## Output
+
+ The design document path, ready for the writing-plans phase.
@@ -0,0 +1,46 @@
+ # Implementation Task: {{TASK_ID}}
+
+ ## Setup
+
+ - Location: `{{REPO_PATH}}`
+ - Branch: `{{BRANCH_NAME}}` (base: `{{BASE_BRANCH}}`)
+ - Build: `{{BUILD_CMD}}` | Test: `{{TEST_CMD}}` | Typecheck: `{{TYPECHECK_CMD}}` | Lint: `{{LINT_CMD}}`
+ - Worktree: `{{WORKTREE_PATH}}` (omit if not isolated)
+ - Brain PM task: `{{BRAIN_TASK_ID}}` / claim token: `{{CLAIM_TOKEN}}`
+
+ ## Ownership Scope
+
+ Only modify files matching: `{{OWNERSHIP_PATTERNS}}`
+
+ ## Context
+
+ {{CONTEXT_FILES}}
+
+ {{PREREQUISITES}}
+
+ ## What to Implement
+
+ {{IMPLEMENTATION_INSTRUCTIONS}}
+
+ Constraints: {{DESIGN_CONSTRAINTS}}
+
+ ## Tests
+
+ {{TEST_INSTRUCTIONS}}
+
+ ## Verify and Complete
+
+ Run in order — all must pass:
+
+ 1. `{{TYPECHECK_CMD}}` — no type errors
+ 2. `{{TEST_CMD}}` — all tests pass
+ 3. `{{LINT_CMD}}` — no lint errors
+ 4. `npx prettier --check .` — no formatting issues (fix with `npx prettier --write .` if needed)
+ 5. `{{BUILD_CMD}}` — builds successfully
+ 6. {{MANUAL_VERIFICATION_STEPS}}
+
+ If verification fails, fix and retry twice before exiting with diagnostics.
+
+ When all pass, commit referencing `{{TASK_ID}}` (imperative mood, <72 chars) and push to `origin/{{BRANCH_NAME}}`. Report: summary of changes, files modified, test count delta, issues encountered.
+
+ Stay within ownership scope, run all verification steps, no dead code, no skipping on failure.
@@ -0,0 +1,92 @@
+ # Implementation Task: {{TASK_ID}}
+
+ ## Repo Setup
+
+ - Location: `{{REPO_PATH}}`
+ - Branch: `{{BRANCH_NAME}}` (base: `{{BASE_BRANCH}}`)
+ - Build: `{{BUILD_CMD}}`
+ - Test: `{{TEST_CMD}}`
+ - Typecheck: `{{TYPECHECK_CMD}}`
+ - Lint: `{{LINT_CMD}}`
+ - Worktree: `{{WORKTREE_PATH}}` (omit if not isolated)
+ - Brain PM task: `{{BRAIN_TASK_ID}}` / claim token: `{{CLAIM_TOKEN}}`
+
+ ## Ownership Scope
+
+ You may only modify files matching these patterns:
+
+ ```
+ {{OWNERSHIP_PATTERNS}}
+ ```
+
+ Do NOT modify files outside this scope.
+
+ ## Context
+
+ ### Read first
+
+ {{CONTEXT_FILES}}
+
+ ### Prerequisites
+
+ {{PREREQUISITES}}
+
+ ### What NOT to change
+
+ {{DO_NOT_CHANGE}}
+
+ ## What to Implement
+
+ {{IMPLEMENTATION_INSTRUCTIONS}}
+
+ ## Design Constraints
+
+ {{DESIGN_CONSTRAINTS}}
+
+ ## Tests
+
+ {{TEST_INSTRUCTIONS}}
+
+ ## Verification
+
+ After all changes, run each of these in order. All must pass.
+
+ 1. **Typecheck**: `{{TYPECHECK_CMD}}` — no type errors
+ 2. **Tests**: `{{TEST_CMD}}` — all tests pass
+ 3. **Lint**: `{{LINT_CMD}}` — no lint errors
+ 4. **Format**: `npx prettier --check .` — no formatting issues (fix with `npx prettier --write .`)
+ 5. **Build**: `{{BUILD_CMD}}` — builds successfully
+ 6. **Manual checks**:
+
+ {{MANUAL_VERIFICATION_STEPS}}
+
+ ## Completion
+
+ When all verification passes:
+
+ 1. Commit with a descriptive message referencing `{{TASK_ID}}` (imperative mood, <72 chars subject)
+ 2. Push to `origin/{{BRANCH_NAME}}`
+ 3. Report back with:
+ - Summary of what changed and why
+ - Files modified (list)
+ - Test count (before/after if tests were added)
+ - Any issues encountered or things the orchestrator should know
+
+ ## Retry Policy
+
+ If tests, typecheck, lint, or build fail after your changes, investigate the root cause and fix it. Try at least **twice** before giving up. If after 2 fix attempts you still cannot resolve the issue, exit with a clear explanation of:
+
+ 1. What is failing (exact error output)
+ 2. What you tried to fix it
+ 3. Your best theory on the root cause
+
+ ## Anti-Patterns
+
+ - Do NOT modify files outside the ownership scope listed above
+ - Do NOT skip any verification step
+ - Do NOT commit generated files unless explicitly told to
+ - Do NOT create new files unless necessary — prefer editing existing files
+ - Do NOT leave TODO comments without a tracking issue
+ - Do NOT leave dead code or commented-out code
+ - Do NOT suppress lint/type errors — fix the underlying issue
+ - Do NOT exit on first test failure — follow the retry policy above
@@ -0,0 +1,18 @@
+ # Ops Task: {{TASK_ID}}
+
+ ## Setup
+ - Location: {{REPO_PATH}}
+ - Branch: {{BRANCH_NAME}} (or "N/A" for cleanup)
+
+ ## What to Do
+ {{INSTRUCTIONS}}
+
+ ## Verify
+ {{VERIFICATION_STEPS}}
+
+ ## Report
+ - What was done
+ - Current state
+ - Anything needing follow-up
+
+ If anything fails, try to fix it twice. If stuck, report what failed and why.
@@ -0,0 +1,123 @@
+ # Planning Critic: {{TASK_ID}}
+
+ ## Context
+
+ - Plan: {{PLAN_ID}}
+ - Project: {{PROJECT_PREFIX}}
+ - Location: `{{REPO_PATH}}`
+
+ ## Input
+
+ Read these artifacts. You did NOT write them — you are reviewing them with fresh eyes.
+
+ 1. `.plans/{{PLAN_ID}}/spec.md` — problem specification
+ 2. `.plans/{{PLAN_ID}}/design.md` — technical approach
+ 3. `.plans/{{PLAN_ID}}/acceptance-criteria.md` — testable conditions
+ 4. Project CLAUDE.md at `{{REPO_PATH}}/CLAUDE.md` — project conventions
+
+ Do NOT read the research brief or interview answers. Your review must be independent of the designer's reasoning process.
+
+ ## Your Role
+
+ You are an adversarial critic. Your job is to find problems, not confirm correctness. If you find no issues, explain why that's suspicious and look harder. Early agreement is a warning sign — it usually means you haven't examined closely enough.
+
+ You are NOT here to be helpful or supportive. You are here to catch mistakes, gaps, and bad decisions before they become expensive rework during implementation.
+
+ ## Review Process
+
+ ### Phase 1: Multi-Persona Review
+
+ Review the design from three independent perspectives. For each, write your findings before moving to the next perspective.
+
+ **Perspective 1: Correctness Reviewer**
+ - Does the design satisfy EVERY acceptance criterion? Map each criterion to the design element that fulfills it.
+ - Are there requirements in spec.md with no corresponding design element?
+ - Are there acceptance criteria that the proposed code structure cannot satisfy?
+ - Are there implicit assumptions that should be explicit?
+
+ **Perspective 2: Error Handling Reviewer**
+ - What happens when each external dependency fails? (network, filesystem, database, BLE)
+ - What happens with invalid input at every boundary?
+ - Are there race conditions in concurrent operations?
+ - What happens during partial failure (some operations succeed, some fail)?
+ - Are error messages actionable for debugging?
+
+ **Perspective 3: API Design Reviewer**
+ - Are names clear and consistent? Do they follow project conventions?
+ - Is the interface minimal? Could anything be removed without losing functionality?
+ - Will this be easy to change later? Where are the coupling points?
+ - Does the data flow make sense? Are there unnecessary intermediaries?
+ - Are types precise enough to prevent misuse?
+
+ ### Phase 2: Chain of Verification
+
+ Generate 5-10 specific questions about the design. Then answer each question independently — do NOT refer to the design's own justifications when answering. Compare your answers to the design. Flag contradictions.
+
+ Example questions:
+ - "Given requirement R3, what is the minimum API surface needed?"
+ - "If component A fails, what should component B do?"
+ - "What happens if this function is called with an empty array?"
+
+ ### Phase 3: Cross-Cutting Checks
+
+ - **Scaffolding completeness:** Can wave 1 (scaffolding) be built and tested independently? Does wave 2+ have everything it needs from wave 1?
+ - **File ownership conflicts:** Do any two implementation tasks need to modify the same file? If so, flag as a decomposition risk.
+ - **Test coverage:** Does every acceptance criterion have a clear test strategy? Are edge cases covered?
+ - **Convention compliance:** Does the design follow the project's coding standards (from CLAUDE.md)?
+
+ ## Findings Format
+
+ For EVERY finding, assign one of:
+ - `[FIX]` — Must be addressed before implementation. Explain what's wrong and suggest a fix.
+ - `[ACCEPTED]` — Reviewed and found correct. Brief explanation of why.
+
+ There is no "suggestion" or "nice to have" category. Every aspect is either a problem or it isn't.
+
+ ## Output
+
+ Write your findings to `.plans/{{PLAN_ID}}/critic-report.md` with this structure:
+
+ ```
+ # Critic Report: {{TASK_ID}}
+
+ ## Summary
+ [2-3 sentence overall assessment. Be direct — is this design ready or not?]
+
+ ## Verdict: <READY or NEEDS REVISION>
+
+ ## Findings
+
+ ### Correctness
+ - [FIX] <finding> — <suggested fix>
+ - [ACCEPTED] <aspect reviewed> — <why it's correct>
+
+ ### Error Handling
+ - [FIX] <finding> — <suggested fix>
+ - [ACCEPTED] <aspect reviewed> — <why it's adequate>
+
+ ### API Design
+ - [FIX] <finding> — <suggested fix>
+ - [ACCEPTED] <aspect reviewed> — <why it's clean>
+
+ ### Chain of Verification
+ | Question | Independent Answer | Design Says | Match? |
+ |----------|-------------------|-------------|--------|
+ | Q1 | ... | ... | Yes/No |
+
+ ### Cross-Cutting
+ - [FIX] / [ACCEPTED] <finding>
+
+ ## Open Questions
+ [Questions the critic cannot resolve — these may trigger a human walkthrough]
+
+ ## FIX Summary
+ Total: <N> FIX items
+ 1. <file/section> — <brief description>
+ 2. ...
+ ```
+
+ ## Important
+
+ - Be thorough but focused. Target ~15-20% of the expected implementation token budget.
+ - If the design is genuinely solid, say so — but explain WHY you believe that despite your adversarial mandate.
+ - Open questions in your report may trigger a human walkthrough — flag anything you're uncertain about.
@@ -0,0 +1,221 @@
+ # Planning Decompose: {{TASK_ID}}
+
+ Decompose an approved design into PM tasks and briefing files. Each task becomes an independently executable unit dispatched via `implementation-compact.md`.
+
+ ---
+
+ ## Context
+
+ - Plan: `{{PLAN_ID}}`
+ - Project: `{{PROJECT_PREFIX}}`
+ - Location: `{{REPO_PATH}}`
+ - Build: `{{BUILD_CMD}}` | Test: `{{TEST_CMD}}` | Typecheck: `{{TYPECHECK_CMD}}` | Lint: `{{LINT_CMD}}`
+ - Brain PM task: {{BRAIN_TASK_ID}}
+
+ ## Input
+
+ Read these approved design artifacts before proceeding:
+
+ 1. `.plans/{{PLAN_ID}}/design.md` -- technical approach, file list, scaffolding boundaries, PR boundary recommendation
+ 2. `.plans/{{PLAN_ID}}/acceptance-criteria.md` -- testable conditions mapped to tests
+ 3. `.plans/{{PLAN_ID}}/critic-report.md` -- critic findings and their resolutions
+
+ Also read for conventions:
+ - `{{REPO_PATH}}/CLAUDE.md` -- project conventions and commands
+ - Any existing code referenced in `design.md` -- understand current patterns before splitting work
+
+ ## Decomposition Rules
+
+ Apply all five rules. If any rule is violated, restructure until it holds.
+
+ 1. **One logical unit per task** -- a task produces one coherent change: a component, a store, a utility, a hook, or a test suite. Not "half a component."
+ 2. **Exclusive file ownership** -- no two tasks modify the same file. If a file needs changes from multiple tasks, assign it to one task and make others depend on it.
+ 3. **Independently verifiable** -- each task's success criteria can be checked without other incomplete tasks. Tests may import from scaffolding (wave 1) but not from peer tasks in the same wave.
+ 4. **50-line briefing limit** -- if a briefing exceeds 50 lines of meaningful content (excluding code blocks), the task is too large. Split it.
+ 5. **TDD step structure** -- every task follows: write test, verify fail, implement, verify pass, commit.
+
+ ## Wave Structure
+
+ Group tasks into waves based on `design.md` scaffolding boundaries:
+
+ ### Wave 1: Scaffolding
+
+ Types, interfaces, shared utilities, store structure, configuration. These are the foundations other tasks depend on.
+
+ - **Gate:** `{{TYPECHECK_CMD}}` passes. Any scaffolding tests pass.
+ - **Dependency:** None -- wave 1 tasks run in parallel.
+
+ ### Wave 2+: Implementation
+
+ Features built on top of scaffolding. Tasks within a wave run in parallel.
+
+ - **Gate:** `{{TEST_CMD}}` passes. `{{LINT_CMD}}` clean. `{{TYPECHECK_CMD}}` passes.
+ - **Dependency:** All wave 1 tasks must be complete before wave 2 starts. Within a wave, tasks are independent.
+
+ ### Final Wave: Integration
+
+ Cross-cutting tests, cleanup, integration validation. Only if the design calls for it.
+
+ - **Gate:** Full verification pipeline -- `{{TYPECHECK_CMD}}`, `{{TEST_CMD}}`, `{{LINT_CMD}}`, `npx prettier --check .`, `{{BUILD_CMD}}`.
+ - **Dependency:** All prior waves complete.
+
+ ## Creating PM Tasks
+
+ For each task, create a brain PM entry:
+
+ ```bash
+ brain pm task add "<task name>" \
+ --workstream <appropriate workstream> \
+ --project {{PROJECT_PREFIX}} \
+ --category implementation \
+ --priority medium \
+ --description "Briefing: .plans/{{PLAN_ID}}/briefings/task-NN.md"
+ ```
+
+ Set dependencies to enforce wave ordering:
+
+ ```bash
+ brain pm task update <TASK-ID> --depends-on <DEPENDENCY-ID>
+ ```
+
+ - Wave 1 tasks: no dependencies.
+ - Wave 2+ tasks: depend on all wave 1 tasks.
+ - Final wave tasks: depend on all prior wave tasks.
+
+ Record each PM task ID -- you will need them for the output summary.
+
+ ## Creating Briefing Files
+
+ For each task, create `.plans/{{PLAN_ID}}/briefings/task-NN.md` using this format:
+
+ ```markdown
+ # Task NN: <Component Name>
+
+ ## Architectural Context
+
+ [2-3 sentences: where this fits, what it depends on, why it exists. Reference specific files.]
+
+ ## File Ownership
+
+ **May modify:**
+ - `exact/path/to/file.ts`
+ - `exact/path/to/file.test.ts`
+
+ **Must not touch:**
+ - `exact/path/to/adjacent.ts`
+
+ **Read for context (do not modify):**
+ - `exact/path/to/dependency.ts` -- [why relevant]
+
+ ## Steps
+
+ ### Step 1: Unstash spec tests (if applicable)
+ Run: `git stash list` to find `spec-tests-{{PLAN_ID}}`
+ Run: `git stash pop stash@{N}` (where N is the stash index)
+ Move relevant test files to this task's test location if needed.
+
+ ### Step 2: Write the failing test
+ [Exact test code or clear description of what to test]
+
+ ### Step 3: Run test to verify it fails
+ Run: `{{TEST_CMD}} -- <test file path>`
+ Expected: FAIL
+
+ ### Step 4: Write minimal implementation
+ [Implementation guidance -- not full code, but unambiguous direction]
+
+ ### Step 5: Run test to verify it passes
+ Run: `{{TEST_CMD}} -- <test file path>`
+ Expected: PASS
+
+ ### Step 6: Commit
+ git add <files>
+ git commit -m "<imperative mood commit message>"
+
+ [Repeat step groups 2-6 for additional changes within this task]
+
+ ## Success Criteria
+
+ - [ ] Tests pass: `{{TEST_CMD}} -- <path>`
+ - [ ] No lint warnings: `{{LINT_CMD}}`
+ - [ ] Types check: `{{TYPECHECK_CMD}}`
+ - [ ] [Feature-specific criterion from acceptance-criteria.md]
+
+ ## Anti-patterns
+
+ - Do NOT modify files outside the ownership list
+ - Do NOT modify CLAUDE.md
+ - Do NOT add features beyond the steps
+ - [Task-specific anti-pattern if applicable]
+ ```
+
+ ### Briefing Authoring Guidelines
+
+ - **Architectural Context**: Orient the agent in 2-3 sentences. Reference concrete file paths.
+ - **File Ownership**: Explicit allowlist. If a task touches 5+ files, split it.
+ - **Steps**: Complete enough to be unambiguous. Include exact test expectations.
+ - **Success Criteria**: Every criterion maps to a command the agent can run.
+ - **Anti-patterns**: Always include the three universal ones (file ownership, CLAUDE.md, scope creep) plus task-specific ones.
+ - **Parallel awareness**: If a task's tests depend on code from another wave-2 task, note that the dependency will exist at the wave gate, not during task execution.
+
+ ## Spec Test Distribution
+
+ The spec tests from the previous phase are stashed as `spec-tests-{{PLAN_ID}}`. Distribute them:
+
+ - List the stashed test files and map each to the task that will use it.
+ - Wave 1 (scaffolding) tasks typically have no spec tests -- scaffolding is validated by typecheck.
+ - Wave 2+ tasks unstash and use spec tests as their TDD targets.
+ - If a spec test covers multiple tasks, assign it to the task with the most relevant ownership. Other tasks reference it as read-only context.
+ - Include the unstash step only in the FIRST task of each wave (to avoid double-popping).
+
+ ## PR Boundary
+
+ Read the PR boundary recommendation from `design.md`:
+
+ - **Single PR:** All tasks commit to the same feature branch. Simpler, but larger review.
+ - **Per-wave PR:** Each wave gets its own branch stacked on the previous. Smaller reviews, but more branch management.
+
+ Document the chosen strategy in your output. The orchestrator needs this to manage branches during dispatch.
+
+ ## Output
+
+ When decomposition is complete, report:
+
+ ```
+ ## Decomposition Summary
+
+ ### Tasks Created
+ | # | PM Task ID | Name | Wave | Depends On | Briefing |
+ |---|-----------|------|------|-----------|---------|
+ | 1 | PROJ-XX.YY | Scaffold types | 1 | -- | task-01.md |
+ | 2 | PROJ-XX.YY | Implement feature A | 2 | Task 1 | task-02.md |
+ | ... | ... | ... | ... | ... | ... |
+
+ ### Wave Gates
+ - Wave 1 gate: {{TYPECHECK_CMD}} passes
+ - Wave 2 gate: {{TEST_CMD}} + {{LINT_CMD}} + {{TYPECHECK_CMD}} pass
+ - [Final wave gate if applicable]
+
+ ### PR Strategy
+ [Single PR / Per-wave -- from design.md recommendation, with rationale]
+
+ ### File Ownership Map
+ | File | Owned By |
+ |------|----------|
+ | src/types.ts | Task 1 |
+ | src/feature-a.ts | Task 2 |
+ | ... | ... |
+
+ ### Spec Test Distribution
+ | Test File | Assigned To | Unstash In |
+ |-----------|------------|-----------|
+ | __tests__/feature-a.test.ts | Task 2 | Task 2 (first in wave 2) |
+ | ... | ... | ... |
+ ```
+
+ Verify before reporting:
+ - Every file from `design.md` appears exactly once in the ownership map
+ - Every acceptance criterion maps to at least one task
+ - No circular dependencies between tasks
+ - Wave ordering respects scaffolding-first
+ - All briefing files are under 50 lines of meaningful content