@tgoodington/intuition 9.3.1 → 9.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,25 @@
+{
+  "name": "intuition",
+  "owner": {
+    "name": "Intuition Contributors",
+    "email": "tgoodington@users.noreply.github.com"
+  },
+  "metadata": {
+    "description": "Domain-adaptive workflow system for Claude Code with specialist teams and format producers.",
+    "version": "9.4.1"
+  },
+  "plugins": [
+    {
+      "name": "intuition",
+      "source": ".",
+      "description": "Domain-adaptive workflow system: prompt, outline, assemble, detail, build, test. 15 skills, 14 domain specialists, 6 format producers.",
+      "version": "9.4.1",
+      "author": {
+        "name": "Intuition Contributors"
+      },
+      "license": "MIT",
+      "keywords": ["workflow", "planning", "execution", "specialists", "producers", "orchestration"],
+      "category": "productivity"
+    }
+  ]
+}
@@ -0,0 +1,11 @@
+{
+  "name": "intuition",
+  "description": "Domain-adaptive workflow system for Claude Code: prompt, outline, assemble specialist teams, detail with domain experts, build with format producers, test code output. Supports v8 compat and v9 specialist workflows with 14 domain specialists and 6 format producers.",
+  "version": "9.4.1",
+  "author": {
+    "name": "Intuition Contributors"
+  },
+  "repository": "https://github.com/tgoodington/intuition",
+  "license": "MIT",
+  "keywords": ["workflow", "planning", "execution", "specialists", "producers", "orchestration"]
+}
@@ -0,0 +1,22 @@
+---
+name: intuition-code-writer
+description: >
+  Trusted code implementer for Intuition workflows. Use when a skill needs files
+  written or modified — producing deliverables from blueprints, creating test files,
+  implementing fixes. Follows specifications exactly without adding unrequested features.
+model: sonnet
+tools: Read, Write, Edit, Glob, Grep, Bash
+permissionMode: acceptEdits
+maxTurns: 50
+---
+
+You are a senior developer implementing code changes. When given a task:
+
+1. Read the specification or blueprint you are pointed to — from disk, not from the prompt.
+2. Read existing code to understand project conventions (naming, style, patterns, imports).
+3. Implement exactly what is specified. Do not add features, refactor surrounding code, or improve things you weren't asked to touch.
+4. Follow the project's existing conventions for error handling, logging, and testing.
+5. If the specification is ambiguous, pick the simplest interpretation that satisfies the requirements.
+6. Report what you created or changed — file paths, function names, key decisions.
+
+Do not add comments explaining obvious code. Do not add type annotations the project doesn't use. Do not introduce new dependencies unless the specification requires them. Match the codebase, not your preferences.
@@ -0,0 +1,21 @@
+---
+name: intuition-researcher
+description: >
+  Fast read-only codebase explorer for Intuition workflows. Use when a skill needs
+  parallel research into project structure, patterns, conventions, test infrastructure,
+  or dependency graphs. Returns concise findings with file paths and evidence.
+model: haiku
+tools: Read, Glob, Grep, Bash
+permissionMode: dontAsk
+maxTurns: 30
+---
+
+You are a fast, focused codebase researcher. When given a research task:
+
+1. Use Glob and Grep to locate relevant files efficiently — target searches, don't scan everything.
+2. Use Read to examine specific files for detail. Read only what you need.
+3. Use Bash only for commands like `wc -l`, `git log`, or tool version checks — never for file reading.
+4. Report findings with exact file paths and line numbers.
+5. Stay under 500 words unless explicitly told otherwise.
+
+Be thorough but fast. Prioritize evidence over speculation. If you can't find something, say so — don't guess. Report what exists, not what you think should exist.
@@ -0,0 +1,29 @@
+---
+name: intuition-reviewer
+description: >
+  Deliverable reviewer for Intuition workflows. Use when a skill needs quality
+  verification — checking produced files against blueprints, reviewing code for
+  correctness and security, validating test coverage. Reports PASS or FAIL with evidence.
+model: sonnet
+tools: Read, Glob, Grep, Bash
+permissionMode: dontAsk
+maxTurns: 30
+---
+
+You are a rigorous deliverable reviewer. When given a review task:
+
+1. Read the specification or blueprint you are pointed to — from disk.
+2. Read the deliverable(s) being reviewed — from disk.
+3. Check every requirement in the specification against the deliverable. Be systematic.
+4. For code deliverables, also check:
+   - Security: injection risks, exposed secrets, unsafe operations
+   - Correctness: logic errors, off-by-one, null handling, edge cases
+   - Conventions: does it match the project's existing patterns?
+5. Report your verdict: **PASS** (all requirements met) or **FAIL** (list specific issues).
+
+For each issue found, provide:
+- What is wrong (specific, not vague)
+- Where it is (file path and line number or section)
+- Why it matters (what breaks or what requirement it violates)
+
+Do not suggest improvements beyond the specification scope. Do not fail a deliverable for style preferences. Focus on correctness, completeness, and security.
@@ -0,0 +1,29 @@
+---
+name: intuition-synthesizer
+description: >
+  Domain analysis and synthesis agent for Intuition workflows. Use when a skill needs
+  deep reasoning to combine research findings into structured analysis, produce blueprints
+  from exploration data, or synthesize cross-cutting insights from multiple sources.
+model: opus
+tools: Read, Write, Edit, Glob, Grep
+permissionMode: default
+maxTurns: 50
+---
+
+You are a domain expert performing deep analysis and synthesis. When given a task:
+
+1. Read all source materials you are pointed to — research findings, prior analysis, blueprints, specifications.
+2. Identify patterns, conflicts, gaps, and insights across the sources.
+3. Produce structured output in the format requested by the calling skill.
+4. Ground every conclusion in evidence from the source materials. Cite specific files and findings.
+5. Flag uncertainties explicitly — distinguish between what you know, what you infer, and what you're unsure about.
+
+When producing blueprints or specifications:
+- Be precise about interfaces, data flows, and dependencies.
+- Call out assumptions that need validation.
+- Identify edge cases and failure modes.
+
+When detecting conflicts or gaps:
+- State exactly what conflicts with what, citing both sources.
+- Assess severity: blocking (must resolve before proceeding) vs advisory (note and continue).
+- Suggest resolution options when the evidence supports them.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@tgoodington/intuition",
-  "version": "9.3.1",
+  "version": "9.4.1",
   "description": "Domain-adaptive workflow system for Claude Code: prompt, outline, assemble specialist teams, detail with domain experts, build with format producers, test code output. Supports v8 compat (design, engineer, build) and v9 specialist workflows with 14 domain specialists and 6 format producers.",
   "keywords": [
     "claude-code",
@@ -23,6 +23,8 @@
     "test": "node bin/intuition.js help"
   },
   "files": [
+    ".claude-plugin/",
+    "agents/",
     "bin/",
     "skills/",
     "specialists/",
@@ -1,11 +1,11 @@
 #!/usr/bin/env node
 
 /**
- * Installation script for Intuition skills, specialists, and producers
+ * Installation script for Intuition skills, specialists, producers, and agents
  *
  * This script is run after `npm install -g @tgoodington/intuition`
  * It copies skills to ~/.claude/skills/, specialists to ~/.claude/specialists/,
- * and producers to ~/.claude/producers/ for global access.
+ * producers to ~/.claude/producers/, and agents to ~/.claude/agents/ for global access.
  */
 
 const fs = require('fs');
@@ -77,6 +77,14 @@ const producers = [
   'data-file-writer'
 ];
 
+// Reusable agent definitions (v9.4) — scanned dynamically
+const agentsDir = path.join(__dirname, '..', 'agents');
+const agents = fs.existsSync(agentsDir)
+  ? fs.readdirSync(agentsDir).filter(entry =>
+      entry.endsWith('.md')
+    )
+  : [];
+
 // Main installation logic
 try {
   const homeDir = os.homedir();
@@ -130,6 +138,14 @@
     log(`Created ${claudeProducersDir}`);
   }
 
+  // --- Agents directory (v9.4) ---
+  const claudeAgentsDir = path.join(homeDir, '.claude', 'agents');
+
+  if (!fs.existsSync(claudeAgentsDir)) {
+    fs.mkdirSync(claudeAgentsDir, { recursive: true });
+    log(`Created ${claudeAgentsDir}`);
+  }
+
   // Install each skill
   skills.forEach(skillName => {
     const src = path.join(packageRoot, 'skills', skillName);
@@ -175,6 +191,23 @@
     }
   });
 
+  // Install each agent definition (flat .md files)
+  if (agents.length === 0) {
+    log(`No agent definitions found in ${agentsDir} — skipping agent install`);
+  }
+  agents.forEach(agentFile => {
+    const src = path.join(agentsDir, agentFile);
+    const dest = path.join(claudeAgentsDir, agentFile);
+
+    if (fs.existsSync(src)) {
+      fs.copyFileSync(src, dest);
+      log(`\u2713 Installed ${agentFile} agent to ${dest}`);
+    } else {
+      error(`${agentFile} agent not found at ${src}`);
+      process.exit(1);
+    }
+  });
+
   // Verify installation
   const allSkillsInstalled = skills.every(skillName =>
     fs.existsSync(path.join(claudeSkillsDir, skillName))
@@ -185,8 +218,11 @@
   const allProducersInstalled = producers.every(name =>
     fs.existsSync(path.join(claudeProducersDir, name))
   );
+  const allAgentsInstalled = agents.every(name =>
+    fs.existsSync(path.join(claudeAgentsDir, name))
+  );
 
-  if (allSkillsInstalled && allSpecialistsInstalled && allProducersInstalled) {
+  if (allSkillsInstalled && allSpecialistsInstalled && allProducersInstalled && allAgentsInstalled) {
     log(`\u2713 Installation complete!`);
     log(`Skills are now available globally:`);
     log(`  /intuition-start - Load project context and detect workflow phase`);
@@ -209,6 +245,8 @@
     specialists.forEach(name => log(`  ${name}`));
     log(`Format producers (${producers.length}):`);
     producers.forEach(name => log(`  ${name}`));
+    log(`Reusable agents (${agents.length}):`);
+    agents.forEach(name => log(`  ${name.replace('.md', '')}`));
     log(`\nYou can now use these skills in any project with Claude Code.`);
   } else {
     error(`Verification failed - not all components properly installed`);
@@ -1,10 +1,10 @@
 #!/usr/bin/env node
 
 /**
- * Uninstallation script for Intuition skills
+ * Uninstallation script for Intuition skills, agents, specialists, and producers
  *
  * This script is run before `npm uninstall -g @tgoodington/intuition`
- * It removes all Intuition skills from ~/.claude/skills/
+ * It removes all Intuition components from ~/.claude/
  */
 
 const fs = require('fs');
@@ -72,6 +72,23 @@
     }
   });
 
+  // Remove Intuition agent definitions
+  const claudeAgentsDir = path.join(homeDir, '.claude', 'agents');
+  const agentsToRemove = [
+    'intuition-researcher.md',
+    'intuition-code-writer.md',
+    'intuition-reviewer.md',
+    'intuition-synthesizer.md'
+  ];
+
+  agentsToRemove.forEach(agentFile => {
+    const agentDest = path.join(claudeAgentsDir, agentFile);
+    if (fs.existsSync(agentDest)) {
+      fs.rmSync(agentDest, { force: true });
+      log(`\u2713 Removed ${agentFile} agent from ${agentDest}`);
+    }
+  });
+
   // Clean up empty .claude/skills directory if it's empty
   if (fs.existsSync(claudeSkillsDir)) {
     const remaining = fs.readdirSync(claudeSkillsDir);
@@ -46,21 +46,25 @@ Scan three tiers in priority order. Deduplicate by `name` — first found wins.
 
 1. Glob `.claude/specialists/*/*.specialist.md` (project-level)
 2. Glob `~/.claude/specialists/*/*.specialist.md` (user-level, expand `~` via Bash)
-3. Determine the Intuition package root: run `node -e "console.log(require.resolve('@tgoodington/intuition/package.json'))"` via Bash, extract the directory. Glob `{package_root}/specialists/*/*.specialist.md`.
+3. Framework-bundled specialists (try in order, stop at first success):
+   a. **Plugin path**: Glob `${CLAUDE_PLUGIN_ROOT}/specialists/*/*.specialist.md`. If `${CLAUDE_PLUGIN_ROOT}` is empty or the glob returns nothing, fall through.
+   b. **npm path**: Run `node -e "console.log(require.resolve('@tgoodington/intuition/package.json'))"` via Bash, extract the directory. Glob `{package_root}/specialists/*/*.specialist.md`.
+   c. **Fallback**: Glob `node_modules/@tgoodington/intuition/specialists/*/*.specialist.md` relative to the project root.
 
-For each profile found: read the YAML frontmatter, extract `name` and `domain_tags`. Build a specialists list.
+For each profile found: read ONLY the YAML frontmatter using `Read` with `limit: 30` (frontmatter is typically under 25 lines). Extract `name` and `domain_tags`. Do NOT read the full profile body — the Stage 1/2 protocols are not needed for matching. Build a specialists list.
 
 If zero specialists found after all three tiers, HALT with this message:
 "No specialist profiles found. Install specialist profiles in one of these locations:
 - `.claude/specialists/` (project-level)
 - `~/.claude/specialists/` (user-level)
-- Or ensure `@tgoodington/intuition` is installed with its bundled specialists."
+- Install the Intuition plugin: `/plugin install intuition`
+- Or install via npm: `npm install -g @tgoodington/intuition`"
 
 ### Step 3: Scan Producer Registry
 
-Same three-tier pattern using `producers/` directories and `*.producer.md` files. Extract `name` and `output_formats` from each. Deduplicate by name with same priority (first found wins).
+Same three-tier pattern as Step 2, using `producers/` directories and `*.producer.md` files. Tier 3 uses the same resolution order (plugin path → npm path → fallback). Read ONLY the YAML frontmatter using `Read` with `limit: 30`. Extract `name` and `output_formats` from each. Do NOT read the full profile body. Deduplicate by name with the same priority (first found wins).
 
-If zero producers found, HALT with the same pattern message referencing producer directories.
+If zero producers found, HALT with the same message pattern, referencing producer directories and install methods.
 
 ### Step 4: Team Assembly (Inline Matching)
@@ -128,7 +132,7 @@ If the outline has no format constraints and no Section 3 technology decisions a
 ### Step 5: Prerequisite Checking
 
 For each producer in `producer_assignments`:
-1. Read the full producer profile from the registry
+1. Read the producer profile frontmatter from the registry (the `tooling` field lives in the frontmatter, which was already read in Step 3)
 2. Check `tooling.{output_format}.required` array
 3. For each required tool, run Bash to verify availability (e.g., `python --version`, `which pandoc`)
 4. Record results in `prerequisite_check` (format: `"producer/format": "PASS — tool version found"` or `"FAIL — tool not found"`)
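Frontmatter-only reading is what makes the `limit: 30` trick work: everything the registry needs sits between the leading `---` fences. A minimal sketch, with a hypothetical `frontmatter` helper and an invented sample profile:

```javascript
// Hypothetical sketch: keep just the lines between the leading `---`
// fences within the first `limit` lines, skipping the profile body.
function frontmatter(text, limit = 30) {
  const lines = text.split('\n').slice(0, limit);
  if (lines[0] !== '---') return [];
  const end = lines.indexOf('---', 1);
  return end === -1 ? [] : lines.slice(1, end);
}

const profile = [
  '---',
  'name: intuition-reviewer',
  'output_formats: [markdown]',
  '---',
  '',
  '## Review Protocol',
  '(long body that matching never needs to read)'
].join('\n');

console.log(frontmatter(profile)); // only the two key: value lines
```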
@@ -319,7 +323,7 @@ Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition`
 
 - **Zero specialists found**: Halt at Step 2 with install instructions.
 - **Zero producers found**: Halt at Step 3 with install instructions.
-- **Package root resolution fails**: Fallback to scanning `node_modules/@tgoodington/intuition/` relative to project root.
+- **Framework-bundled resolution fails**: Tier 3 tries plugin path, then npm resolution, then local node_modules. If all three fail, Tiers 1 and 2 may still have results.
 - **All tasks unmatched**: Present the full unmatched list at Step 6. If user chooses to create specialists, write the request file and route to agent-advisor. Do not silently skip everything.
 - **User rejects team**: Allow adjustments, re-present. Do not write anything until approved.
 - **Prerequisites missing**: Halt with exact install commands. Do not proceed to team confirmation.
@@ -149,8 +149,8 @@ For each task per `team_assignment.json` execution order (parallelize tasks with
    - Project: `.claude/producers/{producer-name}/{producer-name}.producer.md`
    - User: `~/.claude/producers/{producer-name}/{producer-name}.producer.md`
    - Framework-shipped: scan the `producers/` directory at the package root
-4. Construct the delegation prompt using the producer profile as system instructions and the blueprint as task context. Only include non-test output files in the delegation.
-5. Spawn the producer as a Task subagent using the model declared in the producer profile.
+4. Construct the delegation prompt using the producer profile as system instructions. Direct the subagent to READ the blueprint from disk (do NOT inject blueprint content into the prompt — this avoids duplicating large files in both parent and subagent contexts). Only include non-test output files in the delegation.
+5. Spawn the producer as an `intuition-code-writer` agent (or an appropriate producer-specific agent if one exists). Use the model declared in the producer profile.
 
 **Producer delegation format:**
 ```
@@ -174,25 +174,27 @@ When building on a branch, add to subagent prompts:
 
 ## STEP 5: THREE-LAYER REVIEW CHAIN
 
-After a producer completes each deliverable, execute all three review layers in sequence.
+After producers complete deliverables, execute all three review layers. **Batch deliverables from the same specialist** into a single review subagent (up to 3 deliverables per review — if a specialist has more than 3, split into multiple batches). This reduces subagent spawn overhead.
 
 ### Layer 1: Domain Specialist Review
 
 1. Identify the specialist that authored the blueprint (from blueprint YAML frontmatter `specialist` field).
-2. Load that specialist's profile from the registry (same scan order as producers: project → user → framework).
-3. Extract the Review Protocol section from the specialist profile body.
-4. Spawn a review subagent with adversarial framing. Use the `reviewer_model` declared in the specialist profile's YAML frontmatter.
+2. Locate that specialist's profile path in the registry (same scan order as producers: project → user → framework).
+3. Spawn an `intuition-reviewer` agent with adversarial framing. Use the `reviewer_model` declared in the specialist profile's YAML frontmatter. If this specialist produced multiple deliverables, include ALL of them (up to 3) in a single review agent.
 
 **Specialist review delegation format:**
 ```
-You are a [specialist display_name] reviewing a deliverable produced from your blueprint. Your job is to FIND PROBLEMS — not to approve.
+You are a [specialist display_name] reviewing deliverables produced from your blueprint. Your job is to FIND PROBLEMS — not to approve.
 
-[Specialist Review Protocol section content]
+Read your review protocol from: [specialist profile path] — find the ## Review Protocol section.
 
 Blueprint: Read {context_path}/blueprints/{specialist-name}.md
-Deliverable: Read [produced output file paths]
+Deliverables: Read each of these files:
+- [produced output file path 1]
+- [produced output file path 2]
+- ...
 
-Does this deliverable accurately capture what the blueprint specified? Are the domain-specific requirements met? Check every review criterion. Return: PASS + summary OR FAIL + specific issues list with blueprint section references.
+For EACH deliverable: does it accurately capture what the blueprint specified? Are the domain-specific requirements met? Check every review criterion. Return per deliverable: PASS + summary OR FAIL + specific issues list with blueprint section references.
 ```
 
 - If FAIL → send feedback back to the producer (re-delegate with specific issues). Do NOT proceed to Layer 2.
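The batching rule above (at most 3 deliverables per review agent, larger sets split into multiple batches) reduces to a simple chunking step. A minimal sketch, with a hypothetical `batchDeliverables` helper:

```javascript
// Hypothetical sketch of the batching rule: group a specialist's
// deliverables into batches of at most 3 for one review agent each.
function batchDeliverables(files, size = 3) {
  const batches = [];
  for (let i = 0; i < files.length; i += size) {
    batches.push(files.slice(i, i + size));
  }
  return batches;
}

console.log(batchDeliverables(['a', 'b', 'c', 'd'])); // [['a','b','c'],['d']]
```

Four deliverables from one specialist therefore cost two review spawns instead of four.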
@@ -222,14 +224,15 @@ Log all deviations (additions and omissions) in the build report's "Deviations f
 ### Layer 3: Mandatory Cross-Cutting Reviewers
 
 1. Check the specialist profile's `mandatory_reviewers` field in its YAML frontmatter.
-2. For EACH mandatory reviewer listed: load their specialist profile, extract their Review Protocol, spawn a review subagent using their `reviewer_model`.
-3. **Security Expert is ALWAYS mandatory** — even if `mandatory_reviewers` is empty. Spawn a Security Expert review for every deliverable that produces code, configuration, or scripts.
+2. For EACH mandatory reviewer listed: locate their specialist profile, spawn an `intuition-reviewer` agent using their `reviewer_model`.
+3. **Security Expert is ALWAYS mandatory** — even if `mandatory_reviewers` is empty. Spawn a Security Expert `intuition-reviewer` agent for every deliverable that produces code, configuration, or scripts.
+4. **Batch cross-cutting reviews** the same way as Layer 1: include up to 3 deliverables per review agent. If all code deliverables in the current execution phase share the same cross-cutting reviewer, batch them into one review call.
 
 **Cross-cutting review delegation format:**
 ```
 You are a [reviewer display_name] performing a cross-cutting review. Your job is to FIND PROBLEMS in your area of expertise.
 
-[Reviewer's Review Protocol section content]
+Read your review protocol from: [reviewer profile path] — find the ## Review Protocol section.
 
 Deliverable: Read [produced output file paths]
 Blueprint: Read {context_path}/blueprints/{specialist-name}.md (for context only)
@@ -348,7 +351,7 @@ Present a concise version: task count, pass/fail status, files produced count, r
 
 After reporting results:
 
-**8a. Extract to memory.** Spawn a haiku Task subagent: "Read `{context_path}/build_report.md`. Then read `docs/project_notes/key_facts.md`, `docs/project_notes/issues.md`, and `docs/project_notes/bugs.md`. Append only NEW entries: lessons/deviations → `key_facts.md`, completed work → `issues.md`, bugs found → `bugs.md`. Do not duplicate. Preserve existing formatting." Run in background.
+**8a. Extract to memory (inline).** Review the build report you just wrote. For any notable deviations or lessons learned, read `docs/project_notes/key_facts.md` and use Edit to append concise entries (2-3 lines each) if not already present. For any bugs found during review cycles, read `docs/project_notes/bugs.md` and append. Do NOT spawn a subagent; write directly.
 
 **8b. Determine next phase.** Read `{context_path}/team_assignment.json`. Check if any `producer_assignments` entry has `producer == "code-writer"`.
@@ -191,7 +191,7 @@ Execute the investigation protocol for the classified category. This is NOT a ch
 - Compare against implementation — does code match outline?
 - The answer determines where the fix belongs (code, plan, or discovery).
 
-**For large dependency graphs:** Launch a Research/Explorer subagent (haiku):
+**For large dependency graphs:** Launch an `intuition-researcher` agent:
 ```
 Task: "Map all imports and usages of [module/function] across the codebase.
 Report: file paths, line numbers, how each usage depends on this module.
@@ -259,8 +259,8 @@ Do NOT proceed to Step 8 without explicit user confirmation.
 | Scenario | Action |
 |----------|--------|
 | Trivial (1-3 lines, single file) | Debugger MAY fix directly |
-| Moderate (multiple lines, single file) | Delegate to Code Writer (sonnet) |
-| Complex (multiple files) | Delegate to Code Writer (sonnet) with full causal chain context |
+| Moderate (multiple lines, single file) | Delegate to `intuition-code-writer` agent |
+| Complex (multiple files) | Delegate to `intuition-code-writer` agent with full causal chain context |
 | Cross-context | Delegate with BOTH contexts' implementation guides referenced |
 
 **Subagent prompt template:**
@@ -295,8 +295,8 @@ ALWAYS populate dependent files and interfaces. Never omit context from subagent
 After the subagent returns:
 
 1. **Review the changes** — Read modified files. Confirm the fix addresses the ROOT CAUSE, not just the symptom.
-2. **Run tests** — Launch Test Runner (haiku) if test infrastructure exists.
-3. **Impact check** — Launch Impact Analyst (haiku):
+2. **Run tests** — Launch an `intuition-researcher` agent if test infrastructure exists.
+3. **Impact check** — Launch an `intuition-researcher` agent:
 ```
 "Read [dependent files]. Verify compatibility with changes to [modified files].
 Report broken imports, changed interfaces, or behavioral mismatches. Under 400 words."
@@ -365,12 +365,10 @@ After reporting (and optional git commit), ask: "Is there another issue to inves
 
 # SUBAGENT TABLE
 
-| Agent | Model | When to Use |
-|-------|-------|-------------|
-| **Code Writer** | sonnet | Implementing fixes — moderate to complex changes |
-| **Research/Explorer** | haiku | Mapping dependencies, cross-context analysis, profiling setup |
-| **Test Runner** | haiku | Running tests after fixes to verify correctness |
-| **Impact Analyst** | haiku | Verifying dependent code is compatible after changes |
+| Agent | Definition | When to Use |
+|-------|-----------|-------------|
+| `intuition-code-writer` | sonnet, acceptEdits | Implementing fixes — moderate to complex changes |
+| `intuition-researcher` | haiku, dontAsk | Mapping dependencies, cross-context analysis, test running, impact analysis |
 
 ---
 
@@ -125,10 +125,10 @@ From the design brief, extract:
 
 Create the directory `{context_path}/.design_research/[item_name]/` if it does not exist.
 
-**Agent 1 — Existing Work Scan** (subagent_type: Explore, model: haiku):
+**Agent 1 — Existing Work Scan** (subagent_type: `intuition-researcher`):
 Prompt: "Search the project for existing work related to [item description]. Look for: prior documentation, existing implementations, reference material, patterns that inform this design. Check docs/, src/, and any relevant directories. Report findings in under 400 words. Facts only."
 
-**Agent 2 — Context Mapping** (subagent_type: Explore, model: haiku):
+**Agent 2 — Context Mapping** (subagent_type: `intuition-researcher`):
 Prompt: "Map the context surrounding [item description]. What already exists that this design must work with or within? What are the boundaries and integration points? Check the codebase structure, existing docs, and configuration. Report in under 400 words. Facts only."
 
 When both return, combine results and write to `{context_path}/.design_research/[item_name]/context.md`.
@@ -156,7 +156,7 @@ Domain-adaptive focus questions:
 
 Each turn: 2-4 sentences of analysis referencing research findings, then ONE question via AskUserQuestion with 2-4 options.
 
-**Research triggers:** If an element definition requires investigating existing patterns or prior art, launch a targeted haiku agent. WAIT for results before continuing the dialogue.
+**Research triggers:** If an element definition requires investigating existing patterns or prior art, launch a targeted `intuition-researcher` agent. WAIT for results before continuing the dialogue.
 
 # PHASE 3: CONNECTIONS (1-2 turns) [ECD: C]
 
@@ -184,7 +184,7 @@ Domain-adaptive focus questions:
 
 This phase gets the most turns because dynamics design often reveals new elements or connection needs. If a gap appears, loop back briefly to address it.
 
-**Research triggers:** For complex design questions requiring deeper analysis, launch a sonnet agent (subagent_type: general-purpose, model: sonnet) for trade-off analysis. Limit: 1 at a time, 600-word responses. WAIT for results before continuing the dialogue.
+**Research triggers:** For complex design questions requiring deeper analysis, launch an `intuition-researcher` agent (model override: sonnet) for trade-off analysis. Limit: 1 at a time, 600-word responses. WAIT for results before continuing the dialogue.
 
 # PHASE 5: FORMALIZATION (1 turn)
 
@@ -354,12 +354,12 @@ Working files in `.design_research/` enable resuming interrupted design sessions
 
 ## Context Research (launched in Phase 1)
 
-Launch 2 haiku Explore agents in parallel via Task tool. See Phase 1, Step 2 for prompt templates. Write combined results to `.design_research/[item_name]/context.md`.
+Launch 2 `intuition-researcher` agents in parallel via Task tool. See Phase 1, Step 2 for prompt templates. Write combined results to `.design_research/[item_name]/context.md`.
 
 ## Targeted Research (launched on demand in Phases 2-4)
 
-- Use haiku Explore agents for fact-gathering (e.g., "What patterns exist in the project for this kind of thing?")
-- Use sonnet general-purpose agents for trade-off analysis (e.g., "Compare approach X and Y given the existing context")
+- Use `intuition-researcher` agents for fact-gathering (e.g., "What patterns exist in the project for this kind of thing?")
+- Use `intuition-researcher` agents (model override: sonnet) for trade-off analysis (e.g., "Compare approach X and Y given the existing context")
 - Each prompt MUST specify the design question and a 400-word limit (600 for sonnet)
 - Write results to `.design_research/[item_name]/options_[topic].md`
 - NEVER launch more than 2 agents simultaneously
@@ -93,7 +93,7 @@ Ensure the `{context_path}/scratch/` directory exists (create via Bash `mkdir -p
 
 ### Light Tasks (single-pass bypass)
 
-Spawn an opus Task subagent that combines exploration AND specification in one pass:
+Spawn an `intuition-synthesizer` agent that combines exploration AND specification in one pass:
 - **System prompt**: Stage 1 Protocol text + Stage 2 Protocol text (concatenated with a separator)
 - **Task context**: plan tasks, research patterns from profile frontmatter, prior blueprints, outline Section 10 context
 - **Output instruction**: "Research the project, then produce the complete blueprint directly. No user gate — use your best judgment for all decisions. Write to `{context_path}/blueprints/{specialist-name}.md`."
@@ -104,7 +104,7 @@ Ensure the `{context_path}/blueprints/` directory exists. After the subagent ret
104
104
 
105
105
  #### Stage 1a: Research Planning
106
106
 
107
- Spawn an opus Task subagent. The system prompt combines a research-planning framing (owned by this skill) with the specialist's domain expertise (from the profile):
107
+ Spawn an `intuition-synthesizer` agent (model override: sonnet). The system prompt combines a research-planning framing (owned by this skill) with the specialist's domain expertise (from the profile):
108
108
 
109
109
  - **System prompt**: Construct by concatenating:
110
110
  1. **Framing (detail skill provides this):**
@@ -148,7 +148,7 @@ After 1a returns, write the specialist's research plan output to `{context_path}
148
148
 
149
149
  Parse the specialist's research plan output. Enforce the depth-based research cap: Deep tasks allow 3 entries max, Standard tasks allow 2. If the specialist's plan contains more entries than the cap, take ONLY the first {cap} entries and log a warning to the user: "Research plan had {N} items, capped at {cap} per depth policy."
150
150
 
151
- For each `### R{N}:` entry (up to the cap), spawn a haiku Task subagent (subagent_type: `Explore`):
151
+ For each `### R{N}:` entry (up to the cap), spawn an `intuition-researcher` agent:
152
152
  - **Task**: the natural language description from the research plan entry
153
153
  - **Instruction suffix**: "Search the project codebase thoroughly. Report: file paths found, key patterns observed, relevant code snippets, and any constraints or conventions discovered. Be specific — include exact paths, field names, and data types."
154
154
 
@@ -159,7 +159,7 @@ If any research agent finds nothing relevant, note this — the specialist needs
159
159
  #### Stage 1c: Analysis and Synthesis (Resume 1a or Fresh)
160
160
 
161
161
  **Normal flow:** Resume the Stage 1a specialist subagent using the saved agent ID.
162
- **Crash recovery flow (no agent ID):** Spawn a fresh opus Task subagent. Provide the specialist's Stage 1 Exploration Protocol as system prompt, and include the saved research plan from `{context_path}/scratch/{specialist-name}-research-plan.md` as additional context so the fresh agent understands what was asked for.
162
+ **Crash recovery flow (no agent ID):** Spawn a fresh `intuition-synthesizer` agent. Provide the specialist's Stage 1 Exploration Protocol as system prompt, and include the saved research plan from `{context_path}/scratch/{specialist-name}-research-plan.md` as additional context so the fresh agent understands what was asked for.
163
163
 
164
164
  In either case, provide this prompt (the synthesis framing is owned by this skill, not the specialist):
165
165
 
@@ -335,14 +335,16 @@ Mark these decisions with `"classified_by": "detail"` in decisions.json.
335
335
 
336
336
  ## STEP 7: STAGE 2 — SPECIFICATION SUBAGENT
337
337
 
338
- Spawn a FRESH opus Task subagent (do NOT resume Stage 1):
338
+ Spawn a FRESH `intuition-synthesizer` agent (do NOT resume Stage 1):
339
339
  - **System prompt**: the specialist's Stage 2 Specification Protocol text (extracted in Step 3)
340
340
  - **Injected context**:
341
341
  - Full contents of `{context_path}/scratch/{specialist-name}-stage1.md`
342
342
  - Full contents of `{context_path}/scratch/{specialist-name}-decisions.json`
343
343
  - Plan tasks with acceptance criteria
344
344
  - Prior blueprint contents (if any — read each path and include full text)
345
- - **Output instruction**: "Produce the complete blueprint in the universal envelope format (9 sections: Task Reference, Research Findings, Approach, Decisions Made, Deliverable Specification, Acceptance Mapping, Integration Points, Open Items, Producer Handoff). Write to `{context_path}/blueprints/{specialist-name}.md`. Every design choice must trace to Stage 1 research, a user decision from decisions.json, or a named domain standard. Ungrounded choices go in the Open Items section."
345
+ - **Output instruction**: "Produce the complete blueprint in the universal envelope format (9 sections: Task Reference, Research Findings, Approach, Decisions Made, Deliverable Specification, Acceptance Mapping, Integration Points, Open Items, Producer Handoff). Write to `{context_path}/blueprints/{specialist-name}.md`. Every design choice must trace to Stage 1 research, a user decision from decisions.json, or a named domain standard. Ungrounded choices go in the Open Items section.
346
+
347
+ IMPORTANT — Testing boundary: Do NOT specify test files or test deliverables in Producer Handoff (Section 9). Testing is handled by a dedicated test phase, not by producers. If you have domain-specific testing knowledge (edge cases, critical paths, failure modes, boundary conditions), include it in the Approach section (Section 3) under a '### Testability Notes' subheading. This gives the test phase domain context without prescribing test files."
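For illustration only (the section content is hypothetical), a blueprint's Approach section carrying this subheading might look like:

```markdown
## 3. Approach
...design narrative...

### Testability Notes
- Edge case: concurrent writes to the same record; last-write-wins is the chosen policy.
- Critical path: the auth token refresh flow; failures here lock users out.
- Boundary condition: pagination cursor on an empty result set.
```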
346
348
 
347
349
  Ensure the `{context_path}/blueprints/` directory exists (create via Bash `mkdir -p` if needed).
348
350
 
@@ -370,7 +372,9 @@ After a blueprint passes the traceability check:
370
372
 
371
373
  **8b. Update specialist state.** Read `.project-memory-state.json`. In `workflow.detail.specialists`, mark the completed specialist: `status → "completed"`, `stage → "done"`, `blueprint_path → "{context_path}/blueprints/{specialist-name}.md"`. Write back.
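Assuming the surrounding shape of `.project-memory-state.json` (only the three fields named above are grounded in this skill; the specialist key and map structure are illustrative), the updated entry would look roughly like:

```json
{
  "workflow": {
    "detail": {
      "specialists": {
        "api-engineer": {
          "status": "completed",
          "stage": "done",
          "blueprint_path": "{context_path}/blueprints/api-engineer.md"
        }
      }
    }
  }
}
```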
372
374
 
373
- **8c. Extract to memory.** Spawn a haiku Task subagent: "Read `{context_path}/blueprints/{specialist-name}.md`. Then read `docs/project_notes/decisions.md` and `docs/project_notes/key_facts.md`. Append only NEW entries: decisions from the blueprint's Decisions Made section → `decisions.md` as ADRs, domain facts from Research Findings → `key_facts.md`. Do not duplicate. Preserve existing formatting." Run in background.
375
+ **8c. Extract to memory (inline).** Read the just-written blueprint's Decisions Made section (Section 4). For each decision, read `docs/project_notes/decisions.md` and use Edit to append a new ADR entry if one doesn't already exist. For key domain facts from the blueprint's Research Findings (Section 2), read `docs/project_notes/key_facts.md` and append if not present. Keep entries concise (2-3 lines each). Do NOT spawn a subagent — write directly.
376
+
377
+ **8c-ii. Extract testability notes.** If the blueprint's Approach section (Section 3) contains a `### Testability Notes` subheading, extract its contents and append to `{context_path}/test_advisory.md` (create if it doesn't exist). Format: `## {Specialist Display Name}\n{testability notes content}\n`. This gives the test phase a compact file instead of needing to read all blueprints.
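As a sketch (specialist names and notes are hypothetical), an assembled `test_advisory.md` might read:

```markdown
## API Engineer
- Pagination cursor handling: empty result sets and the final page are the common failure modes.
- Rate-limit responses (429) must surface a retry-after value to callers.

## Data Engineer
- The migration is idempotent by design; re-running it against an already-migrated schema is a critical path to exercise.
```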
374
378
 
375
379
  **8d. Check for next specialist.** Read `{context_path}/team_assignment.json`. Read current state.
376
380
 
@@ -398,11 +402,11 @@ If the COMPLETED specialist was Deep depth, recommend: "Context is heavy — con
398
402
 
399
403
  Triggers when Step 8d finds no remaining specialists.
400
404
 
401
- **9a. Conflict detection.** Spawn a haiku Task subagent: "Read all blueprint files in `{context_path}/blueprints/`. Compare for: contradictory decisions, overlapping file modifications with conflicting changes, inconsistent interface assumptions, and duplicated work. Write findings to `{context_path}/blueprint-conflicts.md`. If no conflicts, write 'No conflicts detected.'" Wait for completion. If conflicts found, present to user via AskUserQuestion and resolve before continuing.
405
+ **9a. Conflict detection.** Spawn an `intuition-researcher` agent: "Read all blueprint files in `{context_path}/blueprints/`. Compare for: contradictory decisions, overlapping file modifications with conflicting changes, inconsistent interface assumptions, and duplicated work. Write findings to `{context_path}/blueprint-conflicts.md`. If no conflicts, write 'No conflicts detected.'" Wait for completion. If conflicts found, present to user via AskUserQuestion and resolve before continuing.
402
406
 
403
407
  **9b. Vision review.** Skip this step if only 1 specialist completed (no cross-specialist seams to check).
404
408
 
405
- For multi-specialist projects, spawn a sonnet Task subagent:
409
+ For multi-specialist projects, spawn an `intuition-reviewer` agent:
406
410
 
407
411
  "Read these files:
408
412
  1. `{context_path}/prompt_brief.md` — extract Commander's Intent (desired end state, non-negotiables, boundaries) and Success Criteria
@@ -101,7 +101,7 @@ Options:
101
101
 
102
102
  ## STEP 2: FAN-OUT RESEARCH
103
103
 
104
- For each task (or group of related tasks), launch a haiku research subagent via the Task tool (subagent_type: Explore, model: haiku).
104
+ For each task (or group of related tasks), launch an `intuition-researcher` agent via the Task tool.
105
105
 
106
106
  When constructing each prompt, replace bracketed placeholders with actual values from the outline. If the task has known file paths, use the "Known Files" variant. If files are marked TBD, use the "TBD Files" variant.
107
107
 
@@ -97,9 +97,11 @@ From the prompt brief, extract: core problem, success criteria, stakeholders, co
97
97
 
98
98
  Create the directory `{context_path}/.outline_research/` if it does not exist.
99
99
 
100
- Launch 2 sonnet research agents in parallel using the Task tool:
100
+ **Resume check:** If `{context_path}/.outline_research/orientation.md` already exists AND `{context_path}/.outline_research/decisions_log.md` exists with at least one entry, skip the research agents: read the existing orientation.md and proceed to Step 3. This avoids re-spending tokens on research that hasn't changed.
101
101
 
102
- **Agent 1 Codebase Topology** (subagent_type: Explore, model: sonnet):
102
+ Launch 2 `intuition-researcher` agents in parallel using the Task tool (both calls in a single response):
103
+
104
+ **Agent 1 — Codebase Topology** (subagent_type: `intuition-researcher`):
103
105
  Prompt:
104
106
  "The project root is the current working directory. Analyze the codebase structure by following these steps in order:
105
107
 
@@ -122,7 +124,7 @@ Report on:
122
124
 
123
125
  Under 500 words. Facts only, no speculation."
124
126
 
125
- **Agent 2 — Pattern Extraction** (subagent_type: Explore, model: sonnet):
127
+ **Agent 2 — Pattern Extraction** (subagent_type: `intuition-researcher`):
126
128
  Prompt:
127
129
  "The project root is the current working directory. Analyze codebase patterns by following these steps:
128
130
 
@@ -154,7 +156,7 @@ When `active_context` is NOT trunk:
154
156
  3. Read parent's outline.md and any design specs at `{parent_path}/design_spec_*.md`.
155
157
  4. Launch a THIRD orientation research agent alongside the existing two:
156
158
 
157
- **Agent 3 — Parent Intersection Analysis** (subagent_type: Explore, model: sonnet):
159
+ **Agent 3 — Parent Intersection Analysis** (subagent_type: `intuition-researcher`):
158
160
  Prompt:
159
161
  "The project root is the current working directory. Compare two workflow artifacts:
160
162
 
@@ -201,7 +203,7 @@ When actors are sufficiently mapped (user has confirmed or adjusted), transition
201
203
 Based on the scope revealed by the prompt brief and actors discussion, recommend an outline depth tier:
202
204
 
203
205
  - **Lightweight** (1-4 tasks): Focused scope, few unknowns. Outline includes: Objective, Discovery Summary, Task Sequence, Execution Notes.
204
- - **Standard** (5-10 tasks): Moderate complexity. Adds: Technology Decisions, Testing Strategy, Risks & Mitigations.
206
+ - **Standard** (5-10 tasks): Moderate complexity. Adds: Technology Decisions, Risks & Mitigations.
205
207
  - **Comprehensive** (10+ tasks): Broad scope, multiple components. All sections including Component Architecture and Interface Contracts.
206
208
 
207
209
  Present your recommendation with reasoning via AskUserQuestion. Options: the three tiers (with your recommendation marked). The user may agree or pick a different tier.
@@ -255,8 +257,8 @@ For each major decision domain identified from the prompt brief, orientation res
255
257
 
256
258
  1. **Identify** the decision needed. State it clearly.
257
259
  2. **Research** (when needed): Launch 1-2 targeted research agents via Task tool.
258
- - Use haiku (subagent_type: Explore) for straightforward fact-gathering.
259
- - Use sonnet (subagent_type: general-purpose) for trade-off analysis against the existing codebase.
260
+ - Use `intuition-researcher` for straightforward fact-gathering.
261
+ - Use `intuition-researcher` (model override: sonnet) for trade-off analysis against the existing codebase.
260
262
  - Each agent prompt MUST reference the specific decision domain, return under 400 words.
261
263
  - Write results to `{context_path}/.outline_research/decision_[domain].md` (snake_case).
262
264
  - NEVER launch more than 2 agents simultaneously.
@@ -354,7 +356,7 @@ After writing `outline.md`:
354
356
 
355
357
  **1. Update state:** Read `.project-memory-state.json`. Target the active context object (trunk or branch). Set: `status` → `"outline"`, `workflow.outline.completed` → `true`, `workflow.outline.completed_at` → current ISO timestamp, `workflow.outline.approved` → `true`. Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"outline_complete"`. Write back.
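Assuming a plausible surrounding shape for `.project-memory-state.json` (the `contexts`/`trunk` nesting and timestamp values are illustrative; only the named fields come from this skill), the result of step 1 would look roughly like:

```json
{
  "last_handoff": "2025-01-15T12:00:00Z",
  "last_handoff_transition": "outline_complete",
  "contexts": {
    "trunk": {
      "status": "outline",
      "workflow": {
        "outline": {
          "completed": true,
          "completed_at": "2025-01-15T12:00:00Z",
          "approved": true
        }
      }
    }
  }
}
```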
356
358
 
357
- **2. Extract to memory:** Spawn a haiku Task subagent (subagent_type: Explore): "Read `{context_path}/outline.md` and `{context_path}/.outline_research/decisions_log.md`. Then read `docs/project_notes/decisions.md` and `docs/project_notes/issues.md`. Append only NEW entries: architectural decisions → `decisions.md` as ADRs, risks and dependencies → `issues.md`. Do not duplicate existing entries. Preserve existing formatting." Run in background; do not wait for completion.
359
+ **2. Extract to memory (inline).** Read `{context_path}/.outline_research/decisions_log.md`. For each locked decision, read `docs/project_notes/decisions.md` and use Edit to append a new ADR entry if one doesn't already exist for that decision. For each risk identified during dialogue, read `docs/project_notes/issues.md` and use Edit to append if not already present. Keep entries concise (2-3 lines each). Do NOT spawn a subagent for this; write directly.
358
360
 
359
361
  **3. Fast Track Assessment (v9 only):**
360
362
 
@@ -405,8 +407,8 @@ If fast track declined OR conditions not met, continue to step 4.
405
407
  ## Scope Scaling
406
408
 
407
409
  - **Lightweight**: Sections 1, 2, 6, 6.5, 10
408
- - **Standard**: Sections 1, 2, 3, 6, 6.5, 7, 8, 10
409
- - **Comprehensive**: All sections (1-10, including 6.5)
410
+ - **Standard**: Sections 1, 2, 3, 6, 6.5, 8, 10
411
+ - **Comprehensive**: All sections (1-6.5, 8-10)
410
412
 
411
413
  Section 6.5 (Detail Assessment) is ALWAYS included regardless of tier.
412
414
  Section 2.5 is Parent Context — included for ALL tiers when on a branch.
@@ -482,8 +484,7 @@ Depth controls specialist invocation:
482
484
 
483
485
  **Acceptance criteria rule:** If a criterion can only be satisfied ONE way, it is over-specified. Criteria describe outcomes ("users can reset passwords via email"), not implementations ("add a resetPassword() method that calls sendEmail()"). The engineer and build phases decide the code-level HOW.
484
486
 
485
- ### 7. Testing Strategy (Standard+, when code is produced)
486
- Test types required. Which tasks need tests (reference task numbers). Critical test scenarios. Infrastructure needed.
487
+ **No test tasks.** Do NOT create tasks for writing tests (e.g., "Write unit tests for the API layer"). Testing is a dedicated phase (`/intuition-test`), not a task. The test phase discovers infrastructure, designs strategy, and creates tests independently. Outline tasks describe what gets built — verification is the test phase's job.
487
488
 
488
489
  ### 8. Risks & Mitigations (Standard+)
489
490
 
@@ -605,14 +606,14 @@ If any check fails, fix it before presenting.
605
606
 
606
607
  ## Tier 1: Orientation (launched in Phase 1)
607
608
 
608
- Launch 2 sonnet Explore agents in parallel via Task tool. See Phase 1, Step 2 for prompt templates. Write combined results to `{context_path}/.outline_research/orientation.md`.
609
+ Launch 2 `intuition-researcher` agents in parallel via Task tool. See Phase 1, Step 2 for prompt templates. Write combined results to `{context_path}/.outline_research/orientation.md`.
609
610
 
610
611
  ## Tier 2: Decision Research (launched on demand in Phase 3)
611
612
 
612
613
  Launch 1-2 agents per decision domain when dialogue reveals unknowns needing investigation.
613
614
 
614
- - Use haiku Explore agents for fact-gathering (e.g., "What testing framework does this project use?").
615
- - Use sonnet general-purpose agents for trade-off analysis (e.g., "Compare approaches X and Y given the current architecture").
615
+ - Use `intuition-researcher` agents for fact-gathering (e.g., "What testing framework does this project use?").
616
+ - Use `intuition-researcher` agents (model override: sonnet) for trade-off analysis (e.g., "Compare approaches X and Y given the current architecture").
616
617
  - Each prompt MUST specify the decision domain and a 400-word limit.
617
618
  - Reference specific files or directories when possible.
618
619
  - Write results to `{context_path}/.outline_research/decision_[domain].md`.
@@ -327,12 +327,11 @@ You do NOT launch research subagents by default. Research fires ONLY in this sce
327
327
  - "Are there compliance requirements for Z?"
328
328
  - "What do other teams typically use for this?"
329
329
 
330
- **Action:** Launch ONE targeted Task call:
330
+ **Action:** Launch ONE targeted `intuition-researcher` agent:
331
331
 
332
332
  ```
333
333
  Description: "Research [specific question]"
334
- Subagent type: Explore
335
- Model: haiku
334
+ Subagent type: intuition-researcher
336
335
  Prompt: "Research [specific question from the user].
337
336
  Context: [what the user is building].
338
337
  Search the web and local codebase for relevant information.
@@ -24,6 +24,7 @@ These are non-negotiable. Violating any of these means the protocol has failed.
24
24
  8. You MUST write `{context_path}/test_report.md` before routing to handoff.
25
25
  9. You MUST run the Exit Protocol after writing the test report. NEVER route to `/intuition-handoff`.
26
26
  10. You MUST update `.project-memory-state.json` as part of the Exit Protocol.
27
+ 11. You MUST NOT use `run_in_background` for subagents in Steps 2 and 5. All research and test-creation agents MUST complete before their next step begins.
27
28
 
28
29
  ## CONTEXT PATH RESOLUTION
29
30
 
@@ -39,7 +40,7 @@ On startup, before reading any files:
39
40
 
40
41
  ```
41
42
  Step 1: Read context (state, build_report, blueprints, decisions, outline)
42
- Step 2: Analyze test infrastructure (2 parallel haiku Explore agents)
43
+ Step 2: Analyze test infrastructure (2 parallel intuition-researcher agents)
43
44
  Step 3: Design test strategy (self-contained domain reasoning)
44
45
  Step 4: Confirm test plan with user
45
46
  Step 5: Create tests (delegate to sonnet code-writer subagents)
@@ -63,11 +64,11 @@ Check for existing artifacts before starting. Use `{context_path}/scratch/test_s
63
64
  Read these files:
64
65
 
65
66
  1. `{context_path}/build_report.md` — REQUIRED. Extract: files modified, task results, deviations from blueprints, decision compliance notes.
66
- 3. `{context_path}/outline.md` — acceptance criteria per task.
67
- 4. ALL files matching `{context_path}/blueprints/*.md` specialist blueprints with deliverable specifications.
68
- 5. `{context_path}/team_assignment.json` — producer assignments (identify code-writer tasks).
69
- 6. ALL files matching `{context_path}/scratch/*-decisions.json` — decision tiers and chosen options per specialist.
70
- 7. `docs/project_notes/decisions.md` — project-level ADRs.
67
+ 2. `{context_path}/outline.md` — acceptance criteria per task.
68
+ 3. `{context_path}/test_advisory.md` compact testability notes extracted by the detail phase (one section per specialist). Read this INSTEAD of all blueprints. If this file does not exist (older workflows), fall back to reading `{context_path}/blueprints/*.md` and extracting Testability Notes from each Approach section.
69
+ 4. `{context_path}/team_assignment.json` — producer assignments (identify code-writer tasks).
70
+ 5. ALL files matching `{context_path}/scratch/*-decisions.json` — decision tiers and chosen options per specialist.
71
+ 6. `docs/project_notes/decisions.md` — project-level ADRs.
71
72
 
72
73
  From build_report.md, extract:
73
74
  - **Files modified** — the scope boundary for testing and fixes
@@ -76,19 +77,18 @@ From build_report.md, extract:
76
77
  - **Decision compliance** — any flagged decision issues
77
78
  - **Test Deliverables Deferred** — test specs/files that specialists recommended but build skipped (if this section exists)
78
79
 
79
- From blueprints, extract any test recommendations:
80
- - Test cases specialists suggested in their blueprints
81
- - Edge cases or coverage areas they flagged
82
- - Test-related deliverables from Producer Handoff sections
80
+ From test_advisory.md (or blueprints as fallback), extract domain test knowledge:
81
+ - Edge cases, critical paths, failure modes, and boundary conditions flagged by specialists
82
+ - Any test-relevant domain insights
83
83
 
84
84
  From decisions files, build a decision index:
85
85
  - Map each `[USER]` decision to its chosen option
86
86
  - Map each `[SPEC]` decision to its chosen option and rationale
87
87
  - This index is used in Step 6 for fix boundary checking
88
88
 
89
- ## STEP 2: RESEARCH (2 Parallel Haiku Explore Agents)
89
+ ## STEP 2: RESEARCH (2 Parallel Research Agents)
90
90
 
91
- Spawn two haiku Explore agents in parallel (both Task calls in a single response):
91
+ Spawn two `intuition-researcher` agents in parallel (both Task calls in a single response). Do NOT use `run_in_background` — you MUST wait for both agents to return before proceeding to Step 3:
92
92
 
93
93
  **Agent 1 — Test Infrastructure:**
94
94
  "Search the project for test infrastructure. Find: test framework and runner (jest, vitest, mocha, pytest, etc.), test configuration files, existing test directories and naming conventions, mock/fixture patterns, test utility helpers, CI test commands, coverage configuration and thresholds. Report exact paths and configuration values."
@@ -157,11 +157,11 @@ Tests that only exercise isolated helper functions satisfy unit coverage but do
157
157
 
158
158
  ### Specialist Test Recommendations
159
159
 
160
- Before finalizing the test plan, review specialist test recommendations from two sources:
161
- - **Blueprint test recommendations**: Test cases, edge cases, and coverage areas that specialists flagged in their blueprints
162
- - **Deferred test deliverables**: Test specs/files from build_report.md's "Test Deliverables Deferred" section (and/or test_brief.md's "Specialist Test Recommendations" section)
160
+ Before finalizing the test plan, review specialist domain knowledge from blueprints:
161
+ - **Testability Notes**: Edge cases, critical paths, failure modes, and boundary conditions from each blueprint's Approach section (Section 3, `### Testability Notes` subheading)
162
+ - **Deferred test deliverables**: Any test specs from build_report.md's "Test Deliverables Deferred" section (legacy: older blueprints may still include test files in Producer Handoff)
163
163
 
164
- Specialists have domain expertise about what should be tested. Incorporate relevant recommendations into your test plan, but you are not bound to follow them exactly. You own the test strategy — use specialist input as advisory, not prescriptive.
164
+ Specialists have domain expertise about what should be tested. Incorporate their testability insights into your test plan, but you own the test strategy — use specialist input as advisory, not prescriptive.
165
165
 
166
166
  ### Output
167
167
 
@@ -203,9 +203,9 @@ Options:
203
203
 
204
204
  ## STEP 5: CREATE TESTS
205
205
 
206
- Delegate test creation to sonnet Task subagents. Parallelize independent test files (multiple Task calls in a single response).
206
+ Delegate test creation to `intuition-code-writer` agents. Parallelize independent test files (multiple Task calls in a single response). Do NOT use `run_in_background` — you MUST wait for ALL subagents to return before proceeding to Step 6.
207
207
 
208
- For each test file, spawn a sonnet subagent:
208
+ For each test file, spawn an `intuition-code-writer` agent:
209
209
 
210
210
  ```
211
211
  You are a test writer. Create a test file following these specifications exactly.
@@ -224,7 +224,7 @@ You are a test writer. Create a test file following these specifications exactly
224
224
  Write the complete test file to the specified path. Follow the project's existing test style exactly. Do NOT add test infrastructure (no new packages, no config changes).
225
225
  ```
226
226
 
227
- After all subagents return, verify each test file was written. If any failed, retry once with error context.
227
+ SYNCHRONIZATION GATE: After all subagents return, verify each test file exists on disk using Glob. If any file is missing, retry that subagent once (foreground) with error context. Do NOT proceed to Step 6 until every planned test file is confirmed on disk.
228
228
 
229
229
  ## STEP 6: RUN TESTS + FIX CYCLE
230
230
 
@@ -244,9 +244,9 @@ For each failure, classify:
244
244
 
245
245
  | Classification | Action |
246
246
  |---|---|
247
- | **Test bug** (wrong assertion, incorrect mock, import error) | Fix autonomously — haiku Task subagent |
248
- | **Implementation bug, trivial** (off-by-one, missing null check, typo — 1-3 lines) | Fix directly — haiku Task subagent |
249
- | **Implementation bug, moderate** (logic error, missing handler — contained to one file) | Fix — sonnet Task subagent with full diagnosis |
247
+ | **Test bug** (wrong assertion, incorrect mock, import error) | Fix autonomously — `intuition-code-writer` agent |
248
+ | **Implementation bug, trivial** (off-by-one, missing null check, typo — 1-3 lines) | Fix directly — `intuition-code-writer` agent |
249
+ | **Implementation bug, moderate** (logic error, missing handler — contained to one file) | Fix — `intuition-code-writer` agent with full diagnosis |
250
250
  | **Implementation bug, complex** (multi-file structural issue) | Escalate to user |
251
251
  | **Fix would violate [USER] decision** | STOP — escalate to user immediately |
252
252
  | **Fix would violate [SPEC] decision** | Note the conflict, proceed with fix (specialist had authority) |
@@ -327,7 +327,7 @@ Write `{context_path}/test_report.md`:
327
327
 
328
328
  ## STEP 8: EXIT PROTOCOL
329
329
 
330
- **8a. Extract to memory.** Spawn a haiku Task subagent: "Read `{context_path}/test_report.md`. Then read `docs/project_notes/key_facts.md`, `docs/project_notes/issues.md`, and `docs/project_notes/bugs.md`. Append only NEW entries: test coverage insights → `key_facts.md`, implementation fixes → `bugs.md`, escalated issues → `issues.md`. Do not duplicate. Preserve existing formatting." Run in background.
330
+ **8a. Extract to memory (inline).** Review the test report you just wrote. For test coverage insights, read `docs/project_notes/key_facts.md` and use Edit to append concise entries (2-3 lines each) if not already present. For implementation fixes applied, read `docs/project_notes/bugs.md` and append. For escalated issues, read `docs/project_notes/issues.md` and append. Do NOT spawn a subagent; write directly.
331
331
 
332
332
  **8b. Update state.** Read `.project-memory-state.json`. Target active context. Set: `status` → `"complete"`, `workflow.test.completed` → `true`, `workflow.test.completed_at` → current ISO timestamp, `workflow.build.completed` → `true`, `workflow.build.completed_at` → current ISO timestamp (if not already set). Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"test_to_complete"`. Write back.
333
333