@tgoodington/intuition 10.8.0 → 10.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@tgoodington/intuition",
- "version": "10.8.0",
+ "version": "10.9.0",
  "description": "Domain-adaptive workflow system for Claude Code: prompt, outline, assemble specialist teams, detail with domain experts, build with format producers, test code output. Supports v8 compat (design, engineer, build) and v9 specialist workflows with 14 domain specialists and 6 format producers.",
  "keywords": [
  "claude-code",
@@ -256,14 +256,33 @@ If no large data files are detected, skip this entirely.
  For each major decision domain identified from the prompt brief, orientation research, and dialogue:

  1. **Identify** the decision needed. State it clearly.
- 2. **Research** (when needed): Launch 1-2 targeted research agents via Task tool.
- - Use `intuition-researcher` for straightforward fact-gathering.
- - Use `intuition-researcher` (model override: sonnet) for trade-off analysis against the existing codebase.
- - Each agent prompt MUST reference the specific decision domain, return under 400 words.
- - Write results to `{context_path}/.outline_research/decision_[domain].md` (snake_case).
- - NEVER launch more than 2 agents simultaneously.
+ 2. **Research** (when needed): Launch research agents via Task tool. Agent count scales with the selected depth tier:
+
+ **Lightweight (1 agent):** Launch a single `intuition-researcher` agent for neutral fact-gathering.
+ "Research [decision domain] in the context of [project]. What are the viable approaches, key trade-offs, and relevant patterns in the codebase? Under 400 words."
+ Write results to `{context_path}/.outline_research/decision_[domain].md`.
+
+ **Standard (2 agents — Advocate/Challenger):** Launch 2 agents in parallel using divergent framing to combat confirmation bias.
+
+ **Agent A — Advocate** (subagent_type: `intuition-researcher`):
+ "Research [decision domain] in the context of [project]. Make the strongest case for the most natural approach given the existing codebase and constraints. What patterns already exist that support it? What makes it the obvious choice? Under 400 words."
+
+ **Agent B — Challenger** (subagent_type: `intuition-researcher`, model override: sonnet):
+ "Research [decision domain] in the context of [project]. Identify the strongest reasons NOT to take the obvious approach. What are the risks, overlooked alternatives, and counterarguments? What has gone wrong when similar projects made the default choice? Under 400 words."
+
+ Both agents MUST be launched in parallel in a single response. Write combined results to `{context_path}/.outline_research/decision_[domain].md`, clearly labeling the advocate and challenger findings.
+
+ **Comprehensive (3 agents — Advocate/Challenger/Lateral):** Launch all Standard agents plus:
+
+ **Agent C — Lateral** (subagent_type: `intuition-researcher`):
+ "Research how [decision domain] has been solved in different contexts — different frameworks, industries, or scales. What non-obvious approaches exist that the team might not have considered? Under 400 words."
+
+ - NEVER launch more than 3 agents simultaneously.
  - WAIT for all research agents to return and read their results before proceeding to step 3.
- 3. **Present** 2-3 options with trade-offs. Include your recommendation and why. Incorporate the research findings.
+
+ 3. **Synthesize and present** 2-3 options with trade-offs. Synthesis rules scale with tier:
+ - **Lightweight**: Present findings directly with your recommendation.
+ - **Standard/Comprehensive**: You MUST incorporate findings from BOTH the advocate and challenger before forming your recommendation. If advocate and challenger agree, note that convergence — it strengthens confidence. If they disagree, the tension between them should directly shape the options you present. Do not simply pick the advocate's position.
  4. **Ask** the user to select via AskUserQuestion.
  5. **Record** the resolved decision to `{context_path}/.outline_research/decisions_log.md`:

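The tier-to-roster mapping and the snake_case result-file convention described in this hunk can be sketched in a few lines. This is an illustrative sketch only: the helper names and tier strings are invented here, and the actual skill launches agents via the Task tool rather than any code like this.

```python
import re

# Hypothetical tier -> agent-roster mapping implied by the new instructions.
ROSTERS = {
    "lightweight": ["neutral"],                              # 1 agent
    "standard": ["advocate", "challenger"],                  # 2 agents, in parallel
    "comprehensive": ["advocate", "challenger", "lateral"],  # 3 agents
}

def decision_file(context_path: str, domain: str) -> str:
    """Build the snake_case results path for a decision domain."""
    slug = re.sub(r"[^a-z0-9]+", "_", domain.lower()).strip("_")
    return f"{context_path}/.outline_research/decision_{slug}.md"

# No roster ever exceeds the 3-agent ceiling stated above.
assert all(len(agents) <= 3 for agents in ROSTERS.values())

print(decision_file("ctx", "API versioning"))
# ctx/.outline_research/decision_api_versioning.md
```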
@@ -658,14 +677,26 @@ Launch 2 `intuition-researcher` agents in parallel via Task tool. See Phase 1, S

  ## Tier 2: Decision Research (launched on demand in Phase 3)

- Launch 1-2 agents per decision domain when dialogue reveals unknowns needing investigation.
+ Agent count scales with depth tier to balance token cost against confirmation bias risk.
+
+ **Lightweight (1 agent):**
+ - Single `intuition-researcher` for neutral fact-gathering. Low-stakes decisions don't justify the overhead of divergent framing.
+
+ **Standard (2 agents — Advocate/Challenger):**
+ - **Agent A — Advocate** (`intuition-researcher`): Makes the strongest case FOR the natural/obvious approach. Looks for supporting patterns in the codebase.
+ - **Agent B — Challenger** (`intuition-researcher`, model: sonnet): Makes the strongest case AGAINST the obvious approach. Surfaces risks, alternatives, counterarguments.
+
+ **Comprehensive (3 agents — Advocate/Challenger/Lateral):**
+ - Advocate and Challenger as above, plus:
+ - **Agent C — Lateral** (`intuition-researcher`): Explores how the problem has been solved in different contexts, frameworks, or industries.

- - Use `intuition-researcher` agents for fact-gathering (e.g., "What testing framework does this project use?").
- - Use `intuition-researcher` agents (model override: sonnet) for trade-off analysis (e.g., "Compare approaches X and Y given the current architecture").
+ Rules:
+ - Standard/Comprehensive: advocate and challenger MUST be launched in parallel in a single response.
  - Each prompt MUST specify the decision domain and a 400-word limit.
  - Reference specific files or directories when possible.
- - Write results to `{context_path}/.outline_research/decision_[domain].md`.
- - NEVER launch more than 2 simultaneously.
+ - Write results to `{context_path}/.outline_research/decision_[domain].md`. Standard/Comprehensive: label advocate and challenger sections.
+ - NEVER launch more than 3 simultaneously.
+ - Standard/Comprehensive synthesis MUST incorporate tension between advocate and challenger findings. If you find yourself ignoring the challenger, stop and re-read its findings.

  # CONTEXT MANAGEMENT

@@ -92,17 +92,22 @@ This is the core of the skill. Each turn targets ONE gap using a dependency-orde
  ### Refinement Order

  ```
- 1. SCOPE → What is IN and what is OUT?
- 2. INTENT → What does the end-user experience when this is done and working?
- 3. SUCCESS → How do you know it worked? What's observable/testable?
- 4. CONSTRAINTS → What can't change? Technology, team, timeline, budget?
- 5. ASSUMPTIONS → What are we taking as given? How confident are we?
+ 1. SCOPE → What is IN and what is OUT? Broad boundaries, not itemized feature lists.
+ 2. INTENT → What does "done" look and feel like? The experiential outcome.
+ 3. BOUNDARIES → What's fixed and what's flexible? Hard constraints and key givens.
  ```

+ The prompt phase paints in broad strokes. Detailed success criteria, testable outcomes, assumption confidence ratings, and architectural constraints belong in outline — not here.
+
+ **SCOPE sets the playing field** — enough to know what's in-bounds and out-of-bounds, not a requirements list.
+
  **INTENT captures the experiential outcome** — not metrics, but feel:
  - What the end-user experiences when interacting with the finished product
  - What the output/interface looks like and feels like in practice
  - Non-negotiable experiential qualities (fast, simple, invisible, delightful, etc.)
+ - What "done" looks like at a high level — outline will sharpen this into testable criteria
+
+ **BOUNDARIES merge constraints and assumptions into one pass** — what can't change, what we're taking as given, what's flexible. No confidence ratings or detailed analysis — just the lay of the land.

  INTENT grounds the brief in what success *feels like*, which downstream phases use to distinguish user-facing decisions from technical internals.

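The dependency-ordered gate that the refinement order implies can be sketched as a tiny lookup: check each dimension in order and act on the first unsatisfied one. This sketch is illustrative, not part of the skill; the function name and return strings are invented here.

```python
# Dimensions are checked in dependency order; the first unsatisfied
# one determines the next question, otherwise the session converges.
ORDER = ["SCOPE", "INTENT", "BOUNDARIES"]

def next_step(satisfied: set) -> str:
    for dim in ORDER:
        if dim not in satisfied:
            return f"ask about {dim}"
    return "move to REFLECT"

print(next_step({"SCOPE"}))                          # ask about INTENT
print(next_step({"SCOPE", "INTENT", "BOUNDARIES"}))  # move to REFLECT
```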
@@ -111,21 +116,19 @@ INTENT grounds the brief in what success *feels like*, which downstream phases u
  Before each question, run this internal check:

  ```
- Is SCOPE clear enough to plan against?
- NO → Ask a scope question
- YES → Is INTENT defined (experiential outcome, look/feel, non-negotiable qualities)?
+ Is SCOPE clear enough to know what's in-bounds and out-of-bounds?
+ NO → Ask a scope question (broad boundaries, not feature lists)
+ YES → Is INTENT defined (experiential outcome, look/feel, what "done" looks like)?
  NO → Ask an intent question
- YES → Is SUCCESS defined with observable criteria?
- NO → Ask a success criteria question
- YES → Are binding CONSTRAINTS surfaced?
- NO → Ask a constraints question
- YES → Are key ASSUMPTIONS identified?
- NO → Ask an assumptions question
- YES → Move to REFLECT
+ YES → Are BOUNDARIES clear (hard constraints, key givens, what's flexible)?
+ NO → Ask a boundaries question
+ YES → Move to REFLECT
  ```

  If the user's initial CAPTURE response already covers some dimensions, skip them. Do not ask about what's already clear.

+ **Stay at vision altitude.** If you catch yourself asking about specific metrics, implementation details, or testable acceptance criteria — stop. That's outline's job. The prompt phase asks "what are we building and why does it matter?" not "how will we verify it works?"
+
  ### Question Crafting Rules

  Every question in REFINE follows these principles:
@@ -170,9 +173,9 @@ Always include a trailing "or something else entirely?" when the space might be
  **Aggressive skip rule:** After CAPTURE, check each dimension. If the user's initial response provides a clear, actionable answer for a dimension, mark it satisfied and skip it entirely. Do not ask confirmatory questions for dimensions that are already clear — that's ceremony, not refinement.

  **Convergence triggers — move to REFLECT when ANY of these are true:**
- - All 5 dimensions are satisfied (even if that's after 1 turn of REFINE)
- - You've asked 3+ REFINE questions and remaining gaps are minor enough to flag as open questions
- - By turn 3-4 you should be asking about what the solution DOES, not what the problem IS. If you're still gathering background context after turn 4, flag remaining unknowns as open questions and move to REFLECT.
+ - All 3 dimensions are satisfied (even if that's after 1 turn of REFINE)
+ - You've asked 2+ REFINE questions and remaining gaps are minor enough to flag as open questions for outline
+ - By turn 3 you should have a clear enough picture to move toward REFLECT. If you're still gathering broad context after turn 3, flag remaining unknowns as open questions and move on.

  The goal is precision, not thoroughness. A 4-turn prompt session that nails the brief is better than a 9-turn session that asks about things the user already told you.

@@ -190,18 +193,14 @@ Question: "Here's what I've captured from our conversation:
  **Commander's Intent:**
  - Desired end state: [What success feels/looks like to the end user — experiential, not metrics]
  - Non-negotiables: [The 2-3 experiential qualities that would make the user reject the result]
- - Boundaries: [Constraints on the solution space, not prescribed solutions]
-
- **Success looks like:** [bullet list of observable outcomes]

- **In scope:** [list]
- **Out of scope:** [list]
+ **In scope:** [list — broad boundaries]
+ **Out of scope:** [list — broad boundaries]

- **Constraints:** [list]
+ **What's fixed:** [hard constraints and key givens — brief list]
+ **What's flexible:** [areas where outline has room to explore]

- **Assumptions:** [list with confidence notes]
-
- **Open questions for outlining:** [list]
+ **Open questions for outlining:** [list things outline should investigate]

  What needs adjusting?"

@@ -257,35 +256,29 @@ Write the output files and route to handoff.
  ## Commander's Intent
  **Desired end state:** [What success feels/looks like to the end user — experiential, not metrics]
  **Non-negotiables:** [The 2-3 experiential qualities that would make the user reject the result]
- **Boundaries:** [Constraints on the solution space, not prescribed solutions]
-
- ## Success Criteria
- - [Observable, testable outcome 1]
- - [Observable, testable outcome 2]
- - [Observable, testable outcome 3]

  ## Scope
  **In scope:**
- - [Item 1]
- - [Item 2]
+ - [Broad boundary 1]
+ - [Broad boundary 2]

  **Out of scope:**
- - [Item 1]
- - [Item 2]
+ - [Broad boundary 1]
+ - [Broad boundary 2]

- ## Constraints
- - [Non-negotiable limit 1]
- - [Non-negotiable limit 2]
+ ## Boundaries
+ **What's fixed:**
+ - [Hard constraint or key given 1]
+ - [Hard constraint or key given 2]

- ## Key Assumptions
- | Assumption | Confidence | Basis |
- |-----------|-----------|-------|
- | [statement] | High/Med/Low | [why we believe this] |
+ **What's flexible:**
+ - [Area where outline has room to explore 1]
+ - [Area where outline has room to explore 2]

  ## Open Questions for Planning
- - [Build decision the outline phase should investigate]
- - [Technical unknown that affects architecture]
+ - [Thing outline should investigate or decide]
  - [Assumption that needs validation]
+ - [Area where the user was uncertain]

  ## Decision Posture
  | Area | Posture | Notes |
@@ -300,26 +293,23 @@ Write the output files and route to handoff.
  "summary": {
  "title": "...",
  "one_liner": "...",
- "problem_statement": "...",
- "success_criteria": "..."
+ "problem_statement": "..."
  },
  "commander_intent": {
  "desired_end_state": "...",
- "non_negotiables": ["..."],
- "boundaries": ["..."]
+ "non_negotiables": ["..."]
  },
  "scope": {
  "in": ["..."],
  "out": ["..."]
  },
- "constraints": ["..."],
- "assumptions": [
- { "assumption": "...", "confidence": "high|medium|low", "basis": "..." }
- ],
+ "boundaries": {
+ "fixed": ["..."],
+ "flexible": ["..."]
+ },
  "decision_posture": [
  { "area": "...", "posture": "i_decide|show_options|team_handles", "notes": "..." }
  ],
- "research_performed": [],
  "open_questions": ["..."]
  }
  ```
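A brief that conforms to the new schema above might look like the following. The sketch builds it as a Python dict and serializes it with the standard `json` module; every field value is invented for illustration and is not taken from the package.

```python
import json

# Hypothetical filled-in brief matching the v10.9.0 schema (all values invented).
brief = {
    "summary": {
        "title": "Session cache for the CLI",
        "one_liner": "Cache expensive lookups between CLI invocations",
        "problem_statement": "Repeat runs re-fetch data that rarely changes",
    },
    "commander_intent": {
        "desired_end_state": "Repeat runs feel instant",
        "non_negotiables": ["no stale results after an explicit refresh"],
    },
    "scope": {"in": ["read-path caching"], "out": ["distributed caching"]},
    "boundaries": {
        "fixed": ["must remain a single binary"],
        "flexible": ["cache eviction strategy"],
    },
    "decision_posture": [
        {"area": "storage format", "posture": "team_handles", "notes": ""}
    ],
    "open_questions": ["where should the cache live on disk?"],
}

print(json.dumps(brief, indent=2))
```

Note that `success_criteria`, `assumptions`, and `research_performed` are absent: under the new schema those concerns move to the outline phase.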
@@ -381,6 +371,7 @@ These are banned. If you catch yourself doing any of these, stop and correct cou
  - Asking questions you could have asked in turn one (generic background)
  - Staying on the same sub-topic for more than 2 follow-ups when the user is uncertain — flag it as an open question and move on
  - Producing a brief with sections the outline phase doesn't consume
+ - **Drilling into implementation-level detail** — observable/testable criteria, confidence-rated assumptions, architectural specifics. The prompt phase captures vision and boundaries; outline sharpens into specifications
  - **Presenting exactly 3 options without running the Mandatory Option Enumeration procedure** — this is the single most persistent failure mode. If you have 3 options, you MUST have verified via Step 2 that you aren't collapsing or omitting possibilities

  ## RESUME LOGIC