@cubis/foundry 0.3.70 → 0.3.71

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. package/package.json +1 -1
  2. package/workflows/powers/ask-questions-if-underspecified/SKILL.md +51 -3
  3. package/workflows/powers/behavioral-modes/SKILL.md +100 -9
  4. package/workflows/skills/agent-design/SKILL.md +198 -0
  5. package/workflows/skills/agent-design/references/clarification-patterns.md +153 -0
  6. package/workflows/skills/agent-design/references/skill-testing.md +164 -0
  7. package/workflows/skills/agent-design/references/workflow-patterns.md +226 -0
  8. package/workflows/skills/deep-research/SKILL.md +25 -20
  9. package/workflows/skills/deep-research/references/multi-round-research-loop.md +73 -8
  10. package/workflows/skills/frontend-design/SKILL.md +37 -32
  11. package/workflows/skills/frontend-design/commands/brand.md +167 -0
  12. package/workflows/skills/frontend-design/references/brand-presets.md +228 -0
  13. package/workflows/skills/generated/skill-audit.json +11 -2
  14. package/workflows/skills/generated/skill-catalog.json +37 -5
  15. package/workflows/skills/skills_index.json +1 -1
  16. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/agent-design/SKILL.md +198 -0
  17. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/agent-design/references/clarification-patterns.md +153 -0
  18. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/agent-design/references/skill-testing.md +164 -0
  19. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/agent-design/references/workflow-patterns.md +226 -0
  20. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/deep-research/SKILL.md +25 -20
  21. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/deep-research/references/multi-round-research-loop.md +73 -8
  22. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/frontend-design/SKILL.md +37 -32
  23. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/frontend-design/commands/brand.md +167 -0
  24. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/frontend-design/references/brand-presets.md +228 -0
  25. package/workflows/workflows/agent-environment-setup/platforms/claude/skills/skills_index.json +1 -1
  26. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/agent-design/SKILL.md +197 -0
  27. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/agent-design/references/clarification-patterns.md +153 -0
  28. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/agent-design/references/skill-testing.md +164 -0
  29. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/agent-design/references/workflow-patterns.md +226 -0
  30. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/deep-research/SKILL.md +25 -20
  31. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/deep-research/references/multi-round-research-loop.md +73 -8
  32. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/frontend-design/SKILL.md +37 -32
  33. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/frontend-design/commands/brand.md +167 -0
  34. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/frontend-design/references/brand-presets.md +228 -0
  35. package/workflows/workflows/agent-environment-setup/platforms/copilot/skills/skills_index.json +1 -1
@@ -0,0 +1,228 @@
+ # Brand Presets Reference
+
+ Use this reference when building interfaces that must conform to an existing brand system — whether a client-supplied style guide, a design system handoff, or a well-documented brand like Anthropic's.
+
+ ## Receiving Brand Guidelines
+
+ When a user hands over brand guidelines, extract these five things before writing any code:
+
+ | Extract | Ask or infer |
+ | ------------------ | ---------------------------------------------------------------------------- |
+ | **Color palette** | What are the primary, secondary, and accent hex values? |
+ | **Neutral system** | Are neutrals warm, cool, or truly achromatic? |
+ | **Typography** | Specific font families for headings and body? Variable font available? |
+ | **Spacing DNA** | Base unit? Tight/airy preference? |
+ | **Tone** | Where on the spectrum between playful ↔ authoritative, minimal ↔ expressive? |
+
+ ## Hex → OKLCH Conversion Workflow
+
+ Always convert brand hex colors to OKLCH for CSS. OKLCH gives you perceptual uniformity — colors with the same L value look equally bright, unlike hex or HSL.
+
+ ```css
+ /* Conversion pattern: hex → oklch via CSS Color 4 */
+ /* Most modern browsers accept oklch() natively */
+ /* Use https://oklch.com to find values, or compute: */
+
+ /* L = perceived lightness 0–100% */
+ /* C = chroma (colorfulness) 0–0.37ish */
+ /* H = hue angle 0–360 */
+
+ /* Example: #d97757 (warm orange) */
+ --brand-orange: oklch(65% 0.145 42);
+
+ /* Example: #faf9f5 (warm cream) */
+ --brand-surface: oklch(98.2% 0.008 85);
+
+ /* Example: #141413 (warm near-black) */
+ --brand-ink: oklch(10.5% 0.006 85);
+ ```
+
+ **Shorthand**: enter any hex at [oklch.com](https://oklch.com) to get the L/C/H values.
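For scripted token generation, the same conversion can be computed directly rather than read off oklch.com. A minimal Python sketch of the sRGB → OKLab → OKLCH pipeline, using Björn Ottosson's published OKLab matrices (the function name and rounding are my own; note that computed values may differ slightly from the rounded tokens quoted in this file):

```python
import math

def hex_to_oklch(hex_color: str) -> tuple[float, float, float]:
    """Convert an sRGB hex color to (L, C, H): L in 0-1, H in degrees."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))

    # sRGB -> linear light
    def lin(c: float) -> float:
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)

    # linear sRGB -> LMS (Ottosson's OKLab matrices)
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l, m, s = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)

    # LMS -> OKLab
    L = 0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s
    a = 1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s
    b2 = 0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s

    # OKLab -> OKLCH (polar form)
    C = math.hypot(a, b2)
    H = math.degrees(math.atan2(b2, a)) % 360
    return L, C, H

L, C, H = hex_to_oklch("#d97757")
print(f"oklch({L * 100:.1f}% {C:.3f} {H:.0f})")
```

Useful when a brand hands over dozens of hex values: loop the conversion once and emit the raw-token block mechanically instead of pasting from a web tool.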
+
+ ## Semantic Token Mapping
+
+ Once you have OKLCH values, map them to semantic tokens — never use raw hex or oklch values directly in components:
+
+ ```css
+ :root {
+   /* --- RAW BRAND TOKENS (source of truth) --- */
+   --brand-ink: oklch(10.5% 0.006 85); /* near-black, warm */
+   --brand-surface: oklch(98.2% 0.008 85); /* cream white */
+   --brand-mid: oklch(72% 0.009 85); /* mid-range gray */
+   --brand-subtle: oklch(92% 0.01 85); /* light gray */
+   --brand-orange: oklch(65% 0.145 42); /* primary accent */
+   --brand-blue: oklch(65% 0.09 235); /* secondary accent */
+   --brand-green: oklch(57% 0.09 130); /* tertiary accent */
+
+   /* --- SEMANTIC TOKENS (what components use) --- */
+   --color-bg: var(--brand-surface);
+   --color-bg-subtle: var(--brand-subtle);
+   --color-text: var(--brand-ink);
+   --color-text-secondary: var(--brand-mid);
+   --color-accent: var(--brand-orange);
+   --color-accent-secondary: var(--brand-blue);
+   --color-accent-tertiary: var(--brand-green);
+   --color-border: var(--brand-subtle);
+ }
+ ```
+
+ ## Anthropic Brand System
+
+ Anthropic's brand (from [anthropics/skills](https://github.com/anthropics/skills/tree/main/skills/brand-guidelines)) is a useful reference implementation. It's a warm, editorial system — earthy neutrals with bold accent contrast.
+
+ ### Color Palette
+
+ ```css
+ :root {
+   /* Neutrals — warm hue angle ~85 (yellow-brown direction) */
+   --anthropic-ink: oklch(10.5% 0.006 85); /* #141413 — body text, dark bg */
+   --anthropic-cream: oklch(98.2% 0.008 85); /* #faf9f5 — light bg, text on dark */
+   --anthropic-mid: oklch(72% 0.009 85); /* #b0aea5 — secondary text */
+   --anthropic-subtle: oklch(92% 0.01 85); /* #e8e6dc — dividers, subtle bg */
+
+   /* Accents — arranged in visual temperature order */
+   --anthropic-orange: oklch(65% 0.145 42); /* #d97757 — primary CTA, highlights */
+   --anthropic-blue: oklch(65% 0.09 235); /* #6a9bcc — secondary actions */
+   --anthropic-green: oklch(57% 0.09 130); /* #788c5d — tertiary, success states */
+ }
+ ```
+
+ ### Typography
+
+ ```css
+ /* Load from Google Fonts */
+ @import url("https://fonts.googleapis.com/css2?family=Poppins:wght@400;500;600;700&family=Lora:ital,wght@0,400;0,500;0,600;1,400&display=swap");
+
+ :root {
+   --font-display: "Poppins", Arial, sans-serif; /* headings, nav, labels */
+   --font-body: "Lora", Georgia, serif; /* body text, long-form */
+ }
+
+ /* Application rules */
+ h1, h2, h3, h4, h5, h6,
+ .label, .nav-item, .button, .badge {
+   font-family: var(--font-display);
+ }
+
+ p, blockquote, .prose, article {
+   font-family: var(--font-body);
+ }
+ ```
+
+ **Why this pairing works**: Poppins is geometric and structured — clear, modern authority. Lora is an elegant serif — warm, readable, literary. Together they balance clarity with warmth, matching Anthropic's positioning between scientific rigor and approachability.
+
+ ### Spacing DNA
+
+ The brand leans into generous negative space. Use an `8px` base unit with larger gaps between major sections:
+
+ ```css
+ :root {
+   --space-1: 0.5rem; /* 8px — tight groupings */
+   --space-2: 1rem; /* 16px — component padding */
+   --space-3: 1.5rem; /* 24px — between related elements */
+   --space-4: 2rem; /* 32px — section mini-gap */
+   --space-6: 3rem; /* 48px — between sections */
+   --space-8: 4rem; /* 64px — major section spacing */
+ }
+ ```
+
+ ### Component Patterns
+
+ ```css
+ /* Card — warm border, generous padding, no shadow */
+ .card {
+   background: var(--anthropic-subtle);
+   border: 1px solid var(--anthropic-mid);
+   border-radius: 4px; /* subtle — not pill-shaped */
+   padding: var(--space-4);
+ }
+
+ /* Button — orange CTA */
+ .button-primary {
+   background: var(--anthropic-orange);
+   color: var(--anthropic-cream);
+   font-family: var(--font-display);
+   font-weight: 500;
+   letter-spacing: 0.01em;
+   border-radius: 4px;
+   padding: 0.625rem 1.5rem;
+   border: none;
+ }
+ .button-primary:hover {
+   background: oklch(60% 0.145 42); /* slightly darker orange */
+ }
+
+ /* Text link — uses orange, not blue */
+ a {
+   color: var(--anthropic-orange);
+   text-decoration-thickness: 1px;
+   text-underline-offset: 3px;
+ }
+
+ /* Code / monospace — uses ink on subtle bg */
+ code {
+   background: var(--anthropic-subtle);
+   color: var(--anthropic-ink);
+   font-size: 0.875em;
+   padding: 0.2em 0.4em;
+   border-radius: 3px;
+ }
+ ```
+
+ ### Dark Mode
+
+ Anthropic's palette inverts gracefully — the cream and ink swap, mid-tones pull slightly warmer:
+
+ ```css
+ @media (prefers-color-scheme: dark) {
+   :root {
+     --color-bg: var(--anthropic-ink);
+     --color-bg-subtle: oklch(16% 0.008 85); /* slightly elevated */
+     --color-text: var(--anthropic-cream);
+     --color-text-secondary: var(--anthropic-mid);
+     --color-border: oklch(22% 0.008 85);
+     /* Accents stay the same — they hold on both backgrounds */
+   }
+ }
+ ```
+
+ ## Applying Other Brand Systems
+
+ When adapting a different brand, follow this checklist:
+
+ 1. **Extract and convert** — Get all hex values, convert to OKLCH
+ 2. **Identify neutrals** — Are they warm, cool, or pure gray? Find the hue angle
+ 3. **Map the hierarchy** — Which color is dominant (60%), which secondary (30%), which accent (10%)?
+ 4. **Check contrast** — Use a perceptual contrast model such as APCA (the candidate method for WCAG 3) for text contrast. OKLCH makes this predictable
+ 5. **Find the typography voice** — Geometric sans = structured/modern; humanist sans = friendly; slab = authoritative; oldstyle serif = editorial; transitional serif = professional
+ 6. **Test the mood** — Show a prototype section in brand colors. Does it _feel_ like the brand in motion, not just color?
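The contrast check in step 4 can be automated. APCA itself has a longer formula, so as a simpler stand-in here is the classic WCAG 2.x contrast ratio in Python (the function names are mine; the luminance coefficients and the 4.5:1 AA threshold come from the WCAG 2 definition):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex color."""
    hex_color = hex_color.lstrip("#")
    def channel(pair: str) -> float:
        c = int(pair, 16) / 255
        # sRGB -> linear light, per the WCAG 2 definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(hex_color[i:i + 2]) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05); 21.0 for black on white."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Brand ink (#141413) on brand cream (#faf9f5), from the palette above:
# comfortably above the 4.5:1 AA bar for body text.
print(round(contrast_ratio("#141413", "#faf9f5"), 1))
```

Run this over every text/background token pair before shipping a palette; it catches mid-gray-on-cream failures that look fine on a bright monitor.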
+
+ ## Common Brand Color Archetypes
+
+ | Archetype | Neutrals | Primary accent | Feeling |
+ | ------------------------------ | ----------------------- | -------------------- | --------------------------------- |
+ | **Warm editorial** (Anthropic) | Cream / warm near-black | Orange / terracotta | Thoughtful, approachable, premium |
+ | **Cool tech** | True gray / white | Electric blue / teal | Precise, efficient, modern |
+ | **Finance/enterprise** | Navy / white | Deep blue / gold | Stable, trustworthy, conservative |
+ | **Health/wellness** | Off-white / dark green | Sage green / amber | Natural, calm, nurturing |
+ | **Startup/consumer** | White / black | Bold purple or coral | Energetic, fun, accessible |
+ | **Luxury** | White / true black | Gold / burgundy | Exclusive, refined, timeless |
+
+ When working with a brand that fits these archetypes, pull from the pattern — then make one unexpected choice to give it character.
@@ -552,7 +552,7 @@
  "research"
  ],
  "path": ".claude/skills/deep-research/SKILL.md",
- "description": "Use when a task needs multi-round research rather than a quick lookup: iterative search, gap finding, corroboration across sources, contradiction handling, or evidence-led synthesis before planning or implementation.",
+ "description": "Use when a task needs multi-round research rather than a quick lookup: iterative search, gap finding, corroboration across sources, contradiction handling, evidence-led synthesis before planning or implementation. Also use when the user asks for 'deep research', 'latest info', or 'how does X compare to Y publicly'.",
  "triggers": [
  "deep-research",
  "deep research",
@@ -0,0 +1,197 @@
+ ---
+ name: agent-design
+ description: "Use when designing, building, or improving a CBX agent, skill, or workflow: clarification strategy, progressive disclosure structure, workflow pattern selection (sequential, parallel, evaluator-optimizer), skill type taxonomy, description tuning, and eval-first testing."
+ license: MIT
+ metadata:
+   author: cubis-foundry
+   version: "1.0"
+   compatibility: Claude Code, Codex, GitHub Copilot, Gemini CLI
+ ---
+ # Agent Design
+
+ ## Purpose
+
+ You are the specialist for designing CBX agents and skills that behave intelligently — asking the right questions, knowing when to pause, executing in the right workflow pattern, and testing their own output.
+
+ Your job is to close the gap between "it kinda works" and "it works reliably under any input."
+
+ ## When to Use
+
+ - Designing or refactoring a SKILL.md or POWER.md
+ - Choosing between sequential, parallel, or evaluator-optimizer workflow
+ - Writing clarification logic for an agent that handles ambiguous requests
+ - Deciding whether a task needs a skill or just a prompt
+ - Testing whether a skill actually works as intended
+ - Writing descriptions that trigger the right skill at the right time
+
+ ## Core Principles
+
+ These come directly from Anthropic's agent engineering research (["Equipping agents for the real world"](https://claude.com/blog/equipping-agents-for-the-real-world-with-agent-skills), March 2026):
+
+ 1. **Progressive disclosure** — A skill's SKILL.md provides just enough context to know when to load it. Full instructions, references, and scripts are loaded lazily, only when needed. More context in a single file does not equal better behavior — it usually hurts it.
+
+ 2. **Eval before optimizing** — Define what "good" looks like (test cases + success criteria) before editing the skill. This prevents regression and tells you when improvement actually happened.
+
+ 3. **Description precision** — The `description` field in YAML frontmatter controls triggering. Too broad = false positives. Too narrow = the skill never fires. Tune it like a search query.
+
+ 4. **Two skill types** — See [Skill Type Taxonomy](#skill-type-taxonomy). These need different testing strategies and have different shelf lives.
+
+ 5. **Start with a single agent** — Before adding workflow complexity, first try a single agent with a rich prompt. Only add orchestration when it measurably improves results.
+
+ ## Skill Type Taxonomy
+
+ | Type | What it does | Testing goal | Shelf life |
+ | --- | --- | --- | --- |
+ | **Capability uplift** | Teaches Claude to do something it can't do alone (e.g. manipulate PDFs, fill forms, use a domain-specific API) | Verify the output is correct and consistent | Medium — may become obsolete as models improve |
+ | **Encoded preference** | Sequences steps Claude could do individually, but in your team's specific order and style (e.g. NDA review checklist, weekly update format) | Verify fidelity to the actual workflow | High — these stay useful because they're uniquely yours |
+
+ Design question: "Is this skill teaching Claude something new, or encoding how we do things?"
+
+ ## Clarification Strategy
+
+ An agent that starts wrong wastes everyone's time. Smart agents pause at the right moments.
+
+ Load `references/clarification-patterns.md` when:
+
+ - Designing how a skill should handle ambiguous or underspecified inputs
+ - Writing the early steps of a workflow where user intent matters
+ - Deciding what questions to ask vs. what to infer
+
+ ## Workflow Pattern Selection
+
+ Three patterns cover 95% of production agent workflows:
+
+ | Pattern | Use when | Cost | Benefit |
+ | --- | --- | --- | --- |
+ | **Sequential** | Steps have dependencies (B needs A's output) | Latency (linear) | Focus: each step does one thing well |
+ | **Parallel** | Steps are independent and concurrency helps | Tokens (multiplicative) | Speed + separation of concerns |
+ | **Evaluator-optimizer** | First-draft quality isn't good enough and quality is measurable | Tokens × iterations | Better output through structured feedback |
+
+ Default to sequential. Add parallel when latency is the bottleneck and tasks are genuinely independent. Add evaluator-optimizer only when you can measure the improvement.
+
+ Load `references/workflow-patterns.md` for the full decision tree, examples, and anti-patterns.
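The first two patterns in the table reduce to a few lines of orchestration. A minimal Python sketch, where `run_step` is a hypothetical stand-in for a real model or tool call (everything here is illustrative, not part of this package):

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(name: str, payload: str) -> str:
    # Hypothetical agent step: a real system would call a model/tool here.
    return f"{name}({payload})"

def sequential(payload: str, steps: list[str]) -> str:
    # Sequential: each step consumes the previous step's output.
    for step in steps:
        payload = run_step(step, payload)
    return payload

def parallel_then_synthesize(payload: str, branches: list[str],
                             synthesizer: str) -> str:
    # Parallel: independent branches fan out concurrently,
    # then a single synthesis step merges the findings.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda b: run_step(b, payload), branches))
    return run_step(synthesizer, " + ".join(results))

print(sequential("spec", ["plan", "implement", "review"]))
print(parallel_then_synthesize("diff", ["security", "perf", "style"], "synthesize"))
```

The shape matters more than the code: if removing the thread pool and running the branches in a loop changes nothing but latency, the tasks really were independent and parallel was the right call.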
+
+ ## Progressive Disclosure Structure
+
+ A well-structured CBX skill looks like:
+
+ ```
+ skill-name/
+   SKILL.md       ← lean entry: name, description, purpose, when-to-use, load-table
+   references/    ← detailed guides loaded lazily when step requires it
+     topic-a.md
+     topic-b.md
+   commands/      ← slash commands (optional)
+     command.md
+   scripts/       ← executable code (optional)
+     helper.py
+ ```
+
+ **SKILL.md should be loadable in <2000 tokens.** Everything else lives in references.
+
+ The metadata table pattern that works:
+
+ ```markdown
+ ## References
+
+ | File | Load when |
+ | ----------------------- | ------------------------------------------ |
+ | `references/topic-a.md` | Task involves [specific trigger condition] |
+ | `references/topic-b.md` | Task involves [specific trigger condition] |
+ ```
+
+ This lets the agent make intelligent decisions about what context to load rather than ingesting everything upfront.
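The <2000-token budget above can be sanity-checked without a tokenizer. A rough Python sketch, assuming the common ~4-characters-per-token heuristic for English prose (real counts require the model's actual tokenizer; the function names are mine):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Only a budget check, not a real token count.
    return max(1, len(text) // 4)

def check_skill_budget(skill_md: str, budget: int = 2000) -> bool:
    est = estimate_tokens(skill_md)
    print(f"~{est} tokens (budget {budget})")
    return est <= budget

check_skill_budget("# Agent Design\n\n## Purpose\n\nYou are the specialist...")
```

If the estimate is anywhere near the budget, assume you are over it and move detail into references.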
+
+ ## Description Writing
+
+ The `description` field is a trigger — write it like a search query, not marketing copy.
+
+ **Good description:**
+
+ ```yaml
+ description: "Use when evaluating an agent, skill, workflow, or MCP server: rubric design, evaluator-optimizer loops, LLM-as-judge patterns, regression suites, or prototype-vs-production quality gaps."
+ ```
+
+ **Bad description:**
+
+ ```yaml
+ description: "A comprehensive skill for evaluating things and making sure they work well."
+ ```
+
+ Rules:
+
+ - Lead with the specific trigger verb: "Use when [user does X]"
+ - List the specific task types with commas — these act like search keywords
+ - Include domain-specific nouns the user would actually type
+ - Avoid generic adjectives ("comprehensive", "powerful", "advanced")
+
+ Test your description: would a user's natural-language request match the intent of these words?
+
+ ## Testing a Skill
+
+ Before shipping, verify with this checklist:
+
+ 1. **Positive trigger** — Does the skill load when it should? Test 5 natural phrasings of the target task.
+ 2. **Negative trigger** — Does it stay quiet when it shouldn't load? Test 5 near-miss phrasings.
+ 3. **Happy path** — Does the skill complete the standard task correctly?
+ 4. **Edge cases** — What happens with missing input, ambiguous phrasing, or edge-case content?
+ 5. **Reader test** — Run the deliverable (e.g., a generated doc, a plan) through a fresh sub-agent with no context. Can it answer questions about the output correctly?
+
+ For formal regression suites, load `references/skill-testing.md`.
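Checklist items 1 and 2 can be captured as a tiny regression table. A Python sketch, where `should_trigger` is a deliberately naive keyword-overlap stand-in for the platform's real routing (which this package does not expose; everything below is illustrative):

```python
def should_trigger(description: str, request: str, threshold: int = 2) -> bool:
    # Naive stand-in for real trigger logic: the skill "fires" if enough
    # description keywords appear in the user's request.
    keywords = {w.strip(",.:").lower() for w in description.split() if len(w) > 4}
    hits = sum(1 for w in request.lower().split() if w.strip(",.?") in keywords)
    return hits >= threshold

DESCRIPTION = ("Use when designing, building, or improving an agent skill: "
               "clarification strategy, workflow pattern selection, description tuning")

CASES = [
    ("Help me improve the description tuning for my skill", True),   # positive trigger
    ("What's the weather like today", False),                        # negative trigger
]

for request, expected in CASES:
    assert should_trigger(DESCRIPTION, request) is expected, request
print("trigger regression passed")
```

The point is the `CASES` table, not the matcher: keep 5 positive and 5 near-miss phrasings per skill and rerun them whenever the description changes.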
+
+ ## Instructions
+
+ ### Step 1 — Understand the design task
+
+ Before touching any file, clarify:
+
+ - Is this a new skill or improving an existing one?
+ - Is it capability uplift or encoded preference?
+ - What's the specific failure mode being fixed?
+ - What would passing look like?
+
+ If any of these are unclear, apply the clarification pattern from `references/clarification-patterns.md`.
+
+ ### Step 2 — Choose the structure
+
+ - If the skill is simple (single task, single purpose): lean SKILL.md with no references
+ - If the skill is complex (multiple phases, conditional logic): SKILL.md + references loaded lazily
+ - If the skill has reusable commands: add a `commands/` directory
+
+ ### Step 3 — Design the workflow
+
+ Use the pattern selection table above. Start with sequential. Prove you need complexity before adding it.
+
+ ### Step 4 — Write the description
+
+ Write it last. Once you know what the skill does and how it differs from adjacent skills, the right description is usually obvious.
+
+ ### Step 5 — Define a test
+
+ Write at least 3 test cases (input → expected output or behavior) before considering the skill done. These become the regression suite.
+
+ ## Output Format
+
+ Deliver:
+
+ 1. **Skill structure** — directory layout, file list
+ 2. **SKILL.md** — production-ready with lean body and reference table
+ 3. **Reference files** — if needed, each scoped to a specific phase or topic
+ 4. **Test cases** — 3-5 natural language inputs with expected behaviors
+ 5. **Description** — the final `description` field, tuned for triggering
+
+ ## References
+
+ | File | Load when |
+ | -------------------------------------- | ------------------------------------------------------------------------------ |
+ | `references/clarification-patterns.md` | Designing how the agent handles ambiguous or underspecified input |
+ | `references/workflow-patterns.md` | Choosing or implementing sequential, parallel, or evaluator-optimizer workflow |
+ | `references/skill-testing.md` | Writing evals, regression sets, or triggering tests for a skill |
+
+ ## Examples
+
+ - "Design a skill for our NDA review process — it should follow our checklist exactly."
+ - "The feature-forge skill triggers on the wrong prompts. Help me fix the description."
+ - "How do I test whether my skill still works after a model update?"
+ - "I need a workflow where 3 agents review code in parallel then one synthesizes findings."
+ - "This skill's SKILL.md is 4000 tokens. Help me split it into lean structure with references."
@@ -0,0 +1,153 @@
+ # Clarification Patterns Reference
+
+ Load this when designing how an agent handles ambiguous, underspecified, or multi-interpretation input.
+
+ Source: Anthropic doc-coauthoring skill pattern + CBX ask-questions-if-underspecified research (2026).
+
+ ---
+
+ ## When to Clarify vs. When to Infer
+
+ The wrong default is to ask everything. The right default is to ask what genuinely branches the work.
+
+ **Clarify** when:
+
+ - Multiple plausible interpretations produce significantly different implementations
+ - The wrong interpretation wastes significant time or produces the wrong output
+ - A key parameter (scope, audience, constraint) changes the entire approach
+
+ **Infer and state assumptions** when:
+
+ - A quick read (repo structure, config file, existing code) can answer the question
+ - A single obvious interpretation covers 90%+ of plausible readings
+ - The user explicitly asked you to proceed
+
+ **Proceed without asking** when:
+
+ - The task is clear and unambiguous
+ - Discovery is faster than asking
+ - The cost of being slightly wrong is low and reversible
+
+ ---
+
+ ## The 1-5 Question Rule
+
+ Ask at most **5 questions** in the first pass. Prefer questions that eliminate entire branches of work.
+
+ If more than 5 things are unclear, rank by impact and ask the highest-impact ones first. More questions surface after the user's first answers.
+
+ ---
+
+ ## Fast-Path Design
+
+ Every clarification block should have a fast path. Users who know what they want shouldn't wade through 5 questions.
+
+ **Always include:**
+
+ - A compact reply format: `"Reply 1b 2a 3c to accept these options"`
+ - Default options explicitly labeled: `(default)` or _bolded_
+ - A fast-path shortcut: `"Reply 'defaults' to accept all recommended choices"`
+
+ **Example block:**
+
+ ```
+ Before I start, a few quick questions:
+
+ 1. **Scope?**
+    a) Only the requested function **(default)**
+    b) Refactor any touched code
+    c) Not sure — use default
+
+ 2. **Framework target?**
+    a) Match existing project **(default)**
+    b) Specify: ___
+
+ 3. **Test coverage?**
+    a) None needed **(default)**
+    b) Unit tests alongside
+    c) Full integration test
+
+ Reply with numbers and letters (e.g., `1a 2a 3b`) or `defaults` to proceed with all defaults.
+ ```
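The compact reply format above is also machine-parseable, which matters if the clarification step feeds a downstream script rather than a human. A Python sketch of a parser for replies like `1a 2b` or `defaults` (the function name and the defaults shape are mine, chosen to mirror the example block):

```python
import re

def parse_reply(reply: str, defaults: dict[int, str]) -> dict[int, str]:
    """Parse compact answers like '1a 2b', falling back to defaults."""
    choices = dict(defaults)
    if reply.strip().lower() == "defaults":
        return choices
    # Each answer is a question number followed by an option letter.
    for num, letter in re.findall(r"(\d+)\s*([a-z])", reply.lower()):
        choices[int(num)] = letter
    return choices

DEFAULTS = {1: "a", 2: "a", 3: "a"}
print(parse_reply("1a 2a 3b", DEFAULTS))
print(parse_reply("defaults", DEFAULTS))
```

Partial replies work too: `parse_reply("2b", DEFAULTS)` overrides only question 2 and keeps the defaults for the rest, which is exactly the friction-free behavior the fast path is after.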
+
+ ---
+
+ ## Three-Stage Context Gathering (for complex tasks)
+
+ Use this when a task is substantial enough that getting it wrong = significant wasted work. Borrowed from Anthropic's doc-coauthoring skill.
+
+ ### Stage 1: Initial Questions (meta-context)
+
+ Ask 3-5 questions about the big-picture framing before touching the content:
+
+ - What type of deliverable is this? (spec, code, doc, design, plan)
+ - Who's the audience / consumer of this output?
+ - What's the definition of done — what would make this clearly successful?
+ - Are there constraints (framework, format, performance bar, audience knowledge level)?
+ - Is there an existing template or precedent to follow?
+
+ Tell the user they can answer in shorthand. Offer: "Or just dump your context and I'll ask follow-ups."
+
+ ### Stage 2: Info Dump + Follow-up
+
+ After initial answers, invite a full brain dump:
+
+ > "Dump everything you know about this — background, prior decisions, constraints, blockers, opinions. Don't organize it, just get it out."
+
+ Then ask targeted follow-up questions based on gaps in what they provided. Aim for 5-10 numbered follow-ups. Users can use shorthand (e.g., "1: yes, 2: see previous context, 3: no").
+
+ **Exit condition for Stage 2:** You understand the objective, the constraints, and at least one clear definition of success.
+
+ ### Stage 3: Confirm Interpretation, Then Proceed
+
+ Restate the requirements in 1-3 sentences before starting work:
+
+ > "Here's my understanding: [objective in one sentence]. [Key constraint]. [What done looks like]. Starting now — let me know if anything's off."
+
+ ---
+
+ ## Reader Test (for deliverables)
+
+ When the deliverable is substantial (a plan, a document, a design decision), test it with a fresh context before handing it to the user.
+
+ **How:** Invoke a sub-agent or fresh prompt with only the deliverable (no conversation history) and ask:
+
+ - "What is this about?"
+ - "What are the key decisions made here?"
+ - "What's missing or unclear?"
+
+ If the fresh read surfaces gaps the user would have found, fix them first.
+
+ **When to use:** After generating complex plans, multi-section documents, architecture decisions, or any output that will be read by someone without conversation context.
+
+ ---
+
+ ## Clarification Anti-Patterns
+
+ Avoid these:
+
+ | Anti-pattern | Problem |
+ | ------------------------------------ | ------------------------------------------------------------ |
+ | Asking everything upfront | Overwhelms users; many questions are answerable by inference |
+ | Asking about things you can discover | Read the file/repo before asking about it |
+ | No default options | Forces users to reason through every option |
+ | Open-ended questions without choices | High friction; users don't know the option space |
+ | Not restating interpretation | User doesn't know what you understood |
+ | Asking the same question twice | Signals you didn't read the answer |
+ | Asking about reversible decisions | Just pick one and move on; it can be changed later |
+
+ ---
+
+ ## Decision: Which Pattern to Use
+
+ ```
+ Is the task clear and unambiguous?
+ → YES: Proceed. State assumptions inline if any.
+ → NO: Is missing info discoverable by reading files/code?
+   → YES: Read first, then proceed or ask a single targeted question.
+   → NO: Is this a quick task where wrong interpretation is cheap?
+     → YES: Proceed with stated assumptions, invite correction.
+     → NO: Use the 1-5 Question Rule or Three-Stage Context Gathering.
+ ```
+
+ Use Three-Stage Context Gathering only for substantial deliverables (docs, plans, architecture, complex features). For code tasks, the 1-5 question rule is usually sufficient.
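For completeness, the decision tree above reads mechanically as a four-flag function. A Python sketch (the flag and outcome names are my own labels for the tree's branches):

```python
def clarification_strategy(clear: bool, discoverable: bool,
                           cheap_if_wrong: bool, substantial: bool) -> str:
    """Mechanical reading of the decision tree above."""
    if clear:
        return "proceed"                    # state assumptions inline if any
    if discoverable:
        return "read-first"                 # read files/code, then proceed or ask once
    if cheap_if_wrong:
        return "proceed-with-assumptions"   # invite correction
    # Substantial deliverables get the heavyweight pattern.
    return "three-stage" if substantial else "1-5-questions"

print(clarification_strategy(clear=False, discoverable=False,
                             cheap_if_wrong=False, substantial=True))
```

Encoding the tree like this also gives you a cheap regression check: if a future edit to the reference changes a branch, the function and its tests have to change with it.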