@vibe-agent-toolkit/vat-example-cat-agents 0.1.32-rc.4 → 0.1.32
@@ -0,0 +1,729 @@

---
name: vat-example-cat-agents
description: Comprehensive orchestration guide for Claude Code using the vat-example-cat-agents toolkit
---
# VAT Example Cat Agents

Comprehensive orchestration guide for Claude Code using the vat-example-cat-agents toolkit.

## Purpose: For Claude Code, Not the LLM

**Key distinction:**

- **This file** = guidance for Claude Code (how to orchestrate agents)
- **Agent resources** = content for the LLM (what to say/know)

This skill references agent resources via markdown links but doesn't duplicate LLM prompts.
## Agent Inventory

The vat-example-cat-agents package provides 8 agents across 4 archetypes:

### Pure Function Tools (2 agents)

- **haiku-validator** - Validates 5-7-5 syllable structure plus kigo/kireji
- **name-validator** - Quirky characteristic-based validation

### One-Shot LLM Analyzers (4 agents)

- **photo-analyzer** - Vision LLM extracts characteristics from images
- **description-parser** - Text parsing extracts characteristics from descriptions
- **name-generator** - Creates characteristic-based cat names
- **haiku-generator** - Composes haikus about cats

### Conversational Assistant (1 agent)

- **breed-advisor** - Multi-turn breed selection through natural dialogue

### External Event Integrator (1 agent)

- **human-approval** - Human-in-the-loop (HITL) approval gate (mockable)
## When to Use This Skill

Trigger this skill when:

- User wants help selecting a cat breed
- User has a cat photo and wants analysis
- User needs a cat name suggestion
- User wants a haiku about their cat
- User needs validation of generated content (names, haikus)
- User wants to combine multiple agents in a workflow
## High-Level Orchestration Patterns

### Pattern 1: Single Agent (Simple)

Use when the user has a straightforward request that maps to one agent.

**Example workflows:**

- "What cat breed should I get?" → breed-advisor
- "Analyze this cat photo" → photo-analyzer
- "Is this a valid haiku?" → haiku-validator (pure function)
### Pattern 2: Sequential Pipeline (Multi-Agent)

Use when the output of one agent feeds into another.

**Example workflows:**

1. Photo → Characteristics → Name
   - photo-analyzer → name-generator
2. Photo → Characteristics → Haiku
   - photo-analyzer → haiku-generator
3. Description → Characteristics → Name → Validation
   - description-parser → name-generator → name-validator
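The first pipeline above can be sketched in TypeScript. The agent functions here are stubs standing in for the real photo-analyzer and name-generator calls; the names and the characteristics shape are assumptions for illustration, not the package's actual API.

```typescript
// Hypothetical stand-ins for the real agents; shapes are illustrative only.
interface CatCharacteristics {
  furColor: string;
  personality: string[];
}

async function analyzePhoto(_imagePath: string): Promise<CatCharacteristics> {
  // A real implementation would invoke the photo-analyzer agent here.
  return { furColor: 'orange', personality: ['playful'] };
}

async function generateCatName(c: CatCharacteristics): Promise<string> {
  // A real implementation would invoke the name-generator agent here.
  return `Captain ${c.furColor[0].toUpperCase() + c.furColor.slice(1)}`;
}

// Pipeline: each agent's output becomes the next agent's input.
async function photoToName(imagePath: string): Promise<string> {
  const characteristics = await analyzePhoto(imagePath);
  return generateCatName(characteristics);
}
```

Claude Code's job in this pattern is just the plumbing: call the first agent, hand its output to the second, and surface the final result.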
### Pattern 3: Generate-Validate Loop (Iterative)

Use when a generator produces content that needs validation with retry logic.

**Example workflows:**

1. Name generation with validation
   - Generate → Validate → If invalid, retry with feedback
   - Uses: name-generator + name-validator
2. Haiku generation with validation
   - Generate → Validate → If invalid, retry
   - Uses: haiku-generator + haiku-validator

**Orchestration tip:** Generator agents have NO knowledge of the validation rules. This is intentional - it forces iteration and tests feedback loops.
### Pattern 4: HITL Approval Gate (External Event)

Use when a decision requires human judgment.

**Example workflows:**

1. Breed application approval
   - Gather info → Generate application → human-approval → Process result
2. Name approval before finalization
   - Generate names → Present options → Human selects → Finalize
## Detailed Workflow Orchestration

### Workflow A: Breed Selection (Conversational)

**When:** User wants help finding the right cat breed.

**Agent:** breed-advisor

**Orchestration strategy:**

1. **Phase 1 - Gathering** (see Conversation Strategy)
   - Collect ≥4 factors, including music preference
   - ONE question at a time (don't bombard)
   - Extract factors after each turn
   - Monitor readiness: `factorsCollected >= 4 && musicPreference != null`

2. **Phase 2 - Recommendation**
   - Present 3-5 matched breeds
   - Allow exploration, questions, comparisons
   - Use conversational formatting (not a data dump)

3. **Phase 3 - Selection**
   - Detect selection signals ("I'll take", "sounds good")
   - Conclude gracefully
   - Provide next steps or an exit instruction

**Critical factor:** Music preference is the PRIMARY compatibility factor. Ask early and use it as a conversation anchor. See Music Preference Insight for the mappings.

**Reference implementation:** See cat-breed-selection.md for detailed breed selection orchestration.
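The Phase 1 readiness check can be sketched as a small pure function. The session shape here is an assumption for illustration; only the criterion itself (`factorsCollected >= 4 && musicPreference != null`) comes from the strategy above.

```typescript
// Hypothetical session shape; only the readiness criterion is from the docs.
interface BreedSession {
  factors: Record<string, string>; // e.g. { livingSpace: 'apartment', activity: 'high' }
  musicPreference: string | null;
}

// Claude Code evaluates this after extracting factors from each turn,
// transitioning to Phase 2 once it returns true.
function readyForRecommendations(session: BreedSession): boolean {
  const factorsCollected = Object.keys(session.factors).length;
  return factorsCollected >= 4 && session.musicPreference != null;
}
```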
### Workflow B: Photo Analysis Pipeline

**When:** User provides a cat photo and wants analysis, a name, or a haiku.

**Agents:** photo-analyzer → name-generator OR haiku-generator

**Orchestration strategy:**

```
Step 1: Analyze Photo
- Input: Image path/URL
- Agent: photo-analyzer
- Output: CatCharacteristics
- Note: Supports mock mode (reads EXIF) for testing

Step 2: Generate Content
- Input: CatCharacteristics from Step 1
- Agent: name-generator OR haiku-generator
- Output: NameSuggestion OR Haiku

Optional Step 3: Validate
- Input: Generated content + original characteristics
- Agent: name-validator OR haiku-validator (pure functions)
- Output: Validation result
- If invalid: Retry Step 2 with feedback
```

**Mockable behavior:** photo-analyzer reads EXIF metadata in mock mode. For production, set `mockable: false` to use the real vision API.
### Workflow C: Text Description Pipeline

**When:** User describes a cat in text (no photo).

**Agents:** description-parser → name-generator OR haiku-generator

**Orchestration strategy:**

```
Step 1: Parse Description
- Input: Text description
- Agent: description-parser
- Output: CatCharacteristics (same schema as photo-analyzer!)

Steps 2-3: Same as Workflow B
- Multi-modal convergence: photo and text produce the same schema
- Downstream agents (name-generator, haiku-generator) work with either
```

**Key insight:** The photo and text paths converge at the `CatCharacteristics` schema. This enables multi-modal workflows without agent changes.
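The convergence point can be illustrated with a type. This shape is inferred from the CLI example later in this file and is not the package's authoritative schema (which is likely zod-based, given the package's dependencies).

```typescript
// Illustrative shape only; the authoritative schema lives in the package.
interface CatCharacteristics {
  physical: {
    furColor: string;   // e.g. "Orange"
    furPattern: string; // e.g. "Tabby"
    size: 'small' | 'medium' | 'large';
  };
  behavioral: {
    personality: string[]; // e.g. ["Playful", "Curious"]
  };
}

// Downstream agents accept the type without caring which path produced it.
function summarize(c: CatCharacteristics): string {
  return `${c.physical.furColor} ${c.physical.furPattern}, ${c.behavioral.personality.join('/')}`;
}
```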
### Workflow D: Generate-Validate-Retry Loop

**When:** A generator produces content that must pass validation rules.

**Pattern:** Generator (LLM) + Validator (pure function) + retry logic

**Implementation:**

```typescript
// Pseudo-code for orchestration (wrapped in a function so `return` is valid;
// `generator` and `validator` stand for the paired agent + pure function)
async function generateValidated(characteristics) {
  let attempts = 0;
  const maxAttempts = 3;

  while (attempts < maxAttempts) {
    // Generate content
    const suggestion = await generator(characteristics);

    // Validate (pure function, instant)
    const validation = validator(suggestion, characteristics);

    if (validation.status === 'valid') {
      return suggestion; // Success!
    }

    // Retry with feedback
    attempts++;
    // Optional: Adjust approach based on validation.reason
  }

  throw new Error(`Could not generate valid content after ${maxAttempts} attempts`);
}
```

**Why this pattern works:**

- The generator has NO knowledge of the validation rules (isolated concerns)
- The validator is deterministic (pure function, fast)
- The feedback loop tests multi-turn orchestration
- A ~60-70% initial rejection rate forces iteration

**Agents using this pattern:**

- name-generator + name-validator
- haiku-generator + haiku-validator
### Workflow E: HITL Approval Gate

**When:** A decision requires human judgment (compliance, taste, ethics).

**Agent:** human-approval

**Orchestration strategy:**

```
Step 1: Prepare Request
- Gather all necessary context
- Format clearly for the human reviewer
- Include relevant data (characteristics, generated content, reasoning)

Step 2: Emit Approval Request
- Agent: human-approval
- Input: Request payload
- Behavior: Blocks waiting for a response
- Timeout: Configurable (default: 60s)

Step 3: Handle Response
- Approved: Continue the workflow
- Rejected: Handle gracefully (retry, inform the user, etc.)
- Timeout: Fall back to a safe default or escalate
```

**Mockable behavior:** Set `mockable: true` to auto-approve without a human (useful for testing).

**Real-world uses:**

- Breeding application approval
- Name selection from alternatives
- Content moderation decisions
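Step 2's blocking-with-timeout behavior can be sketched with race-style logic. `requestApproval` is a hypothetical stand-in for emitting the event and awaiting a human (here it auto-approves, like mock mode); only the 60s default comes from the text above.

```typescript
type ApprovalResult = { status: 'approved' | 'rejected' | 'timeout' };

// Hypothetical stand-in for the human-approval agent; auto-approves like mock mode.
async function requestApproval(_request: unknown): Promise<ApprovalResult> {
  return { status: 'approved' };
}

// Resolve with a fallback value if `pending` takes longer than `ms`.
function withTimeout<T>(pending: Promise<T>, ms: number, fallback: T): Promise<T> {
  return new Promise<T>((resolve) => {
    const timer = setTimeout(() => resolve(fallback), ms);
    pending.then(
      (value) => { clearTimeout(timer); resolve(value); },
      () => { clearTimeout(timer); resolve(fallback); },
    );
  });
}

async function approvalGate(request: unknown): Promise<ApprovalResult> {
  return withTimeout(requestApproval(request), 60_000, { status: 'timeout' });
}
```

The orchestrator then branches on `status`: continue on `approved`, handle `rejected` gracefully, and fall back or escalate on `timeout`.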
## CLI Exposure for Pure Functions

Pure function tools (the validators) can be exposed via the CLI for direct invocation.

### Usage Pattern

```bash
# Pass JSON input, get JSON/YAML output
echo '{"name": "Mr. Whiskers", "characteristics": {...}}' | vat agents validate-name

# Output format (using 2>&1):
# [stdout - complete output]
# ---
# [stderr - error messages]
```

### Implementation Requirements

1. **Input:** JSON on stdin
2. **Output:** JSON or YAML on stdout (complete, flushed)
3. **Errors:** stderr (separated with `---` when using `2>&1`)
4. **Exit codes:** 0 = success, 1 = validation failure, 2 = error
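A minimal sketch of that contract, with the validation itself stubbed out (the real quirky rules live in name-validator); the schema error text mirrors the error-handling example later in this file.

```typescript
// Pure core of a CLI validator: raw stdin text in, streams + exit code out.
type CliResult = { stdout: string; stderr: string; exitCode: 0 | 1 | 2 };

function runValidateName(rawInput: string): CliResult {
  let parsed: { name?: unknown; characteristics?: unknown };
  try {
    parsed = JSON.parse(rawInput);
  } catch {
    return { stdout: '', stderr: 'Error: stdin was not valid JSON', exitCode: 2 };
  }
  if (parsed == null || typeof parsed !== 'object' ||
      typeof parsed.name !== 'string' || parsed.characteristics == null) {
    return {
      stdout: '',
      stderr: "Error: Invalid input schema. Expected 'name' and 'characteristics' fields.",
      exitCode: 2,
    };
  }
  const valid = parsed.name.trim().length > 0; // stand-in for the real quirky rules
  return {
    stdout: JSON.stringify({ status: valid ? 'valid' : 'invalid' }),
    stderr: '',
    exitCode: valid ? 0 : 1,
  };
}
```

A thin wrapper would read all of stdin, call this function, write stdout, print the `---` separator, write stderr, and exit with the code.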
### Sequence Example

```
[Start]
↓
Read stdin (JSON input)
↓
Parse and validate input schema
↓
Execute pure function
↓
Flush stdout with complete result
↓
Print "---" separator
↓
Flush stderr with any error messages
↓
Exit with appropriate code
```

### Benefits

- MCP can map to CLI calls (fast, stateless)
- No long-running processes for pure functions
- Clear separation of output vs errors
- Composable with other CLI tools

### Applicable Agents

- **name-validator** - `vat agents validate-name`
- **haiku-validator** - `vat agents validate-haiku`
- Future: any pure function tool
## What Claude Code Does vs What Agents Do

### Claude Code's Role (This Skill)

**Orchestration:**

- Select which agent(s) to use
- Chain agents in workflows (pipelines)
- Manage state between agents (pass outputs as inputs)
- Handle retries and error recovery

**Monitoring:**

- Track conversation phase (gathering, recommendation, selection)
- Count validation attempts (retry limits)
- Detect completion signals (phase transitions)
- Monitor timeouts (HITL approvals)

**Decision Making:**

- When to transition between phases
- When to retry vs give up
- Which agent path to take (photo vs text)
- How to present results to the user

### Agents' Role (Agent Resources)

**Pure Functions:**

- Execute deterministic logic (validation rules)
- Return results instantly
- No side effects, no state

**LLM Analyzers:**

- Extract structured data from unstructured input
- Apply domain knowledge to classification
- Generate creative content (names, haikus)

**Conversational Assistants:**

- Conduct natural dialogue
- Accumulate context over turns
- Make recommendations based on collected factors

**Event Integrators:**

- Emit events to external systems
- Block waiting for responses
- Handle timeouts and errors

**Key insight:** Agents are specialized and focused on their domain. Claude Code provides the glue, orchestration, and workflow logic.
## Content Separation Guidelines

### What Belongs in Agent Resources (LLM-Facing)

✅ System prompts and LLM instructions
✅ Domain knowledge (breed database, syllable counting rules, kigo lists)
✅ Extraction formats (JSON schemas, output templates)
✅ Examples for few-shot learning
✅ Natural language mappings
✅ Validation rules and constraints

**Location:** `resources/agents/*.md`

### What Belongs in Skills (Claude Code-Facing)

✅ When to trigger agents (user intent signals)
✅ How to chain agents (workflow patterns)
✅ What to monitor (readiness criteria, retry limits)
✅ Debugging guidance (common issues, pitfalls)
✅ Meta-strategy (why photo first, when to retry)
✅ CLI exposure patterns

**Location:** `resources/skills/*.md` (this file)

### Overlap (Reference, Don't Duplicate)

⚠️ Workflow structure (skill explains orchestration, agent implements)
⚠️ Phase transitions (skill monitors, agent executes)
⚠️ Validation criteria (skill manages retries, agent defines rules)

**Resolution:** Keep agent resources authoritative. Skills REFERENCE via links; they don't duplicate.
## Common Pitfalls

### ❌ Don't: Call Agents Without Understanding Their Archetype

Each archetype has different behavior:

- Pure functions: instant, deterministic
- LLM analyzers: single call, non-deterministic
- Conversational: multi-turn, stateful
- Event integrators: blocking, timeout handling

### ✅ Do: Match Orchestration to Archetype

```
Pure function → Call directly, no retry needed
LLM analyzer → Call once, parse result
Conversational → Multi-turn loop with state
Event integrator → Emit, wait, handle timeout
```

### ❌ Don't: Skip Validation in Generate-Validate Loops

Validators exist for a reason. Don't bypass them or assume LLM output is always valid.

### ✅ Do: Embrace the Feedback Loop

```
Generate → Validate → If invalid, retry with feedback
```

This pattern tests real-world orchestration, where first attempts often fail.
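A sketch of the retry helper, showing the validator's reason being fed back into the next generation attempt. The generator and validator here are illustrative callbacks, not the package's actual API.

```typescript
// Illustrative callbacks; the real pair is a generator agent + pure validator.
type Validation = { status: 'valid' | 'invalid'; reason?: string };

async function generateWithFeedback(
  generate: (feedback?: string) => Promise<string>,
  validate: (candidate: string) => Validation,
  maxAttempts = 3,
): Promise<string> {
  let feedback: string | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = await generate(feedback);
    const result = validate(candidate);
    if (result.status === 'valid') return candidate;
    feedback = result.reason; // carry the rejection reason into the next attempt
  }
  throw new Error(`No valid candidate after ${maxAttempts} attempts`);
}
```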
### ❌ Don't: Mix Mock and Production Modes Accidentally

Mock mode (EXIF metadata) is fast and free. Production mode (real APIs) is slow and expensive. Be explicit about which mode you're using.

### ✅ Do: Use Mock Mode for Development, Production for Deployment

```typescript
// Development/testing
const mockResult = await analyzePhoto(path, { mockable: true });

// Production
const realResult = await analyzePhoto(path, { mockable: false });
```

### ❌ Don't: Bombard Users with Questions in Conversational Flows

One question at a time. Give users space to think and respond naturally.

### ✅ Do: Guide the Conversation Gently

```
Bad: "Tell me your music, living space, activity level, grooming, family, and allergies"
Good: "What's your favorite type of music?" → [response] → "Great! Tell me about your living space..."
```
## Debugging: When Things Go Wrong

### Issue: Photo Analysis Fails

**Symptoms:** Vision API errors, unexpected characteristics

**Debug steps:**

1. Check that the image path is valid
2. Verify the image is actually a photo (not text, a PDF, etc.)
3. Try mock mode first: `{ mockable: true }`
4. Check the API key and rate limits for production mode
5. Verify the EXIF metadata format if using mock mode

### Issue: Name Validation Always Fails

**Symptoms:** Name generator can't produce valid names after 3+ attempts

**Debug steps:**

1. Check the validation rules: name-validator has quirky requirements
2. Review the characteristics: validation rules depend on cat traits
3. Verify the characteristics schema: are all required fields present?
4. Check the feedback loop: is validation.reason being used for retries?

**Common cause:** A ~60-70% rejection rate is EXPECTED. This forces iteration and tests feedback loops.

### Issue: Haiku Validation Rejects Everything

**Symptoms:** Haiku generator struggles to meet the 5-7-5 syllable structure

**Debug steps:**

1. Check the syllable counting algorithm: it may differ from the LLM's internal counting
2. Verify kigo (seasonal word) presence: required for a valid haiku
3. Check kireji (cutting word) detection: optional, but it improves the score
4. Review the haiku format: must be exactly 3 lines

**Solution:** Allow multiple retry attempts (3-5). Haiku generation is hard.
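For reference, here is a naive syllable check based on counting vowel groups. The package's dependency list includes the `syllable` library, which is far more accurate; this sketch only illustrates why a validator's count can disagree with the LLM's internal counting.

```typescript
// Naive approximation: count groups of consecutive vowels.
// Misses silent 'e', diphthong splits, etc. Illustrative only.
function countSyllables(line: string): number {
  const groups = line.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 0;
}

function check575(lines: string[]): boolean {
  if (lines.length !== 3) return false; // must be exactly 3 lines
  const counts = lines.map(countSyllables);
  return counts[0] === 5 && counts[1] === 7 && counts[2] === 5;
}
```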
### Issue: Conversation Stalls in Breed Selection

**Symptoms:** Agent doesn't transition to recommendations

**Debug steps:**

1. Check the factor count: ≥4 factors must be collected
2. Verify the music preference: REQUIRED for the transition
3. Review the conversation history: are factors being extracted?
4. Check the readiness criteria: session state vs actual factors

**Solution:** Ensure extraction happens after each turn. Monitor the `factorsCollected` metadata.

### Issue: HITL Approval Times Out

**Symptoms:** human-approval agent returns a timeout status

**Debug steps:**

1. Check the timeout value: the default is 60s and may need adjustment
2. Verify event emission: is the request actually sent?
3. Check mock mode: set `mockable: true` for testing
4. Review the request format: is it clear what needs approval?

**Solution:** For testing, use mock mode. For production, increase the timeout or add retry logic.
## Agent Resource Links

### Pure Function Tools

- name-validator (no agent resource - pure TypeScript logic)
- haiku-validator (no agent resource - pure TypeScript logic)

### One-Shot LLM Analyzers

- photo-analyzer - Vision analysis
- description-parser - Text parsing
- name-generator - Creative naming
- haiku-generator - Haiku composition

### Conversational Assistant

- breed-advisor - Multi-turn breed selection
  - Welcome Message
  - Music Preference Insight
  - Factor Definitions
  - Conversation Strategy
  - Factor Extraction Prompt
  - Transition Message
  - Recommendation Presentation
  - Selection Extraction
  - Conclusion Prompt

### External Event Integrator

- human-approval - HITL approval gate

### Related Documentation

- cat-breed-selection.md - Detailed breed advisor orchestration
- Package README - Human-facing documentation
- Package Structure - Technical organization
## Success Criteria

Your orchestration is successful when:

**For Single Agents:**

- The agent is called with a valid input schema
- Output is parsed and validated correctly
- Errors are handled gracefully
- The user receives clear, actionable results

**For Pipelines:**

- Data flows correctly between agents
- Schema compatibility is maintained (CatCharacteristics convergence)
- Intermediate results are stored appropriately
- The final output meets the user's original intent

**For Loops:**

- Retry logic prevents infinite loops (max attempts)
- Feedback improves subsequent generations
- The user is informed of progress ("Generating... Attempt 2 of 3")
- Success is achieved within a reasonable number of attempts

**For Conversational Flows:**

- The user feels heard, not interrogated
- A natural dialogue rhythm is maintained
- Factors are collected efficiently (4-6 turns typical)
- Recommendations feel personalized
- The selection is confirmed clearly

**For HITL Workflows:**

- The request is clearly formatted for the human reviewer
- Timeout handling prevents indefinite blocking
- Approval/rejection is handled appropriately
- The user understands what happened
## Advanced Orchestration Patterns

### Pattern: Parallel Execution

When agents don't depend on each other, run them in parallel.

**Example:** Generate both a name and a haiku from the same characteristics.

```typescript
const [name, haiku] = await Promise.all([
  generateCatName(characteristics),
  generateCatHaiku(characteristics),
]);
```

**Benefits:** Faster execution, better user experience

### Pattern: Fallback Chain

When one agent fails, try alternatives.

**Example:** Photo analysis with a text description fallback.

```typescript
let characteristics;
try {
  characteristics = await analyzePhoto(imagePath);
} catch {
  // Fallback to text description
  const description = await getUserDescription();
  characteristics = await parseDescription(description);
}
```

**Benefits:** Resilience, better error handling

### Pattern: Conditional Routing

Choose the agent path based on the user's input type.

**Example:** Multi-modal input handling.

```typescript
if (input.type === 'image') {
  characteristics = await analyzePhoto(input.imagePath);
} else if (input.type === 'text') {
  characteristics = await parseDescription(input.description);
} else {
  // Conversational gathering
  characteristics = await breedAdvisor.gather();
}
```

**Benefits:** Flexible input handling, better UX

### Pattern: Staged Approval

Break complex workflows into approval stages.

**Example:** Multi-stage breeding application.

```typescript
// Stage 1: Basic info approval
const basicApproval = await humanApproval({ stage: 'basic', data });
if (!basicApproval.approved) return;

// Stage 2: Detailed questionnaire
const detailApproval = await humanApproval({ stage: 'detail', data });
if (!detailApproval.approved) return;

// Stage 3: Final review
const finalApproval = await humanApproval({ stage: 'final', data });
```

**Benefits:** Checkpoints prevent wasted work, clearer decision points
## CLI Integration Examples

### Example 1: Validate Name via CLI

```bash
# Input
echo '{
  "name": "Mr. Whiskers",
  "characteristics": {
    "physical": {
      "furColor": "Orange",
      "furPattern": "Tabby",
      "size": "medium"
    },
    "behavioral": {
      "personality": ["Playful", "Curious"]
    }
  }
}' | vat agents validate-name

# Output (stdout)
{
  "status": "valid",
  "reason": "Name meets all quirky validation rules",
  "confidence": 0.95
}
---
# (stderr - empty if no errors)
```

### Example 2: Validate Haiku via CLI

```bash
# Input
echo '{
  "lines": [
    "Orange sunset fur",
    "Paws dance across the tatami",
    "Zen master purrs"
  ]
}' | vat agents validate-haiku

# Output (stdout)
{
  "status": "valid",
  "syllableCounts": [5, 7, 5],
  "hasKigo": true,
  "kigo": "sunset",
  "hasKireji": false,
  "errors": []
}
---
# (stderr - empty if no errors)
```

### Example 3: Error Handling

```bash
# Invalid input
echo '{"invalid": "schema"}' | vat agents validate-name

# Output
# (stdout - empty)
---
Error: Invalid input schema. Expected 'name' and 'characteristics' fields.
Schema validation failed:
- Missing required field: name
- Missing required field: characteristics
# (exit code: 2)
```
## Next Steps

Once you've successfully orchestrated agents:

1. **Explore Combinations** - Try chaining agents in new ways
2. **Add Custom Validation** - Create your own quirky validation rules
3. **Build New Workflows** - Combine agents for complex use cases
4. **Contribute Patterns** - Share successful orchestration strategies
5. **Package Skills** - Create distributable skill packages with pure functions

## Questions?

- **For orchestration patterns:** This file (SKILL.md)
- **For agent-specific details:** See the individual agent resources
- **For implementation examples:** See the examples/ directory
- **For archetype theory:** See the package README.md
package/package.json (CHANGED)

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@vibe-agent-toolkit/vat-example-cat-agents",
-  "version": "0.1.32-rc.4",
+  "version": "0.1.32",
   "description": "Example agents: 8 quirky cat agents demonstrating VAT patterns",
   "type": "module",
   "exports": {
@@ -37,9 +37,9 @@
     "postinstall": "vat skills install --npm-postinstall || exit 0"
   },
   "dependencies": {
-    "@vibe-agent-toolkit/agent-runtime": "0.1.32-rc.4",
-    "@vibe-agent-toolkit/agent-schema": "0.1.32-rc.4",
-    "@vibe-agent-toolkit/transports": "0.1.32-rc.4",
+    "@vibe-agent-toolkit/agent-runtime": "0.1.32",
+    "@vibe-agent-toolkit/agent-schema": "0.1.32",
+    "@vibe-agent-toolkit/transports": "0.1.32",
     "syllable": "^5.0.1",
     "zod": "^3.24.1"
   },
@@ -48,10 +48,10 @@
     "@anthropic-ai/sdk": "^0.36.0",
     "@langchain/openai": "^0.3.16",
     "@types/node": "^22.10.5",
-    "@vibe-agent-toolkit/resource-compiler": "0.1.32-rc.4",
-    "@vibe-agent-toolkit/runtime-claude-agent-sdk": "0.1.32-rc.4",
-    "@vibe-agent-toolkit/runtime-langchain": "0.1.32-rc.4",
-    "@vibe-agent-toolkit/runtime-openai": "0.1.32-rc.4",
+    "@vibe-agent-toolkit/resource-compiler": "0.1.32",
+    "@vibe-agent-toolkit/runtime-claude-agent-sdk": "0.1.32",
+    "@vibe-agent-toolkit/runtime-langchain": "0.1.32",
+    "@vibe-agent-toolkit/runtime-openai": "0.1.32",
     "@vitest/coverage-v8": "2.1.8",
     "ai": "^6.0.48",
     "openai": "^4.77.3",
```