@octavus/docs 2.2.0 → 2.3.0

@@ -0,0 +1,480 @@
1
+ ---
2
+ title: Workers
3
+ description: Defining worker agents for background and task-based execution.
4
+ ---
5
+
6
+ # Workers
7
+
8
+ Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.
9
+
10
+ ## When to Use Workers
11
+
12
+ Workers are ideal for:
13
+
14
+ - **Background processing** — Long-running tasks that don't need conversation
15
+ - **Composable tasks** — Reusable units of work called by other agents
16
+ - **Pipelines** — Multi-step processing with structured output
17
+ - **Parallel execution** — Tasks that can run independently
18
+
19
+ Use interactive agents instead when:
20
+
21
+ - **Conversation is needed** — Multi-turn dialogue with users
22
+ - **Persistence matters** — State should survive across interactions
23
+ - **Session context is needed** — User context must persist across interactions
24
+
25
+ ## Worker vs Interactive
26
+
27
+ | Aspect | Interactive | Worker |
28
+ | ---------- | ---------------------------------- | ----------------------------- |
29
+ | Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |
30
+ | LLM Config | Global `agent:` section | Per-thread via `start-thread` |
31
+ | Invocation | Fire a named trigger | Direct execution with input |
32
+ | Session | Persists across triggers (24h TTL) | Single execution |
33
+ | Result | Streaming chat | Streaming + output value |
34
+
35
+ ## Protocol Structure
36
+
37
+ Workers use a simpler protocol structure than interactive agents:
38
+
39
+ ```yaml
40
+ # Input schema - provided when worker is executed
41
+ input:
42
+ TOPIC:
43
+ type: string
44
+ description: Topic to research
45
+ DEPTH:
46
+ type: string
47
+ optional: true
48
+ default: medium
49
+
50
+ # Variables for intermediate results
51
+ variables:
52
+ RESEARCH_DATA:
53
+ type: string
54
+ ANALYSIS:
55
+ type: string
56
+ description: Final analysis result
57
+
58
+ # Tools available to the worker
59
+ tools:
60
+ web-search:
61
+ description: Search the web
62
+ parameters:
63
+ query: { type: string }
64
+
65
+ # Sequential execution steps
66
+ steps:
67
+ Start research:
68
+ block: start-thread
69
+ thread: research
70
+ model: anthropic/claude-sonnet-4-5
71
+ system: research-system
72
+ input: [TOPIC, DEPTH]
73
+ tools: [web-search]
74
+ maxSteps: 5
75
+
76
+ Add research request:
77
+ block: add-message
78
+ thread: research
79
+ role: user
80
+ prompt: research-prompt
81
+ input: [TOPIC, DEPTH]
82
+
83
+ Generate research:
84
+ block: next-message
85
+ thread: research
86
+ output: RESEARCH_DATA
87
+
88
+ Start analysis:
89
+ block: start-thread
90
+ thread: analysis
91
+ model: anthropic/claude-sonnet-4-5
92
+ system: analysis-system
93
+
94
+ Add analysis request:
95
+ block: add-message
96
+ thread: analysis
97
+ role: user
98
+ prompt: analysis-prompt
99
+ input: [RESEARCH_DATA]
100
+
101
+ Generate analysis:
102
+ block: next-message
103
+ thread: analysis
104
+ output: ANALYSIS
105
+
106
+ # Output variable - the worker's return value
107
+ output: ANALYSIS
108
+ ```
109
+
110
+ ## settings.json
111
+
112
+ Workers are identified by the `format` field:
113
+
114
+ ```json
115
+ {
116
+ "slug": "research-assistant",
117
+ "name": "Research Assistant",
118
+ "description": "Researches topics and returns structured analysis",
119
+ "format": "worker"
120
+ }
121
+ ```
122
+
123
+ ## Key Differences
124
+
125
+ ### No Global Agent Config
126
+
127
+ Interactive agents have a global `agent:` section that configures a main thread. Workers don't have this — every thread must be explicitly created via `start-thread`:
128
+
129
+ ```yaml
130
+ # Interactive agent: Global config
131
+ agent:
132
+ model: anthropic/claude-sonnet-4-5
133
+ system: system
134
+ tools: [tool-a, tool-b]
135
+
136
+ # Worker: Each thread configured independently
137
+ steps:
138
+ Start thread A:
139
+ block: start-thread
140
+ thread: research
141
+ model: anthropic/claude-sonnet-4-5
142
+ tools: [tool-a]
143
+
144
+ Start thread B:
145
+ block: start-thread
146
+ thread: analysis
147
+ model: openai/gpt-4o
148
+ tools: [tool-b]
149
+ ```
150
+
151
+ This gives workers flexibility to use different models, tools, and settings at different stages.
152
+
153
+ ### Steps Instead of Handlers
154
+
155
+ Workers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:
156
+
157
+ ```yaml
158
+ # Interactive: Handlers respond to triggers
159
+ handlers:
160
+ user-message:
161
+ Add message:
162
+ block: add-message
163
+ # ...
164
+
165
+ # Worker: Steps execute in sequence
166
+ steps:
167
+ Add message:
168
+ block: add-message
169
+ # ...
170
+ ```
171
+
172
+ ### Output Value
173
+
174
+ Workers can return an output value to the caller:
175
+
176
+ ```yaml
177
+ variables:
178
+ RESULT:
179
+ type: string
180
+
181
+ steps:
182
+ # ... steps that populate RESULT ...
183
+
184
+ output: RESULT # Return this variable's value
185
+ ```
186
+
187
+ The `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.
188
+
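+ For instance, a worker that only streams a drafted reply to the caller can omit the top-level `output:` entirely (a minimal sketch; the `draft` thread is assumed to have been started and given a user message in earlier steps):
+
+ ```yaml
+ variables:
+   DRAFT:
+     type: string
+
+ steps:
+   # ... start-thread and add-message steps for the `draft` thread ...
+
+   Generate draft:
+     block: next-message
+     thread: draft
+     output: DRAFT
+     display: stream
+
+ # No top-level `output:`, so the worker streams its result but returns no value
+ ```
+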
189
+ ## Available Blocks
190
+
191
+ Workers support the same blocks as handlers (the `tool-call` block is sketched after the table):
192
+
193
+ | Block | Purpose |
194
+ | ------------------ | -------------------------------------------- |
195
+ | `start-thread` | Create a named thread with LLM configuration |
196
+ | `add-message` | Add a message to a thread |
197
+ | `next-message` | Generate LLM response |
198
+ | `tool-call` | Call a tool deterministically |
199
+ | `set-resource` | Update a resource value |
200
+ | `serialize-thread` | Convert thread to text |
201
+ | `generate-image` | Generate an image from a prompt variable |
202
+
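+ The `tool-call` block runs a tool without involving the LLM. A minimal sketch follows; the `tool`, `input`, and `output` field names are assumptions inferred from the `run-worker` block shown later, not confirmed syntax, and `ACCOUNT_DATA` is a hypothetical variable:
+
+ ```yaml
+ steps:
+   Fetch account:
+     block: tool-call
+     tool: get-user-account      # assumed field: which declared tool to call
+     input:
+       userId: USER_ID           # assumed mapping: tool parameter receives this variable
+     output: ACCOUNT_DATA        # variable that receives the tool result
+ ```
+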
203
+ ### start-thread (Required for LLM)
204
+
205
+ Every thread must be initialized with `start-thread` before using `next-message`:
206
+
207
+ ```yaml
208
+ steps:
209
+ Start research:
210
+ block: start-thread
211
+ thread: research
212
+ model: anthropic/claude-sonnet-4-5
213
+ system: research-system
214
+ input: [TOPIC]
215
+ tools: [web-search]
216
+ thinking: medium
217
+ maxSteps: 5
218
+ ```
219
+
220
+ All LLM configuration goes here (the less common fields are sketched after the table):
221
+
222
+ | Field | Description |
223
+ | ------------- | ------------------------------------------------- |
224
+ | `thread` | Thread name (defaults to block name) |
225
+ | `model` | LLM model to use |
226
+ | `system` | System prompt filename (required) |
227
+ | `input` | Variables for system prompt |
228
+ | `tools` | Tools available in this thread |
229
+ | `workers` | Workers available to this thread (as LLM tools) |
230
+ | `imageModel` | Image generation model |
231
+ | `thinking` | Extended reasoning level |
232
+ | `temperature` | Model temperature |
233
+ | `maxSteps` | Maximum tool-call cycles; values above 1 enable agentic behavior |
234
+
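+ A sketch of the less common fields in use (the worker name assumes a matching `workers:` declaration in the protocol, and the image model slug is illustrative, not a confirmed value):
+
+ ```yaml
+ steps:
+   Start creative thread:
+     block: start-thread
+     thread: creative
+     model: anthropic/claude-sonnet-4-5
+     system: creative-system
+     workers: [generate-title]        # workers the LLM can call as tools
+     imageModel: openai/gpt-image-1   # illustrative image model slug
+     temperature: 0.7
+     thinking: low
+     maxSteps: 3
+ ```
+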
235
+ ## Simple Example
236
+
237
+ A worker that generates a title from a summary:
238
+
239
+ ```yaml
240
+ # Input
241
+ input:
242
+ CONVERSATION_SUMMARY:
243
+ type: string
244
+ description: Summary to generate a title for
245
+
246
+ # Variables
247
+ variables:
248
+ TITLE:
249
+ type: string
250
+ description: The generated title
251
+
252
+ # Steps
253
+ steps:
254
+ Start title thread:
255
+ block: start-thread
256
+ thread: title-gen
257
+ model: anthropic/claude-sonnet-4-5
258
+ system: title-system
259
+
260
+ Add title request:
261
+ block: add-message
262
+ thread: title-gen
263
+ role: user
264
+ prompt: title-request
265
+ input: [CONVERSATION_SUMMARY]
266
+
267
+ Generate title:
268
+ block: next-message
269
+ thread: title-gen
270
+ output: TITLE
271
+ display: stream
272
+
273
+ # Output
274
+ output: TITLE
275
+ ```
276
+
277
+ ## Advanced Example
278
+
279
+ A worker with multiple threads, tools, and agentic behavior:
280
+
281
+ ```yaml
282
+ input:
283
+ USER_MESSAGE:
284
+ type: string
285
+ description: The user's message to respond to
286
+ USER_ID:
287
+ type: string
288
+ description: User ID for account lookups
289
+ optional: true
290
+
291
+ tools:
292
+ get-user-account:
293
+ description: Looking up account information
294
+ parameters:
295
+ userId: { type: string }
296
+ create-support-ticket:
297
+ description: Creating a support ticket
298
+ parameters:
299
+ summary: { type: string }
300
+ priority: { type: string }
301
+
302
+ variables:
303
+ ASSISTANT_RESPONSE:
304
+ type: string
305
+ CHAT_TRANSCRIPT:
306
+ type: string
307
+ CONVERSATION_SUMMARY:
308
+ type: string
309
+
310
+ steps:
311
+ # Thread 1: Chat with agentic tool calling
312
+ Start chat thread:
313
+ block: start-thread
314
+ thread: chat
315
+ model: anthropic/claude-sonnet-4-5
316
+ system: chat-system
317
+ input: [USER_ID]
318
+ tools: [get-user-account, create-support-ticket]
319
+ thinking: medium
320
+ maxSteps: 5
321
+
322
+ Add user message:
323
+ block: add-message
324
+ thread: chat
325
+ role: user
326
+ prompt: user-message
327
+ input: [USER_MESSAGE]
328
+
329
+ Generate response:
330
+ block: next-message
331
+ thread: chat
332
+ output: ASSISTANT_RESPONSE
333
+ display: stream
334
+
335
+ # Serialize for summary
336
+ Save conversation:
337
+ block: serialize-thread
338
+ thread: chat
339
+ output: CHAT_TRANSCRIPT
340
+
341
+ # Thread 2: Summary generation
342
+ Start summary thread:
343
+ block: start-thread
344
+ thread: summary
345
+ model: anthropic/claude-sonnet-4-5
346
+ system: summary-system
347
+ thinking: low
348
+
349
+ Add summary request:
350
+ block: add-message
351
+ thread: summary
352
+ role: user
353
+ prompt: summary-request
354
+ input: [CHAT_TRANSCRIPT]
355
+
356
+ Generate summary:
357
+ block: next-message
358
+ thread: summary
359
+ output: CONVERSATION_SUMMARY
360
+ display: stream
361
+
362
+ output: CONVERSATION_SUMMARY
363
+ ```
364
+
365
+ ## Tool Handling
366
+
367
+ Workers support the same tool handling as interactive agents:
368
+
369
+ - **Server tools** — Handled by tool handlers you provide
370
+ - **Client tools** — Pause execution and return the tool request to the caller
371
+
372
+ ```typescript
373
+ // Execute the worker, providing a handler for its `web-search` server tool
+ const events = client.workers.execute(
374
+ agentId,
375
+ { TOPIC: 'AI safety' },
376
+ {
377
+ tools: {
378
+ 'web-search': async (args) => {
379
+ return await searchWeb(args.query);
380
+ },
381
+ },
382
+ },
383
+ );
384
+ ```
385
+
386
+ See [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.
387
+
388
+ ## Stream Events
389
+
390
+ Workers emit the same events as interactive agents, plus worker-specific events:
391
+
392
+ | Event | Description |
393
+ | --------------- | ---------------------------------- |
394
+ | `worker-start` | Worker execution begins |
395
+ | `worker-result` | Worker completes (includes output) |
396
+
397
+ All standard events (text-delta, tool calls, etc.) are also emitted.
398
+
399
+ ## Calling Workers from Interactive Agents
400
+
401
+ Interactive agents can call workers in two ways:
402
+
403
+ 1. **Deterministically** — Using the `run-worker` block
404
+ 2. **Agentically** — LLM calls worker as a tool
405
+
406
+ ### Worker Declaration
407
+
408
+ First, declare workers in your interactive agent's protocol:
409
+
410
+ ```yaml
411
+ workers:
412
+ generate-title:
413
+ description: Generating conversation title
414
+ display: description
415
+ research-assistant:
416
+ description: Researching topic
417
+ display: stream
418
+ tools:
419
+ search: web-search # Map worker tool → parent tool
420
+ ```
421
+
422
+ ### run-worker Block
423
+
424
+ Call a worker deterministically from a handler:
425
+
426
+ ```yaml
427
+ handlers:
428
+ request-human:
429
+ Generate title:
430
+ block: run-worker
431
+ worker: generate-title
432
+ input:
433
+ CONVERSATION_SUMMARY: SUMMARY # Worker's input ← parent's SUMMARY variable
434
+ output: CONVERSATION_TITLE
435
+ ```
436
+
437
+ ### LLM Tool Invocation
438
+
439
+ Make workers available to the LLM:
440
+
441
+ ```yaml
442
+ agent:
443
+ model: anthropic/claude-sonnet-4-5
444
+ system: system
445
+ workers: [generate-title, research-assistant]
446
+ agentic: true
447
+ ```
448
+
449
+ The LLM can then call workers as tools during conversation.
450
+
451
+ ### Display Modes
452
+
453
+ Control how worker execution appears to users (example after the table):
454
+
455
+ | Mode | Behavior |
456
+ | ------------- | --------------------------------- |
457
+ | `hidden` | Worker runs silently |
458
+ | `name` | Shows worker name |
459
+ | `description` | Shows description text |
460
+ | `stream` | Streams all worker events to user |
461
+
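+ For example, a background worker can be declared with `display: hidden` so nothing is surfaced to the user while it runs (`log-analytics` is a hypothetical worker name):
+
+ ```yaml
+ workers:
+   log-analytics:
+     description: Recording usage analytics
+     display: hidden
+ ```
+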
462
+ ### Tool Mapping
463
+
464
+ Map parent tools to worker tools when the worker needs access to your tool handlers:
465
+
466
+ ```yaml
467
+ workers:
468
+ research-assistant:
469
+ description: Research topics
470
+ tools:
471
+ search: web-search # Worker's "search" → parent's "web-search"
472
+ ```
473
+
474
+ When the worker calls its `search` tool, your `web-search` handler executes.
475
+
476
+ ## Next Steps
477
+
478
+ - [Server SDK Workers](/docs/server-sdk/workers) — Executing workers from code
479
+ - [Handlers](/docs/protocol/handlers) — Block reference for steps
480
+ - [Agent Config](/docs/protocol/agent-config) — Model and settings