@octavus/docs 2.12.0 → 2.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -540,7 +540,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "protocol",
  title: "Overview",
  description: "Introduction to Octavus agent protocols.",
- content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. 
Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u251C\u2500\u2500 prompts/ # Prompt templates (supports subdirectories)\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u251C\u2500\u2500 user-message.md\n\u2502 \u2514\u2500\u2500 shared/\n\u2502 \u251C\u2500\u2500 
company-info.md\n\u2502 \u2514\u2500\u2500 formatting-rules.md\n\u2514\u2500\u2500 references/ # On-demand context documents (optional)\n \u2514\u2500\u2500 api-guidelines.md\n```\n\nPrompts can be organized in subdirectories. In the protocol, reference nested prompts by their path relative to `prompts/` (without `.md`): `shared/company-info`.\n\nReferences are markdown files with YAML frontmatter that the agent can fetch on demand during execution. See [References](/docs/protocol/references).\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md` (paths use `/` for subdirectories)\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. 
If a variable is not provided, the placeholder is kept as-is.\n\n## Prompt Interpolation\n\nInclude other prompts inside a prompt with `{{@path.md}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a customer support agent.\n\n{{@shared/company-info.md}}\n\n{{@shared/formatting-rules.md}}\n\nHelp users with their questions.\n```\n\nThe referenced prompt content is inserted before variable interpolation, so variables in included prompts work the same way. Circular references are not allowed and will be caught during validation.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [References](/docs/protocol/references) \u2014 On-demand context documents\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
+ content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. 
Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n webSearch: true # Enable web search\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u251C\u2500\u2500 prompts/ # Prompt templates (supports subdirectories)\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u251C\u2500\u2500 user-message.md\n\u2502 \u2514\u2500\u2500 
shared/\n\u2502 \u251C\u2500\u2500 company-info.md\n\u2502 \u2514\u2500\u2500 formatting-rules.md\n\u2514\u2500\u2500 references/ # On-demand context documents (optional)\n \u2514\u2500\u2500 api-guidelines.md\n```\n\nPrompts can be organized in subdirectories. In the protocol, reference nested prompts by their path relative to `prompts/` (without `.md`): `shared/company-info`.\n\nReferences are markdown files with YAML frontmatter that the agent can fetch on demand during execution. See [References](/docs/protocol/references).\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md` (paths use `/` for subdirectories)\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. 
If a variable is not provided, the placeholder is kept as-is.\n\n## Prompt Interpolation\n\nInclude other prompts inside a prompt with `{{@path.md}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a customer support agent.\n\n{{@shared/company-info.md}}\n\n{{@shared/formatting-rules.md}}\n\nHelp users with their questions.\n```\n\nThe referenced prompt content is inserted before variable interpolation, so variables in included prompts work the same way. Circular references are not allowed and will be caught during validation.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [References](/docs/protocol/references) \u2014 On-demand context documents\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
  excerpt: "Protocol Overview Agent protocols define how an AI agent behaves. They're written in YAML and specify inputs, triggers, tools, and execution handlers. Why Protocols? Protocols provide: - Declarative...",
  order: 1
  },
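The substantive change in this hunk is one added line in the agent config example: `webSearch: true`, alongside the existing `imageModel` setting. A minimal sketch of how a consumer might detect which built-in capabilities a parsed config enables (TypeScript; the `AgentConfig` shape and `enabledBuiltins` helper are hypothetical, inferred from the YAML example in this diff, not the real SDK types):

```typescript
// Hypothetical shape, based only on the agent config YAML shown in this diff.
interface AgentConfig {
  model: string;
  webSearch?: boolean;   // new in the 2.13.0 example: enables built-in web search
  imageModel?: string;   // presence enables agentic image generation
  agentic?: boolean;
}

// Returns the built-in capabilities a given config turns on.
function enabledBuiltins(config: AgentConfig): string[] {
  const builtins: string[] = [];
  if (config.webSearch) builtins.push("web-search");
  if (config.imageModel) builtins.push("image-generation");
  return builtins;
}
```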
@@ -567,8 +567,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "protocol",
  title: "Tools",
  description: "Defining external tools implemented in your backend.",
- content: '\n# Tools\n\nTools extend what agents can do. Octavus supports multiple types:\n\n1. **External Tools** \u2014 Defined in the protocol, implemented in your backend (this page)\n2. **Provider Tools** \u2014 Built-in tools executed server-side by the provider (e.g., Anthropic\'s web search)\n3. **Skills** \u2014 Code execution and knowledge packages (see [Skills](/docs/protocol/skills))\n\nThis page covers external tools. For provider tools, see [Provider Options](/docs/protocol/provider-options). For code execution capabilities, see [Skills](/docs/protocol/skills).\n\n## External Tools\n\nExternal tools are defined in the `tools:` section and implemented in your backend.\n\n## Defining Tools\n\n```yaml\ntools:\n get-user-account:\n description: Looking up your account information\n display: description\n parameters:\n userId:\n type: string\n description: The user ID to look up\n```\n\n### Tool Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------ |\n| `description` | Yes | What the tool does (shown to LLM and optionally user) |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` |\n| `parameters` | No | Input parameters the tool accepts |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Tool runs silently, user doesn\'t see it |\n| `name` | Shows tool name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams tool progress if available |\n\n## Parameters\n\nTool calls are always objects where each parameter name maps to a value. The LLM generates: `{ param1: value1, param2: value2, ... 
}`\n\n### Parameter Fields\n\n| Field | Required | Description |\n| ------------- | -------- | -------------------------------------------------------------------------------- |\n| `type` | Yes | Data type: `string`, `number`, `integer`, `boolean`, `unknown`, or a custom type |\n| `description` | No | Describes what this parameter is for |\n| `optional` | No | If true, parameter is not required (default: false) |\n\n> **Tip**: You can use [custom types](/docs/protocol/types) for complex parameters like `type: ProductFilter` or `type: SearchOptions`.\n\n### Array Parameters\n\nFor array parameters, define a [top-level array type](/docs/protocol/types#top-level-array-types) and use it:\n\n```yaml\ntypes:\n CartItem:\n productId:\n type: string\n quantity:\n type: integer\n\n CartItemList:\n type: array\n items:\n type: CartItem\n\ntools:\n add-to-cart:\n description: Add items to cart\n parameters:\n items:\n type: CartItemList\n description: Items to add\n```\n\nThe tool receives: `{ items: [{ productId: "...", quantity: 1 }, ...] }`\n\n### Optional Parameters\n\nParameters are **required by default**. Use `optional: true` to make a parameter optional:\n\n```yaml\ntools:\n search-products:\n description: Search the product catalog\n parameters:\n query:\n type: string\n description: Search query\n\n category:\n type: string\n description: Filter by category\n optional: true\n\n maxPrice:\n type: number\n description: Maximum price filter\n optional: true\n\n inStock:\n type: boolean\n description: Only show in-stock items\n optional: true\n```\n\n## Making Tools Available\n\nTools defined in `tools:` are available. 
To make them usable by the LLM, add them to `agent.tools`:\n\n```yaml\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high, urgent\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools:\n - get-user-account\n - create-support-ticket # LLM can decide when to call these\n agentic: true\n```\n\n## Tool Invocation Modes\n\n### LLM-Decided (Agentic)\n\nThe LLM decides when to call tools based on the conversation:\n\n```yaml\nagent:\n tools: [get-user-account, create-support-ticket]\n agentic: true # Allow multiple tool calls\n maxSteps: 10 # Max tool call cycles\n```\n\n### Deterministic (Block-Based)\n\nForce tool calls at specific points in the handler:\n\n```yaml\nhandlers:\n request-human:\n # Always create a ticket when escalating\n Create support ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # From variable\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n## Tool Results\n\n### In Prompts\n\nTool results are stored in variables. Reference the variable in prompts:\n\n```markdown\n<!-- prompts/ticket-directive.md -->\n\nA support ticket has been created:\n{{TICKET}}\n\nLet the user know their ticket has been created.\n```\n\nWhen the `TICKET` variable contains an object, it\'s automatically serialized as JSON in the prompt:\n\n```\nA support ticket has been created:\n{\n "ticketId": "TKT-123ABC",\n "estimatedResponse": "24 hours"\n}\n\nLet the user know their ticket has been created.\n```\n\n> **Note**: Variables use `{{VARIABLE_NAME}}` syntax with `UPPERCASE_SNAKE_CASE`. Dot notation (like `{{TICKET.ticketId}}`) is not supported. 
Objects are automatically JSON-serialized.\n\n### In Variables\n\nStore tool results for later use:\n\n```yaml\nhandlers:\n request-human:\n Get account:\n block: tool-call\n tool: get-user-account\n input:\n userId: USER_ID\n output: ACCOUNT # Result stored here\n\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n```\n\n## Implementing Tools\n\nTools are implemented in your backend:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n \'get-user-account\': async (args) => {\n const userId = args.userId as string;\n const user = await db.users.findById(userId);\n\n return {\n name: user.name,\n email: user.email,\n plan: user.subscription.plan,\n createdAt: user.createdAt.toISOString(),\n };\n },\n\n \'create-support-ticket\': async (args) => {\n const ticket = await ticketService.create({\n summary: args.summary as string,\n priority: args.priority as string,\n });\n\n return {\n ticketId: ticket.id,\n estimatedResponse: getEstimatedTime(args.priority),\n };\n },\n },\n});\n```\n\n## Tool Best Practices\n\n### 1. Clear Descriptions\n\n```yaml\ntools:\n # Good - clear and specific\n get-user-account:\n description: >\n Retrieves the user\'s account information including name, email,\n subscription plan, and account creation date. Use this when the\n user asks about their account or you need to verify their identity.\n\n # Avoid - vague\n get-data:\n description: Gets some data\n```\n\n### 2. Document Constrained Values\n\n```yaml\ntools:\n create-support-ticket:\n parameters:\n priority:\n type: string\n description: Ticket priority level (low, medium, high, urgent)\n```\n',
- excerpt: "Tools Tools extend what agents can do. Octavus supports multiple types: 1. External Tools \u2014 Defined in the protocol, implemented in your backend (this page) 2. Provider Tools \u2014 Built-in tools...",
+ content: '\n# Tools\n\nTools extend what agents can do. Octavus supports multiple types:\n\n1. **External Tools** \u2014 Defined in the protocol, implemented in your backend (this page)\n2. **Built-in Tools** \u2014 Provider-agnostic tools managed by Octavus (web search, image generation)\n3. **Provider Tools** \u2014 Provider-specific tools executed by the provider (e.g., Anthropic\'s code execution)\n4. **Skills** \u2014 Code execution and knowledge packages (see [Skills](/docs/protocol/skills))\n\nThis page covers external tools. Built-in tools are enabled via agent config \u2014 see [Web Search](/docs/protocol/agent-config#web-search) and [Image Generation](/docs/protocol/agent-config#image-generation). For provider-specific tools, see [Provider Options](/docs/protocol/provider-options). For code execution, see [Skills](/docs/protocol/skills).\n\n## External Tools\n\nExternal tools are defined in the `tools:` section and implemented in your backend.\n\n## Defining Tools\n\n```yaml\ntools:\n get-user-account:\n description: Looking up your account information\n display: description\n parameters:\n userId:\n type: string\n description: The user ID to look up\n```\n\n### Tool Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------ |\n| `description` | Yes | What the tool does (shown to LLM and optionally user) |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` |\n| `parameters` | No | Input parameters the tool accepts |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Tool runs silently, user doesn\'t see it |\n| `name` | Shows tool name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams tool progress if available |\n\n## Parameters\n\nTool calls are always objects where each parameter name maps to a value. 
The LLM generates: `{ param1: value1, param2: value2, ... }`\n\n### Parameter Fields\n\n| Field | Required | Description |\n| ------------- | -------- | -------------------------------------------------------------------------------- |\n| `type` | Yes | Data type: `string`, `number`, `integer`, `boolean`, `unknown`, or a custom type |\n| `description` | No | Describes what this parameter is for |\n| `optional` | No | If true, parameter is not required (default: false) |\n\n> **Tip**: You can use [custom types](/docs/protocol/types) for complex parameters like `type: ProductFilter` or `type: SearchOptions`.\n\n### Array Parameters\n\nFor array parameters, define a [top-level array type](/docs/protocol/types#top-level-array-types) and use it:\n\n```yaml\ntypes:\n CartItem:\n productId:\n type: string\n quantity:\n type: integer\n\n CartItemList:\n type: array\n items:\n type: CartItem\n\ntools:\n add-to-cart:\n description: Add items to cart\n parameters:\n items:\n type: CartItemList\n description: Items to add\n```\n\nThe tool receives: `{ items: [{ productId: "...", quantity: 1 }, ...] }`\n\n### Optional Parameters\n\nParameters are **required by default**. Use `optional: true` to make a parameter optional:\n\n```yaml\ntools:\n search-products:\n description: Search the product catalog\n parameters:\n query:\n type: string\n description: Search query\n\n category:\n type: string\n description: Filter by category\n optional: true\n\n maxPrice:\n type: number\n description: Maximum price filter\n optional: true\n\n inStock:\n type: boolean\n description: Only show in-stock items\n optional: true\n```\n\n## Making Tools Available\n\nTools defined in `tools:` are available. 
To make them usable by the LLM, add them to `agent.tools`:\n\n```yaml\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high, urgent\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools:\n - get-user-account\n - create-support-ticket # LLM can decide when to call these\n agentic: true\n```\n\n## Tool Invocation Modes\n\n### LLM-Decided (Agentic)\n\nThe LLM decides when to call tools based on the conversation:\n\n```yaml\nagent:\n tools: [get-user-account, create-support-ticket]\n agentic: true # Allow multiple tool calls\n maxSteps: 10 # Max tool call cycles\n```\n\n### Deterministic (Block-Based)\n\nForce tool calls at specific points in the handler:\n\n```yaml\nhandlers:\n request-human:\n # Always create a ticket when escalating\n Create support ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # From variable\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n## Tool Results\n\n### In Prompts\n\nTool results are stored in variables. Reference the variable in prompts:\n\n```markdown\n<!-- prompts/ticket-directive.md -->\n\nA support ticket has been created:\n{{TICKET}}\n\nLet the user know their ticket has been created.\n```\n\nWhen the `TICKET` variable contains an object, it\'s automatically serialized as JSON in the prompt:\n\n```\nA support ticket has been created:\n{\n "ticketId": "TKT-123ABC",\n "estimatedResponse": "24 hours"\n}\n\nLet the user know their ticket has been created.\n```\n\n> **Note**: Variables use `{{VARIABLE_NAME}}` syntax with `UPPERCASE_SNAKE_CASE`. Dot notation (like `{{TICKET.ticketId}}`) is not supported. 
Objects are automatically JSON-serialized.\n\n### In Variables\n\nStore tool results for later use:\n\n```yaml\nhandlers:\n request-human:\n Get account:\n block: tool-call\n tool: get-user-account\n input:\n userId: USER_ID\n output: ACCOUNT # Result stored here\n\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n```\n\n## Implementing Tools\n\nTools are implemented in your backend:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n \'get-user-account\': async (args) => {\n const userId = args.userId as string;\n const user = await db.users.findById(userId);\n\n return {\n name: user.name,\n email: user.email,\n plan: user.subscription.plan,\n createdAt: user.createdAt.toISOString(),\n };\n },\n\n \'create-support-ticket\': async (args) => {\n const ticket = await ticketService.create({\n summary: args.summary as string,\n priority: args.priority as string,\n });\n\n return {\n ticketId: ticket.id,\n estimatedResponse: getEstimatedTime(args.priority),\n };\n },\n },\n});\n```\n\n## Tool Best Practices\n\n### 1. Clear Descriptions\n\n```yaml\ntools:\n # Good - clear and specific\n get-user-account:\n description: >\n Retrieves the user\'s account information including name, email,\n subscription plan, and account creation date. Use this when the\n user asks about their account or you need to verify their identity.\n\n # Avoid - vague\n get-data:\n description: Gets some data\n```\n\n### 2. Document Constrained Values\n\n```yaml\ntools:\n create-support-ticket:\n parameters:\n priority:\n type: string\n description: Ticket priority level (low, medium, high, urgent)\n```\n',
+ excerpt: "Tools Tools extend what agents can do. Octavus supports multiple types: 1. External Tools \u2014 Defined in the protocol, implemented in your backend (this page) 2. Built-in Tools \u2014 Provider-agnostic...",
  order: 4
  },
  {
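This hunk expands the tools taxonomy from three types to four, splitting provider-agnostic "Built-in Tools" out of "Provider Tools"; the parameter rules are unchanged in both versions: parameters are required unless marked `optional: true`. A hedged sketch of that rule as a standalone check (TypeScript; `ParamSpec` and `missingParams` are hypothetical names, not the actual @octavus validation code):

```typescript
// Hypothetical parameter spec, mirroring the tool parameter fields
// documented on the Tools page (type, description, optional).
interface ParamSpec {
  type: string;
  description?: string;
  optional?: boolean; // defaults to false: parameters are required by default
}

// Returns the names of required parameters that the tool call is missing.
function missingParams(
  specs: Record<string, ParamSpec>,
  args: Record<string, unknown>,
): string[] {
  return Object.entries(specs)
    .filter(([name, spec]) => !spec.optional && args[name] === undefined)
    .map(([name]) => name);
}
```

For the `search-products` example in the diff, only `query` would be reported as missing when a call supplies just the optional filters.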
@@ -594,7 +594,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "protocol",
  title: "Agent Config",
  description: "Configuring the agent model and behavior.",
- content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. 
Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. 
Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. 
The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. 
When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: 
[data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n```\n\nEach thread can have its own skills, references, and image model. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
+ content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. 
Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. 
Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. 
The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. 
When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). 
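Since the built-in tool is provider-agnostic, the same setup can be sketched with a Google model (model ID taken from the providers table above):\n\n```yaml\nagent:\n model: google/gemini-3-flash-preview\n system: system\n webSearch: true\n agentic: true\n```\n\n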
For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. 
Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
  excerpt: "Agent Config The agent section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
  order: 7
  },
@@ -603,7 +603,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "protocol",
  title: "Provider Options",
  description: "Configuring provider-specific tools and features.",
- content: "\n# Provider Options\n\nProvider options let you enable provider-specific features like Anthropic's built-in tools and skills. These features run server-side on the provider's infrastructure.\n\n> **Note**: For provider-agnostic code execution, use [Octavus Skills](/docs/protocol/skills) instead. Octavus Skills work with any LLM provider and run in isolated sandbox environments.\n\n## Anthropic Options\n\nConfigure Anthropic-specific features when using `anthropic/*` models:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n```\n\n> **Note**: Provider options are validated against the model provider. Using `anthropic:` options with non-Anthropic models will result in a validation error.\n\n## Provider Tools\n\nProvider tools are executed server-side by the provider (Anthropic). Unlike external tools that you implement, provider tools are built-in capabilities.\n\n### Available Tools\n\n| Tool | Description |\n| ---------------- | -------------------------------------------- |\n| `web-search` | Search the web for current information |\n| `code-execution` | Execute Python/Bash in a sandboxed container |\n\n### Tool Configuration\n\n```yaml\nanthropic:\n tools:\n web-search:\n display: description # How to show in UI\n description: Searching... 
# Custom display text\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users during execution |\n\n### Web Search\n\nAllows the agent to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Looking up current information\n```\n\nUse cases:\n\n- Current events and news\n- Real-time data (prices, weather)\n- Fact verification\n- Documentation lookups\n\n### Code Execution\n\nEnables Python and Bash execution in a sandboxed container:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n code-execution:\n display: description\n description: Running analysis\n```\n\nUse cases:\n\n- Data analysis and calculations\n- File processing\n- Chart generation\n- Script execution\n\n> **Note**: Code execution is automatically enabled when skills are configured (skills require the container environment).\n\n## Skills\n\n> **Important**: This section covers **Anthropic's built-in skills** (provider-specific). For provider-agnostic skills that work with any LLM, see [Octavus Skills](/docs/protocol/skills).\n\nAnthropic skills are knowledge packages that give the agent specialized capabilities. 
They're loaded into Anthropic's code execution container at `/skills/{skill-id}/` and only work with Anthropic models.\n\n### Skill Configuration\n\n```yaml\nanthropic:\n skills:\n pdf:\n type: anthropic # 'anthropic' or 'custom'\n version: latest # Optional version\n display: description\n description: Processing PDF\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `type` | Yes | `anthropic` (built-in) or `custom` (uploaded) |\n| `version` | No | Skill version (default: `latest`) |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users |\n\n### Built-in Skills\n\nAnthropic provides several built-in skills:\n\n| Skill ID | Purpose |\n| -------- | ----------------------------------------------- |\n| `pdf` | PDF manipulation, text extraction, form filling |\n| `xlsx` | Excel spreadsheet operations and analysis |\n| `docx` | Word document creation and editing |\n| `pptx` | PowerPoint presentation creation |\n\n### Using Skills\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n skills:\n pdf:\n type: anthropic\n description: Processing your PDF\n xlsx:\n type: anthropic\n description: Analyzing spreadsheet\n```\n\nWhen skills are configured:\n\n1. Code execution is automatically enabled\n2. Skill files are loaded into the container\n3. 
The agent can read skill instructions and execute scripts\n\n### Custom Skills\n\nYou can create and upload custom skills to Anthropic:\n\n```yaml\nanthropic:\n skills:\n custom-analysis:\n type: custom\n version: latest\n description: Running custom analysis\n```\n\nCustom skills follow the [Agent Skills standard](https://agentskills.io) and contain:\n\n- `SKILL.md` with instructions and metadata\n- Optional `scripts/`, `references/`, and `assets/` directories\n\n### Octavus Skills vs Anthropic Skills\n\n| Feature | Anthropic Skills | Octavus Skills |\n| ----------------- | ------------------------ | ----------------------------- |\n| **Provider** | Anthropic only | Any (agnostic) |\n| **Execution** | Anthropic's container | Isolated sandbox |\n| **Configuration** | `agent.anthropic.skills` | `agent.skills` |\n| **Definition** | `anthropic:` section | `skills:` section |\n| **Use Case** | Claude-specific features | Cross-provider code execution |\n\nFor provider-agnostic code execution, use Octavus Skills defined in the protocol's `skills:` section and enabled via `agent.skills`. 
See [Skills](/docs/protocol/skills) for details.\n\n## Display Modes\n\nBoth tools and skills support display modes:\n\n| Mode | Behavior |\n| ------------- | ------------------------------- |\n| `hidden` | Not shown to users |\n| `name` | Shows the tool/skill name |\n| `description` | Shows the description (default) |\n| `stream` | Streams progress if available |\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input: [COMPANY_NAME, USER_ID]\n tools: [get-user-account] # External tools\n agentic: true\n thinking: medium\n\n # Anthropic-specific options\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n xlsx:\n type: anthropic\n display: description\n description: Analyzing spreadsheet\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n\n## Validation\n\nThe protocol validator enforces:\n\n1. **Model match**: Provider options must match the model provider\n - `anthropic:` options require `anthropic/*` model\n - Using mismatched options results in a validation error\n\n2. **Valid tool types**: Only recognized tools are accepted\n - `web-search` and `code-execution` for Anthropic\n\n3. 
**Valid skill types**: Only `anthropic` or `custom` are accepted\n\n### Error Example\n\n```yaml\n# This will fail validation\nagent:\n model: openai/gpt-4o # OpenAI model\n anthropic: # Anthropic options - mismatch!\n tools:\n web-search: {}\n```\n\nError: `\"anthropic\" options require an anthropic model. Current model provider: \"openai\"`\n",
+ content: "\n# Provider Options\n\nProvider options let you enable provider-specific features like Anthropic's built-in tools and skills. These features run server-side on the provider's infrastructure.\n\n> **Note**: For provider-agnostic code execution, use [Octavus Skills](/docs/protocol/skills) instead. Octavus Skills work with any LLM provider and run in isolated sandbox environments.\n\n## Anthropic Options\n\nConfigure Anthropic-specific features when using `anthropic/*` models:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n```\n\n> **Note**: Provider options are validated against the model provider. Using `anthropic:` options with non-Anthropic models will result in a validation error.\n\n## Provider Tools\n\nProvider tools are executed server-side by the provider (Anthropic). Unlike external tools that you implement, provider tools are built-in capabilities.\n\n### Available Tools\n\n| Tool | Description |\n| ---------------- | -------------------------------------------- |\n| `web-search` | Search the web for current information |\n| `code-execution` | Execute Python/Bash in a sandboxed container |\n\n### Tool Configuration\n\n```yaml\nanthropic:\n tools:\n web-search:\n display: description # How to show in UI\n description: Searching... 
# Custom display text\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users during execution |\n\n### Web Search\n\nAllows the agent to search the web using Anthropic's built-in web search:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Looking up current information\n```\n\n> **Tip**: Octavus also provides a **provider-agnostic** web search via `webSearch: true` in the agent config. This works with any LLM provider and is the recommended approach for multi-provider agents. See [Web Search](/docs/protocol/agent-config#web-search) for details.\n\n### Code Execution\n\nEnables Python and Bash execution in a sandboxed container:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n code-execution:\n display: description\n description: Running analysis\n```\n\nUse cases:\n\n- Data analysis and calculations\n- File processing\n- Chart generation\n- Script execution\n\n> **Note**: Code execution is automatically enabled when skills are configured (skills require the container environment).\n\n## Skills\n\n> **Important**: This section covers **Anthropic's built-in skills** (provider-specific). For provider-agnostic skills that work with any LLM, see [Octavus Skills](/docs/protocol/skills).\n\nAnthropic skills are knowledge packages that give the agent specialized capabilities. 
They're loaded into Anthropic's code execution container at `/skills/{skill-id}/` and only work with Anthropic models.\n\n### Skill Configuration\n\n```yaml\nanthropic:\n skills:\n pdf:\n type: anthropic # 'anthropic' or 'custom'\n version: latest # Optional version\n display: description\n description: Processing PDF\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `type` | Yes | `anthropic` (built-in) or `custom` (uploaded) |\n| `version` | No | Skill version (default: `latest`) |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users |\n\n### Built-in Skills\n\nAnthropic provides several built-in skills:\n\n| Skill ID | Purpose |\n| -------- | ----------------------------------------------- |\n| `pdf` | PDF manipulation, text extraction, form filling |\n| `xlsx` | Excel spreadsheet operations and analysis |\n| `docx` | Word document creation and editing |\n| `pptx` | PowerPoint presentation creation |\n\n### Using Skills\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n skills:\n pdf:\n type: anthropic\n description: Processing your PDF\n xlsx:\n type: anthropic\n description: Analyzing spreadsheet\n```\n\nWhen skills are configured:\n\n1. Code execution is automatically enabled\n2. Skill files are loaded into the container\n3. 
The agent can read skill instructions and execute scripts\n\n### Custom Skills\n\nYou can create and upload custom skills to Anthropic:\n\n```yaml\nanthropic:\n skills:\n custom-analysis:\n type: custom\n version: latest\n description: Running custom analysis\n```\n\nCustom skills follow the [Agent Skills standard](https://agentskills.io) and contain:\n\n- `SKILL.md` with instructions and metadata\n- Optional `scripts/`, `references/`, and `assets/` directories\n\n### Octavus Skills vs Anthropic Skills\n\n| Feature | Anthropic Skills | Octavus Skills |\n| ----------------- | ------------------------ | ----------------------------- |\n| **Provider** | Anthropic only | Any (agnostic) |\n| **Execution** | Anthropic's container | Isolated sandbox |\n| **Configuration** | `agent.anthropic.skills` | `agent.skills` |\n| **Definition** | `anthropic:` section | `skills:` section |\n| **Use Case** | Claude-specific features | Cross-provider code execution |\n\nFor provider-agnostic code execution, use Octavus Skills defined in the protocol's `skills:` section and enabled via `agent.skills`. 
See [Skills](/docs/protocol/skills) for details.\n\n## Display Modes\n\nBoth tools and skills support display modes:\n\n| Mode | Behavior |\n| ------------- | ------------------------------- |\n| `hidden` | Not shown to users |\n| `name` | Shows the tool/skill name |\n| `description` | Shows the description (default) |\n| `stream` | Streams progress if available |\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input: [COMPANY_NAME, USER_ID]\n tools: [get-user-account] # External tools\n agentic: true\n thinking: medium\n\n # Anthropic-specific options\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n xlsx:\n type: anthropic\n display: description\n description: Analyzing spreadsheet\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n\n## Validation\n\nThe protocol validator enforces:\n\n1. **Model match**: Provider options must match the model provider\n - `anthropic:` options require `anthropic/*` model\n - Using mismatched options results in a validation error\n\n2. **Valid tool types**: Only recognized tools are accepted\n - `web-search` and `code-execution` for Anthropic\n\n3. 
**Valid skill types**: Only `anthropic` or `custom` are accepted\n\n### Error Example\n\n```yaml\n# This will fail validation\nagent:\n model: openai/gpt-4o # OpenAI model\n anthropic: # Anthropic options - mismatch!\n tools:\n web-search: {}\n```\n\nError: `\"anthropic\" options require an anthropic model. Current model provider: \"openai\"`\n",
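The model-match rule above (provider options must match the model's provider) can be sketched as a small check. This is an illustration only; `checkProviderOptions` and the `AgentConfig` shape are hypothetical names, not the validator's real API:

```typescript
// Illustrative sketch of the "model match" validation rule.
// Names here (AgentConfig, checkProviderOptions) are hypothetical.
type AgentConfig = {
  model: string;                       // e.g. "anthropic/claude-sonnet-4-5"
  anthropic?: Record<string, unknown>; // provider-specific options, if any
};

function checkProviderOptions(config: AgentConfig): string | null {
  // The provider is the part of the model id before the slash.
  const provider = config.model.split("/")[0];
  if (config.anthropic && provider !== "anthropic") {
    // Mirrors the documented error message format.
    return `"anthropic" options require an anthropic model. Current model provider: "${provider}"`;
  }
  return null; // no mismatch
}
```

With `model: openai/gpt-4o` and an `anthropic:` section present, the check reports the mismatch; with an `anthropic/*` model it passes.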
607
607
  excerpt: "Provider Options Provider options let you enable provider-specific features like Anthropic's built-in tools and skills. These features run server-side on the provider's infrastructure. > Note: For...",
608
608
  order: 8
609
609
  },
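The display-mode table above resolves to user-facing text roughly as follows. A minimal sketch with a hypothetical `displayText` helper (the SDK's actual rendering logic is not shown in these docs):

```typescript
// Hypothetical helper illustrating the display-mode table; not the SDK's
// actual implementation. "description" is the documented default.
type DisplayMode = "hidden" | "name" | "description" | "stream";

function displayText(
  name: string,
  mode: DisplayMode = "description",
  description?: string,
): string | null {
  switch (mode) {
    case "hidden":
      return null; // nothing shown to the user
    case "name":
      return name; // tool/skill name only
    case "description":
      return description ?? name; // description, falling back to the name
    case "stream":
      return description ?? name; // same text, plus streamed progress if available
  }
}
```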
@@ -630,7 +630,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
630
630
  section: "protocol",
631
631
  title: "Workers",
632
632
  description: "Defining worker agents for background and task-based execution.",
633
- content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: 
research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. 
Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level 
|\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: 
serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills and Image Generation\n\nWorkers can use Octavus skills and image generation, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n maxSteps: 10\n```\n\nWorkers define their own skills independently -- they don\'t inherit skills from a parent interactive agent. 
Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n\n// Streaming: observe events in real-time\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
633
+ content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: 
research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. 
Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `webSearch` | Enable built-in web 
search tool |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # 
Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills, Image Generation, and Web Search\n\nWorkers can use Octavus skills, image generation, and web search, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n webSearch: true\n maxSteps: 10\n```\n\nWorkers define their own skills independently -- they don\'t inherit skills from a parent interactive agent. 
Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n\n// Streaming: observe events in real-time\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
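The `worker-start` / `worker-result` events above arrive on the stream returned by `client.workers.execute`. A minimal consumption loop might look like the sketch below; it assumes each event carries a `type` field and that `worker-result` exposes the output on an `output` property (field names beyond `type` are assumptions, not documented here):

```typescript
// Sketch of consuming a worker event stream. Assumes `events` is an async
// iterable as returned by client.workers.execute(...); the event object
// shape beyond `type` is an assumption for illustration.
async function collectOutput(
  events: AsyncIterable<{ type: string; [key: string]: unknown }>,
): Promise<unknown> {
  for await (const event of events) {
    if (event.type === "text-delta") {
      // stream partial text to the UI here
    } else if (event.type === "worker-result") {
      return event.output; // worker-result includes the worker's output value
    }
  }
  return undefined; // stream ended without a worker-result
}
```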
634
634
  excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
635
635
  order: 11
636
636
  },
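The tool mapping described in the workers doc above (the worker's `search` mapped to the parent's `web-search`) amounts to a name lookup before dispatching to the parent's handlers. A hypothetical `resolveWorkerTool` helper, illustrative only (the mapping and handler shapes follow the YAML/SDK examples above, but this is not the SDK's implementation):

```typescript
// Sketch of worker → parent tool-name mapping. The worker calls tools by
// its own names; the parent's `workers:` declaration maps them to the
// parent's handlers. Names and shapes here are illustrative.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

function resolveWorkerTool(
  workerToolName: string,                      // e.g. "search"
  mapping: Record<string, string>,             // e.g. { search: "web-search" }
  parentHandlers: Record<string, ToolHandler>, // the parent's tool handlers
): ToolHandler | undefined {
  // Unmapped names fall through to a same-name parent handler, if any.
  const parentName = mapping[workerToolName] ?? workerToolName;
  return parentHandlers[parentName];
}
```

When the worker calls `search`, the lookup lands on the parent's `web-search` handler, matching the behavior described above.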
@@ -1280,7 +1280,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1280
1280
  section: "protocol",
1281
1281
  title: "Overview",
1282
1282
  description: "Introduction to Octavus agent protocols.",
1283
- content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. 
Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u251C\u2500\u2500 prompts/ # Prompt templates (supports subdirectories)\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u251C\u2500\u2500 user-message.md\n\u2502 \u2514\u2500\u2500 shared/\n\u2502 \u251C\u2500\u2500 
company-info.md\n\u2502 \u2514\u2500\u2500 formatting-rules.md\n\u2514\u2500\u2500 references/ # On-demand context documents (optional)\n \u2514\u2500\u2500 api-guidelines.md\n```\n\nPrompts can be organized in subdirectories. In the protocol, reference nested prompts by their path relative to `prompts/` (without `.md`): `shared/company-info`.\n\nReferences are markdown files with YAML frontmatter that the agent can fetch on demand during execution. See [References](/docs/protocol/references).\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md` (paths use `/` for subdirectories)\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. 
If a variable is not provided, the placeholder is kept as-is.\n\n## Prompt Interpolation\n\nInclude other prompts inside a prompt with `{{@path.md}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a customer support agent.\n\n{{@shared/company-info.md}}\n\n{{@shared/formatting-rules.md}}\n\nHelp users with their questions.\n```\n\nThe referenced prompt content is inserted before variable interpolation, so variables in included prompts work the same way. Circular references are not allowed and will be caught during validation.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [References](/docs/protocol/references) \u2014 On-demand context documents\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
+ content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. 
Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n webSearch: true # Enable web search\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u251C\u2500\u2500 prompts/ # Prompt templates (supports subdirectories)\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u251C\u2500\u2500 user-message.md\n\u2502 \u2514\u2500\u2500 
shared/\n\u2502 \u251C\u2500\u2500 company-info.md\n\u2502 \u2514\u2500\u2500 formatting-rules.md\n\u2514\u2500\u2500 references/ # On-demand context documents (optional)\n \u2514\u2500\u2500 api-guidelines.md\n```\n\nPrompts can be organized in subdirectories. In the protocol, reference nested prompts by their path relative to `prompts/` (without `.md`): `shared/company-info`.\n\nReferences are markdown files with YAML frontmatter that the agent can fetch on demand during execution. See [References](/docs/protocol/references).\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md` (paths use `/` for subdirectories)\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. 
If a variable is not provided, the placeholder is kept as-is.\n\n## Prompt Interpolation\n\nInclude other prompts inside a prompt with `{{@path.md}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a customer support agent.\n\n{{@shared/company-info.md}}\n\n{{@shared/formatting-rules.md}}\n\nHelp users with their questions.\n```\n\nThe referenced prompt content is inserted before variable interpolation, so variables in included prompts work the same way. Circular references are not allowed and will be caught during validation.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [References](/docs/protocol/references) \u2014 On-demand context documents\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
  excerpt: "Protocol Overview Agent protocols define how an AI agent behaves. They're written in YAML and specify inputs, triggers, tools, and execution handlers. Why Protocols? Protocols provide: - Declarative...",
  order: 1
  },
@@ -1307,8 +1307,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "protocol",
  title: "Tools",
  description: "Defining external tools implemented in your backend.",
- content: '\n# Tools\n\nTools extend what agents can do. Octavus supports multiple types:\n\n1. **External Tools** \u2014 Defined in the protocol, implemented in your backend (this page)\n2. **Provider Tools** \u2014 Built-in tools executed server-side by the provider (e.g., Anthropic\'s web search)\n3. **Skills** \u2014 Code execution and knowledge packages (see [Skills](/docs/protocol/skills))\n\nThis page covers external tools. For provider tools, see [Provider Options](/docs/protocol/provider-options). For code execution capabilities, see [Skills](/docs/protocol/skills).\n\n## External Tools\n\nExternal tools are defined in the `tools:` section and implemented in your backend.\n\n## Defining Tools\n\n```yaml\ntools:\n get-user-account:\n description: Looking up your account information\n display: description\n parameters:\n userId:\n type: string\n description: The user ID to look up\n```\n\n### Tool Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------ |\n| `description` | Yes | What the tool does (shown to LLM and optionally user) |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` |\n| `parameters` | No | Input parameters the tool accepts |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Tool runs silently, user doesn\'t see it |\n| `name` | Shows tool name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams tool progress if available |\n\n## Parameters\n\nTool calls are always objects where each parameter name maps to a value. The LLM generates: `{ param1: value1, param2: value2, ... 
}`\n\n### Parameter Fields\n\n| Field | Required | Description |\n| ------------- | -------- | -------------------------------------------------------------------------------- |\n| `type` | Yes | Data type: `string`, `number`, `integer`, `boolean`, `unknown`, or a custom type |\n| `description` | No | Describes what this parameter is for |\n| `optional` | No | If true, parameter is not required (default: false) |\n\n> **Tip**: You can use [custom types](/docs/protocol/types) for complex parameters like `type: ProductFilter` or `type: SearchOptions`.\n\n### Array Parameters\n\nFor array parameters, define a [top-level array type](/docs/protocol/types#top-level-array-types) and use it:\n\n```yaml\ntypes:\n CartItem:\n productId:\n type: string\n quantity:\n type: integer\n\n CartItemList:\n type: array\n items:\n type: CartItem\n\ntools:\n add-to-cart:\n description: Add items to cart\n parameters:\n items:\n type: CartItemList\n description: Items to add\n```\n\nThe tool receives: `{ items: [{ productId: "...", quantity: 1 }, ...] }`\n\n### Optional Parameters\n\nParameters are **required by default**. Use `optional: true` to make a parameter optional:\n\n```yaml\ntools:\n search-products:\n description: Search the product catalog\n parameters:\n query:\n type: string\n description: Search query\n\n category:\n type: string\n description: Filter by category\n optional: true\n\n maxPrice:\n type: number\n description: Maximum price filter\n optional: true\n\n inStock:\n type: boolean\n description: Only show in-stock items\n optional: true\n```\n\n## Making Tools Available\n\nTools defined in `tools:` are available. 
To make them usable by the LLM, add them to `agent.tools`:\n\n```yaml\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high, urgent\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools:\n - get-user-account\n - create-support-ticket # LLM can decide when to call these\n agentic: true\n```\n\n## Tool Invocation Modes\n\n### LLM-Decided (Agentic)\n\nThe LLM decides when to call tools based on the conversation:\n\n```yaml\nagent:\n tools: [get-user-account, create-support-ticket]\n agentic: true # Allow multiple tool calls\n maxSteps: 10 # Max tool call cycles\n```\n\n### Deterministic (Block-Based)\n\nForce tool calls at specific points in the handler:\n\n```yaml\nhandlers:\n request-human:\n # Always create a ticket when escalating\n Create support ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # From variable\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n## Tool Results\n\n### In Prompts\n\nTool results are stored in variables. Reference the variable in prompts:\n\n```markdown\n<!-- prompts/ticket-directive.md -->\n\nA support ticket has been created:\n{{TICKET}}\n\nLet the user know their ticket has been created.\n```\n\nWhen the `TICKET` variable contains an object, it\'s automatically serialized as JSON in the prompt:\n\n```\nA support ticket has been created:\n{\n "ticketId": "TKT-123ABC",\n "estimatedResponse": "24 hours"\n}\n\nLet the user know their ticket has been created.\n```\n\n> **Note**: Variables use `{{VARIABLE_NAME}}` syntax with `UPPERCASE_SNAKE_CASE`. Dot notation (like `{{TICKET.ticketId}}`) is not supported. 
Objects are automatically JSON-serialized.\n\n### In Variables\n\nStore tool results for later use:\n\n```yaml\nhandlers:\n request-human:\n Get account:\n block: tool-call\n tool: get-user-account\n input:\n userId: USER_ID\n output: ACCOUNT # Result stored here\n\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n```\n\n## Implementing Tools\n\nTools are implemented in your backend:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n \'get-user-account\': async (args) => {\n const userId = args.userId as string;\n const user = await db.users.findById(userId);\n\n return {\n name: user.name,\n email: user.email,\n plan: user.subscription.plan,\n createdAt: user.createdAt.toISOString(),\n };\n },\n\n \'create-support-ticket\': async (args) => {\n const ticket = await ticketService.create({\n summary: args.summary as string,\n priority: args.priority as string,\n });\n\n return {\n ticketId: ticket.id,\n estimatedResponse: getEstimatedTime(args.priority),\n };\n },\n },\n});\n```\n\n## Tool Best Practices\n\n### 1. Clear Descriptions\n\n```yaml\ntools:\n # Good - clear and specific\n get-user-account:\n description: >\n Retrieves the user\'s account information including name, email,\n subscription plan, and account creation date. Use this when the\n user asks about their account or you need to verify their identity.\n\n # Avoid - vague\n get-data:\n description: Gets some data\n```\n\n### 2. Document Constrained Values\n\n```yaml\ntools:\n create-support-ticket:\n parameters:\n priority:\n type: string\n description: Ticket priority level (low, medium, high, urgent)\n```\n',
- excerpt: "Tools Tools extend what agents can do. Octavus supports multiple types: 1. External Tools \u2014 Defined in the protocol, implemented in your backend (this page) 2. Provider Tools \u2014 Built-in tools...",
+ content: '\n# Tools\n\nTools extend what agents can do. Octavus supports multiple types:\n\n1. **External Tools** \u2014 Defined in the protocol, implemented in your backend (this page)\n2. **Built-in Tools** \u2014 Provider-agnostic tools managed by Octavus (web search, image generation)\n3. **Provider Tools** \u2014 Provider-specific tools executed by the provider (e.g., Anthropic\'s code execution)\n4. **Skills** \u2014 Code execution and knowledge packages (see [Skills](/docs/protocol/skills))\n\nThis page covers external tools. Built-in tools are enabled via agent config \u2014 see [Web Search](/docs/protocol/agent-config#web-search) and [Image Generation](/docs/protocol/agent-config#image-generation). For provider-specific tools, see [Provider Options](/docs/protocol/provider-options). For code execution, see [Skills](/docs/protocol/skills).\n\n## External Tools\n\nExternal tools are defined in the `tools:` section and implemented in your backend.\n\n## Defining Tools\n\n```yaml\ntools:\n get-user-account:\n description: Looking up your account information\n display: description\n parameters:\n userId:\n type: string\n description: The user ID to look up\n```\n\n### Tool Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------ |\n| `description` | Yes | What the tool does (shown to LLM and optionally user) |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` |\n| `parameters` | No | Input parameters the tool accepts |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Tool runs silently, user doesn\'t see it |\n| `name` | Shows tool name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams tool progress if available |\n\n## Parameters\n\nTool calls are always objects where each parameter name maps to a value. 
The LLM generates: `{ param1: value1, param2: value2, ... }`\n\n### Parameter Fields\n\n| Field | Required | Description |\n| ------------- | -------- | -------------------------------------------------------------------------------- |\n| `type` | Yes | Data type: `string`, `number`, `integer`, `boolean`, `unknown`, or a custom type |\n| `description` | No | Describes what this parameter is for |\n| `optional` | No | If true, parameter is not required (default: false) |\n\n> **Tip**: You can use [custom types](/docs/protocol/types) for complex parameters like `type: ProductFilter` or `type: SearchOptions`.\n\n### Array Parameters\n\nFor array parameters, define a [top-level array type](/docs/protocol/types#top-level-array-types) and use it:\n\n```yaml\ntypes:\n CartItem:\n productId:\n type: string\n quantity:\n type: integer\n\n CartItemList:\n type: array\n items:\n type: CartItem\n\ntools:\n add-to-cart:\n description: Add items to cart\n parameters:\n items:\n type: CartItemList\n description: Items to add\n```\n\nThe tool receives: `{ items: [{ productId: "...", quantity: 1 }, ...] }`\n\n### Optional Parameters\n\nParameters are **required by default**. Use `optional: true` to make a parameter optional:\n\n```yaml\ntools:\n search-products:\n description: Search the product catalog\n parameters:\n query:\n type: string\n description: Search query\n\n category:\n type: string\n description: Filter by category\n optional: true\n\n maxPrice:\n type: number\n description: Maximum price filter\n optional: true\n\n inStock:\n type: boolean\n description: Only show in-stock items\n optional: true\n```\n\n## Making Tools Available\n\nTools defined in `tools:` are available. 
To make them usable by the LLM, add them to `agent.tools`:\n\n```yaml\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high, urgent\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools:\n - get-user-account\n - create-support-ticket # LLM can decide when to call these\n agentic: true\n```\n\n## Tool Invocation Modes\n\n### LLM-Decided (Agentic)\n\nThe LLM decides when to call tools based on the conversation:\n\n```yaml\nagent:\n tools: [get-user-account, create-support-ticket]\n agentic: true # Allow multiple tool calls\n maxSteps: 10 # Max tool call cycles\n```\n\n### Deterministic (Block-Based)\n\nForce tool calls at specific points in the handler:\n\n```yaml\nhandlers:\n request-human:\n # Always create a ticket when escalating\n Create support ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # From variable\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n## Tool Results\n\n### In Prompts\n\nTool results are stored in variables. Reference the variable in prompts:\n\n```markdown\n<!-- prompts/ticket-directive.md -->\n\nA support ticket has been created:\n{{TICKET}}\n\nLet the user know their ticket has been created.\n```\n\nWhen the `TICKET` variable contains an object, it\'s automatically serialized as JSON in the prompt:\n\n```\nA support ticket has been created:\n{\n "ticketId": "TKT-123ABC",\n "estimatedResponse": "24 hours"\n}\n\nLet the user know their ticket has been created.\n```\n\n> **Note**: Variables use `{{VARIABLE_NAME}}` syntax with `UPPERCASE_SNAKE_CASE`. Dot notation (like `{{TICKET.ticketId}}`) is not supported. 
Objects are automatically JSON-serialized.\n\n### In Variables\n\nStore tool results for later use:\n\n```yaml\nhandlers:\n request-human:\n Get account:\n block: tool-call\n tool: get-user-account\n input:\n userId: USER_ID\n output: ACCOUNT # Result stored here\n\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n```\n\n## Implementing Tools\n\nTools are implemented in your backend:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n \'get-user-account\': async (args) => {\n const userId = args.userId as string;\n const user = await db.users.findById(userId);\n\n return {\n name: user.name,\n email: user.email,\n plan: user.subscription.plan,\n createdAt: user.createdAt.toISOString(),\n };\n },\n\n \'create-support-ticket\': async (args) => {\n const ticket = await ticketService.create({\n summary: args.summary as string,\n priority: args.priority as string,\n });\n\n return {\n ticketId: ticket.id,\n estimatedResponse: getEstimatedTime(args.priority),\n };\n },\n },\n});\n```\n\n## Tool Best Practices\n\n### 1. Clear Descriptions\n\n```yaml\ntools:\n # Good - clear and specific\n get-user-account:\n description: >\n Retrieves the user\'s account information including name, email,\n subscription plan, and account creation date. Use this when the\n user asks about their account or you need to verify their identity.\n\n # Avoid - vague\n get-data:\n description: Gets some data\n```\n\n### 2. Document Constrained Values\n\n```yaml\ntools:\n create-support-ticket:\n parameters:\n priority:\n type: string\n description: Ticket priority level (low, medium, high, urgent)\n```\n',
+ excerpt: "Tools Tools extend what agents can do. Octavus supports multiple types: 1. External Tools \u2014 Defined in the protocol, implemented in your backend (this page) 2. Built-in Tools \u2014 Provider-agnostic...",
  order: 4
  },
  {
@@ -1334,7 +1334,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "protocol",
  title: "Agent Config",
  description: "Configuring the agent model and behavior.",
- content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. 
Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. 
Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. 
The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. 
When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: 
[data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n```\n\nEach thread can have its own skills, references, and image model. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
1337
+ content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. 
Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. 
Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. 
The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. 
When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). 
For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. 
Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
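The agent-config entry above notes that a dynamic `model` value is "validated at runtime to ensure it's in the correct `provider/model-id` format." As a rough illustration of what such a check involves, here is a small TypeScript sketch; the helper name `isValidModelRef` and the provider list are assumptions for illustration, not the SDK's actual API (the docs list Anthropic, Google, and OpenAI as supported providers):

```typescript
// Illustrative check for the "provider/model-id" format described in the
// Dynamic Model Selection section. Names here are hypothetical.
const SUPPORTED_PROVIDERS = ["anthropic", "google", "openai"] as const;

function isValidModelRef(model: string): boolean {
  const slash = model.indexOf("/");
  // Require a non-empty provider prefix and a non-empty model id.
  if (slash <= 0 || slash === model.length - 1) return false;
  const provider = model.slice(0, slash);
  return (SUPPORTED_PROVIDERS as readonly string[]).includes(provider);
}

console.log(isValidModelRef("anthropic/claude-sonnet-4-5")); // true
console.log(isValidModelRef("gpt-4o")); // false: missing provider prefix
```

Passing a value like `gpt-4o` (no provider prefix) from session input would fail this kind of check before any provider SDK is called, which is why the docs recommend validating the `MODEL` input early.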
1338
1338
  excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
1339
1339
  order: 7
1340
1340
  },
@@ -1343,7 +1343,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1343
1343
  section: "protocol",
1344
1344
  title: "Provider Options",
1345
1345
  description: "Configuring provider-specific tools and features.",
1346
- content: "\n# Provider Options\n\nProvider options let you enable provider-specific features like Anthropic's built-in tools and skills. These features run server-side on the provider's infrastructure.\n\n> **Note**: For provider-agnostic code execution, use [Octavus Skills](/docs/protocol/skills) instead. Octavus Skills work with any LLM provider and run in isolated sandbox environments.\n\n## Anthropic Options\n\nConfigure Anthropic-specific features when using `anthropic/*` models:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n```\n\n> **Note**: Provider options are validated against the model provider. Using `anthropic:` options with non-Anthropic models will result in a validation error.\n\n## Provider Tools\n\nProvider tools are executed server-side by the provider (Anthropic). Unlike external tools that you implement, provider tools are built-in capabilities.\n\n### Available Tools\n\n| Tool | Description |\n| ---------------- | -------------------------------------------- |\n| `web-search` | Search the web for current information |\n| `code-execution` | Execute Python/Bash in a sandboxed container |\n\n### Tool Configuration\n\n```yaml\nanthropic:\n tools:\n web-search:\n display: description # How to show in UI\n description: Searching... 
# Custom display text\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users during execution |\n\n### Web Search\n\nAllows the agent to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Looking up current information\n```\n\nUse cases:\n\n- Current events and news\n- Real-time data (prices, weather)\n- Fact verification\n- Documentation lookups\n\n### Code Execution\n\nEnables Python and Bash execution in a sandboxed container:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n code-execution:\n display: description\n description: Running analysis\n```\n\nUse cases:\n\n- Data analysis and calculations\n- File processing\n- Chart generation\n- Script execution\n\n> **Note**: Code execution is automatically enabled when skills are configured (skills require the container environment).\n\n## Skills\n\n> **Important**: This section covers **Anthropic's built-in skills** (provider-specific). For provider-agnostic skills that work with any LLM, see [Octavus Skills](/docs/protocol/skills).\n\nAnthropic skills are knowledge packages that give the agent specialized capabilities. 
They're loaded into Anthropic's code execution container at `/skills/{skill-id}/` and only work with Anthropic models.\n\n### Skill Configuration\n\n```yaml\nanthropic:\n skills:\n pdf:\n type: anthropic # 'anthropic' or 'custom'\n version: latest # Optional version\n display: description\n description: Processing PDF\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `type` | Yes | `anthropic` (built-in) or `custom` (uploaded) |\n| `version` | No | Skill version (default: `latest`) |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users |\n\n### Built-in Skills\n\nAnthropic provides several built-in skills:\n\n| Skill ID | Purpose |\n| -------- | ----------------------------------------------- |\n| `pdf` | PDF manipulation, text extraction, form filling |\n| `xlsx` | Excel spreadsheet operations and analysis |\n| `docx` | Word document creation and editing |\n| `pptx` | PowerPoint presentation creation |\n\n### Using Skills\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n skills:\n pdf:\n type: anthropic\n description: Processing your PDF\n xlsx:\n type: anthropic\n description: Analyzing spreadsheet\n```\n\nWhen skills are configured:\n\n1. Code execution is automatically enabled\n2. Skill files are loaded into the container\n3. 
The agent can read skill instructions and execute scripts\n\n### Custom Skills\n\nYou can create and upload custom skills to Anthropic:\n\n```yaml\nanthropic:\n skills:\n custom-analysis:\n type: custom\n version: latest\n description: Running custom analysis\n```\n\nCustom skills follow the [Agent Skills standard](https://agentskills.io) and contain:\n\n- `SKILL.md` with instructions and metadata\n- Optional `scripts/`, `references/`, and `assets/` directories\n\n### Octavus Skills vs Anthropic Skills\n\n| Feature | Anthropic Skills | Octavus Skills |\n| ----------------- | ------------------------ | ----------------------------- |\n| **Provider** | Anthropic only | Any (agnostic) |\n| **Execution** | Anthropic's container | Isolated sandbox |\n| **Configuration** | `agent.anthropic.skills` | `agent.skills` |\n| **Definition** | `anthropic:` section | `skills:` section |\n| **Use Case** | Claude-specific features | Cross-provider code execution |\n\nFor provider-agnostic code execution, use Octavus Skills defined in the protocol's `skills:` section and enabled via `agent.skills`. 
See [Skills](/docs/protocol/skills) for details.\n\n## Display Modes\n\nBoth tools and skills support display modes:\n\n| Mode | Behavior |\n| ------------- | ------------------------------- |\n| `hidden` | Not shown to users |\n| `name` | Shows the tool/skill name |\n| `description` | Shows the description (default) |\n| `stream` | Streams progress if available |\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input: [COMPANY_NAME, USER_ID]\n tools: [get-user-account] # External tools\n agentic: true\n thinking: medium\n\n # Anthropic-specific options\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n xlsx:\n type: anthropic\n display: description\n description: Analyzing spreadsheet\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n\n## Validation\n\nThe protocol validator enforces:\n\n1. **Model match**: Provider options must match the model provider\n - `anthropic:` options require `anthropic/*` model\n - Using mismatched options results in a validation error\n\n2. **Valid tool types**: Only recognized tools are accepted\n - `web-search` and `code-execution` for Anthropic\n\n3. 
**Valid skill types**: Only `anthropic` or `custom` are accepted\n\n### Error Example\n\n```yaml\n# This will fail validation\nagent:\n model: openai/gpt-4o # OpenAI model\n anthropic: # Anthropic options - mismatch!\n tools:\n web-search: {}\n```\n\nError: `\"anthropic\" options require an anthropic model. Current model provider: \"openai\"`\n",
1346
+ content: "\n# Provider Options\n\nProvider options let you enable provider-specific features like Anthropic's built-in tools and skills. These features run server-side on the provider's infrastructure.\n\n> **Note**: For provider-agnostic code execution, use [Octavus Skills](/docs/protocol/skills) instead. Octavus Skills work with any LLM provider and run in isolated sandbox environments.\n\n## Anthropic Options\n\nConfigure Anthropic-specific features when using `anthropic/*` models:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n```\n\n> **Note**: Provider options are validated against the model provider. Using `anthropic:` options with non-Anthropic models will result in a validation error.\n\n## Provider Tools\n\nProvider tools are executed server-side by the provider (Anthropic). Unlike external tools that you implement, provider tools are built-in capabilities.\n\n### Available Tools\n\n| Tool | Description |\n| ---------------- | -------------------------------------------- |\n| `web-search` | Search the web for current information |\n| `code-execution` | Execute Python/Bash in a sandboxed container |\n\n### Tool Configuration\n\n```yaml\nanthropic:\n tools:\n web-search:\n display: description # How to show in UI\n description: Searching... 
# Custom display text\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users during execution |\n\n### Web Search\n\nAllows the agent to search the web using Anthropic's built-in web search:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Looking up current information\n```\n\n> **Tip**: Octavus also provides a **provider-agnostic** web search via `webSearch: true` in the agent config. This works with any LLM provider and is the recommended approach for multi-provider agents. See [Web Search](/docs/protocol/agent-config#web-search) for details.\n\n### Code Execution\n\nEnables Python and Bash execution in a sandboxed container:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n code-execution:\n display: description\n description: Running analysis\n```\n\nUse cases:\n\n- Data analysis and calculations\n- File processing\n- Chart generation\n- Script execution\n\n> **Note**: Code execution is automatically enabled when skills are configured (skills require the container environment).\n\n## Skills\n\n> **Important**: This section covers **Anthropic's built-in skills** (provider-specific). For provider-agnostic skills that work with any LLM, see [Octavus Skills](/docs/protocol/skills).\n\nAnthropic skills are knowledge packages that give the agent specialized capabilities. 
They're loaded into Anthropic's code execution container at `/skills/{skill-id}/` and only work with Anthropic models.\n\n### Skill Configuration\n\n```yaml\nanthropic:\n skills:\n pdf:\n type: anthropic # 'anthropic' or 'custom'\n version: latest # Optional version\n display: description\n description: Processing PDF\n```\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------------------- |\n| `type` | Yes | `anthropic` (built-in) or `custom` (uploaded) |\n| `version` | No | Skill version (default: `latest`) |\n| `display` | No | `hidden`, `name`, `description`, or `stream` (default: `description`) |\n| `description` | No | Custom text shown to users |\n\n### Built-in Skills\n\nAnthropic provides several built-in skills:\n\n| Skill ID | Purpose |\n| -------- | ----------------------------------------------- |\n| `pdf` | PDF manipulation, text extraction, form filling |\n| `xlsx` | Excel spreadsheet operations and analysis |\n| `docx` | Word document creation and editing |\n| `pptx` | PowerPoint presentation creation |\n\n### Using Skills\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n anthropic:\n skills:\n pdf:\n type: anthropic\n description: Processing your PDF\n xlsx:\n type: anthropic\n description: Analyzing spreadsheet\n```\n\nWhen skills are configured:\n\n1. Code execution is automatically enabled\n2. Skill files are loaded into the container\n3. 
The agent can read skill instructions and execute scripts\n\n### Custom Skills\n\nYou can create and upload custom skills to Anthropic:\n\n```yaml\nanthropic:\n skills:\n custom-analysis:\n type: custom\n version: latest\n description: Running custom analysis\n```\n\nCustom skills follow the [Agent Skills standard](https://agentskills.io) and contain:\n\n- `SKILL.md` with instructions and metadata\n- Optional `scripts/`, `references/`, and `assets/` directories\n\n### Octavus Skills vs Anthropic Skills\n\n| Feature | Anthropic Skills | Octavus Skills |\n| ----------------- | ------------------------ | ----------------------------- |\n| **Provider** | Anthropic only | Any (agnostic) |\n| **Execution** | Anthropic's container | Isolated sandbox |\n| **Configuration** | `agent.anthropic.skills` | `agent.skills` |\n| **Definition** | `anthropic:` section | `skills:` section |\n| **Use Case** | Claude-specific features | Cross-provider code execution |\n\nFor provider-agnostic code execution, use Octavus Skills defined in the protocol's `skills:` section and enabled via `agent.skills`. 
See [Skills](/docs/protocol/skills) for details.\n\n## Display Modes\n\nBoth tools and skills support display modes:\n\n| Mode | Behavior |\n| ------------- | ------------------------------- |\n| `hidden` | Not shown to users |\n| `name` | Shows the tool/skill name |\n| `description` | Shows the description (default) |\n| `stream` | Streams progress if available |\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input: [COMPANY_NAME, USER_ID]\n tools: [get-user-account] # External tools\n agentic: true\n thinking: medium\n\n # Anthropic-specific options\n anthropic:\n # Provider tools (server-side)\n tools:\n web-search:\n display: description\n description: Searching the web\n code-execution:\n display: description\n description: Running code\n\n # Skills (knowledge packages)\n skills:\n pdf:\n type: anthropic\n display: description\n description: Processing PDF document\n xlsx:\n type: anthropic\n display: description\n description: Analyzing spreadsheet\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n\n## Validation\n\nThe protocol validator enforces:\n\n1. **Model match**: Provider options must match the model provider\n - `anthropic:` options require `anthropic/*` model\n - Using mismatched options results in a validation error\n\n2. **Valid tool types**: Only recognized tools are accepted\n - `web-search` and `code-execution` for Anthropic\n\n3. 
**Valid skill types**: Only `anthropic` or `custom` are accepted\n\n### Error Example\n\n```yaml\n# This will fail validation\nagent:\n model: openai/gpt-4o # OpenAI model\n anthropic: # Anthropic options - mismatch!\n tools:\n web-search: {}\n```\n\nError: `\"anthropic\" options require an anthropic model. Current model provider: \"openai\"`\n",
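The Validation section of the provider-options entry above states that `anthropic:` options require an `anthropic/*` model and quotes the resulting error message. A minimal TypeScript sketch of that rule, assuming a simplified config shape (the `AgentConfig` interface and `validateProviderOptions` name are illustrative, not the actual validator):

```typescript
// Hedged sketch of the provider/option mismatch rule: provider-specific
// option blocks must match the model's provider prefix.
interface AgentConfig {
  model: string; // "provider/model-id"
  anthropic?: Record<string, unknown>; // Anthropic-specific options, if any
}

function validateProviderOptions(config: AgentConfig): string | null {
  const provider = config.model.split("/")[0];
  if (config.anthropic !== undefined && provider !== "anthropic") {
    // Mirrors the error message quoted in the docs.
    return `"anthropic" options require an anthropic model. Current model provider: "${provider}"`;
  }
  return null; // null = config passes this check
}
```

Run against the error example from the docs, `{ model: "openai/gpt-4o", anthropic: { tools: { "web-search": {} } } }` produces the quoted mismatch error, while an `anthropic/*` model with the same options passes.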
1347
1347
  excerpt: "Provider Options Provider options let you enable provider-specific features like Anthropic's built-in tools and skills. These features run server-side on the provider's infrastructure. > Note: For...",
1348
1348
  order: 8
1349
1349
  },
@@ -1370,7 +1370,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1370
1370
  section: "protocol",
1371
1371
  title: "Workers",
1372
1372
  description: "Defining worker agents for background and task-based execution.",
1373
- content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: 
research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. 
Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level 
|\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: 
serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills and Image Generation\n\nWorkers can use Octavus skills and image generation, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n maxSteps: 10\n```\n\nWorkers define their own skills independently -- they don\'t inherit skills from a parent interactive agent. 
Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n\n// Streaming: observe events in real-time\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
+ content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: 
research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. 
Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `webSearch` | Enable built-in web 
search tool |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # 
Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills, Image Generation, and Web Search\n\nWorkers can use Octavus skills, image generation, and web search, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n webSearch: true\n maxSteps: 10\n```\n\nWorkers define their own skills independently \u2014 they don\'t inherit skills from a parent interactive agent. 
Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n\n// Streaming: observe events in real-time\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap worker tools to parent tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
  excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
  order: 11
  },
@@ -1504,4 +1504,4 @@ export {
  getDocSlugs,
  getSectionBySlug
  };
- //# sourceMappingURL=chunk-SAB5XUB6.js.map
+ //# sourceMappingURL=chunk-SNBEHHFU.js.map