@octavus/docs 2.16.0 → 2.17.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/02-server-sdk/01-overview.md +26 -0
- package/content/02-server-sdk/02-sessions.md +11 -0
- package/content/02-server-sdk/03-tools.md +4 -1
- package/content/02-server-sdk/08-computer.md +400 -0
- package/content/04-protocol/01-overview.md +9 -0
- package/content/04-protocol/04-tools.md +5 -4
- package/content/04-protocol/06-handlers.md +3 -1
- package/content/04-protocol/07-agent-config.md +65 -17
- package/content/04-protocol/13-mcp-servers.md +289 -0
- package/dist/chunk-4PNP4HF5.js +1549 -0
- package/dist/chunk-4PNP4HF5.js.map +1 -0
- package/dist/{chunk-NKLLG2WY.js → chunk-54ND2CTI.js} +5 -5
- package/dist/chunk-54ND2CTI.js.map +1 -0
- package/dist/chunk-B4A36GEV.js +1549 -0
- package/dist/chunk-B4A36GEV.js.map +1 -0
- package/dist/{chunk-4ABNR2ZK.js → chunk-CFDET7QG.js} +63 -23
- package/dist/chunk-CFDET7QG.js.map +1 -0
- package/dist/chunk-DKVYIFV7.js +1549 -0
- package/dist/chunk-DKVYIFV7.js.map +1 -0
- package/dist/{chunk-4WYG2JYA.js → chunk-UZWGRPRR.js} +47 -11
- package/dist/chunk-UZWGRPRR.js.map +1 -0
- package/dist/content.js +1 -1
- package/dist/docs.json +28 -10
- package/dist/index.js +1 -1
- package/dist/search-index.json +1 -1
- package/dist/search.js +1 -1
- package/dist/search.js.map +1 -1
- package/dist/sections.json +28 -10
- package/package.json +1 -1
- package/dist/chunk-4ABNR2ZK.js.map +0 -1
- package/dist/chunk-4WYG2JYA.js.map +0 -1
- package/dist/chunk-NKLLG2WY.js.map +0 -1
@@ -588,7 +588,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
     section: "protocol",
     title: "Handlers",
     description: "Defining execution handlers with blocks.",
-    content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection
+    content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n backupModel: openai/gpt-4o # Failover on provider errors\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection. The `backupModel` field follows the same format and supports variable references.\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
     excerpt: "Handlers Handlers define what happens when a trigger fires. They contain execution blocks that run in sequence. Handler Structure Each block has a human-readable name (shown in debug UI) and a ...",
     order: 6
   },
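The updated handlers doc has `start-thread` resolving `model` (and now `backupModel`) from input variables. As a minimal sketch of the consumer side, reusing the `client.agentSessions.create` call shown in the agent-config doc — the import path and client construction here are assumptions for illustration, not documented API:

```typescript
// Hypothetical client setup: only `client.agentSessions.create` appears in
// the docs; the module path and constructor shape are assumptions.
import { Octavus } from '@octavus/server';

const client = new Octavus({ apiKey: process.env.OCTAVUS_API_KEY! });

// SUMMARY_MODEL feeds the `model: SUMMARY_MODEL` reference in the
// start-thread block; it must be a valid `provider/model-id` string.
const sessionId = await client.agentSessions.create('my-agent', {
  SUMMARY_MODEL: 'anthropic/claude-sonnet-4-5',
});
```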
@@ -597,7 +597,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
     section: "protocol",
     title: "Agent Config",
     description: "Configuring the agent model and behavior.",
-    content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
+    content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `backupModel` | No | Backup model for automatic failover on provider errors |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## Backup Model\n\nConfigure a fallback model that activates automatically when the primary model encounters a transient provider error (rate limits, outages, timeouts):\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: openai/gpt-4o\n system: system\n```\n\nWhen a provider error occurs, the system retries once with the backup model. If the backup also fails, the original error is returned.\n\n**Key behaviors:**\n\n- Only transient provider errors trigger fallback \u2014 authentication and validation errors are not retried\n- Provider-specific options (like `anthropic:`) are only forwarded to the backup model if it uses the same provider\n- For streaming responses, fallback only occurs if no content has been sent to the client yet\n\nLike `model`, `backupModel` supports variable references:\n\n```yaml\ninput:\n BACKUP_MODEL:\n type: string\n description: Fallback model for provider errors\n\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: BACKUP_MODEL\n system: system\n```\n\n> **Tip**: Use a different provider for your backup model (e.g., primary on Anthropic, backup on OpenAI) to maximize resilience against single-provider outages.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n backupModel: openai/gpt-4o # Failover model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own model, backup model, skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: openai/gpt-4o\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
     excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
     order: 7
   },
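The new `backupModel` accepts variable references just like `model`. A minimal sketch of supplying both at session creation, assuming the protocol declares `MODEL` and `BACKUP_MODEL` under its `input:` section as in the docs above (the variable names are illustrative):

```typescript
// Both values are validated at runtime against the provider/model-id format.
const sessionId = await client.agentSessions.create('my-agent', {
  MODEL: 'anthropic/claude-sonnet-4-5', // primary model
  BACKUP_MODEL: 'openai/gpt-4o', // retried once on transient provider errors
});
```

Per the docs' own tip, putting the backup on a different provider guards against single-provider outages.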
@@ -1331,7 +1331,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
     section: "protocol",
     title: "Handlers",
     description: "Defining execution handlers with blocks.",
-    content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection
+    content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n backupModel: openai/gpt-4o # Failover on provider errors\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection. The `backupModel` field follows the same format and supports variable references.\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
     excerpt: "Handlers Handlers define what happens when a trigger fires. They contain execution blocks that run in sequence. Handler Structure Each block has a human-readable name (shown in debug UI) and a ...",
     order: 6
   },
@@ -1340,7 +1340,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
     section: "protocol",
     title: "Agent Config",
     description: "Configuring the agent model and behavior.",
-    content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
+    content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `backupModel` | No | Backup model for automatic failover on provider errors |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## Backup Model\n\nConfigure a fallback model that activates automatically when the primary model encounters a transient provider error (rate limits, outages, timeouts):\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: openai/gpt-4o\n system: system\n```\n\nWhen a provider error occurs, the system retries once with the backup model. If the backup also fails, the original error is returned.\n\n**Key behaviors:**\n\n- Only transient provider errors trigger fallback \u2014 authentication and validation errors are not retried\n- Provider-specific options (like `anthropic:`) are only forwarded to the backup model if it uses the same provider\n- For streaming responses, fallback only occurs if no content has been sent to the client yet\n\nLike `model`, `backupModel` supports variable references:\n\n```yaml\ninput:\n BACKUP_MODEL:\n type: string\n description: Fallback model for provider errors\n\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: BACKUP_MODEL\n system: system\n```\n\n> **Tip**: Use a different provider for your backup model (e.g., primary on Anthropic, backup on OpenAI) to maximize resilience against single-provider outages.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n backupModel: openai/gpt-4o # Failover model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own model, backup model, skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: openai/gpt-4o\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
     excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
     order: 7
   },
For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
1343 +
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `backupModel` | No | Backup model for automatic failover on provider errors |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `webSearch` | No | Enable built-in web search tool (provider-agnostic) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## Backup Model\n\nConfigure a fallback model that activates automatically when the primary model encounters a transient provider error (rate limits, outages, timeouts):\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: openai/gpt-4o\n system: system\n```\n\nWhen a provider error occurs, the system retries once with the backup model. If the backup also fails, the original error is returned.\n\n**Key behaviors:**\n\n- Only transient provider errors trigger fallback \u2014 authentication and validation errors are not retried\n- Provider-specific options (like `anthropic:`) are only forwarded to the backup model if it uses the same provider\n- For streaming responses, fallback only occurs if no content has been sent to the client yet\n\nLike `model`, `backupModel` supports variable references:\n\n```yaml\ninput:\n BACKUP_MODEL:\n type: string\n description: Fallback model for provider errors\n\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: BACKUP_MODEL\n system: system\n```\n\n> **Tip**: Use a different provider for your backup model (e.g., primary on Anthropic, backup on OpenAI) to maximize resilience against single-provider outages.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. 
The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. 
Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Web Search\n\nEnable the LLM to search the web for current information:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n webSearch: true\n agentic: true\n```\n\nWhen `webSearch` is enabled, the `octavus_web_search` tool becomes available. The LLM can decide when to search the web based on the conversation. Search results include source URLs that are emitted as citations in the UI.\n\nThis is a **provider-agnostic** built-in tool \u2014 it works with any LLM provider (Anthropic, Google, OpenAI, etc.). For Anthropic's own web search implementation, see [Provider Options](/docs/protocol/provider-options).\n\nUse cases:\n\n- Current events and real-time data\n- Fact verification and documentation lookups\n- Any information that may have changed since the model's training\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n backupModel: openai/gpt-4o # Failover model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n webSearch: true # Thread-specific web search\n```\n\nEach thread can have its own model, backup model, 
skills, references, image model, and web search setting. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n backupModel: openai/gpt-4o\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n webSearch: true # Built-in web search\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
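The Dynamic Model Selection section in the content above notes that model values are validated at runtime against the `provider/model-id` format. A minimal sketch of what such a check could look like, assuming only the three providers listed in the Supported Providers table; `parseModel` is illustrative, not part of the Octavus SDK:

```typescript
// Hypothetical validator for `provider/model-id` strings; the SDK's
// actual implementation may differ.
const SUPPORTED_PROVIDERS = ['anthropic', 'google', 'openai'] as const;
type Provider = (typeof SUPPORTED_PROVIDERS)[number];

function parseModel(value: string): { provider: Provider; modelId: string } {
  const slash = value.indexOf('/');
  // Require a non-empty provider and a non-empty model id.
  if (slash <= 0 || slash === value.length - 1) {
    throw new Error(`Invalid model "${value}": expected provider/model-id`);
  }
  const provider = value.slice(0, slash);
  if (!(SUPPORTED_PROVIDERS as readonly string[]).includes(provider)) {
    throw new Error(`Unsupported provider "${provider}"`);
  }
  return { provider: provider as Provider, modelId: value.slice(slash + 1) };
}

// parseModel('anthropic/claude-sonnet-4-5')
// => { provider: 'anthropic', modelId: 'claude-sonnet-4-5' }
```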
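The new Backup Model section describes retry-once failover on transient provider errors. A sketch of that control flow under stated assumptions: `generateWithModel` and `isTransientProviderError` are hypothetical stand-ins, not SDK APIs.

```typescript
type ModelId = string; // e.g. 'anthropic/claude-sonnet-4-5'

// Hypothetical stand-ins for the runtime's internal calls.
declare function generateWithModel(model: ModelId, prompt: string): Promise<string>;
declare function isTransientProviderError(err: unknown): boolean; // rate limit, outage, timeout

async function generateWithFailover(
  primary: ModelId,
  backup: ModelId | undefined,
  prompt: string,
): Promise<string> {
  try {
    return await generateWithModel(primary, prompt);
  } catch (err) {
    // Auth and validation errors are not retried, per the docs above.
    if (!backup || !isTransientProviderError(err)) throw err;
    try {
      // Retry exactly once with the backup model.
      return await generateWithModel(backup, prompt);
    } catch {
      throw err; // Backup also failed: surface the original error.
    }
  }
}
```

Note the streaming caveat from the docs: in a real runtime this fallback could only apply before any content has been streamed to the client.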
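The Input Mapping Formats subsection defines a label-to-source convention for prompt variables. A purely illustrative sketch of how the three documented shapes could normalize to one record:

```typescript
// Accepts the three documented shapes: string entries, { LABEL: SOURCE }
// entries inside an array, or a plain { LABEL: SOURCE } object.
type InputMapping = Array<string | Record<string, string>> | Record<string, string>;

function resolveInput(
  mapping: InputMapping,
  vars: Record<string, string>,
): Record<string, string> {
  const pairs: Array<[string, string]> = Array.isArray(mapping)
    ? mapping.flatMap((item): Array<[string, string]> =>
        typeof item === 'string' ? [[item, item]] : Object.entries(item),
      )
    : Object.entries(mapping);
  // Label (left) is what the prompt sees; source (right) names the variable.
  return Object.fromEntries(pairs.map(([label, source]) => [label, vars[source] ?? '']));
}

// resolveInput([{ CONTEXT: 'CONVERSATION_SUMMARY' }], { CONVERSATION_SUMMARY: 'hi' })
// => { CONTEXT: 'hi' }
```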
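Finally, the numbered loop under Agentic Mode maps naturally onto code. A compact sketch, with `callLLM` and `executeTool` as assumed placeholders rather than real SDK functions:

```typescript
interface ToolCall { name: string; args: Record<string, unknown> }
interface Turn { text?: string; toolCall?: ToolCall }

// Hypothetical placeholders for the model call and tool execution.
declare function callLLM(history: string[]): Promise<Turn>;
declare function executeTool(call: ToolCall): Promise<string>;

async function runAgentic(userMessage: string, maxSteps = 10): Promise<string> {
  const history = [userMessage];
  for (let step = 0; step < maxSteps; step++) {
    const turn = await callLLM(history);
    if (!turn.toolCall) return turn.text ?? ''; // LLM responded: loop ends
    // Tool result goes back into context and the LLM decides again.
    history.push(await executeTool(turn.toolCall));
  }
  return 'Stopped: maxSteps reached'; // runaway guard from the config
}
```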
1344 1344
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
1345 1345
order: 7
1346 1346
},
@@ -1510,4 +1510,4 @@ export {
1510 1510
getDocSlugs,
1511 1511
getSectionBySlug
1512 1512
};
1513 -
//# sourceMappingURL=chunk-NKLLG2WY.js.map
1513 +
//# sourceMappingURL=chunk-54ND2CTI.js.map