@octavus/docs 2.10.0 → 2.11.0
This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the versions as they appear in their public registries.
- package/content/02-server-sdk/01-overview.md +16 -0
- package/content/02-server-sdk/06-workers.md +218 -143
- package/content/04-protocol/01-overview.md +26 -4
- package/content/04-protocol/05-skills.md +43 -7
- package/content/04-protocol/06-handlers.md +3 -0
- package/content/04-protocol/07-agent-config.md +18 -13
- package/content/04-protocol/09-skills-advanced.md +50 -29
- package/content/04-protocol/11-workers.md +40 -5
- package/dist/{chunk-HPVIPOLY.js → chunk-6TO62UOU.js} +13 -13
- package/dist/chunk-6TO62UOU.js.map +1 -0
- package/dist/chunk-EIUCL4CP.js +1489 -0
- package/dist/chunk-EIUCL4CP.js.map +1 -0
- package/dist/{chunk-RZZE5BMI.js → chunk-H6M6M3MY.js} +23 -23
- package/dist/chunk-H6M6M3MY.js.map +1 -0
- package/dist/chunk-NCTX3Y2J.js +1489 -0
- package/dist/chunk-NCTX3Y2J.js.map +1 -0
- package/dist/content.js +1 -1
- package/dist/docs.json +12 -12
- package/dist/index.js +1 -1
- package/dist/search-index.json +1 -1
- package/dist/search.js +1 -1
- package/dist/search.js.map +1 -1
- package/dist/sections.json +12 -12
- package/package.json +1 -1
- package/dist/chunk-HPVIPOLY.js.map +0 -1
- package/dist/chunk-RZZE5BMI.js.map +0 -1
@@ -576,7 +576,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Skills",
description: "Using Octavus skills for code execution and specialized capabilities.",
-
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. For long-running operations, you can configure a custom timeout using `sandboxTimeout` in the agent config:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
+
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available. 
Skills work in both interactive agents and workers.\n\n### Interactive Agents\n\nReference skills in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code]\n agentic: true\n```\n\n### Workers and Named Threads\n\nReference skills per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n maxSteps: 10\n```\n\nThis also works for named threads in interactive agents, allowing different threads to have different skills.\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... 
code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. 
Maximum: 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
order: 5
},
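The headline change in this hunk is the new "Workers and Named Threads" subsection: skills can now be referenced per thread via `start-thread.skills`, not only via `agent.skills`. A minimal sketch of the pattern the updated page describes, reusing the doc's own `qr-code` skill and `worker` thread name (the step name is illustrative):

```yaml
skills:
  qr-code:
    display: description
    description: Generating QR codes

steps:
  Start thread:
    block: start-thread
    thread: worker
    model: anthropic/claude-sonnet-4-5
    system: system
    skills: [qr-code]   # skills scoped to this thread only
    maxSteps: 10
```

Per the new text, the same per-thread scoping applies to named threads in interactive agents, so different threads can carry different skills.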
@@ -585,7 +585,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Handlers",
description: "Defining execution handlers with blocks.",
-
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: 
start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
+
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n 
imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
excerpt: "Handlers Handlers define what happens when a trigger fires. They contain execution blocks that run in sequence. Handler Structure Each block has a human-readable name (shown in debug UI) and a ...",
order: 6
},
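This hunk extends the `start-thread` block reference with three new fields: `skills`, `sandboxTimeout`, and `imageModel`. A sketch combining them as the updated reference documents (the thread name `analysis` is illustrative; the default and 1-hour cap come from the new text):

```yaml
Start analysis thread:
  block: start-thread
  thread: analysis
  model: anthropic/claude-sonnet-4-5
  system: system
  skills: [data-analysis]                     # Octavus skills for this thread
  sandboxTimeout: 600000                      # 10 min; default 5 min, max 1 hour
  imageModel: google/gemini-2.5-flash-image   # image generation model for this thread
```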
@@ -594,8 +594,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Agent Config",
description: "Configuring the agent model and behavior.",
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. 
Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n 
description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
-
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. 
Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n```\n\nEach thread can have its own skills and image model. Skills referenced here must be defined in the protocol's `skills:` section. 
Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
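A minimal TypeScript sketch of the dynamic model selection flow documented above, assuming a Server SDK client shaped like the `agentSessions.create` example in that content; the `OctavusClient` type, `createSessionWithModel` helper, and its regex are illustrative only, since the platform performs its own runtime validation of the `provider/model-id` format:

```typescript
// Hypothetical client type: only the one method used here is modeled.
type OctavusClient = {
  agentSessions: {
    create(agent: string, input: Record<string, string>): Promise<string>;
  };
};

// Mirrors the providers listed in the Supported Providers table above.
const PROVIDER_MODEL = /^(anthropic|google|openai)\/[A-Za-z0-9._-]+$/;

/** Create a session with a caller-chosen model, rejecting malformed ids early. */
async function createSessionWithModel(
  client: OctavusClient,
  model: string,
): Promise<string> {
  if (!PROVIDER_MODEL.test(model)) {
    throw new Error(`Expected provider/model-id format, got: ${model}`);
  }
  // The platform validates the resolved value again at runtime.
  return client.agentSessions.create('my-agent', { MODEL: model });
}
```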
598 +
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
599 599
order: 7
600 600
},
601 601
{
@@ -612,7 +612,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
612 612
section: "protocol",
613 613
title: "Skills Advanced Guide",
614 614
description: "Best practices and advanced patterns for using Octavus skills.",
615 -
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout, which can be configured via `sandboxTimeout`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes for long-running analysis\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n**Timeout guidelines:**\n\n- **Short operations** (default 5 min): QR codes, simple calculations\n- **Medium operations** (10-30 min): Data analysis, report generation\n- **Long operations** (30+ min): Complex processing, large dataset analysis\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute sandbox timeout\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
615 +
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. 
Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
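To make the error-message guidance above concrete, here is a small, self-contained sketch of a skill script following that validate-then-exit pattern; the script name, CLI shape, and `result.txt` filename are illustrative, not part of any shipped skill:

```python
#!/usr/bin/env python3
"""Illustrative skill script: validate input, fail loudly, write to /output/."""
import os
import sys


def main() -> None:
    # Read the payload from argv; a real skill might use argparse instead.
    if len(sys.argv) < 2 or not sys.argv[1].strip():
        print("Error: Data is required")
        sys.exit(1)

    data = sys.argv[1]
    if len(data) > 1000:
        print("Error: Data too long (max 1000 characters)")
        sys.exit(1)

    # Files saved under OUTPUT_DIR are captured automatically after execution.
    output_dir = os.environ.get("OUTPUT_DIR", "/output")
    with open(os.path.join(output_dir, "result.txt"), "w") as f:
        f.write(data)
    print("OK: wrote result.txt")


if __name__ == "__main__":
    main()
```

A non-zero exit with a clear message on stdout is what lets the LLM retry with corrected arguments or explain the failure to the user.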
616 616
excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
617 617
order: 9
618 618
},
@@ -630,7 +630,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
630 630
section: "protocol",
631 631
title: "Workers",
632 632
description: "Defining worker agents for background and task-based execution.",
633 -
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
633 +
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills and Image Generation\n\nWorkers can use Octavus skills and image generation, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n maxSteps: 10\n```\n\nWorkers define their own skills independently -- they don\'t inherit skills from a parent interactive agent. Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
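A rough sketch of draining a worker's event stream to capture its output value, building on the `workers.execute` snippet in the content above; the async-iterable shape and the `{ type, output }` fields on `worker-result` are assumptions for illustration (see Server SDK Workers for the authoritative event types), and `client`, `agentId`, and `searchWeb` are presumed to exist as in that snippet:

```typescript
// Assumed minimal event shape; real events carry more fields.
type WorkerEvent = { type: string; output?: unknown };

const events = client.workers.execute(
  agentId,
  { TOPIC: 'AI safety' },
  {
    tools: {
      'web-search': async (args: { query: string }) => searchWeb(args.query),
    },
  },
) as AsyncIterable<WorkerEvent>;

let result: unknown;
for await (const event of events) {
  if (event.type === 'worker-result') {
    result = event.output; // value of the worker's `output` variable
  }
}
console.log('Worker returned:', result);
```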
634 634
excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
635 635
order: 11
636 636
},
@@ -1307,7 +1307,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1307 1307
section: "protocol",
1308 1308
title: "Skills",
1309 1309
description: "Using Octavus skills for code execution and specialized capabilities.",
1310 -
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. For long-running operations, you can configure a custom timeout using `sandboxTimeout` in the agent config:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
|
|
1310
|
+
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available. 
Skills work in both interactive agents and workers.\n\n### Interactive Agents\n\nReference skills in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code]\n agentic: true\n```\n\n### Workers and Named Threads\n\nReference skills per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n maxSteps: 10\n```\n\nThis also works for named threads in interactive agents, allowing different threads to have different skills.\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... 
code to generate QR code ...\n```\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n  # Good - clear purpose\n  qr-code:\n    description: Generating QR codes for URLs, contact info, or any text data\n\n  # Avoid - vague\n  utility:\n    description: Does stuff\n```\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When          | Use Tools When                |\n| ------------------------ | ----------------------------- |\n| Code execution needed    | Simple API calls              |\n| File generation          | Database queries              |\n| Complex calculations     | External service integration  |\n| Data processing          | Authentication required       |\n| Provider-agnostic needed | Backend-specific logic        |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n  qr-code:\n    display: description\n    description: Generating QR codes\n  data-analysis:\n    display: description\n    description: Analyzing data\n  pdf-processor:\n    display: description\n    description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n  # Background processing - hide from user\n  data-analysis:\n    display: hidden\n\n  # User-facing generation - show description\n  qr-code:\n    display: description\n\n  # Interactive progress - stream updates\n  report-generation:\n    display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature            | Octavus Skills    | External Tools      | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution**      | Isolated sandbox  | Your backend        | Provider servers      |\n| **Provider**       | Any (agnostic)    | N/A                 | Provider-specific     |\n| **Code Execution** | Yes               | No                  | Yes (provider tools)  |\n| **File Output**    | Yes               | No                  | Yes (provider skills) |\n| **Implementation** | Skill packages    | Your code           | Built-in              |\n| **Cost**           | Sandbox + LLM API | Your infrastructure | Included in API       |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n  custom-analysis:\n    display: description\n    description: Custom analysis tool\n\nagent:\n  skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n  model: anthropic/claude-sonnet-4-5\n  skills: [data-analysis]\n  sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n  Start thread:\n    block: start-thread\n    thread: analysis\n    model: anthropic/claude-sonnet-4-5\n    skills: [data-analysis]\n    sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. 
Maximum: 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
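The per-thread `skills` and `sandboxTimeout` options above compose into a complete worker. A minimal sketch, assuming a `data-analysis` skill is defined and using an illustrative `report` thread name:

```yaml
skills:
  data-analysis:
    display: description
    description: Analyzing data and generating reports

steps:
  Start report thread:
    block: start-thread
    thread: report                # illustrative thread name
    model: anthropic/claude-sonnet-4-5
    system: system
    skills: [data-analysis]       # resolved from the skills: section above
    sandboxTimeout: 1800000       # 30 minutes; overrides the 5-minute default
    maxSteps: 10

  Generate report:
    block: next-message
    thread: report
    output: REPORT                # analysis result stored in a variable
```

Sandbox creation stays lazy: the 30-minute budget applies only once the thread actually calls a skill tool.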
|
|
1311
1311
|
excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
|
|
1312
1312
|
order: 5
|
|
1313
1313
|
},
|
|
@@ -1316,7 +1316,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1316
1316
|
section: "protocol",
|
|
1317
1317
|
title: "Handlers",
|
|
1318
1318
|
description: "Defining execution handlers with blocks.",
|
|
1319
|
-
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: 
start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
|
|
1319
|
+
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n 
imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
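As a worked illustration of the deterministic pipeline the `generate-image` section describes (prompt engineered in a separate thread, then handed to the block), here is a sketch in which the `generate-image-request` trigger, its `USER_REQUEST` input, and both prompt files are hypothetical:

```yaml
handlers:
  generate-image-request:
    # Engineer the image prompt in an isolated thread
    Start prompt thread:
      block: start-thread
      thread: prompt-engineering
      model: anthropic/claude-sonnet-4-5
      system: image-prompt-system   # hypothetical prompt file
      maxSteps: 1

    Add request:
      block: add-message
      thread: prompt-engineering
      role: user
      prompt: image-request         # hypothetical prompt file
      input: [USER_REQUEST]

    Optimize prompt:
      block: next-message
      thread: prompt-engineering
      independent: true             # keep the engineered prompt out of the main chat
      output: OPTIMIZED_PROMPT

    # Deterministic generation from the engineered prompt
    Generate image:
      block: generate-image
      prompt: OPTIMIZED_PROMPT
      imageModel: google/gemini-2.5-flash-image
      size: 1024x1024
      output: GENERATED_IMAGE
      description: Generating your image...
```

Because `independent: true` keeps the intermediate output in a variable, the user only sees the `generate-image` block's description while the image is produced.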
|
|
1320
1320
|
excerpt: "Handlers Handlers define what happens when a trigger fires. They contain execution blocks that run in sequence. Handler Structure Each block has a human-readable name (shown in debug UI) and a ...",
|
|
1321
1321
|
order: 6
|
|
1322
1322
|
},
|
|
@@ -1325,8 +1325,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1325
1325
|
section: "protocol",
|
|
1326
1326
|
title: "Agent Config",
|
|
1327
1327
|
description: "Configuring the agent model and behavior.",
|
|
1328
|
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. 
Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n 
description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
|
|
1329
|
-
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field
|
|
1328
|
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. 
Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n```\n\nEach thread can have its own skills and image model. Skills referenced here must be defined in the protocol's `skills:` section. 
Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
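The new `sandboxTimeout` field slots into the agent config alongside `skills`; a minimal sketch mirroring the Skills page, assuming a `data-analysis` skill is defined in the protocol's `skills:` section:

```yaml
skills:
  data-analysis:
    display: description
    description: Analyzing data

agent:
  model: anthropic/claude-sonnet-4-5
  system: system
  skills: [data-analysis]
  sandboxTimeout: 1800000 # 30 minutes, in ms (maximum 3,600,000)
  agentic: true
```

Thread-level `sandboxTimeout` on a `start-thread` block still takes priority over this agent-level value.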
|
|
1329
|
+
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
|
|
1330
1330
|
order: 7
|
|
1331
1331
|
},
|
|
1332
1332
|
{
|
|
@@ -1343,7 +1343,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Skills Advanced Guide",
description: "Best practices and advanced patterns for using Octavus skills.",
- content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n  qr-code:\n    display: description\n    description: Generating QR codes\n  pdf-processor:\n    display: description\n    description: Processing PDFs\n  data-analysis:\n    display: description\n    description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n  qr-code:\n    display: description\n    description: Generating QR codes\n  data-analysis:\n    display: description\n    description: Analyzing data and generating reports\n  visualization:\n    display: description\n    description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n  # Background processing - hide from user\n  data-analysis:\n    display: hidden\n\n  # User-facing generation - show description\n  qr-code:\n    display: description\n\n  # Interactive progress - stream updates\n  report-generation:\n    display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n    filename = f'{output_dir}/output_{i}.png'\n    # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n    'files': ['chart.png', 'data.csv'],\n    'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n    json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n  skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout, which can be configured via `sandboxTimeout`:\n\n```yaml\nagent:\n  model: anthropic/claude-sonnet-4-5\n  skills: [data-analysis]\n  sandboxTimeout: 1800000 # 30 minutes for long-running analysis\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n**Timeout guidelines:**\n\n- **Short operations** (default 5 min): QR codes, simple calculations\n- **Medium operations** (10-30 min): Data analysis, report generation\n- **Long operations** (30+ min): Complex processing, large dataset analysis\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n  get-user-data:\n    description: Fetch user data from database\n    parameters:\n      userId: { type: string }\n\nskills:\n  data-analysis:\n    display: description\n    description: Analyzing data\n\nagent:\n  tools: [get-user-data]\n  skills: [data-analysis]\n  agentic: true\n\nhandlers:\n  analyze-user:\n    Get user data:\n      block: tool-call\n      tool: get-user-data\n      input:\n        userId: USER_ID\n      output: USER_DATA\n\n    Analyze:\n      block: next-message\n      # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n  Generate QR codes from text, URLs, or data. Use when the user needs to create\n  a QR code for any purpose - sharing links, contact information, WiFi credentials,\n  or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n    \u251C\u2500\u2500 generate.py # Main script\n    \u251C\u2500\u2500 utils.py # Helper functions\n    \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n    # ... code ...\nexcept ValueError as e:\n    print(f\"Error: Invalid input - {e}\")\n    sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n    print(\"Error: Data is required\")\n    sys.exit(1)\n\nif len(data) > 1000:\n    print(\"Error: Data too long (max 1000 characters)\")\n    sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute sandbox timeout\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
+ content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n  qr-code:\n    display: description\n    description: Generating QR codes\n  pdf-processor:\n    display: description\n    description: Processing PDFs\n\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n  qr-code:\n    display: description\n    description: Generating QR codes\n  data-analysis:\n    display: description\n    description: Analyzing data\n\nsteps:\n  Start analysis:\n    block: start-thread\n    thread: analysis\n    model: anthropic/claude-sonnet-4-5\n    system: system\n    skills: [qr-code, data-analysis]\n    maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n  qr-code:\n    display: description\n    description: Generating QR codes\n  data-analysis:\n    display: description\n    description: Analyzing data and generating reports\n  visualization:\n    display: description\n    description: Creating charts and visualizations\n\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n  # Background processing - hide from user\n  data-analysis:\n    display: hidden\n\n  # User-facing generation - show description\n  qr-code:\n    display: description\n\n  # Interactive progress - stream updates\n  report-generation:\n    display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n    filename = f'{output_dir}/output_{i}.png'\n    # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n    'files': ['chart.png', 'data.csv'],\n    'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n    json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n  skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n  model: anthropic/claude-sonnet-4-5\n  skills: [data-analysis]\n  sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n  Start thread:\n    block: start-thread\n    thread: analysis\n    skills: [data-analysis]\n    sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n  get-user-data:\n    description: Fetch user data from database\n    parameters:\n      userId: { type: string }\n\nskills:\n  data-analysis:\n    display: description\n    description: Analyzing data\n\nagent:\n  tools: [get-user-data]\n  skills: [data-analysis]\n  agentic: true\n\nhandlers:\n  analyze-user:\n    Get user data:\n      block: tool-call\n      tool: get-user-data\n      input:\n        userId: USER_ID\n      output: USER_DATA\n\n    Analyze:\n      block: next-message\n      # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n  Generate QR codes from text, URLs, or data. Use when the user needs to create\n  a QR code for any purpose - sharing links, contact information, WiFi credentials,\n  or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n    \u251C\u2500\u2500 generate.py # Main script\n    \u251C\u2500\u2500 utils.py # Helper functions\n    \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n    # ... code ...\nexcept ValueError as e:\n    print(f\"Error: Invalid input - {e}\")\n    sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n    print(\"Error: Data is required\")\n    sys.exit(1)\n\nif len(data) > 1000:\n    print(\"Error: Data too long (max 1000 characters)\")\n    sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
order: 9
},
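The substantive change in this hunk is the sandbox model: sandboxes are now scoped per `next-message` execution rather than per trigger, and `sandboxTimeout` set on a `start-thread` block overrides the agent-level value. A minimal protocol sketch of per-thread scoping, assuming hypothetical prompt files (`research-system`, `format-system`):

```yaml
skills:
  data-analysis:
    display: hidden
  qr-code:
    display: description
    description: Generating QR codes

steps:
  Start research:
    block: start-thread
    thread: research
    model: anthropic/claude-sonnet-4-5
    system: research-system
    skills: [data-analysis] # this thread's sandbox contains only data-analysis
    sandboxTimeout: 1800000 # thread-level value overrides any agent-level setting

  Start formatting:
    block: start-thread
    thread: formatting
    model: anthropic/claude-sonnet-4-5
    system: format-system
    skills: [qr-code] # separate sandbox, scoped to qr-code
```

Because each thread's sandbox holds only its listed skills, the two threads above never share files or state.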
@@ -1361,7 +1361,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Workers",
description: "Defining worker agents for background and task-based execution.",
- content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n  TOPIC:\n    type: string\n    description: Topic to research\n  DEPTH:\n    type: string\n    optional: true\n    default: medium\n\n# Variables for intermediate results\nvariables:\n  RESEARCH_DATA:\n    type: string\n  ANALYSIS:\n    type: string\n    description: Final analysis result\n\n# Tools available to the worker\ntools:\n  web-search:\n    description: Search the web\n    parameters:\n      query: { type: string }\n\n# Sequential execution steps\nsteps:\n  Start research:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    system: research-system\n    input: [TOPIC, DEPTH]\n    tools: [web-search]\n    maxSteps: 5\n\n  Add research request:\n    block: add-message\n    thread: research\n    role: user\n    prompt: research-prompt\n    input: [TOPIC, DEPTH]\n\n  Generate research:\n    block: next-message\n    thread: research\n    output: RESEARCH_DATA\n\n  Start analysis:\n    block: start-thread\n    thread: analysis\n    model: anthropic/claude-sonnet-4-5\n    system: analysis-system\n\n  Add analysis request:\n    block: add-message\n    thread: analysis\n    role: user\n    prompt: analysis-prompt\n    input: [RESEARCH_DATA]\n\n  Generate analysis:\n    block: next-message\n    thread: analysis\n    output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n  "slug": "research-assistant",\n  "name": "Research Assistant",\n  "description": "Researches topics and returns structured analysis",\n  "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n  Start thread A:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    tools: [tool-a]\n\n  Start thread B:\n    block: start-thread\n    thread: analysis\n    model: openai/gpt-4o\n    tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n  user-message:\n    Add message:\n      block: add-message\n      # ...\n\n# Worker: Steps execute in sequence\nsteps:\n  Add message:\n    block: add-message\n    # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n  RESULT:\n    type: string\n\nsteps:\n  # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n  Start research:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    system: research-system\n    input: [TOPIC]\n    tools: [web-search]\n    thinking: medium\n    maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n  CONVERSATION_SUMMARY:\n    type: string\n    description: Summary to generate a title for\n\n# Variables\nvariables:\n  TITLE:\n    type: string\n    description: The generated title\n\n# Steps\nsteps:\n  Start title thread:\n    block: start-thread\n    thread: title-gen\n    model: anthropic/claude-sonnet-4-5\n    system: title-system\n\n  Add title request:\n    block: add-message\n    thread: title-gen\n    role: user\n    prompt: title-request\n    input: [CONVERSATION_SUMMARY]\n\n  Generate title:\n    block: next-message\n    thread: title-gen\n    output: TITLE\n    display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:\n\n```yaml\ninput:\n  USER_MESSAGE:\n    type: string\n    description: The user\'s message to respond to\n  USER_ID:\n    type: string\n    description: User ID for account lookups\n    optional: true\n\ntools:\n  get-user-account:\n    description: Looking up account information\n    parameters:\n      userId: { type: string }\n  create-support-ticket:\n    description: Creating a support ticket\n    parameters:\n      summary: { type: string }\n      priority: { type: string }\n\nvariables:\n  ASSISTANT_RESPONSE:\n    type: string\n  CHAT_TRANSCRIPT:\n    type: string\n  CONVERSATION_SUMMARY:\n    type: string\n\nsteps:\n  # Thread 1: Chat with agentic tool calling\n  Start chat thread:\n    block: start-thread\n    thread: chat\n    model: anthropic/claude-sonnet-4-5\n    system: chat-system\n    input: [USER_ID]\n    tools: [get-user-account, create-support-ticket]\n    thinking: medium\n    maxSteps: 5\n\n  Add user message:\n    block: add-message\n    thread: chat\n    role: user\n    prompt: user-message\n    input: [USER_MESSAGE]\n\n  Generate response:\n    block: next-message\n    thread: chat\n    output: ASSISTANT_RESPONSE\n    display: stream\n\n  # Serialize for summary\n  Save conversation:\n    block: serialize-thread\n    thread: chat\n    output: CHAT_TRANSCRIPT\n\n  # Thread 2: Summary generation\n  Start summary thread:\n    block: start-thread\n    thread: summary\n    model: anthropic/claude-sonnet-4-5\n    system: summary-system\n    thinking: low\n\n  Add summary request:\n    block: add-message\n    thread: summary\n    role: user\n    prompt: summary-request\n    input: [CHAT_TRANSCRIPT]\n\n  Generate summary:\n    block: next-message\n    thread: summary\n    output: CONVERSATION_SUMMARY\n    display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n  agentId,\n  { TOPIC: \'AI safety\' },\n  {\n    tools: {\n      \'web-search\': async (args) => {\n        return await searchWeb(args.query);\n      },\n    },\n  },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. **Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n  generate-title:\n    description: Generating conversation title\n    display: description\n  research-assistant:\n    description: Researching topic\n    display: stream\n    tools:\n      search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n  request-human:\n    Generate title:\n      block: run-worker\n      worker: generate-title\n      input:\n        CONVERSATION_SUMMARY: SUMMARY\n      output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  workers: [generate-title, research-assistant]\n  agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n  research-assistant:\n    description: Research topics\n    tools:\n      search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
+ content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n  TOPIC:\n    type: string\n    description: Topic to research\n  DEPTH:\n    type: string\n    optional: true\n    default: medium\n\n# Variables for intermediate results\nvariables:\n  RESEARCH_DATA:\n    type: string\n  ANALYSIS:\n    type: string\n    description: Final analysis result\n\n# Tools available to the worker\ntools:\n  web-search:\n    description: Search the web\n    parameters:\n      query: { type: string }\n\n# Sequential execution steps\nsteps:\n  Start research:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    system: research-system\n    input: [TOPIC, DEPTH]\n    tools: [web-search]\n    maxSteps: 5\n\n  Add research request:\n    block: add-message\n    thread: research\n    role: user\n    prompt: research-prompt\n    input: [TOPIC, DEPTH]\n\n  Generate research:\n    block: next-message\n    thread: research\n    output: RESEARCH_DATA\n\n  Start analysis:\n    block: start-thread\n    thread: analysis\n    model: anthropic/claude-sonnet-4-5\n    system: analysis-system\n\n  Add analysis request:\n    block: add-message\n    thread: analysis\n    role: user\n    prompt: analysis-prompt\n    input: [RESEARCH_DATA]\n\n  Generate analysis:\n    block: next-message\n    thread: analysis\n    output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n  "slug": "research-assistant",\n  "name": "Research Assistant",\n  "description": "Researches topics and returns structured analysis",\n  "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n  Start thread A:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    tools: [tool-a]\n\n  Start thread B:\n    block: start-thread\n    thread: analysis\n    model: openai/gpt-4o\n    tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n  user-message:\n    Add message:\n      block: add-message\n      # ...\n\n# Worker: Steps execute in sequence\nsteps:\n  Add message:\n    block: add-message\n    # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n  RESULT:\n    type: string\n\nsteps:\n  # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n  Start research:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    system: research-system\n    input: [TOPIC]\n    tools: [web-search]\n    thinking: medium\n    maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n  CONVERSATION_SUMMARY:\n    type: string\n    description: Summary to generate a title for\n\n# Variables\nvariables:\n  TITLE:\n    type: string\n    description: The generated title\n\n# Steps\nsteps:\n  Start title thread:\n    block: start-thread\n    thread: title-gen\n    model: anthropic/claude-sonnet-4-5\n    system: title-system\n\n  Add title request:\n    block: add-message\n    thread: title-gen\n    role: user\n    prompt: title-request\n    input: [CONVERSATION_SUMMARY]\n\n  Generate title:\n    block: next-message\n    thread: title-gen\n    output: TITLE\n    display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:\n\n```yaml\ninput:\n  USER_MESSAGE:\n    type: string\n    description: The user\'s message to respond to\n  USER_ID:\n    type: string\n    description: User ID for account lookups\n    optional: true\n\ntools:\n  get-user-account:\n    description: Looking up account information\n    parameters:\n      userId: { type: string }\n  create-support-ticket:\n    description: Creating a support ticket\n    parameters:\n      summary: { type: string }\n      priority: { type: string }\n\nvariables:\n  ASSISTANT_RESPONSE:\n    type: string\n  CHAT_TRANSCRIPT:\n    type: string\n  CONVERSATION_SUMMARY:\n    type: string\n\nsteps:\n  # Thread 1: Chat with agentic tool calling\n  Start chat thread:\n    block: start-thread\n    thread: chat\n    model: anthropic/claude-sonnet-4-5\n    system: chat-system\n    input: [USER_ID]\n    tools: [get-user-account, create-support-ticket]\n    thinking: medium\n    maxSteps: 5\n\n  Add user message:\n    block: add-message\n    thread: chat\n    role: user\n    prompt: user-message\n    input: [USER_MESSAGE]\n\n  Generate response:\n    block: next-message\n    thread: chat\n    output: ASSISTANT_RESPONSE\n    display: stream\n\n  # Serialize for summary\n  Save conversation:\n    block: serialize-thread\n    thread: chat\n    output: CHAT_TRANSCRIPT\n\n  # Thread 2: Summary generation\n  Start summary thread:\n    block: start-thread\n    thread: summary\n    model: anthropic/claude-sonnet-4-5\n    system: summary-system\n    thinking: low\n\n  Add summary request:\n    block: add-message\n    thread: summary\n    role: user\n    prompt: summary-request\n    input: [CHAT_TRANSCRIPT]\n\n  Generate summary:\n    block: next-message\n    thread: summary\n    output: CONVERSATION_SUMMARY\n    display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills and Image Generation\n\nWorkers can use Octavus skills and image generation, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n  qr-code:\n    display: description\n    description: Generate QR codes\n\nsteps:\n  Start thread:\n    block: start-thread\n    thread: worker\n    model: anthropic/claude-sonnet-4-5\n    system: system\n    skills: [qr-code]\n    imageModel: google/gemini-2.5-flash-image\n    maxSteps: 10\n```\n\nWorkers define their own skills independently -- they don\'t inherit skills from a parent interactive agent. Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n  agentId,\n  { TOPIC: \'AI safety\' },\n  {\n    tools: {\n      \'web-search\': async (args) => {\n        return await searchWeb(args.query);\n      },\n    },\n  },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. **Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n  generate-title:\n    description: Generating conversation title\n    display: description\n  research-assistant:\n    description: Researching topic\n    display: stream\n    tools:\n      search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n  request-human:\n    Generate title:\n      block: run-worker\n      worker: generate-title\n      input:\n        CONVERSATION_SUMMARY: SUMMARY\n      output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  workers: [generate-title, research-assistant]\n  agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n  research-assistant:\n    description: Research topics\n    tools:\n      search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
order: 11
}
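The worker declaration and the `run-worker` invocation documented in this hunk appear as separate snippets in the page; read together on the parent side they form a single configuration. A consolidated sketch, assuming hypothetical variable names (`USER_TOPIC`, `RESEARCH_RESULT`) and a `web-search` entry in the parent's `tools:` section:

```yaml
workers:
  research-assistant:
    description: Researching topic
    display: stream
    tools:
      search: web-search # worker's "search" runs the parent's web-search handler

handlers:
  user-message:
    Research topic:
      block: run-worker
      worker: research-assistant
      input:
        TOPIC: USER_TOPIC # worker input TOPIC filled from parent variable USER_TOPIC
      output: RESEARCH_RESULT
```

With `display: stream`, the worker's events are forwarded to the user while it runs; `hidden` would run it silently.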
@@ -1486,4 +1486,4 @@ export {
getDocSlugs,
getSectionBySlug
};
- //# sourceMappingURL=chunk-HPVIPOLY.js.map
+ //# sourceMappingURL=chunk-6TO62UOU.js.map