@octavus/docs 2.2.0 → 2.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/02-server-sdk/01-overview.md +2 -0
- package/content/02-server-sdk/06-workers.md +360 -0
- package/content/04-protocol/01-overview.md +17 -1
- package/content/04-protocol/05-skills.md +14 -1
- package/content/04-protocol/07-agent-config.md +6 -6
- package/content/04-protocol/09-skills-advanced.md +16 -5
- package/content/04-protocol/11-workers.md +480 -0
- package/content/07-migration/01-v1-to-v2.md +1 -1
- package/dist/{chunk-GI574O6S.js → chunk-SX6AIMRO.js} +53 -17
- package/dist/chunk-SX6AIMRO.js.map +1 -0
- package/dist/{chunk-KUB6BGPR.js → chunk-WQ7BTD5T.js} +51 -15
- package/dist/chunk-WQ7BTD5T.js.map +1 -0
- package/dist/content.js +1 -1
- package/dist/docs.json +26 -8
- package/dist/index.js +1 -1
- package/dist/search-index.json +1 -1
- package/dist/search.js +1 -1
- package/dist/search.js.map +1 -1
- package/dist/sections.json +26 -8
- package/package.json +3 -3
- package/dist/chunk-3ER2T7S7.js +0 -663
- package/dist/chunk-3ER2T7S7.js.map +0 -1
- package/dist/chunk-GI574O6S.js.map +0 -1
- package/dist/chunk-HFF2TVGV.js +0 -663
- package/dist/chunk-HFF2TVGV.js.map +0 -1
- package/dist/chunk-JGWOMZWD.js +0 -1435
- package/dist/chunk-JGWOMZWD.js.map +0 -1
- package/dist/chunk-KUB6BGPR.js.map +0 -1
- package/dist/chunk-S5JUVAKE.js +0 -1409
- package/dist/chunk-S5JUVAKE.js.map +0 -1
- package/dist/chunk-TMJG4CJH.js +0 -1409
- package/dist/chunk-TMJG4CJH.js.map +0 -1
- package/dist/chunk-WKCT4ABS.js +0 -1435
- package/dist/chunk-WKCT4ABS.js.map +0 -1
- package/dist/chunk-YJPO6KOJ.js +0 -1435
- package/dist/chunk-YJPO6KOJ.js.map +0 -1
- package/dist/chunk-ZSCRYD5P.js +0 -1409
- package/dist/chunk-ZSCRYD5P.js.map +0 -1
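To reproduce or inspect a diff like this locally, the npm CLI can compare the two published versions directly (a minimal sketch, assuming npm 7+ and access to the public registry):

```bash
# Fetch both published tarballs and print a unified diff of their contents
npm diff --diff=@octavus/docs@2.2.0 --diff=@octavus/docs@2.4.0
```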
@@ -23,7 +23,7 @@ var docs_default = [
     section: "server-sdk",
     title: "Overview",
     description: "Introduction to the Octavus Server SDK for backend integration.",
-    content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.
+    content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.3.0`\n\n## Installation\n\n```bash\nnpm install @octavus/server-sdk\n```\n\nFor agent management (sync, validate), install the CLI as a dev dependency:\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n```\n\n## Key Features\n\n### Agent Management\n\nAgent definitions are managed via the CLI. See the [CLI documentation](/docs/server-sdk/cli) for details.\n\n```bash\n# Sync agent from local files\noctavus sync ./agents/support-chat\n\n# Output: Created: support-chat\n# Agent ID: clxyz123abc456\n```\n\n### Session Management\n\nCreate and manage agent sessions using the agent ID:\n\n```typescript\n// Create a new session (use agent ID from CLI sync)\nconst sessionId = await client.agentSessions.create('clxyz123abc456', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n});\n\n// Get UI-ready session messages (for session restore)\nconst session = await client.agentSessions.getMessages(sessionId);\n```\n\n### Tool Handlers\n\nTools run on your server with your data:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-account': async (args) => {\n // Access your database, APIs, etc.\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n### Streaming\n\nAll responses stream in real-time:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() returns an async generator of events\nconst events = session.execute({\n type: 'trigger',\n triggerName: 'user-message',\n input: { USER_MESSAGE: 'Hello!' },\n});\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n## API Reference\n\n### OctavusClient\n\nThe main entry point for interacting with Octavus.\n\n```typescript\ninterface OctavusClientConfig {\n baseUrl: string; // Octavus API URL\n apiKey?: string; // Your API key\n}\n\nclass OctavusClient {\n readonly agents: AgentsApi;\n readonly agentSessions: AgentSessionsApi;\n readonly workers: WorkersApi;\n readonly files: FilesApi;\n\n constructor(config: OctavusClientConfig);\n}\n```\n\n### AgentSessionsApi\n\nManages agent sessions.\n\n```typescript\nclass AgentSessionsApi {\n // Create a new session\n async create(agentId: string, input?: Record<string, unknown>): Promise<string>;\n\n // Get full session state (for debugging/internal use)\n async get(sessionId: string): Promise<SessionState>;\n\n // Get UI-ready messages (for client display)\n async getMessages(sessionId: string): Promise<UISessionState>;\n\n // Attach to a session for triggering\n attach(sessionId: string, options?: SessionAttachOptions): AgentSession;\n}\n\n// Full session state (internal format)\ninterface SessionState {\n id: string;\n agentId: string;\n input: Record<string, unknown>;\n variables: Record<string, unknown>;\n resources: Record<string, unknown>;\n messages: ChatMessage[]; // Internal message format\n createdAt: string;\n updatedAt: string;\n}\n\n// UI-ready session state\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready messages for frontend\n}\n```\n\n### AgentSession\n\nHandles request execution and streaming for a specific session.\n\n```typescript\nclass AgentSession {\n // Execute a request and stream parsed events\n execute(request: SessionRequest, options?: TriggerOptions): AsyncGenerator<StreamEvent>;\n\n // Get the session ID\n getSessionId(): string;\n}\n\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n\n// Helper to convert events to SSE stream\nfunction toSSEStream(events: AsyncIterable<StreamEvent>): ReadableStream<Uint8Array>;\n```\n\n### FilesApi\n\nHandles file uploads for sessions.\n\n```typescript\nclass FilesApi {\n // Get presigned URLs for file uploads\n async getUploadUrls(sessionId: string, files: FileUploadRequest[]): Promise<UploadUrlsResponse>;\n}\n\ninterface FileUploadRequest {\n filename: string;\n mediaType: string;\n size: number;\n}\n\ninterface UploadUrlsResponse {\n files: {\n id: string; // File ID for references\n uploadUrl: string; // PUT to this URL\n downloadUrl: string; // GET URL after upload\n }[];\n}\n```\n\nThe client uploads files directly to S3 using the presigned upload URL. See [File Uploads](/docs/client-sdk/file-uploads) for the full integration pattern.\n\n## Next Steps\n\n- [Sessions](/docs/server-sdk/sessions) \u2014 Deep dive into session management\n- [Tools](/docs/server-sdk/tools) \u2014 Implementing tool handlers\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Workers](/docs/server-sdk/workers) \u2014 Executing worker agents\n",
     excerpt: "Server SDK Overview The package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation...",
     order: 1
   },
@@ -59,16 +59,25 @@ var docs_default = [
     section: "server-sdk",
     title: "CLI",
     description: "Command-line interface for validating and syncing agent definitions.",
-    content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.
+    content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.3.0`\n\n## Installation\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Configuration\n\nThe CLI requires an API key with the **Agents** permission.\n\n### Environment Variables\n\n| Variable | Description |\n| --------------------- | ---------------------------------------------- |\n| `OCTAVUS_CLI_API_KEY` | API key with "Agents" permission (recommended) |\n| `OCTAVUS_API_KEY` | Fallback if `OCTAVUS_CLI_API_KEY` not set |\n| `OCTAVUS_API_URL` | Optional, defaults to `https://octavus.ai` |\n\n### Two-Key Strategy (Recommended)\n\nFor production deployments, use separate API keys with minimal permissions:\n\n```bash\n# CI/CD or .env.local (not committed)\nOCTAVUS_CLI_API_KEY=oct_sk_... # "Agents" permission only\n\n# Production .env\nOCTAVUS_API_KEY=oct_sk_... # "Sessions" permission only\n```\n\nThis ensures production servers only have session permissions (smaller blast radius if leaked), while agent management is restricted to development/CI environments.\n\n### Multiple Environments\n\nUse separate Octavus projects for staging and production, each with their own API keys. The `--env` flag lets you load different environment files:\n\n```bash\n# Local development (default: .env)\noctavus sync ./agents/my-agent\n\n# Staging project\noctavus --env .env.staging sync ./agents/my-agent\n\n# Production project\noctavus --env .env.production sync ./agents/my-agent\n```\n\nExample environment files:\n\n```bash\n# .env.staging (syncs to your staging project)\nOCTAVUS_CLI_API_KEY=oct_sk_staging_project_key...\n\n# .env.production (syncs to your production project)\nOCTAVUS_CLI_API_KEY=oct_sk_production_project_key...\n```\n\nEach project has its own agents, so you\'ll get different agent IDs per environment.\n\n## Global Options\n\n| Option | Description |\n| -------------- | ------------------------------------------------------- |\n| `--env <file>` | Load environment from a specific file (default: `.env`) |\n| `--help` | Show help |\n| `--version` | Show version |\n\n## Commands\n\n### `octavus sync <path>`\n\nSync an agent definition to the platform. Creates the agent if it doesn\'t exist, or updates it if it does.\n\n```bash\noctavus sync ./agents/my-agent\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Reading agent from ./agents/my-agent...\n\u2139 Syncing support-chat...\n\u2713 Created: support-chat\n Agent ID: clxyz123abc456\n```\n\n### `octavus validate <path>`\n\nValidate an agent definition without saving. Useful for CI/CD pipelines.\n\n```bash\noctavus validate ./agents/my-agent\n```\n\n**Exit codes:**\n\n- `0` \u2014 Validation passed\n- `1` \u2014 Validation errors\n- `2` \u2014 Configuration errors (missing API key, etc.)\n\n### `octavus list`\n\nList all agents in your project.\n\n```bash\noctavus list\n```\n\n**Example output:**\n\n```\nSLUG NAME FORMAT ID\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nsupport-chat Support Chat Agent interactive clxyz123abc456\n\n1 agent(s)\n```\n\n### `octavus get <slug>`\n\nGet details about a specific agent by its slug.\n\n```bash\noctavus get support-chat\n```\n\n## Agent Directory Structure\n\nThe CLI expects agent definitions in a specific directory structure:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json # Required: Agent metadata\n\u251C\u2500\u2500 protocol.yaml # Required: Agent protocol\n\u2514\u2500\u2500 prompts/ # Optional: Prompt templates\n \u251C\u2500\u2500 system.md\n \u2514\u2500\u2500 user-message.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "A helpful assistant",\n "format": "interactive"\n}\n```\n\n### protocol.yaml\n\nSee the [Protocol documentation](/docs/protocol/overview) for details on protocol syntax.\n\n## CI/CD Integration\n\n### GitHub Actions\n\n```yaml\nname: Validate and Sync Agents\n\non:\n push:\n branches: [main]\n paths:\n - \'agents/**\'\n\njobs:\n sync:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - uses: actions/setup-node@v4\n with:\n node-version: \'22\'\n\n - run: npm install\n\n - name: Validate agent\n run: npx octavus validate ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n\n - name: Sync agent\n run: npx octavus sync ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n```\n\n### Package.json Scripts\n\nAdd sync scripts to your `package.json`:\n\n```json\n{\n "scripts": {\n "agents:validate": "octavus validate ./agents/my-agent",\n "agents:sync": "octavus sync ./agents/my-agent"\n },\n "devDependencies": {\n "@octavus/cli": "^0.1.0"\n }\n}\n```\n\n## Workflow\n\nThe recommended workflow for managing agents:\n\n1. **Define agent locally** \u2014 Create `settings.json`, `protocol.yaml`, and prompts\n2. **Validate** \u2014 Run `octavus validate ./my-agent` to check for errors\n3. **Sync** \u2014 Run `octavus sync ./my-agent` to push to platform\n4. **Store agent ID** \u2014 Save the output ID in an environment variable\n5. **Use in app** \u2014 Read the ID from env and pass to `client.agentSessions.create()`\n\n```bash\n# After syncing: octavus sync ./agents/support-chat\n# Output: Agent ID: clxyz123abc456\n\n# Add to your .env file\nOCTAVUS_SUPPORT_AGENT_ID=clxyz123abc456\n```\n\n```typescript\nconst agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID;\n\nconst sessionId = await client.agentSessions.create(agentId, {\n COMPANY_NAME: \'Acme Corp\',\n});\n```\n',
     excerpt: "Octavus CLI The package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform. Current version: Installation ...",
     order: 5
   },
+  {
+    slug: "server-sdk/workers",
+    section: "server-sdk",
+    title: "Workers",
+    description: "Executing worker agents with the Server SDK.",
+    content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Extracting Output\n\nTo get just the worker's output value:\n\n```typescript\nasync function executeWorker(\n client: OctavusClient,\n agentId: string,\n input: Record<string, unknown>,\n): Promise<unknown> {\n const events = client.workers.execute(agentId, input);\n\n for await (const event of events) {\n if (event.type === 'worker-result') {\n if (event.error) {\n throw new Error(event.error);\n }\n return event.output;\n }\n }\n\n return undefined;\n}\n\n// Usage\nconst analysis = await executeWorker(client, agentId, { TOPIC: 'AI' });\n```\n\n## Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n // Execute tools client-side\n const results = await executeClientTools(event.toolCalls);\n\n // Continue execution\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n## Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n## Cancellation\n\nUse an abort signal to cancel execution:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 30 seconds\nsetTimeout(() => controller.abort(), 30000);\n\nconst events = client.workers.execute(agentId, input, {\n signal: controller.signal,\n});\n\ntry {\n for await (const event of events) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Error Handling\n\nErrors can occur at different levels:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n // Stream-level error event\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n // Worker-level error in result\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\nError types include:\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Full Example\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\n// Run the worker\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
+    excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference execute()...",
+    order: 6
+  },
   {
     slug: "client-sdk/overview",
     section: "client-sdk",
     title: "Overview",
     description: "Introduction to the Octavus Client SDKs for building chat interfaces.",
-    content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.1.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.1.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
+    content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.3.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.3.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
     excerpt: "Client SDK Overview Octavus provides two packages for frontend integration: | Package | Purpose | Use When | |...",
     order: 1
   },
@@ -522,7 +531,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
     section: "protocol",
     title: "Overview",
     description: "Introduction to Octavus agent protocols.",
-    content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u2514\u2500\u2500 prompts/ # Prompt templates\n \u251C\u2500\u2500 system.md\n \u251C\u2500\u2500 user-message.md\n \u2514\u2500\u2500 escalation-summary.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md`\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. If a variable is not provided, it\'s replaced with an empty string.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
+    content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u2514\u2500\u2500 prompts/ # Prompt templates\n \u251C\u2500\u2500 system.md\n \u251C\u2500\u2500 user-message.md\n \u2514\u2500\u2500 escalation-summary.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md`\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. If a variable is not provided, it\'s replaced with an empty string.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
     excerpt: "Protocol Overview Agent protocols define how an AI agent behaves. They're written in YAML and specify inputs, triggers, tools, and execution handlers. Why Protocols? Protocols provide: - Declarative...",
     order: 1
   },
@@ -558,7 +567,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
     section: "protocol",
     title: "Skills",
     description: "Using Octavus skills for code execution and specialized capabilities.",
-    content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default timeout)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
+
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n
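For orientation, a minimal sketch of the write side of this pipeline (the filename is arbitrary; the `OUTPUT_DIR` lookup with a `/output` fallback matches the snippets used elsewhere in these docs):\n\n```python\nimport os\n\n# Resolve the sandbox output directory\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Anything written here is captured and attached to the message\nwith open(os.path.join(output_dir, 'report.txt'), 'w') as f:\n f.write('Analysis complete')\n```\n\n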
## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n  Generate QR codes from text, URLs, or data. Use when the user needs to create\n  a QR code for any purpose - sharing links, contact information, WiFi credentials,\n  or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n```\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n  # Good - clear purpose\n  qr-code:\n    description: Generating QR codes for URLs, contact info, or any text data\n\n  # Avoid - vague\n  utility:\n    description: Does stuff\n```\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When          | Use Tools When                |\n| ------------------------ | ---------------------------- |\n| Code execution needed    | Simple API calls              |\n| File generation          | Database queries              |\n| Complex calculations     | External service integration |\n| Data processing          | Authentication required       |\n| Provider-agnostic needed | Backend-specific logic        |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file; see the packaging sketch below)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n
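The bundle itself is an ordinary ZIP archive with the `.skill` extension. A minimal packaging sketch; whether `SKILL.md` must sit at the root of the archive (as assumed here) is worth verifying against the upload validator:\n\n```bash\n# Zip the skill directory contents into a .skill bundle\ncd custom-analysis\nzip -r ../custom-analysis.skill .\n```\n\n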
## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. For long-running operations, you can configure a custom timeout using `sandboxTimeout` in the agent config:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\nThe maximum `sandboxTimeout` is 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
order: 5
},
@@ -576,7 +585,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Agent Config",
description: "Configuring the agent model and behavior.",
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | ------------------------------------------------------------ |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
order: 7
},
@@ -594,7 +603,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Skills Advanced Guide",
description: "Best practices and advanced patterns for using Octavus skills.",
-
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout:\n\n- **Short operations**: QR codes, simple calculations\n- **Medium operations**: Data analysis, report generation\n- **Long operations**: May need to split into multiple steps\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. 
Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute sandbox timeout\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. 
**Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
+
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout, which can be configured via `sandboxTimeout`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes for long-running analysis\n```\n\nThe maximum `sandboxTimeout` is 1 hour (3,600,000 ms).\n\n**Timeout guidelines:**\n\n- **Short operations** (default 5 min): QR codes, simple calculations\n- **Medium operations** (10-30 min): Data analysis, report generation\n- **Long operations** (30+ min): Complex processing, large dataset analysis\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n
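On the server side, `get-user-data` is just a tool handler you attach with the SDK. A minimal sketch (the `db` client is hypothetical; the attach-with-tools pattern follows the Server SDK docs):\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-data': async (args) => {\n // Fetch from your backend; the LLM can then analyze\n // the returned data with the data-analysis skill\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n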
## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n  Generate QR codes from text, URLs, or data. Use when the user needs to create\n  a QR code for any purpose - sharing links, contact information, WiFi credentials,\n  or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\nimport sys\n\ntry:\n # ... code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute default sandbox timeout (configurable via `sandboxTimeout`)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
order: 9
},
@@ -607,6 +616,15 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
excerpt: "Types Types let you define reusable data structures for your agent. Use them in inputs, triggers, tools, resources, variables, and structured output responses. Why Types? - Reusability \u2014 Define...",
order: 10
},
+
{
+
slug: "protocol/workers",
+
section: "protocol",
+
title: "Workers",
+
description: "Defining worker agents for background and task-based execution.",
+
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
      625 | + excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
      626 | + order: 11
      627 | + },
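For reference, the research worker defined in the new `protocol/workers` page above can be executed from the Server SDK. A minimal sketch, assuming a synced agent ID and a `searchWeb` helper (both placeholders, not part of the docs above):

```typescript
import { OctavusClient } from '@octavus/server-sdk';

declare function searchWeb(query: string): Promise<unknown>; // assumed helper

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Execute the research worker and resolve with its `output:` variable (ANALYSIS).
async function runResearch(topic: string): Promise<unknown> {
  const events = client.workers.execute(
    'research-worker-id', // placeholder ID from `octavus sync`
    { TOPIC: topic, DEPTH: 'medium' },
    {
      tools: {
        // Server-side handler for the protocol's `web-search` tool
        'web-search': async (args) => searchWeb(String(args.query)),
      },
    },
  );

  for await (const event of events) {
    if (event.type === 'worker-result') {
      if (event.error) throw new Error(event.error);
      return event.output;
    }
  }
  return undefined;
}
```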
 610  628 |   {
 611  629 |   slug: "api-reference/overview",
 612  630 |   section: "api-reference",
@@ -711,7 +729,7 @@ var sections_default = [
 711  729 |   section: "server-sdk",
 712  730 |   title: "Overview",
 713  731 |   description: "Introduction to the Octavus Server SDK for backend integration.",
 714      | -
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.
      732 | +
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.3.0`\n\n## Installation\n\n```bash\nnpm install @octavus/server-sdk\n```\n\nFor agent management (sync, validate), install the CLI as a dev dependency:\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n```\n\n## Key Features\n\n### Agent Management\n\nAgent definitions are managed via the CLI. See the [CLI documentation](/docs/server-sdk/cli) for details.\n\n```bash\n# Sync agent from local files\noctavus sync ./agents/support-chat\n\n# Output: Created: support-chat\n# Agent ID: clxyz123abc456\n```\n\n### Session Management\n\nCreate and manage agent sessions using the agent ID:\n\n```typescript\n// Create a new session (use agent ID from CLI sync)\nconst sessionId = await client.agentSessions.create('clxyz123abc456', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n});\n\n// Get UI-ready session messages (for session restore)\nconst session = await client.agentSessions.getMessages(sessionId);\n```\n\n### Tool Handlers\n\nTools run on your server with your data:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-account': async (args) => {\n // Access your database, APIs, etc.\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n### Streaming\n\nAll responses stream in real-time:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() returns an async generator of events\nconst events = session.execute({\n type: 'trigger',\n triggerName: 'user-message',\n input: { USER_MESSAGE: 'Hello!' 
},\n});\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n## API Reference\n\n### OctavusClient\n\nThe main entry point for interacting with Octavus.\n\n```typescript\ninterface OctavusClientConfig {\n baseUrl: string; // Octavus API URL\n apiKey?: string; // Your API key\n}\n\nclass OctavusClient {\n readonly agents: AgentsApi;\n readonly agentSessions: AgentSessionsApi;\n readonly workers: WorkersApi;\n readonly files: FilesApi;\n\n constructor(config: OctavusClientConfig);\n}\n```\n\n### AgentSessionsApi\n\nManages agent sessions.\n\n```typescript\nclass AgentSessionsApi {\n // Create a new session\n async create(agentId: string, input?: Record<string, unknown>): Promise<string>;\n\n // Get full session state (for debugging/internal use)\n async get(sessionId: string): Promise<SessionState>;\n\n // Get UI-ready messages (for client display)\n async getMessages(sessionId: string): Promise<UISessionState>;\n\n // Attach to a session for triggering\n attach(sessionId: string, options?: SessionAttachOptions): AgentSession;\n}\n\n// Full session state (internal format)\ninterface SessionState {\n id: string;\n agentId: string;\n input: Record<string, unknown>;\n variables: Record<string, unknown>;\n resources: Record<string, unknown>;\n messages: ChatMessage[]; // Internal message format\n createdAt: string;\n updatedAt: string;\n}\n\n// UI-ready session state\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready messages for frontend\n}\n```\n\n### AgentSession\n\nHandles request execution and streaming for a specific session.\n\n```typescript\nclass AgentSession {\n // Execute a request and stream parsed events\n execute(request: SessionRequest, options?: TriggerOptions): AsyncGenerator<StreamEvent>;\n\n // Get the session ID\n getSessionId(): string;\n}\n\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n\n// Helper to convert events to SSE stream\nfunction toSSEStream(events: AsyncIterable<StreamEvent>): ReadableStream<Uint8Array>;\n```\n\n### FilesApi\n\nHandles file uploads for sessions.\n\n```typescript\nclass FilesApi {\n // Get presigned URLs for file uploads\n async getUploadUrls(sessionId: string, files: FileUploadRequest[]): Promise<UploadUrlsResponse>;\n}\n\ninterface FileUploadRequest {\n filename: string;\n mediaType: string;\n size: number;\n}\n\ninterface UploadUrlsResponse {\n files: {\n id: string; // File ID for references\n uploadUrl: string; // PUT to this URL\n downloadUrl: string; // GET URL after upload\n }[];\n}\n```\n\nThe client uploads files directly to S3 using the presigned upload URL. See [File Uploads](/docs/client-sdk/file-uploads) for the full integration pattern.\n\n## Next Steps\n\n- [Sessions](/docs/server-sdk/sessions) \u2014 Deep dive into session management\n- [Tools](/docs/server-sdk/tools) \u2014 Implementing tool handlers\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Workers](/docs/server-sdk/workers) \u2014 Executing worker agents\n",
 715  733 |   excerpt: "Server SDK Overview The package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation...",
 716  734 |   order: 1
 717  735 |   },
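The overview entry above composes into a single HTTP route: attach to a session, execute a trigger, and stream SSE back. A sketch assuming a fetch-style handler and a hypothetical `db` data layer:

```typescript
import { OctavusClient, toSSEStream } from '@octavus/server-sdk';

declare const db: { users: { findById(id: string): Promise<unknown> } }; // assumed data layer

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Fetch-style route handler: trigger an existing session and stream SSE back.
export async function POST(request: Request): Promise<Response> {
  const { sessionId, triggerName, input } = await request.json();

  const session = client.agentSessions.attach(sessionId, {
    tools: {
      // Server tool handler with access to your own data
      'get-user-account': async (args) => db.users.findById(String(args.userId)),
    },
  });

  const events = session.execute({ type: 'trigger', triggerName, input });

  return new Response(toSSEStream(events), {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```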
@@ -747,9 +765,18 @@ var sections_default = [
 747  765 |   section: "server-sdk",
 748  766 |   title: "CLI",
 749  767 |   description: "Command-line interface for validating and syncing agent definitions.",
 750      | -
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.
      768 | +
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.3.0`\n\n## Installation\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Configuration\n\nThe CLI requires an API key with the **Agents** permission.\n\n### Environment Variables\n\n| Variable | Description |\n| --------------------- | ---------------------------------------------- |\n| `OCTAVUS_CLI_API_KEY` | API key with "Agents" permission (recommended) |\n| `OCTAVUS_API_KEY` | Fallback if `OCTAVUS_CLI_API_KEY` not set |\n| `OCTAVUS_API_URL` | Optional, defaults to `https://octavus.ai` |\n\n### Two-Key Strategy (Recommended)\n\nFor production deployments, use separate API keys with minimal permissions:\n\n```bash\n# CI/CD or .env.local (not committed)\nOCTAVUS_CLI_API_KEY=oct_sk_... # "Agents" permission only\n\n# Production .env\nOCTAVUS_API_KEY=oct_sk_... # "Sessions" permission only\n```\n\nThis ensures production servers only have session permissions (smaller blast radius if leaked), while agent management is restricted to development/CI environments.\n\n### Multiple Environments\n\nUse separate Octavus projects for staging and production, each with their own API keys. The `--env` flag lets you load different environment files:\n\n```bash\n# Local development (default: .env)\noctavus sync ./agents/my-agent\n\n# Staging project\noctavus --env .env.staging sync ./agents/my-agent\n\n# Production project\noctavus --env .env.production sync ./agents/my-agent\n```\n\nExample environment files:\n\n```bash\n# .env.staging (syncs to your staging project)\nOCTAVUS_CLI_API_KEY=oct_sk_staging_project_key...\n\n# .env.production (syncs to your production project)\nOCTAVUS_CLI_API_KEY=oct_sk_production_project_key...\n```\n\nEach project has its own agents, so you\'ll get different agent IDs per environment.\n\n## Global Options\n\n| Option | Description |\n| -------------- | ------------------------------------------------------- |\n| `--env <file>` | Load environment from a specific file (default: `.env`) |\n| `--help` | Show help |\n| `--version` | Show version |\n\n## Commands\n\n### `octavus sync <path>`\n\nSync an agent definition to the platform. Creates the agent if it doesn\'t exist, or updates it if it does.\n\n```bash\noctavus sync ./agents/my-agent\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Reading agent from ./agents/my-agent...\n\u2139 Syncing support-chat...\n\u2713 Created: support-chat\n Agent ID: clxyz123abc456\n```\n\n### `octavus validate <path>`\n\nValidate an agent definition without saving. 
Useful for CI/CD pipelines.\n\n```bash\noctavus validate ./agents/my-agent\n```\n\n**Exit codes:**\n\n- `0` \u2014 Validation passed\n- `1` \u2014 Validation errors\n- `2` \u2014 Configuration errors (missing API key, etc.)\n\n### `octavus list`\n\nList all agents in your project.\n\n```bash\noctavus list\n```\n\n**Example output:**\n\n```\nSLUG NAME FORMAT ID\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nsupport-chat Support Chat Agent interactive clxyz123abc456\n\n1 agent(s)\n```\n\n### `octavus get <slug>`\n\nGet details about a specific agent by its slug.\n\n```bash\noctavus get support-chat\n```\n\n## Agent Directory Structure\n\nThe CLI expects agent definitions in a specific directory structure:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json # Required: Agent metadata\n\u251C\u2500\u2500 protocol.yaml # Required: Agent protocol\n\u2514\u2500\u2500 prompts/ # Optional: Prompt templates\n \u251C\u2500\u2500 system.md\n \u2514\u2500\u2500 user-message.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "A helpful assistant",\n "format": "interactive"\n}\n```\n\n### protocol.yaml\n\nSee the [Protocol documentation](/docs/protocol/overview) for details on protocol syntax.\n\n## CI/CD Integration\n\n### GitHub Actions\n\n```yaml\nname: Validate and Sync Agents\n\non:\n push:\n branches: [main]\n paths:\n - \'agents/**\'\n\njobs:\n sync:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - uses: actions/setup-node@v4\n with:\n node-version: \'22\'\n\n - run: npm install\n\n - name: Validate agent\n run: npx octavus validate ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n\n - name: Sync agent\n run: npx octavus sync ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n```\n\n### Package.json Scripts\n\nAdd sync scripts to your `package.json`:\n\n```json\n{\n "scripts": {\n "agents:validate": "octavus validate ./agents/my-agent",\n "agents:sync": "octavus sync ./agents/my-agent"\n },\n "devDependencies": {\n "@octavus/cli": "^0.1.0"\n }\n}\n```\n\n## Workflow\n\nThe recommended workflow for managing agents:\n\n1. **Define agent locally** \u2014 Create `settings.json`, `protocol.yaml`, and prompts\n2. **Validate** \u2014 Run `octavus validate ./my-agent` to check for errors\n3. **Sync** \u2014 Run `octavus sync ./my-agent` to push to platform\n4. **Store agent ID** \u2014 Save the output ID in an environment variable\n5. **Use in app** \u2014 Read the ID from env and pass to `client.agentSessions.create()`\n\n```bash\n# After syncing: octavus sync ./agents/support-chat\n# Output: Agent ID: clxyz123abc456\n\n# Add to your .env file\nOCTAVUS_SUPPORT_AGENT_ID=clxyz123abc456\n```\n\n```typescript\nconst agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID;\n\nconst sessionId = await client.agentSessions.create(agentId, {\n COMPANY_NAME: \'Acme Corp\',\n});\n```\n',
 751  769 |   excerpt: "Octavus CLI The package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform. Current version: Installation ...",
 752  770 |   order: 5
      771 | + },
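The CLI entry's validate-then-sync workflow above can also be gated with a small Node script. A sketch only, assuming `npx` resolves the locally installed `@octavus/cli` (exit codes per the docs: 0 = ok, 1 = validation errors, 2 = configuration errors):

```typescript
import { execFileSync } from 'node:child_process';

// Validate, then sync, each agent directory; a non-zero CLI exit code throws.
const agents = ['./agents/support-chat'];

for (const dir of agents) {
  try {
    execFileSync('npx', ['octavus', 'validate', dir], { stdio: 'inherit' });
    execFileSync('npx', ['octavus', 'sync', dir], { stdio: 'inherit' });
  } catch {
    console.error(`Agent check failed for ${dir}`);
    process.exit(1);
  }
}
```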
      772 | + {
      773 | + slug: "server-sdk/workers",
      774 | + section: "server-sdk",
      775 | + title: "Workers",
      776 | + description: "Executing worker agents with the Server SDK.",
      777 | +
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. 
The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Extracting Output\n\nTo get just the worker's output value:\n\n```typescript\nasync function executeWorker(\n client: OctavusClient,\n agentId: string,\n input: Record<string, unknown>,\n): Promise<unknown> {\n const events = client.workers.execute(agentId, input);\n\n for await (const event of events) {\n if (event.type === 'worker-result') {\n if (event.error) {\n throw new Error(event.error);\n }\n return event.output;\n }\n }\n\n return undefined;\n}\n\n// Usage\nconst analysis = await executeWorker(client, agentId, { TOPIC: 'AI' });\n```\n\n## Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n // Execute tools client-side\n const results = await executeClientTools(event.toolCalls);\n\n // Continue execution\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n## Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n## Cancellation\n\nUse an 
abort signal to cancel execution:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 30 seconds\nsetTimeout(() => controller.abort(), 30000);\n\nconst events = client.workers.execute(agentId, input, {\n signal: controller.signal,\n});\n\ntry {\n for await (const event of events) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Error Handling\n\nErrors can occur at different levels:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n // Stream-level error event\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n // Worker-level error in result\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\nError types include:\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Full Example\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\n// Run the worker\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
      778 | + excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference execute()...",
      779 | + order: 6
 753  780 |   }
 754  781 |   ]
 755  782 |   },
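The `continue()` flow in the server-sdk/workers entry above can be wrapped in one drain loop. A sketch only: `runClientTool` is an assumed dispatcher, and the result shape used here ({ toolCallId, toolName, output }) is a guess, so check the SDK's actual `ToolResult` type before relying on it:

```typescript
import { OctavusClient } from '@octavus/server-sdk';

declare function runClientTool(name: string, args: Record<string, unknown>): Promise<unknown>; // assumed dispatcher

// Assumed result shape; verify against the SDK's exported ToolResult type.
interface AssumedToolResult {
  toolCallId: string;
  toolName: string;
  output: unknown;
}

async function drainWorker(
  client: OctavusClient,
  agentId: string,
  input: Record<string, unknown>,
): Promise<unknown> {
  let events = client.workers.execute(agentId, input);

  while (true) {
    let resume: { executionId: string; results: AssumedToolResult[] } | null = null;
    let output: unknown;

    for await (const event of events) {
      if (event.type === 'client-tool-request') {
        // Execute each requested tool locally, collecting results to send back.
        const results: AssumedToolResult[] = [];
        for (const call of event.toolCalls) {
          results.push({
            toolCallId: call.toolCallId,
            toolName: call.toolName,
            output: await runClientTool(call.toolName, call.args),
          });
        }
        resume = { executionId: event.executionId, results };
        break; // the stream has paused; resume via continue()
      }
      if (event.type === 'worker-result') {
        if (event.error) throw new Error(event.error);
        output = event.output;
      }
    }

    if (!resume) return output;
    events = client.workers.continue(agentId, resume.executionId, resume.results);
  }
}
```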
@@ -764,7 +791,7 @@ var sections_default = [
 764  791 |   section: "client-sdk",
 765  792 |   title: "Overview",
 766  793 |   description: "Introduction to the Octavus Client SDKs for building chat interfaces.",
 767      | -
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.1.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.1.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
      794 | +
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.3.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.3.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
 768  795 |   excerpt: "Client SDK Overview Octavus provides two packages for frontend integration: | Package | Purpose | Use When | |...",
 769  796 |   order: 1
 770  797 |   },
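The `initialMessages` option in the client-sdk entry above pairs with the Server SDK's `getMessages()` for session restore. A sketch, assuming the fetched `UIMessage[]` arrives as a prop:

```tsx
import { useMemo } from 'react';
import { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';

// Session restore: `initialMessages` pre-populates the chat with messages the
// server fetched via client.agentSessions.getMessages(sessionId).
function RestoredChat({
  sessionId,
  initialMessages,
}: {
  sessionId: string;
  initialMessages: UIMessage[];
}) {
  const transport = useMemo(
    () =>
      createHttpTransport({
        request: (payload, options) =>
          fetch('/api/trigger', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ sessionId, ...payload }),
            signal: options?.signal,
          }),
      }),
    [sessionId],
  );

  const { messages, status } = useOctavusChat({ transport, initialMessages });

  return (
    <div>
      {messages.map((m) => (
        <pre key={m.id}>{JSON.stringify(m.parts)}</pre>
      ))}
      <p>{status}</p>
    </div>
  );
}
```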
@@ -1226,7 +1253,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1226 1253 |   section: "protocol",
1227 1254 |   title: "Overview",
1228 1255 |   description: "Introduction to Octavus agent protocols.",
1229      | -
content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u2514\u2500\u2500 prompts/ # Prompt templates\n \u251C\u2500\u2500 system.md\n \u251C\u2500\u2500 user-message.md\n \u2514\u2500\u2500 escalation-summary.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md`\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. 
If a variable is not provided, it\'s replaced with an empty string.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
     1256 | +
content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u2514\u2500\u2500 prompts/ # Prompt templates\n \u251C\u2500\u2500 system.md\n \u251C\u2500\u2500 user-message.md\n \u2514\u2500\u2500 escalation-summary.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- 
**Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md`\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. If a variable is not provided, it\'s replaced with an empty string.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
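The substitution rule described above (values swapped in at runtime, missing variables replaced with an empty string) is simple enough to model directly. A minimal TypeScript sketch of that behavior; the `renderPrompt` helper is hypothetical and the platform's own interpolation code is not exposed:

```typescript
// Sketch of the {{VARIABLE_NAME}} substitution rule described above.
// Hypothetical helper, not part of the SDK.
function renderPrompt(template: string, vars: Record<string, string>): string {
  // Unprovided variables resolve to an empty string, per the rule above.
  return template.replace(
    /\{\{([A-Z0-9_]+)\}\}/g,
    (_match: string, name: string) => vars[name] ?? '',
  );
}

const rendered = renderPrompt(
  'Welcome to {{COMPANY_NAME}}. Your product: {{PRODUCT_NAME}}.',
  { COMPANY_NAME: 'Acme Corp' },
);
// => 'Welcome to Acme Corp. Your product: .'
```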
|
|
1230
1257
|
excerpt: "Protocol Overview Agent protocols define how an AI agent behaves. They're written in YAML and specify inputs, triggers, tools, and execution handlers. Why Protocols? Protocols provide: - Declarative...",
|
|
1231
1258
|
order: 1
|
|
1232
1259
|
},
|
|
@@ -1262,7 +1289,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1262
1289
|
section: "protocol",
|
|
1263
1290
|
title: "Skills",
|
|
1264
1291
|
description: "Using Octavus skills for code execution and specialized capabilities.",
|
|
1265
|
-
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default timeout)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
|
|
1292
|
+
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n```\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n```\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. For long-running operations, you can configure a custom timeout using `sandboxTimeout` in the agent config:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\nThe maximum `sandboxTimeout` is 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
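Because generated files land in the session's message history as file parts, a backend can gather their download links after a trigger completes. A rough sketch using the SDK's `getMessages`; the `parts`/`url` shapes here are assumptions for illustration, not the SDK's published types:

```typescript
import { OctavusClient } from '@octavus/server-sdk';

// Sketch: collect presigned download URLs for files a skill saved to /output/.
// Assumption: messages expose a `parts` array and file parts carry a `url`.
async function collectSkillFileUrls(
  client: OctavusClient,
  sessionId: string,
): Promise<string[]> {
  const session: any = await client.agentSessions.getMessages(sessionId);
  return (session.messages ?? []).flatMap((message: any) =>
    (message.parts ?? [])
      .filter((part: any) => part.type === 'file')
      .map((part: any) => String(part.url)),
  );
}
```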
|
|
1266
1293
|
excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
|
|
1267
1294
|
order: 5
|
|
1268
1295
|
},
|
|
@@ -1280,7 +1307,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1280
1307
|
section: "protocol",
|
|
1281
1308
|
title: "Agent Config",
|
|
1282
1309
|
description: "Configuring the agent model and behavior.",
|
|
1283
|
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | ------------------------------------------------------------ |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
|
|
1310
|
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
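As a rough illustration of what that runtime check enforces, here is a sketch of a `provider/model-id` validator. The helper is hypothetical; the SDK's actual validation logic may differ, and only the three providers from the table above are assumed:

```typescript
// Sketch: validate the provider/model-id shape described above.
const SUPPORTED_PROVIDERS = ['anthropic', 'google', 'openai'];

function isValidModelId(model: string): boolean {
  const slash = model.indexOf('/');
  if (slash <= 0) return false; // missing provider prefix
  const provider = model.slice(0, slash);
  const modelId = model.slice(slash + 1);
  return SUPPORTED_PROVIDERS.includes(provider) && modelId.length > 0;
}

isValidModelId('anthropic/claude-sonnet-4-5'); // true
isValidModelId('claude-sonnet-4-5'); // false: no provider prefix
```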
|
|
1284
1311
|
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
|
|
1285
1312
|
order: 7
|
|
1286
1313
|
},
|
|
@@ -1298,7 +1325,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1298
1325
|
section: "protocol",
|
|
1299
1326
|
title: "Skills Advanced Guide",
|
|
1300
1327
|
description: "Best practices and advanced patterns for using Octavus skills.",
|
|
1301
|
-
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout:\n\n- **Short operations**: QR codes, simple calculations\n- **Medium operations**: Data analysis, report generation\n- **Long operations**: May need to split into multiple steps\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. 
Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute sandbox timeout\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. 
**Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
|
|
1328
|
+
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout, which can be configured via `sandboxTimeout`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes for long-running analysis\n```\n\nThe maximum `sandboxTimeout` is 1 hour (3,600,000 ms).\n\n**Timeout guidelines:**\n\n- **Short operations** (default 5 min): QR codes, simple calculations\n- **Medium operations** (10-30 min): Data analysis, report generation\n- **Long operations** (30+ min): Complex processing, large dataset analysis\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
\n    print(f\"Error: Invalid input - {e}\")\n    sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain them to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n    print(\"Error: Data is required\")\n    sys.exit(1)\n\nif len(data) > 1000:\n    print(\"Error: Data too long (max 1000 characters)\")\n    sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute default sandbox timeout (configurable via `sandboxTimeout`)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help the LLM understand when to use them\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
1302 1329 | excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
1303 1330 | order: 9
1304 1331 | },
@@ -1310,6 +1337,15 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1310 1337 |
content: '\n# Types\n\nTypes let you define reusable data structures for your agent. Use them in inputs, triggers, tools, resources, variables, and structured output responses.\n\n## Why Types?\n\n- **Reusability** \u2014 Define once, use in multiple places\n- **Validation** \u2014 Catch errors at protocol validation time\n- **Documentation** \u2014 Clear data contracts for your agent\n- **Tool Parameters** \u2014 Use complex types in tool parameters\n- **Structured Output** \u2014 Get typed JSON responses from the LLM\n\n## Defining Types\n\nTypes are defined in the `types:` section using PascalCase names:\n\n```yaml\ntypes:\n Product:\n id:\n type: string\n description: Unique product identifier\n name:\n type: string\n description: Product display name\n price:\n type: number\n description: Price in cents\n inStock:\n type: boolean\n description: Whether the product is available\n```\n\n## Built-in Types\n\nThese scalar types can be used directly in inputs, resources, variables, triggers, and tool parameters:\n\n| Type | Description | Example Values |\n| --------- | ------------------------------------- | ------------------------------- |\n| `string` | Text values | `"hello"`, `"user@example.com"` |\n| `number` | Numeric values (integers or decimals) | `42`, `3.14`, `-10` |\n| `integer` | Whole numbers only | `1`, `100`, `-5` |\n| `boolean` | True or false | `true`, `false` |\n| `unknown` | Any value (no type checking) | Any JSON value |\n| `file` | Uploaded file reference | `{ id, mediaType, url, ... }` |\n\nThe `file` type represents an uploaded file (image, document, etc.) with this structure:\n\n```typescript\ninterface FileReference {\n id: string; // Unique file ID\n mediaType: string; // MIME type (e.g., \'image/png\')\n url: string; // Presigned download URL\n filename?: string; // Original filename\n size?: number; // File size in bytes\n}\n```\n\n> **Note:** There is no standalone `array` or `object` type. If you need typed arrays or objects, define a [custom type](#defining-types). If you don\'t care about the internal structure, use `unknown`.\n\n## Array Shorthand\n\nFor simple arrays, use the `type[]` shorthand syntax:\n\n```yaml\ntriggers:\n user-message:\n input:\n USER_MESSAGE:\n type: string\n FILES:\n type: file[] # Array of file references\n optional: true\n\nvariables:\n TAGS:\n type: string[] # Array of strings\n```\n\nThis is equivalent to defining a top-level array type but more concise. Array shorthand works with any built-in type or custom type reference:\n\n| Shorthand | Equivalent To |\n| ---------- | ----------------------------- |\n| `string[]` | Array of strings |\n| `file[]` | Array of file references |\n| `number[]` | Array of numbers |\n| `MyType[]` | Array of custom type `MyType` |\n\n## Property Fields\n\nEach property in a type can have these fields:\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------ |\n| `type` | Yes | The data type (built-in or custom type reference) |\n| `description` | No | Human-readable description |\n| `optional` | No | If `true`, property is not required (default: `false`) |\n| `enum` | No | List of allowed string values |\n| `const` | No | Fixed literal value (for discriminators) |\n\n### Required vs Optional\n\nProperties are **required by default**. 
Use `optional: true` to make them optional:\n\n```yaml\ntypes:\n UserProfile:\n email:\n type: string\n description: User\'s email address\n\n phone:\n type: string\n description: User\'s phone number\n optional: true\n\n nickname:\n type: string\n optional: true\n```\n\n### Descriptions\n\nDescriptions help document your types and guide LLM behavior:\n\n```yaml\ntypes:\n SupportTicket:\n priority:\n type: string\n enum: [low, medium, high, urgent]\n description: >\n Ticket priority level. Use \'urgent\' only for critical issues\n affecting multiple users or causing data loss.\n```\n\n## Enums\n\nRestrict string values to a specific set:\n\n```yaml\ntypes:\n OrderStatus:\n status:\n type: string\n enum: [pending, processing, shipped, delivered, cancelled]\n description: Current order status\n\n paymentMethod:\n type: string\n enum: [credit_card, paypal, bank_transfer]\n```\n\n## Arrays\n\nThere are two ways to define arrays:\n\n### Array Properties\n\nDefine array properties within object types using `type: array` and an `items` definition:\n\n```yaml\ntypes:\n ShoppingCart:\n items:\n type: array\n items:\n type: CartItem\n description: Items in the cart\n\n tags:\n type: array\n items:\n type: string\n description: Cart tags for analytics\n\n CartItem:\n productId:\n type: string\n quantity:\n type: integer\n```\n\n### Top-Level Array Types\n\nDefine a named type that IS an array (not an object containing an array):\n\n```yaml\ntypes:\n CartItem:\n productId:\n type: string\n description: Product ID to add to cart\n quantity:\n type: integer\n description: Number of items (1-10)\n\n # Top-level array type - the type IS an array\n CartItemList:\n type: array\n items:\n type: CartItem\n description: List of cart items\n```\n\nTop-level array types are useful when you need to pass arrays as tool parameters without wrapping them in an object.\n\n### Array Guidelines\n\nWhen using arrays in structured output, use descriptions to guide the LLM on expected array sizes:\n\n```yaml\ntypes:\n Survey:\n answers:\n type: array\n items:\n type: string\n description: Survey answers (provide 1-10 responses)\n\n TopPicks:\n recommendations:\n type: array\n items:\n type: Product\n description: Top 3-5 product recommendations\n```\n\n> **Note:** Array length constraints (`minItems`, `maxItems`) are not enforced by LLM providers in structured output. Use descriptive prompts to guide the model.\n\n## Type References\n\nReference other types by their PascalCase name:\n\n```yaml\ntypes:\n Address:\n street:\n type: string\n city:\n type: string\n country:\n type: string\n postalCode:\n type: string\n\n Customer:\n name:\n type: string\n email:\n type: string\n shippingAddress:\n type: Address\n billingAddress:\n type: Address\n optional: true\n```\n\n## Discriminated Unions\n\nCreate types that can be one of several variants using `anyOf`. 
Each variant must have a discriminator field with a unique `const` value:\n\n```yaml\ntypes:\n  PaymentResult:\n    anyOf:\n      - PaymentSuccess\n      - PaymentFailure\n    discriminator: status\n\n  PaymentSuccess:\n    status:\n      type: string\n      const: success\n    transactionId:\n      type: string\n      description: Unique transaction identifier\n    amount:\n      type: number\n      description: Amount charged in cents\n\n  PaymentFailure:\n    status:\n      type: string\n      const: failure\n    errorCode:\n      type: string\n      description: Error code for the failure\n    message:\n      type: string\n      description: Human-readable error message\n```\n
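\nIn TypeScript terms, the discriminator is what lets a consumer narrow the variant. A minimal sketch (these types mirror the YAML above; they are illustrative, not SDK exports):\n\n```typescript\n// Mirrors the PaymentResult union defined above.\ntype PaymentResult =\n  | { status: \'success\'; transactionId: string; amount: number }\n  | { status: \'failure\'; errorCode: string; message: string };\n\nfunction describePayment(result: PaymentResult): string {\n  // Checking the discriminator field narrows to a single variant.\n  if (result.status === \'success\') {\n    return `Charged ${result.amount} cents (tx ${result.transactionId})`;\n  }\n  return `Payment failed: ${result.errorCode} - ${result.message}`;\n}\n```\n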
\n### Union Requirements\n\n- Use `anyOf` with an array of type names (minimum 2)\n- Specify a `discriminator` field name\n- Each variant must have the discriminator field with a unique `const` value\n\n### Multiple Unions\n\nYou can have multiple discriminated unions:\n\n```yaml\ntypes:\n  ApiResponse:\n    anyOf:\n      - SuccessResponse\n      - ErrorResponse\n    discriminator: status\n\n  SuccessResponse:\n    status:\n      type: string\n      const: success\n    data:\n      type: unknown\n\n  ErrorResponse:\n    status:\n      type: string\n      const: error\n    message:\n      type: string\n\n  UserAction:\n    anyOf:\n      - ClickAction\n      - ScrollAction\n      - SubmitAction\n    discriminator: type\n\n  ClickAction:\n    type:\n      type: string\n      const: click\n    elementId:\n      type: string\n\n  ScrollAction:\n    type:\n      type: string\n      const: scroll\n    position:\n      type: number\n\n  SubmitAction:\n    type:\n      type: string\n      const: submit\n    formData:\n      type: unknown\n```\n\n## Complete Example\n\nHere\'s a comprehensive example combining multiple type features:\n\n```yaml\ntypes:\n  # Simple object type\n  Price:\n    amount:\n      type: number\n      description: Price amount\n    currency:\n      type: string\n      enum: [USD, EUR, GBP]\n      description: Currency code\n\n  # Type with references and arrays\n  Product:\n    id:\n      type: string\n    name:\n      type: string\n    price:\n      type: Price\n    category:\n      type: string\n      enum: [electronics, clothing, home, sports]\n    tags:\n      type: array\n      items:\n        type: string\n      description: Product tags (up to 10)\n      optional: true\n\n  # Discriminated union\n  SearchResult:\n    anyOf:\n      - ProductResult\n      - CategoryResult\n    discriminator: resultType\n\n  ProductResult:\n    resultType:\n      type: string\n      const: product\n    product:\n      type: Product\n    relevanceScore:\n      type: number\n\n  CategoryResult:\n    resultType:\n      type: string\n      const: category\n    categoryName:\n      type: string\n    productCount:\n      type: integer\n\ninput:\n  STORE_NAME:\n    type: string\n\ntriggers:\n  user-message:\n    input:\n      USER_MESSAGE:\n        type: string\n\ntools:\n  search-products:\n    description: Search the product catalog\n    parameters:\n      query:\n        type: string\n      category:\n        type: string\n        optional: true\n\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  tools: [search-products]\n  agentic: true\n```\n\n## Using Types in Tools\n\nCustom types can be used in tool parameters. Tool calls are always objects where each parameter name maps to a value.\n\n### Basic Tool Parameters\n\n```yaml\ntools:\n  get-product:\n    description: Getting product details\n    parameters:\n      productId:\n        type: string\n      includeReviews:\n        type: boolean\n        optional: true\n```\n\nThe LLM calls this with: `{ productId: "prod-123", includeReviews: true }`\n\n### Array Parameters\n\nFor array parameters, define a top-level array type and use it as the parameter type:\n\n```yaml\ntypes:\n  CartItem:\n    productId:\n      type: string\n      description: Product ID to add to cart\n    quantity:\n      type: integer\n      description: Number of items (1-10)\n    giftWrap:\n      type: boolean\n      description: Whether to gift wrap this item\n      optional: true\n\n  # Top-level array type - the type IS an array\n  CartItemList:\n    type: array\n    items:\n      type: CartItem\n    description: List of cart items\n\ntools:\n  add-to-cart:\n    description: Adding products to cart\n    display: description\n    parameters:\n      cartItems:\n        type: CartItemList\n        description: Items to add to the cart\n```\n\nThe tool receives: `{ cartItems: [{ productId: "...", quantity: 1 }, ...] }`\n\n### Why Use Named Array Types?\n\nNamed array types provide:\n\n- **Reusability** \u2014 Use the same array type in multiple tools\n- **Clear schema** \u2014 The array structure is validated\n- **Clean tool calls** \u2014 No unnecessary wrapper objects\n\n## Structured Output\n\nUse `responseType` on a `next-message` block to get structured JSON responses instead of plain text.\n\n### Basic Example\n\n```yaml\ntypes:\n  ChatResponse:\n    content:\n      type: string\n      description: The main response text to the user\n    suggestions:\n      type: array\n      items:\n        type: string\n      description: 1-3 follow-up suggestions (empty array if none)\n\nvariables:\n  RESPONSE:\n    type: ChatResponse\n\nhandlers:\n  user-message:\n    Respond to user:\n      block: next-message\n      responseType: ChatResponse\n      output: RESPONSE\n```\n\n### Discriminated Unions for Response Variants\n\nWhen you need different response formats based on context, use a discriminated union **wrapped in an object**. LLM providers don\'t allow `anyOf` (discriminated unions) at the schema root, so you must wrap them.\n\n```yaml\ntypes:\n  # \u2705 Wrapper object (required - responseType must be an object, not a union)\n  ChatResponseWrapper:\n    response:\n      type: ChatResponseUnion\n      description: The response variant\n\n  # Discriminated union with 3 variants\n  ChatResponseUnion:\n    anyOf:\n      - ContentOnlyResponse\n      - ContentWithSuggestionsResponse\n      - ContentWithProductsResponse\n    discriminator: responseType\n\n  ContentOnlyResponse:\n    responseType:\n      type: string\n      const: content\n    content:\n      type: string\n\n  ContentWithSuggestionsResponse:\n    responseType:\n      type: string\n      const: content_with_suggestions\n    content:\n      type: string\n    suggestions:\n      type: array\n      items:\n        type: string\n\n  ContentWithProductsResponse:\n    responseType:\n      type: string\n      const: content_with_products\n    content:\n      type: string\n    recommendedProducts:\n      type: array\n      items:\n        type: ProductSummary\n\nhandlers:\n  user-message:\n    Respond to user:\n      block: next-message\n      responseType: ChatResponseWrapper # Use the wrapper, not the union directly\n```\n\nThe client receives an object like `{ response: { responseType: "content_with_suggestions", content: "...", suggestions: [...] } }`.
\n\n### Response Type Requirements\n\nThe `responseType` must be an **object type** (regular custom type with properties).\n\nThe following cannot be used directly as `responseType`:\n\n- **Discriminated unions** \u2014 LLM providers don\'t allow `anyOf` at the schema root ([OpenAI docs](https://platform.openai.com/docs/guides/structured-outputs#root-objects-must-not-be-anyof-and-must-be-an-object))\n- **Array types** \u2014 Must be wrapped in an object\n- **Primitives** \u2014 `string`, `number`, etc. are not valid\n\n```yaml\ntypes:\n  # \u274C Cannot use discriminated union directly as responseType\n  ChatResponseUnion:\n    anyOf: [ContentResponse, ProductResponse]\n    discriminator: type\n\n  # \u2705 Wrap the union in an object\n  ChatResponseWrapper:\n    response:\n      type: ChatResponseUnion\n\n  # \u274C Cannot use array type as responseType\n  ProductList:\n    type: array\n    items:\n      type: Product\n\n  # \u2705 Wrap the array in an object\n  ProductListResponse:\n    products:\n      type: array\n      items:\n        type: Product\n      description: List of products\n```\n\n### How It Works\n\n1. The LLM generates a structured JSON response matching the type schema\n2. The response is validated against the schema\n3. The parsed object is stored in the `output` variable (if specified)\n4. The client SDK receives an `object` part instead of a `text` part\n
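\nAs a rough sketch of the consuming side with the Server SDK, using the `ChatResponse` type from the basic example above (this assumes an attached `session`; the `object` event name and its payload field are simplifying assumptions here, not the SDK\'s documented event shape):\n\n```typescript\n// Iterate the trigger\'s event stream and pick up the structured response.\n// Partial objects may arrive while the response is still streaming.\nfor await (const event of session.execute({\n  type: \'trigger\',\n  triggerName: \'user-message\',\n  input: { USER_MESSAGE: \'Recommend a laptop\' },\n})) {\n  if (event.type === \'object\') {\n    // Assumed payload field; cast to the shape of ChatResponse.\n    const partial = event.object as Partial<{ content: string; suggestions: string[] }>;\n    console.log(partial.content ?? \'(streaming...)\');\n  }\n}\n```\n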
\n### Client-Side Rendering\n\nWhen `responseType` is set, the client SDK receives a `UIObjectPart` that can be rendered with custom UI. See the [Structured Output](/docs/client-sdk/structured-output) guide for details on building custom renderers.\n\n### Best Practices\n\n**Use descriptions to guide the LLM:**\n\n```yaml\ntypes:\n  ChatResponse:\n    content:\n      type: string\n      description: >\n        The main response to the user. Use markdown formatting\n        for lists and code blocks when appropriate.\n    suggestions:\n      type: array\n      items:\n        type: string\n      description: >\n        2-3 natural follow-up questions the user might ask.\n        Return an empty array if no suggestions are relevant.\n```\n\n**Keep types focused:**\n\nCreate separate types for different response formats rather than one complex type with many optional fields. Use discriminated unions when the response can be one of several distinct variants.\n\n**Handle streaming gracefully:**\n\nThe client receives partial objects during streaming. Design your UI to handle incomplete data (e.g., show skeleton loaders for missing fields).\n\n## Naming Conventions\n\n| Element        | Convention                        | Examples                                |\n| -------------- | --------------------------------- | --------------------------------------- |\n| Type names     | PascalCase                        | `Product`, `UserProfile`, `OrderStatus` |\n| Property names | camelCase                         | `firstName`, `orderId`, `isActive`      |\n| Enum values    | lowercase_snake_case or camelCase | `in_stock`, `pending`, `creditCard`     |\n\n## Validation\n\nTypes are validated when the protocol is loaded:\n\n- Type names must be PascalCase\n- Referenced types must exist\n- Circular references are not allowed\n- Union variants must have unique discriminator values\n- Arrays with `type: array` must have an `items` definition\n\n## Limitations\n\n### Type Definition Limits\n\n- **No standalone `array` or `object`** \u2014 Define a custom type instead, or use `unknown` for untyped data\n- **No recursive types** \u2014 A type cannot reference itself (directly or indirectly)\n- **No generic types** \u2014 Types are concrete, not parameterized\n- **String enums only** \u2014 `enum` values must be strings\n- **No array constraints** \u2014 `minItems` and `maxItems` are not supported (LLM providers don\'t enforce them)\n\n### Tool Limitations\n\n- **Tool parameters are always objects** \u2014 Each tool call is `{ param1: value1, param2: value2, ... }`\n- **Array parameters need named types** \u2014 Use top-level array types for array parameters\n\n### Structured Output Limitations\n\n- **responseType must be an object type** \u2014 Only object types can be used as responseType\n- **Discriminated unions need object wrapper** \u2014 Unions (`anyOf`) are not allowed at the schema root\n- **Array types need object wrapper** \u2014 Arrays cannot be used directly as responseType\n- **Primitives are not allowed** \u2014 `string`, `number`, etc. cannot be used as responseType\n\nThese limitations exist because LLM providers (OpenAI, Anthropic) require the root schema to be an object:\n\n- [OpenAI: Root objects must not be anyOf](https://platform.openai.com/docs/guides/structured-outputs#root-objects-must-not-be-anyof-and-must-be-an-object)\n- JSON Schema validation works best with explicit object structures at the root\n',
1311 1338 | excerpt: "Types Types let you define reusable data structures for your agent. Use them in inputs, triggers, tools, resources, variables, and structured output responses. Why Types? - Reusability \u2014 Define...",
1312 1339 | order: 10
1340 | + },
1341 | + {
1342 | + slug: "protocol/workers",
1343 | + section: "protocol",
1344 | + title: "Workers",
1345 | + description: "Defining worker agents for background and task-based execution.",
1346 | +
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect     | Interactive                        | Worker                        |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure  | `triggers` + `handlers` + `agent`  | `steps` + `output`            |\n| LLM Config | Global `agent:` section            | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger               | Direct execution with input   |\n| Session    | Persists across triggers (24h TTL) | Single execution              |\n| Result     | Streaming chat                     | Streaming + output value      |\n
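\nA minimal sketch of the invocation difference, using the Server SDK (IDs and inputs are illustrative; worker execution options are omitted):\n\n```typescript\n// Interactive: fire a named trigger on a persistent session.\nconst session = client.agentSessions.attach(sessionId, { tools: {} });\nconst chatEvents = session.execute({\n  type: \'trigger\',\n  triggerName: \'user-message\',\n  input: { USER_MESSAGE: \'Hello!\' },\n});\n\n// Worker: one direct execution with input; the stream ends with the output value.\nconst workerEvents = client.workers.execute(agentId, { TOPIC: \'AI safety\' });\n```\n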
\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n  TOPIC:\n    type: string\n    description: Topic to research\n  DEPTH:\n    type: string\n    optional: true\n    default: medium\n\n# Variables for intermediate results\nvariables:\n  RESEARCH_DATA:\n    type: string\n  ANALYSIS:\n    type: string\n    description: Final analysis result\n\n# Tools available to the worker\ntools:\n  web-search:\n    description: Search the web\n    parameters:\n      query: { type: string }\n\n# Sequential execution steps\nsteps:\n  Start research:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    system: research-system\n    input: [TOPIC, DEPTH]\n    tools: [web-search]\n    maxSteps: 5\n\n  Add research request:\n    block: add-message\n    thread: research\n    role: user\n    prompt: research-prompt\n    input: [TOPIC, DEPTH]\n\n  Generate research:\n    block: next-message\n    thread: research\n    output: RESEARCH_DATA\n\n  Start analysis:\n    block: start-thread\n    thread: analysis\n    model: anthropic/claude-sonnet-4-5\n    system: analysis-system\n\n  Add analysis request:\n    block: add-message\n    thread: analysis\n    role: user\n    prompt: analysis-prompt\n    input: [RESEARCH_DATA]\n\n  Generate analysis:\n    block: next-message\n    thread: analysis\n    output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n  "slug": "research-assistant",\n  "name": "Research Assistant",\n  "description": "Researches topics and returns structured analysis",\n  "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n  Start thread A:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    tools: [tool-a]\n\n  Start thread B:\n    block: start-thread\n    thread: analysis\n    model: openai/gpt-4o\n    tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n  user-message:\n    Add message:\n      block: add-message\n      # ...\n\n# Worker: Steps execute in sequence\nsteps:\n  Add message:\n    block: add-message\n    # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n  RESULT:\n    type: string\n\nsteps:\n  # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block              | Purpose                                      |\n| ------------------ | -------------------------------------------- |\n| `start-thread`     | Create a named thread with LLM configuration |\n| `add-message`      | Add a message to a thread                    |\n| `next-message`     | Generate LLM response                        |\n| `tool-call`        | Call a tool deterministically                |\n| `set-resource`     | Update a resource value                      |\n| `serialize-thread` | Convert thread to text                       |\n| `generate-image`   | Generate an image from a prompt variable     |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n  Start research:\n    block: start-thread\n    thread: research\n    model: anthropic/claude-sonnet-4-5\n    system: research-system\n    input: [TOPIC]\n    tools: [web-search]\n    thinking: medium\n    maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field         | Description                                       |\n| ------------- | ------------------------------------------------- |\n| `thread`      | Thread name (defaults to block name)              |\n| `model`       | LLM model to use                                  |\n| `system`      | System prompt filename (required)                 |\n| `input`       | Variables for system prompt                       |\n| `tools`       | Tools available in this thread                    |\n| `workers`     | Workers available to this thread (as LLM tools)   |\n| `imageModel`  | Image generation model                            |\n| `thinking`    | Extended reasoning level                          |\n| `temperature` | Model temperature                                 |\n| `maxSteps`    | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n  CONVERSATION_SUMMARY:\n    type: string\n    description: Summary to generate a title for\n\n# Variables\nvariables:\n  TITLE:\n    type: string\n    description: The generated title\n\n# Steps\nsteps:\n  Start title thread:\n    block: start-thread\n    thread: title-gen\n    model: anthropic/claude-sonnet-4-5\n    system: title-system\n\n  Add title request:\n    block: add-message\n    thread: title-gen\n    role: user\n    prompt: title-request\n    input: [CONVERSATION_SUMMARY]\n\n  Generate title:\n    block: next-message\n    thread: title-gen\n    output: TITLE\n    display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic behavior:
\n\n```yaml\ninput:\n  USER_MESSAGE:\n    type: string\n    description: The user\'s message to respond to\n  USER_ID:\n    type: string\n    description: User ID for account lookups\n    optional: true\n\ntools:\n  get-user-account:\n    description: Looking up account information\n    parameters:\n      userId: { type: string }\n  create-support-ticket:\n    description: Creating a support ticket\n    parameters:\n      summary: { type: string }\n      priority: { type: string }\n\nvariables:\n  ASSISTANT_RESPONSE:\n    type: string\n  CHAT_TRANSCRIPT:\n    type: string\n  CONVERSATION_SUMMARY:\n    type: string\n\nsteps:\n  # Thread 1: Chat with agentic tool calling\n  Start chat thread:\n    block: start-thread\n    thread: chat\n    model: anthropic/claude-sonnet-4-5\n    system: chat-system\n    input: [USER_ID]\n    tools: [get-user-account, create-support-ticket]\n    thinking: medium\n    maxSteps: 5\n\n  Add user message:\n    block: add-message\n    thread: chat\n    role: user\n    prompt: user-message\n    input: [USER_MESSAGE]\n\n  Generate response:\n    block: next-message\n    thread: chat\n    output: ASSISTANT_RESPONSE\n    display: stream\n\n  # Serialize for summary\n  Save conversation:\n    block: serialize-thread\n    thread: chat\n    output: CHAT_TRANSCRIPT\n\n  # Thread 2: Summary generation\n  Start summary thread:\n    block: start-thread\n    thread: summary\n    model: anthropic/claude-sonnet-4-5\n    system: summary-system\n    thinking: low\n\n  Add summary request:\n    block: add-message\n    thread: summary\n    role: user\n    prompt: summary-request\n    input: [CHAT_TRANSCRIPT]\n\n  Generate summary:\n    block: next-message\n    thread: summary\n    output: CONVERSATION_SUMMARY\n    display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n  agentId,\n  { TOPIC: \'AI safety\' },\n  {\n    tools: {\n      \'web-search\': async (args) => {\n        return await searchWeb(args.query);\n      },\n    },\n  },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event           | Description                        |\n| --------------- | ---------------------------------- |\n| `worker-start`  | Worker execution begins            |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n
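\nA sketch of consuming the stream (the event names come from the table above; the payload fields, like `output` on `worker-result`, are assumptions for illustration):\n\n```typescript\nlet result: unknown;\n\nfor await (const event of client.workers.execute(agentId, { TOPIC: \'AI safety\' })) {\n  if (event.type === \'worker-start\') {\n    console.log(\'worker started\');\n  } else if (event.type === \'worker-result\') {\n    result = event.output; // assumed field carrying the worker\'s output value\n  }\n}\n```\n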
\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. **Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n  generate-title:\n    description: Generating conversation title\n    display: description\n  research-assistant:\n    description: Researching topic\n    display: stream\n    tools:\n      search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n  request-human:\n    Generate title:\n      block: run-worker\n      worker: generate-title\n      input:\n        CONVERSATION_SUMMARY: SUMMARY\n      output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  workers: [generate-title, research-assistant]\n  agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode          | Behavior                          |\n| ------------- | --------------------------------- |\n| `hidden`      | Worker runs silently              |\n| `name`        | Shows worker name                 |\n| `description` | Shows description text            |\n| `stream`      | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n  research-assistant:\n    description: Research topics\n    tools:\n      search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
1347 | + excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
1348 | + order: 11
1313 1349 | }
1314 1350 | ]
1315 1351 | },
@@ -1432,4 +1468,4 @@ export {
1432 1468 | getDocSlugs,
1433 1469 | getSectionBySlug
1434 1470 | };
1435 | - //# sourceMappingURL=chunk-KUB6BGPR.js.map
1471 | + //# sourceMappingURL=chunk-WQ7BTD5T.js.map