@octavus/docs 2.10.0 → 2.12.0
This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between the two versions as they appear in the public registry.
- package/content/02-server-sdk/01-overview.md +16 -0
- package/content/02-server-sdk/05-cli.md +9 -3
- package/content/02-server-sdk/06-workers.md +218 -143
- package/content/03-client-sdk/08-file-uploads.md +57 -3
- package/content/04-protocol/01-overview.md +33 -6
- package/content/04-protocol/05-skills.md +43 -7
- package/content/04-protocol/06-handlers.md +3 -0
- package/content/04-protocol/07-agent-config.md +38 -13
- package/content/04-protocol/09-skills-advanced.md +50 -29
- package/content/04-protocol/11-workers.md +40 -5
- package/content/04-protocol/12-references.md +189 -0
- package/content/05-api-reference/03-agents.md +31 -9
- package/dist/{chunk-HPVIPOLY.js → chunk-PQ5AGOPY.js} +27 -27
- package/dist/chunk-PQ5AGOPY.js.map +1 -0
- package/dist/chunk-RRXIH3DI.js +1507 -0
- package/dist/chunk-RRXIH3DI.js.map +1 -0
- package/dist/{chunk-RZZE5BMI.js → chunk-SAB5XUB6.js} +49 -31
- package/dist/chunk-SAB5XUB6.js.map +1 -0
- package/dist/content.js +1 -1
- package/dist/docs.json +24 -15
- package/dist/index.js +1 -1
- package/dist/search-index.json +1 -1
- package/dist/search.js +1 -1
- package/dist/search.js.map +1 -1
- package/dist/sections.json +24 -15
- package/package.json +1 -1
- package/dist/chunk-HPVIPOLY.js.map +0 -1
- package/dist/chunk-RZZE5BMI.js.map +0 -1
@@ -23,7 +23,7 @@ var docs_default = [
  section: "server-sdk",
  title: "Overview",
  description: "Introduction to the Octavus Server SDK for backend integration.",
- content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.
+ content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.12.0`\n\n## Installation\n\n```bash\nnpm install @octavus/server-sdk\n```\n\nFor agent management (sync, validate), install the CLI as a dev dependency:\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n```\n\n## Key Features\n\n### Agent Management\n\nAgent definitions are managed via the CLI. See the [CLI documentation](/docs/server-sdk/cli) for details.\n\n```bash\n# Sync agent from local files\noctavus sync ./agents/support-chat\n\n# Output: Created: support-chat\n# Agent ID: clxyz123abc456\n```\n\n### Session Management\n\nCreate and manage agent sessions using the agent ID:\n\n```typescript\n// Create a new session (use agent ID from CLI sync)\nconst sessionId = await client.agentSessions.create('clxyz123abc456', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n});\n\n// Get UI-ready session messages (for session restore)\nconst session = await client.agentSessions.getMessages(sessionId);\n```\n\n### Tool Handlers\n\nTools run on your server with your data:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-account': async (args) => {\n // Access your database, APIs, etc.\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n### Streaming\n\nAll responses stream in real-time:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() returns an async generator of events\nconst events = session.execute({\n type: 'trigger',\n triggerName: 'user-message',\n input: { USER_MESSAGE: 'Hello!' },\n});\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n### Workers\n\nExecute worker agents for task-based processing:\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(agentId, {\n TOPIC: 'AI safety',\n});\n\n// Streaming: observe events in real-time\nfor await (const event of client.workers.execute(agentId, input)) {\n // Handle stream events\n}\n```\n\n## API Reference\n\n### OctavusClient\n\nThe main entry point for interacting with Octavus.\n\n```typescript\ninterface OctavusClientConfig {\n baseUrl: string; // Octavus API URL\n apiKey?: string; // Your API key\n traceModelRequests?: boolean; // Enable model request tracing (default: false)\n}\n\nclass OctavusClient {\n readonly agents: AgentsApi;\n readonly agentSessions: AgentSessionsApi;\n readonly workers: WorkersApi;\n readonly files: FilesApi;\n\n constructor(config: OctavusClientConfig);\n}\n```\n\n### AgentSessionsApi\n\nManages agent sessions.\n\n```typescript\nclass AgentSessionsApi {\n // Create a new session\n async create(agentId: string, input?: Record<string, unknown>): Promise<string>;\n\n // Get full session state (for debugging/internal use)\n async get(sessionId: string): Promise<SessionState>;\n\n // Get UI-ready messages (for client display)\n async getMessages(sessionId: string): Promise<UISessionState>;\n\n // Attach to a session for triggering\n attach(sessionId: string, options?: SessionAttachOptions): AgentSession;\n}\n\n// Full session state (internal format)\ninterface SessionState {\n id: string;\n agentId: string;\n input: Record<string, unknown>;\n variables: Record<string, unknown>;\n resources: Record<string, unknown>;\n messages: ChatMessage[]; // Internal message format\n createdAt: string;\n updatedAt: string;\n}\n\n// UI-ready session state\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready messages for frontend\n}\n```\n\n### AgentSession\n\nHandles request execution and streaming for a specific session.\n\n```typescript\nclass AgentSession {\n // Execute a request and stream parsed events\n execute(request: SessionRequest, options?: TriggerOptions): AsyncGenerator<StreamEvent>;\n\n // Get the session ID\n getSessionId(): string;\n}\n\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n\n// Helper to convert events to SSE stream\nfunction toSSEStream(events: AsyncIterable<StreamEvent>): ReadableStream<Uint8Array>;\n```\n\n### FilesApi\n\nHandles file uploads for sessions.\n\n```typescript\nclass FilesApi {\n // Get presigned URLs for file uploads\n async getUploadUrls(sessionId: string, files: FileUploadRequest[]): Promise<UploadUrlsResponse>;\n}\n\ninterface FileUploadRequest {\n filename: string;\n mediaType: string;\n size: number;\n}\n\ninterface UploadUrlsResponse {\n files: {\n id: string; // File ID for references\n uploadUrl: string; // PUT to this URL\n downloadUrl: string; // GET URL after upload\n }[];\n}\n```\n\nThe client uploads files directly to S3 using the presigned upload URL. See [File Uploads](/docs/client-sdk/file-uploads) for the full integration pattern.\n\n## Next Steps\n\n- [Sessions](/docs/server-sdk/sessions) \u2014 Deep dive into session management\n- [Tools](/docs/server-sdk/tools) \u2014 Implementing tool handlers\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Workers](/docs/server-sdk/workers) \u2014 Executing worker agents\n- [Debugging](/docs/server-sdk/debugging) \u2014 Model request tracing and debugging\n",
  excerpt: "Server SDK Overview The package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation...",
  order: 1
  },
@@ -59,7 +59,7 @@ var docs_default = [
  section: "server-sdk",
  title: "CLI",
  description: "Command-line interface for validating and syncing agent definitions.",
- content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.
+ content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.12.0`\n\n## Installation\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Configuration\n\nThe CLI requires an API key with the **Agents** permission.\n\n### Environment Variables\n\n| Variable | Description |\n| --------------------- | ---------------------------------------------- |\n| `OCTAVUS_CLI_API_KEY` | API key with "Agents" permission (recommended) |\n| `OCTAVUS_API_KEY` | Fallback if `OCTAVUS_CLI_API_KEY` not set |\n| `OCTAVUS_API_URL` | Optional, defaults to `https://octavus.ai` |\n\n### Two-Key Strategy (Recommended)\n\nFor production deployments, use separate API keys with minimal permissions:\n\n```bash\n# CI/CD or .env.local (not committed)\nOCTAVUS_CLI_API_KEY=oct_sk_... # "Agents" permission only\n\n# Production .env\nOCTAVUS_API_KEY=oct_sk_... # "Sessions" permission only\n```\n\nThis ensures production servers only have session permissions (smaller blast radius if leaked), while agent management is restricted to development/CI environments.\n\n### Multiple Environments\n\nUse separate Octavus projects for staging and production, each with their own API keys. The `--env` flag lets you load different environment files:\n\n```bash\n# Local development (default: .env)\noctavus sync ./agents/my-agent\n\n# Staging project\noctavus --env .env.staging sync ./agents/my-agent\n\n# Production project\noctavus --env .env.production sync ./agents/my-agent\n```\n\nExample environment files:\n\n```bash\n# .env.staging (syncs to your staging project)\nOCTAVUS_CLI_API_KEY=oct_sk_staging_project_key...\n\n# .env.production (syncs to your production project)\nOCTAVUS_CLI_API_KEY=oct_sk_production_project_key...\n```\n\nEach project has its own agents, so you\'ll get different agent IDs per environment.\n\n## Global Options\n\n| Option | Description |\n| -------------- | ------------------------------------------------------- |\n| `--env <file>` | Load environment from a specific file (default: `.env`) |\n| `--help` | Show help |\n| `--version` | Show version |\n\n## Commands\n\n### `octavus sync <path>`\n\nSync an agent definition to the platform. Creates the agent if it doesn\'t exist, or updates it if it does.\n\n```bash\noctavus sync ./agents/my-agent\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Reading agent from ./agents/my-agent...\n\u2139 Syncing support-chat...\n\u2713 Created: support-chat\n Agent ID: clxyz123abc456\n```\n\n### `octavus validate <path>`\n\nValidate an agent definition without saving. Useful for CI/CD pipelines.\n\n```bash\noctavus validate ./agents/my-agent\n```\n\n**Exit codes:**\n\n- `0` \u2014 Validation passed\n- `1` \u2014 Validation errors\n- `2` \u2014 Configuration errors (missing API key, etc.)\n\n### `octavus list`\n\nList all agents in your project.\n\n```bash\noctavus list\n```\n\n**Example output:**\n\n```\nSLUG NAME FORMAT ID\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nsupport-chat Support Chat Agent interactive clxyz123abc456\n\n1 agent(s)\n```\n\n### `octavus get <slug>`\n\nGet details about a specific agent by its slug.\n\n```bash\noctavus get support-chat\n```\n\n### `octavus archive <slug>`\n\nArchive an agent by slug (soft delete). Archived agents are removed from the active agent list and their slug is freed for reuse.\n\n```bash\noctavus archive support-chat\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Archiving support-chat...\n\u2713 Archived: support-chat\n Agent ID: clxyz123abc456\n```\n\n## Agent Directory Structure\n\nThe CLI expects agent definitions in a specific directory structure:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json # Required: Agent metadata\n\u251C\u2500\u2500 protocol.yaml # Required: Agent protocol\n\u251C\u2500\u2500 prompts/ # Optional: Prompt templates\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u2514\u2500\u2500 user-message.md\n\u2514\u2500\u2500 references/ # Optional: Reference documents\n \u2514\u2500\u2500 api-guidelines.md\n```\n\n### references/\n\nReference files are markdown documents with YAML frontmatter containing a `description`. The agent can fetch these on demand during execution. See [References](/docs/protocol/references) for details.\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "A helpful assistant",\n "format": "interactive"\n}\n```\n\n### protocol.yaml\n\nSee the [Protocol documentation](/docs/protocol/overview) for details on protocol syntax.\n\n## CI/CD Integration\n\n### GitHub Actions\n\n```yaml\nname: Validate and Sync Agents\n\non:\n push:\n branches: [main]\n paths:\n - \'agents/**\'\n\njobs:\n sync:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - uses: actions/setup-node@v4\n with:\n node-version: \'22\'\n\n - run: npm install\n\n - name: Validate agent\n run: npx octavus validate ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n\n - name: Sync agent\n run: npx octavus sync ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n```\n\n### Package.json Scripts\n\nAdd sync scripts to your `package.json`:\n\n```json\n{\n "scripts": {\n "agents:validate": "octavus validate ./agents/my-agent",\n "agents:sync": "octavus sync ./agents/my-agent"\n },\n "devDependencies": {\n "@octavus/cli": "^0.1.0"\n }\n}\n```\n\n## Workflow\n\nThe recommended workflow for managing agents:\n\n1. **Define agent locally** \u2014 Create `settings.json`, `protocol.yaml`, and prompts\n2. **Validate** \u2014 Run `octavus validate ./my-agent` to check for errors\n3. **Sync** \u2014 Run `octavus sync ./my-agent` to push to platform\n4. **Store agent ID** \u2014 Save the output ID in an environment variable\n5. **Use in app** \u2014 Read the ID from env and pass to `client.agentSessions.create()`\n\n```bash\n# After syncing: octavus sync ./agents/support-chat\n# Output: Agent ID: clxyz123abc456\n\n# Add to your .env file\nOCTAVUS_SUPPORT_AGENT_ID=clxyz123abc456\n```\n\n```typescript\nconst agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID;\n\nconst sessionId = await client.agentSessions.create(agentId, {\n COMPANY_NAME: \'Acme Corp\',\n});\n```\n',
  excerpt: "Octavus CLI The package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform. Current version: Installation ...",
  order: 5
  },
@@ -68,8 +68,8 @@ var docs_default = [
  section: "server-sdk",
  title: "Workers",
  description: "Executing worker agents with the Server SDK.",
- content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Extracting Output\n\nTo get just the worker's output value:\n\n```typescript\nasync function executeWorker(\n client: OctavusClient,\n agentId: string,\n input: Record<string, unknown>,\n): Promise<unknown> {\n const events = client.workers.execute(agentId, input);\n\n for await (const event of events) {\n if (event.type === 'worker-result') {\n if (event.error) {\n throw new Error(event.error);\n }\n return event.output;\n }\n }\n\n return undefined;\n}\n\n// Usage\nconst analysis = await executeWorker(client, agentId, { TOPIC: 'AI' });\n```\n\n## Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n // Execute tools client-side\n const results = await executeClientTools(event.toolCalls);\n\n // Continue execution\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n## Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n## Cancellation\n\nUse an abort signal to cancel execution:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 30 seconds\nsetTimeout(() => controller.abort(), 30000);\n\nconst events = client.workers.execute(agentId, input, {\n signal: controller.signal,\n});\n\ntry {\n for await (const event of events) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Error Handling\n\nErrors can occur at different levels:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n // Stream-level error event\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n // Worker-level error in result\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\nError types include:\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Full Example\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\n// Run the worker\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
- excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference
+ content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\nconst { output, sessionId } = await client.workers.generate(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\nconsole.log('Result:', output);\nconsole.log(`Debug: ${client.baseUrl}/sessions/${sessionId}`);\n```\n\n## WorkersApi Reference\n\n### generate()\n\nExecute a worker and return the output directly.\n\n```typescript\nasync generate(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): Promise<WorkerGenerateResult>\n```\n\nRuns the worker to completion and returns the output value. This is the simplest way to execute a worker.\n\n**Returns:**\n\n```typescript\ninterface WorkerGenerateResult {\n /** The worker's output value */\n output: unknown;\n /** Session ID for debugging (usable for session URLs) */\n sessionId: string;\n}\n```\n\n**Throws:** `WorkerError` if the worker fails or completes without producing output.\n\n### execute()\n\nExecute a worker and stream the response. Use this when you need to observe intermediate events like text deltas, tool calls, or progress tracking.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n### Shared Options\n\nAll methods accept the same options:\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n return await searchWeb(args.query);\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Error Handling\n\n### WorkerError (generate)\n\n`generate()` throws a `WorkerError` on failure. The error includes an optional `sessionId` for constructing debug URLs:\n\n```typescript\nimport { OctavusClient, WorkerError } from '@octavus/server-sdk';\n\ntry {\n const { output } = await client.workers.generate(agentId, input);\n console.log('Result:', output);\n} catch (error) {\n if (error instanceof WorkerError) {\n console.error('Worker failed:', error.message);\n if (error.sessionId) {\n console.error(`Debug: ${client.baseUrl}/sessions/${error.sessionId}`);\n }\n }\n}\n```\n\n### Stream Errors (execute)\n\nWhen using `execute()`, errors appear as stream events:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\n### Error Types\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Cancellation\n\nUse an abort signal to cancel execution:\n\n```typescript\nconst { output } = await client.workers.generate(agentId, input, {\n signal: AbortSignal.timeout(30_000),\n});\n```\n\nWith `execute()` and a manual controller:\n\n```typescript\nconst controller = new AbortController();\nsetTimeout(() => controller.abort(), 30000);\n\ntry {\n for await (const event of client.workers.execute(agentId, input, {\n signal: controller.signal,\n })) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Streaming\n\nWhen you need real-time visibility into the worker's execution \u2014 text generation, tool calls, or progress \u2014 use `execute()` instead of `generate()`.\n\n### Basic Streaming\n\n```typescript\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n### Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n### Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n const results = await executeClientTools(event.toolCalls);\n\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n### Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n#### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n#### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Full Examples\n\n### generate()\n\n```typescript\nimport { OctavusClient, WorkerError } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\ntry {\n const { output, sessionId } = await client.workers.generate(\n 'research-assistant-id',\n {\n TOPIC: 'AI safety best practices',\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => await performWebSearch(query),\n },\n signal: AbortSignal.timeout(120_000),\n },\n );\n\n console.log('Result:', output);\n} catch (error) {\n if (error instanceof WorkerError) {\n console.error('Failed:', error.message);\n if (error.sessionId) {\n console.error(`Debug: ${client.baseUrl}/sessions/${error.sessionId}`);\n }\n }\n}\n```\n\n### execute()\n\nFor full control over streaming events and progress tracking:\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
+ excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference generate()...",
  order: 6
  },
  {
@@ -86,7 +86,7 @@ var docs_default = [
  section: "client-sdk",
  title: "Overview",
  description: "Introduction to the Octavus Client SDKs for building chat interfaces.",
- content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.9.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.9.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
+ content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.12.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.12.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
  excerpt: "Client SDK Overview Octavus provides two packages for frontend integration: | Package | Purpose | Use When | |...",
  order: 1
  },
@@ -513,7 +513,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
  section: "client-sdk",
  title: "File Uploads",
  description: "Uploading images and files for vision models and document processing.",
- content: "\n# File Uploads\n\nThe Client SDK supports uploading images and documents that can be sent with messages. This enables vision model capabilities (analyzing images) and document processing.\n\n## Overview\n\nFile uploads follow a two-step flow:\n\n1. **Request upload URLs** from the platform via your backend\n2. **Upload files directly to S3** using presigned URLs\n3. **Send file references** with your message\n\nThis architecture keeps your API key secure on the server while enabling fast, direct uploads.\n\n## Setup\n\n### Backend: Upload URLs Endpoint\n\nCreate an endpoint that proxies upload URL requests to the Octavus platform:\n\n```typescript\n// app/api/upload-urls/route.ts (Next.js)\nimport { NextResponse } from 'next/server';\nimport { octavus } from '@/lib/octavus';\n\nexport async function POST(request: Request) {\n const { sessionId, files } = await request.json();\n\n // Get presigned URLs from Octavus\n const result = await octavus.files.getUploadUrls(sessionId, files);\n\n return NextResponse.json(result);\n}\n```\n\n### Client: Configure File Uploads\n\nPass `requestUploadUrls` to the chat hook:\n\n```tsx\nimport { useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n // Request upload URLs from your backend\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const response = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return response.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n // ...\n}\n```\n\n## Uploading Files\n\n### Method 1: Upload Before Sending\n\nFor the best UX (showing upload progress), upload files first, then send:\n\n```tsx\nimport { useState, useRef } from 'react';\nimport type { FileReference } from '@octavus/react';\n\nfunction ChatInput({ sessionId }: { sessionId: string }) {\n const [pendingFiles, setPendingFiles] = useState<FileReference[]>([]);\n const [uploading, setUploading] = useState(false);\n const fileInputRef = useRef<HTMLInputElement>(null);\n\n const { send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n async function handleFileSelect(event: React.ChangeEvent<HTMLInputElement>) {\n const files = event.target.files;\n if (!files?.length) return;\n\n setUploading(true);\n try {\n // Upload files with progress tracking\n const fileRefs = await uploadFiles(files, (fileIndex, progress) => {\n console.log(`File ${fileIndex}: ${progress}%`);\n });\n setPendingFiles((prev) => [...prev, ...fileRefs]);\n } finally {\n setUploading(false);\n }\n }\n\n async function handleSend(message: string) {\n await send(\n 'user-message',\n {\n USER_MESSAGE: message,\n FILES: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n {\n userMessage: {\n content: message,\n files: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n },\n );\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* File preview */}\n {pendingFiles.map((file) => (\n <img key={file.id} src={file.url} alt={file.filename} className=\"h-16\" />\n ))}\n\n <input\n ref={fileInputRef}\n type=\"file\"\n accept=\"image/*,.pdf\"\n multiple\n onChange={handleFileSelect}\n className=\"hidden\"\n />\n\n <button onClick={() => fileInputRef.current?.click()} disabled={uploading}>\n {uploading ? 'Uploading...' : 'Attach'}\n </button>\n </div>\n );\n}\n```\n\n### Method 2: Upload on Send (Automatic)\n\nFor simpler implementations, pass `File` objects directly:\n\n```tsx\nasync function handleSend(message: string, files?: File[]) {\n await send(\n 'user-message',\n { USER_MESSAGE: message, FILES: files },\n { userMessage: { content: message, files } },\n );\n}\n```\n\nThe SDK automatically uploads the files before sending. Note: This doesn't provide upload progress.\n\n## FileReference Type\n\nFile references contain metadata and URLs:\n\n```typescript\ninterface FileReference {\n /** Unique file ID (platform-generated) */\n id: string;\n /** IANA media type (e.g., 'image/png', 'application/pdf') */\n mediaType: string;\n /** Presigned download URL (S3) */\n url: string;\n /** Original filename */\n filename?: string;\n /** File size in bytes */\n size?: number;\n}\n```\n\n## Protocol Integration\n\nTo accept files in your agent protocol, use the `file[]` type:\n\n```yaml\ntriggers:\n user-message:\n input:\n USER_MESSAGE:\n type: string\n description: The user's message\n FILES:\n type: file[]\n optional: true\n description: User-attached images for vision analysis\n\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input:\n - USER_MESSAGE\n files:\n - FILES # Attach files to the message\n display: hidden\n\n Respond to user:\n block: next-message\n```\n\nThe `file` type is a built-in type representing uploaded files. Use `file[]` for arrays of files.\n\n## Supported File Types\n\n| Type | Media Types |\n| --------- | -------------------------------------------------------------------- |\n| Images | `image/jpeg`, `image/png`, `image/gif`, `image/webp` |\n| Documents | `application/pdf`, `text/plain`, `text/markdown`, `application/json` |\n\n## File Limits\n\n| Limit | Value |\n| --------------------- | ---------- |\n| Max file size | 10 MB |\n| Max total per request | 50 MB |\n| Max files per request | 20 |\n| Upload URL expiry | 15 minutes |\n| Download URL expiry | 24 hours |\n\n## Rendering User Files\n\nUser-uploaded files appear as `UIFilePart` in user messages:\n\n```tsx\nfunction UserMessage({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'file') {\n if (part.mediaType.startsWith('image/')) {\n return (\n <img\n key={i}\n src={part.url}\n alt={part.filename || 'Uploaded image'}\n className=\"max-h-48 rounded-lg\"\n />\n );\n }\n return (\n <a key={i} href={part.url} className=\"text-blue-500\">\n \u{1F4C4} {part.filename}\n </a>\n );\n }\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Server SDK: Files API\n\nThe Server SDK provides direct access to the Files API:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Get presigned upload URLs\nconst { files } = await client.files.getUploadUrls(sessionId, [\n { filename: 'photo.jpg', mediaType: 'image/jpeg', size: 102400 },\n { filename: 'doc.pdf', mediaType: 'application/pdf', size: 204800 },\n]);\n\n// files[0].id - Use in FileReference\n// files[0].uploadUrl - PUT to this URL to upload\n// files[0].downloadUrl - Use as FileReference.url\n```\n\n## Complete Example\n\nHere's a full chat input component with file upload:\n\n```tsx\n'use client';\n\nimport { useState, useRef, useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport, type FileReference } from '@octavus/react';\n\ninterface PendingFile {\n file: File;\n id: string;\n status: 'uploading' | 'done' | 'error';\n progress: number;\n fileRef?: FileReference;\n error?: string;\n}\n\nexport function Chat({ sessionId }: { sessionId: string }) {\n const [input, setInput] = useState('');\n const [pendingFiles, setPendingFiles] = useState<PendingFile[]>([]);\n const fileInputRef = useRef<HTMLInputElement>(null);\n const fileIdCounter = useRef(0);\n\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const res = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return res.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n const isUploading = pendingFiles.some((f) => f.status === 'uploading');\n const hasErrors = pendingFiles.some((f) => f.status === 'error');\n const allReady = pendingFiles.every((f) => f.status === 'done');\n\n async function 
handleFileSelect(e: React.ChangeEvent<HTMLInputElement>) {\n const files = Array.from(e.target.files ?? []);\n if (!files.length) return;\n e.target.value = '';\n\n const newPending: PendingFile[] = files.map((file) => ({\n file,\n id: `pending-${++fileIdCounter.current}`,\n status: 'uploading',\n progress: 0,\n }));\n\n setPendingFiles((prev) => [...prev, ...newPending]);\n\n for (const pending of newPending) {\n try {\n const [fileRef] = await uploadFiles([pending.file], (_, progress) => {\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, progress } : f)),\n );\n });\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, status: 'done', fileRef } : f)),\n );\n } catch (err) {\n setPendingFiles((prev) =>\n prev.map((f) =>\n f.id === pending.id ? { ...f, status: 'error', error: String(err) } : f,\n ),\n );\n }\n }\n }\n\n async function handleSubmit() {\n if ((!input.trim() && !pendingFiles.length) || !allReady) return;\n\n const fileRefs = pendingFiles.filter((f) => f.fileRef).map((f) => f.fileRef!);\n\n await send(\n 'user-message',\n {\n USER_MESSAGE: input,\n FILES: fileRefs.length > 0 ? fileRefs : undefined,\n },\n {\n userMessage: {\n content: input,\n files: fileRefs.length > 0 ? fileRefs : undefined,\n },\n },\n );\n\n setInput('');\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* Messages */}\n {messages.map((msg) => (\n <div key={msg.id}>{/* ... render message */}</div>\n ))}\n\n {/* Pending files */}\n {pendingFiles.length > 0 && (\n <div className=\"flex gap-2\">\n {pendingFiles.map((f) => (\n <div key={f.id} className=\"relative\">\n <img\n src={URL.createObjectURL(f.file)}\n alt={f.file.name}\n className=\"h-16 w-16 object-cover rounded\"\n />\n {f.status === 'uploading' && (\n <div className=\"absolute inset-0 flex items-center justify-center bg-black/50\">\n <span className=\"text-white text-xs\">{f.progress}%</span>\n </div>\n )}\n <button\n onClick={() => setPendingFiles((prev) => prev.filter((p) => p.id !== f.id))}\n className=\"absolute -top-2 -right-2 bg-red-500 text-white rounded-full w-5 h-5\"\n >\n \xD7\n </button>\n </div>\n ))}\n </div>\n )}\n\n {/* Input */}\n <div className=\"flex gap-2\">\n <input type=\"file\" ref={fileInputRef} onChange={handleFileSelect} hidden />\n <button onClick={() => fileInputRef.current?.click()}>\u{1F4CE}</button>\n <input\n value={input}\n onChange={(e) => setInput(e.target.value)}\n placeholder=\"Type a message...\"\n className=\"flex-1\"\n />\n <button onClick={handleSubmit} disabled={isUploading || hasErrors}>\n {isUploading ? 'Uploading...' : 'Send'}\n </button>\n </div>\n </div>\n );\n}\n```\n",
+
content: "\n# File Uploads\n\nThe Client SDK supports uploading images and documents that can be sent with messages. This enables vision model capabilities (analyzing images) and document processing.\n\n## Overview\n\nFile uploads follow a two-step flow:\n\n1. **Request upload URLs** from the platform via your backend\n2. **Upload files directly to S3** using presigned URLs\n3. **Send file references** with your message\n\nThis architecture keeps your API key secure on the server while enabling fast, direct uploads.\n\n## Setup\n\n### Backend: Upload URLs Endpoint\n\nCreate an endpoint that proxies upload URL requests to the Octavus platform:\n\n```typescript\n// app/api/upload-urls/route.ts (Next.js)\nimport { NextResponse } from 'next/server';\nimport { octavus } from '@/lib/octavus';\n\nexport async function POST(request: Request) {\n const { sessionId, files } = await request.json();\n\n // Get presigned URLs from Octavus\n const result = await octavus.files.getUploadUrls(sessionId, files);\n\n return NextResponse.json(result);\n}\n```\n\n### Client: Configure File Uploads\n\nPass `requestUploadUrls` to the chat hook:\n\n```tsx\nimport { useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n // Request upload URLs from your backend\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const response = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return response.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n // Optional: configure upload timeout and retry behavior\n uploadOptions: {\n timeoutMs: 60_000, // Per-file timeout (default: 60s, set to 0 to disable)\n maxRetries: 2, // Retry attempts on transient failures (default: 2)\n retryDelayMs: 1_000, // Delay between retries (default: 1s)\n },\n });\n\n // ...\n}\n```\n\n## Uploading Files\n\n### Method 1: Upload Before Sending\n\nFor the best UX (showing upload progress), upload files first, then send:\n\n```tsx\nimport { useState, useRef } from 'react';\nimport type { FileReference } from '@octavus/react';\n\nfunction ChatInput({ sessionId }: { sessionId: string }) {\n const [pendingFiles, setPendingFiles] = useState<FileReference[]>([]);\n const [uploading, setUploading] = useState(false);\n const fileInputRef = useRef<HTMLInputElement>(null);\n\n const { send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n async function handleFileSelect(event: React.ChangeEvent<HTMLInputElement>) {\n const files = event.target.files;\n if (!files?.length) return;\n\n setUploading(true);\n try {\n // Upload files with progress tracking\n const fileRefs = await uploadFiles(files, (fileIndex, progress) => {\n console.log(`File ${fileIndex}: ${progress}%`);\n });\n setPendingFiles((prev) => [...prev, ...fileRefs]);\n } finally {\n setUploading(false);\n }\n }\n\n async function handleSend(message: string) {\n await send(\n 'user-message',\n {\n 
USER_MESSAGE: message,\n FILES: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n {\n userMessage: {\n content: message,\n files: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n },\n );\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* File preview */}\n {pendingFiles.map((file) => (\n <img key={file.id} src={file.url} alt={file.filename} className=\"h-16\" />\n ))}\n\n <input\n ref={fileInputRef}\n type=\"file\"\n accept=\"image/*,.pdf\"\n multiple\n onChange={handleFileSelect}\n className=\"hidden\"\n />\n\n <button onClick={() => fileInputRef.current?.click()} disabled={uploading}>\n {uploading ? 'Uploading...' : 'Attach'}\n </button>\n </div>\n );\n}\n```\n\n### Method 2: Upload on Send (Automatic)\n\nFor simpler implementations, pass `File` objects directly:\n\n```tsx\nasync function handleSend(message: string, files?: File[]) {\n await send(\n 'user-message',\n { USER_MESSAGE: message, FILES: files },\n { userMessage: { content: message, files } },\n );\n}\n```\n\nThe SDK automatically uploads the files before sending. Note: This doesn't provide upload progress.\n\n## Upload Reliability\n\nUploads include built-in timeout and retry logic for handling transient failures (network errors, server issues, mobile network switches).\n\n**Default behavior:**\n\n- **Timeout**: 60 seconds per file \u2014 prevents uploads from hanging on stalled connections\n- **Retries**: 2 automatic retries on transient failures (network errors, 5xx, 429)\n- **Retry delay**: 1 second between retries\n- **Non-retryable errors** (4xx like 403, 404) fail immediately without retrying\n\nOnly the S3 upload is retried \u2014 the presigned URL stays valid for 15 minutes. On retry, the progress callback resets to 0%.\n\nConfigure via `uploadOptions`:\n\n```typescript\nconst { send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n uploadOptions: {\n timeoutMs: 120_000, // 2 minutes for large files\n maxRetries: 3,\n retryDelayMs: 2_000,\n },\n});\n```\n\nTo disable timeout or retries:\n\n```typescript\nuploadOptions: {\n timeoutMs: 0, // No timeout\n maxRetries: 0, // No retries\n}\n```\n\n### Using `OctavusChat` Directly\n\nWhen using the `OctavusChat` class directly (without the React hook), pass `uploadOptions` in the constructor:\n\n```typescript\nconst chat = new OctavusChat({\n transport,\n requestUploadUrls,\n uploadOptions: { timeoutMs: 120_000, maxRetries: 3 },\n});\n```\n\n## FileReference Type\n\nFile references contain metadata and URLs:\n\n```typescript\ninterface FileReference {\n /** Unique file ID (platform-generated) */\n id: string;\n /** IANA media type (e.g., 'image/png', 'application/pdf') */\n mediaType: string;\n /** Presigned download URL (S3) */\n url: string;\n /** Original filename */\n filename?: string;\n /** File size in bytes */\n size?: number;\n}\n```\n\n## Protocol Integration\n\nTo accept files in your agent protocol, use the `file[]` type:\n\n```yaml\ntriggers:\n user-message:\n input:\n USER_MESSAGE:\n type: string\n description: The user's message\n FILES:\n type: file[]\n optional: true\n description: User-attached images for vision analysis\n\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input:\n - USER_MESSAGE\n files:\n - FILES # Attach files to the message\n display: hidden\n\n Respond to user:\n block: next-message\n```\n\nThe `file` type is a built-in type representing uploaded files. 
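A single attachment can also be declared with the scalar `file` type. A minimal sketch (the trigger and input names here are illustrative):\n\n```yaml\ntriggers:\n summarize-document:\n input:\n DOCUMENT:\n type: file\n description: A single document to summarize\n```\n\n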
Use `file[]` for arrays of files.\n\n## Supported File Types\n\n| Type | Media Types |\n| --------- | -------------------------------------------------------------------- |\n| Images | `image/jpeg`, `image/png`, `image/gif`, `image/webp` |\n| Video | `video/mp4`, `video/webm`, `video/quicktime`, `video/mpeg` |\n| Documents | `application/pdf`, `text/plain`, `text/markdown`, `application/json` |\n\n## File Limits\n\n| Limit | Value |\n| --------------------- | ---------- |\n| Max file size | 100 MB |\n| Max total per request | 200 MB |\n| Upload URL expiry | 15 minutes |\n| Download URL expiry | 24 hours |\n\n## Rendering User Files\n\nUser-uploaded files appear as `UIFilePart` in user messages:\n\n```tsx\nfunction UserMessage({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'file') {\n if (part.mediaType.startsWith('image/')) {\n return (\n <img\n key={i}\n src={part.url}\n alt={part.filename || 'Uploaded image'}\n className=\"max-h-48 rounded-lg\"\n />\n );\n }\n return (\n <a key={i} href={part.url} className=\"text-blue-500\">\n \u{1F4C4} {part.filename}\n </a>\n );\n }\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Server SDK: Files API\n\nThe Server SDK provides direct access to the Files API:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Get presigned upload URLs\nconst { files } = await client.files.getUploadUrls(sessionId, [\n { filename: 'photo.jpg', mediaType: 'image/jpeg', size: 102400 },\n { filename: 'doc.pdf', mediaType: 'application/pdf', size: 204800 },\n]);\n\n// files[0].id - Use in FileReference\n// files[0].uploadUrl - PUT to this URL to upload\n// files[0].downloadUrl - Use as FileReference.url\n```\n\n## Complete Example\n\nHere's a full chat input component with file upload:\n\n```tsx\n'use client';\n\nimport { useState, useRef, useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport, type FileReference } from '@octavus/react';\n\ninterface PendingFile {\n file: File;\n id: string;\n status: 'uploading' | 'done' | 'error';\n progress: number;\n fileRef?: FileReference;\n error?: string;\n}\n\nexport function Chat({ sessionId }: { sessionId: string }) {\n const [input, setInput] = useState('');\n const [pendingFiles, setPendingFiles] = useState<PendingFile[]>([]);\n const fileInputRef = useRef<HTMLInputElement>(null);\n const fileIdCounter = useRef(0);\n\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const res = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return res.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n const isUploading = pendingFiles.some((f) => f.status === 'uploading');\n const hasErrors = pendingFiles.some((f) => f.status === 'error');\n const allReady = pendingFiles.every((f) => 
f.status === 'done');\n\n async function handleFileSelect(e: React.ChangeEvent<HTMLInputElement>) {\n const files = Array.from(e.target.files ?? []);\n if (!files.length) return;\n e.target.value = '';\n\n const newPending: PendingFile[] = files.map((file) => ({\n file,\n id: `pending-${++fileIdCounter.current}`,\n status: 'uploading',\n progress: 0,\n }));\n\n setPendingFiles((prev) => [...prev, ...newPending]);\n\n for (const pending of newPending) {\n try {\n const [fileRef] = await uploadFiles([pending.file], (_, progress) => {\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, progress } : f)),\n );\n });\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, status: 'done', fileRef } : f)),\n );\n } catch (err) {\n setPendingFiles((prev) =>\n prev.map((f) =>\n f.id === pending.id ? { ...f, status: 'error', error: String(err) } : f,\n ),\n );\n }\n }\n }\n\n async function handleSubmit() {\n if ((!input.trim() && !pendingFiles.length) || !allReady) return;\n\n const fileRefs = pendingFiles.filter((f) => f.fileRef).map((f) => f.fileRef!);\n\n await send(\n 'user-message',\n {\n USER_MESSAGE: input,\n FILES: fileRefs.length > 0 ? fileRefs : undefined,\n },\n {\n userMessage: {\n content: input,\n files: fileRefs.length > 0 ? fileRefs : undefined,\n },\n },\n );\n\n setInput('');\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* Messages */}\n {messages.map((msg) => (\n <div key={msg.id}>{/* ... render message */}</div>\n ))}\n\n {/* Pending files */}\n {pendingFiles.length > 0 && (\n <div className=\"flex gap-2\">\n {pendingFiles.map((f) => (\n <div key={f.id} className=\"relative\">\n <img\n src={URL.createObjectURL(f.file)}\n alt={f.file.name}\n className=\"h-16 w-16 object-cover rounded\"\n />\n {f.status === 'uploading' && (\n <div className=\"absolute inset-0 flex items-center justify-center bg-black/50\">\n <span className=\"text-white text-xs\">{f.progress}%</span>\n </div>\n )}\n <button\n onClick={() => setPendingFiles((prev) => prev.filter((p) => p.id !== f.id))}\n className=\"absolute -top-2 -right-2 bg-red-500 text-white rounded-full w-5 h-5\"\n >\n \xD7\n </button>\n </div>\n ))}\n </div>\n )}\n\n {/* Input */}\n <div className=\"flex gap-2\">\n <input type=\"file\" ref={fileInputRef} onChange={handleFileSelect} hidden />\n <button onClick={() => fileInputRef.current?.click()}>\u{1F4CE}</button>\n <input\n value={input}\n onChange={(e) => setInput(e.target.value)}\n placeholder=\"Type a message...\"\n className=\"flex-1\"\n />\n <button onClick={handleSubmit} disabled={isUploading || hasErrors}>\n {isUploading ? 'Uploading...' : 'Send'}\n </button>\n </div>\n </div>\n );\n}\n```\n",
excerpt: "File Uploads The Client SDK supports uploading images and documents that can be sent with messages. This enables vision model capabilities (analyzing images) and document processing. Overview File...",
order: 8
},
@@ -540,7 +540,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Overview",
description: "Introduction to Octavus agent protocols.",
-
content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\
+
content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u251C\u2500\u2500 prompts/ # Prompt templates (supports subdirectories)\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u251C\u2500\u2500 user-message.md\n\u2502 \u2514\u2500\u2500 shared/\n\u2502 \u251C\u2500\u2500 company-info.md\n\u2502 \u2514\u2500\u2500 formatting-rules.md\n\u2514\u2500\u2500 references/ # On-demand context documents (optional)\n \u2514\u2500\u2500 api-guidelines.md\n```\n\nPrompts can be organized in subdirectories. In the protocol, reference nested prompts by their path relative to `prompts/` (without `.md`): `shared/company-info`.\n\nReferences are markdown files with YAML frontmatter that the agent can fetch on demand during execution. 
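For illustration, a minimal `references/api-guidelines.md` might look like this (the frontmatter fields shown are an assumption, not a complete schema):\n\n```markdown\n---\ntitle: API Guidelines\ndescription: Conventions to follow when discussing the public API\n---\n\n# API Guidelines\n\nPrefer stable v1 endpoints in examples...\n```\n\n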
See [References](/docs/protocol/references).\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md` (paths use `/` for subdirectories)\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. If a variable is not provided, the placeholder is kept as-is.\n\n## Prompt Interpolation\n\nInclude other prompts inside a prompt with `{{@path.md}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a customer support agent.\n\n{{@shared/company-info.md}}\n\n{{@shared/formatting-rules.md}}\n\nHelp users with their questions.\n```\n\nThe referenced prompt content is inserted before variable interpolation, so variables in included prompts work the same way. Circular references are not allowed and will be caught during validation.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [References](/docs/protocol/references) \u2014 On-demand context documents\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
excerpt: "Protocol Overview Agent protocols define how an AI agent behaves. They're written in YAML and specify inputs, triggers, tools, and execution handlers. Why Protocols? Protocols provide: - Declarative...",
order: 1
},
@@ -576,7 +576,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Skills",
description: "Using Octavus skills for code execution and specialized capabilities.",
-
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. For long-running operations, you can configure a custom timeout using `sandboxTimeout` in the agent config:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
+
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available. 
Skills work in both interactive agents and workers.\n\n### Interactive Agents\n\nReference skills in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code]\n agentic: true\n```\n\n### Workers and Named Threads\n\nReference skills per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n maxSteps: 10\n```\n\nThis also works for named threads in interactive agents, allowing different threads to have different skills.\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... 
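for example (assuming the standard qrcode package):\nimg = qrcode.make('https://octavus.ai')\nimg.save(os.path.join(output_dir, 'qr-code.png'))\n# ... further 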
code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. 
Maximum: 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
order: 5
},
@@ -585,7 +585,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Handlers",
description: "Defining execution handlers with blocks.",
-
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: 
start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
|
|
588
|
+
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n 
imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
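The `request-human` handler above only works end to end once a backend supplies the `create-support-ticket` tool and fires the trigger. Below is a minimal server-side sketch, assuming the session API from the Server SDK docs; the agent ID, session inputs, and `ticketing` helper are hypothetical placeholders, and the empty trigger `input` assumes `request-human` declares none:

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: 'your-api-key',
});

// Hypothetical ticketing backend, a stand-in for your own system.
const ticketing = {
  create: async (ticket: { summary: string; priority: string }) => ({
    id: 'TICKET-123',
    ...ticket,
  }),
};

// Agent ID comes from `octavus sync`; shown here as a placeholder.
const sessionId = await client.agentSessions.create('clxyz123abc456', {
  COMPANY_NAME: 'Acme Corp',
  PRODUCT_NAME: 'Widget Pro',
});

// Attach a handler for the tool that the `Create ticket` tool-call
// block invokes deterministically during the request-human flow.
const session = client.agentSessions.attach(sessionId, {
  tools: {
    'create-support-ticket': async (args) => {
      return await ticketing.create({
        summary: args.summary,
        priority: args.priority,
      });
    },
  },
});

// Fire the trigger; execute() returns an async generator of events.
for await (const event of session.execute({
  type: 'trigger',
  triggerName: 'request-human',
  input: {},
})) {
  // Stream each event to your client (e.g. via toSSEStream) or inspect it here.
}
```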
|
|
589
589
|
excerpt: "Handlers Handlers define what happens when a trigger fires. They contain execution blocks that run in sequence. Handler Structure Each block has a human-readable name (shown in debug UI) and a ...",
|
|
590
590
|
order: 6
|
|
591
591
|
},
|
|
@@ -594,8 +594,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
594
594
|
section: "protocol",
|
|
595
595
|
title: "Agent Config",
|
|
596
596
|
description: "Configuring the agent model and behavior.",
|
|
597
|
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. 
Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n 
description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
|
|
598
|
-
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field
|
|
597
|
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. 
The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n```\n\nEach thread can have its own skills, references, and image model. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. 
Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
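Dynamic model selection pairs naturally with the A/B-testing use case listed above. Here is a minimal sketch, assuming an agent that declares `MODEL` in its protocol `input` as in the dynamic-model example; the fifty-fifty split and agent ID are illustrative:

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: 'your-api-key',
});

// Illustrative A/B split between two providers; both values use the
// provider/model-id format that is validated at session creation.
const variants = ['anthropic/claude-sonnet-4-5', 'openai/gpt-5'];
const MODEL = variants[Math.random() < 0.5 ? 0 : 1];

const sessionId = await client.agentSessions.create('my-agent', { MODEL });
```

Recording which variant each session received is enough to compare models across providers without any protocol changes.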
|
|
598
|
+
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
|
|
599
599
|
order: 7
|
|
600
600
|
},
|
|
601
601
|
{
|
|
@@ -612,7 +612,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
612
612
|
section: "protocol",
|
|
613
613
|
title: "Skills Advanced Guide",
|
|
614
614
|
description: "Best practices and advanced patterns for using Octavus skills.",
|
|
615
|
-
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout, which can be configured via `sandboxTimeout`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes for long-running analysis\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n**Timeout guidelines:**\n\n- **Short operations** (default 5 min): QR codes, simple calculations\n- **Medium operations** (10-30 min): Data analysis, report generation\n- **Long operations** (30+ min): Complex processing, large dataset analysis\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute sandbox timeout\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
|
|
615
|
+
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. 
Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
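The tools-plus-skills pattern described above has a backend half as well: your server implements `get-user-data`, then fires `analyze-user`. A minimal sketch under the same assumptions as the SDK docs; the `db` stub, IDs, and the `USER_ID` trigger input are hypothetical:

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: 'your-api-key',
});

// Hypothetical user store, a stand-in for your database.
const db = {
  users: {
    findById: async (id: string) => ({ id, plan: 'pro', signupCount: 42 }),
  },
};

// Placeholder agent ID from `octavus sync`.
const sessionId = await client.agentSessions.create('clxyz123abc456', {});

const session = client.agentSessions.attach(sessionId, {
  tools: {
    // Step 1 of the pattern: the tool-call block fetches backend data.
    'get-user-data': async (args) => db.users.findById(args.userId),
  },
});

// Steps 2-3: the LLM analyzes USER_DATA with the data-analysis skill;
// any generated files arrive as part of the event stream.
for await (const event of session.execute({
  type: 'trigger',
  triggerName: 'analyze-user',
  input: { USER_ID: 'user_123' },
})) {
  // Inspect or forward events to your client.
}
```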
|
|
616
616
|
excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
|
|
617
617
|
order: 9
|
|
618
618
|
},
|
|
@@ -630,10 +630,19 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
630
630
|
section: "protocol",
|
|
631
631
|
title: "Workers",
|
|
632
632
|
description: "Defining worker agents for background and task-based execution.",
|
|
633
|
-
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
|
|
633
|
+
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills and Image Generation\n\nWorkers can use Octavus skills and image generation, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n maxSteps: 10\n```\n\nWorkers define their own skills independently \u2014 they don\'t inherit skills from a parent interactive agent. Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n\n// Streaming: observe events in real-time\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. 
**Deterministically** \u2014 Using the `run-worker` block\n2. **Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
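As a server-side sketch of the tool-mapping flow described above: the host application only registers the parent-side `web-search` handler, and when the `research-assistant` worker invokes its mapped `search` tool, that handler runs. The `searchWeb` helper is an assumption for illustration; `attach()` is the documented SDK call.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Illustrative search integration; swap in your real one.
async function searchWeb(query: unknown) {
  return [{ title: `Result for ${query}`, url: 'https://example.com' }];
}

function attachSupportSession(sessionId: string) {
  return client.agentSessions.attach(sessionId, {
    tools: {
      // Parent-side handler. When the research-assistant worker calls its
      // own `search` tool, the declared mapping routes the call here.
      'web-search': async ({ query }) => searchWeb(query),
    },
  });
}
```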
|
|
634
634
|
excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
|
|
635
635
|
order: 11
|
|
636
636
|
},
|
|
637
|
+
{
|
|
638
|
+
slug: "protocol/references",
|
|
639
|
+
section: "protocol",
|
|
640
|
+
title: "References",
|
|
641
|
+
description: "Using references for on-demand context loading in agents.",
|
|
642
|
+
content: "\n# References\n\nReferences are markdown documents that agents can fetch on demand. Instead of loading everything into the system prompt upfront, references let the agent decide what context it needs and load it when relevant.\n\n## Overview\n\nReferences are useful for:\n\n- **Large context** \u2014 Documents too long to include in every system prompt\n- **Selective loading** \u2014 Let the agent decide which context is relevant\n- **Shared knowledge** \u2014 Reusable documents across threads\n\n### How References Work\n\n1. **Definition**: Reference files live in the `references/` directory alongside your agent\n2. **Configuration**: List available references in `agent.references` or `start-thread.references`\n3. **Discovery**: The agent sees reference names and descriptions in its system prompt\n4. **Fetching**: The agent calls reference tools to read the full content when needed\n\n## Creating References\n\nEach reference is a markdown file with YAML frontmatter in the `references/` directory:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json\n\u251C\u2500\u2500 protocol.yaml\n\u251C\u2500\u2500 prompts/\n\u2502 \u2514\u2500\u2500 system.md\n\u2514\u2500\u2500 references/\n \u251C\u2500\u2500 api-guidelines.md\n \u2514\u2500\u2500 error-codes.md\n```\n\n### Reference Format\n\n```markdown\n---\ndescription: >\n API design guidelines including naming conventions,\n error handling patterns, and pagination standards.\n---\n\n# API Guidelines\n\n## Naming Conventions\n\nUse lowercase with dashes for URL paths...\n\n## Error Handling\n\nAll errors return a standard error envelope...\n```\n\nThe `description` field is required. It tells the agent what the reference contains so it can decide when to fetch it.\n\n### Naming Convention\n\nReference filenames use `lowercase-with-dashes`:\n\n- `api-guidelines.md`\n- `error-codes.md`\n- `coding-standards.md`\n\nThe filename (without `.md`) becomes the reference name used in the protocol.\n\n## Enabling References\n\nAfter creating reference files, specify which references are available in the protocol.\n\n### Interactive Agents\n\nList references in `agent.references`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\n### Workers and Named Threads\n\nList references per-thread in `start-thread.references`:\n\n```yaml\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines]\n maxSteps: 10\n```\n\nDifferent threads can have different references.\n\n## Reference Tools\n\nWhen references are enabled, the agent has access to two tools:\n\n| Tool | Purpose |\n| ------------------------ | ----------------------------------------------- |\n| `octavus_reference_list` | List all available references with descriptions |\n| `octavus_reference_read` | Read the full content of a specific reference |\n\nThe agent also sees reference names and descriptions in its system prompt, so it knows what's available without calling `octavus_reference_list`.\n\n## Example\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [review-pull-request]\n references: [coding-standards, api-guidelines]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWith `references/coding-standards.md`:\n\n```markdown\n---\ndescription: >\n Team 
coding standards including naming conventions,\n code organization, and review checklist.\n---\n\n# Coding Standards\n\n## Naming Conventions\n\n- Files: kebab-case\n- Variables: camelCase\n- Constants: UPPER_SNAKE_CASE\n ...\n```\n\nWhen a user asks the agent to review code, the agent will:\n\n1. See \"coding-standards\" and \"api-guidelines\" in its system prompt\n2. Decide which references are relevant to the review\n3. Call `octavus_reference_read` to load the relevant reference\n4. Use the loaded context to provide an informed review\n\n## Validation\n\nThe CLI and platform validate references during sync and deployment:\n\n- **Undefined references** \u2014 Referencing a name that doesn't have a matching file in `references/`\n- **Unused references** \u2014 A reference file exists but isn't listed in any `agent.references` or `start-thread.references`\n- **Invalid names** \u2014 Names that don't follow the `lowercase-with-dashes` convention\n- **Missing description** \u2014 Reference files without the required `description` in frontmatter\n\n## References vs Skills\n\n| Aspect | References | Skills |\n| ------------- | ----------------------------- | ------------------------------- |\n| **Purpose** | On-demand context documents | Code execution and file output |\n| **Content** | Markdown text | Documentation + scripts |\n| **Execution** | Synchronous text retrieval | Sandboxed code execution (E2B) |\n| **Scope** | Per-agent (stored with agent) | Per-organization (shared) |\n| **Tools** | List and read (2 tools) | Read, list, run, code (6 tools) |\n\nUse **references** when the agent needs access to text-based knowledge. Use **skills** when the agent needs to execute code or generate files.\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring references in agent settings\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [Workers](/docs/protocol/workers) \u2014 Using references in worker agents\n",
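For completeness, here is a sketch of shipping a reference through the Create Agent endpoint (documented in the API reference); in practice `octavus sync` reads the `references/` directory and does this for you. The slug, prompt, and reference content are illustrative.

```typescript
// Sketch: create an agent with one reference via the HTTP API.
const res = await fetch('https://octavus.ai/api/agents', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OCTAVUS_CLI_API_KEY}`, // Agents permission
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    settings: { slug: 'code-reviewer', name: 'Code Reviewer', format: 'interactive' },
    protocol:
      'agent:\n  model: anthropic/claude-sonnet-4-5\n  system: system\n  references: [coding-standards]\n  agentic: true',
    prompts: [{ name: 'system', content: 'You are a careful code reviewer.' }],
    references: [
      {
        name: 'coding-standards',
        description: 'Team coding standards and review checklist',
        content: '# Coding Standards\n\n- Files: kebab-case\n- Variables: camelCase',
      },
    ],
  }),
});
const { agentId } = await res.json();
console.log('Created agent', agentId);
```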
|
|
643
|
+
excerpt: "References References are markdown documents that agents can fetch on demand. Instead of loading everything into the system prompt upfront, references let the agent decide what context it needs and...",
|
|
644
|
+
order: 12
|
|
645
|
+
},
|
|
637
646
|
{
|
|
638
647
|
slug: "api-reference/overview",
|
|
639
648
|
section: "api-reference",
|
|
@@ -657,8 +666,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
657
666
|
section: "api-reference",
|
|
658
667
|
title: "Agents",
|
|
659
668
|
description: "Agent management API endpoints.",
|
|
660
|
-
content: '\n# Agents API\n\nManage agent definitions including protocols and
|
|
661
|
-
excerpt: "Agents API Manage agent definitions including protocols and
|
|
669
|
+
content: '\n# Agents API\n\nManage agent definitions including protocols, prompts, and references.\n\n## Permissions\n\n| Endpoint | Method | Permission Required |\n| ---------------------- | ------ | ------------------- |\n| `/api/agents` | GET | Agents OR Sessions |\n| `/api/agents/:id` | GET | Agents OR Sessions |\n| `/api/agents` | POST | Agents |\n| `/api/agents/:id` | PATCH | Agents |\n| `/api/agents/:id` | DELETE | Agents |\n| `/api/agents/validate` | POST | Agents |\n\nRead endpoints work with either permission since both the CLI (for sync) and Server SDK (for sessions) need to read agent definitions.\n\n## List Agents\n\nGet all agents in the project.\n\n```\nGET /api/agents\n```\n\n### Response\n\n```json\n{\n "agents": [\n {\n "id": "cm5xvz7k80001abcd",\n "slug": "support-chat",\n "name": "Support Chat",\n "description": "Customer support agent",\n "format": "interactive",\n "createdAt": "2024-01-10T08:00:00Z",\n "updatedAt": "2024-01-15T10:00:00Z"\n }\n ]\n}\n```\n\n### Example\n\n```bash\ncurl https://octavus.ai/api/agents \\\n -H "Authorization: Bearer YOUR_API_KEY"\n```\n\n## Get Agent\n\nGet a single agent by ID.\n\n```\nGET /api/agents/:id\n```\n\n### Response\n\n```json\n{\n "id": "cm5xvz7k80001abcd",\n "settings": {\n "slug": "support-chat",\n "name": "Support Chat",\n "description": "Customer support agent",\n "format": "interactive"\n },\n "protocol": "input:\\n COMPANY_NAME: { type: string }\\n...",\n "prompts": [\n {\n "name": "system",\n "content": "You are a support agent for {{COMPANY_NAME}}..."\n },\n {\n "name": "user-message",\n "content": "{{USER_MESSAGE}}"\n }\n ],\n "references": [\n {\n "name": "api-guidelines",\n "description": "API design guidelines and conventions",\n "content": "# API Guidelines\\n\\nUse lowercase with dashes..."\n }\n ]\n}\n```\n\n### Example\n\n```bash\ncurl https://octavus.ai/api/agents/:agentId \\\n -H "Authorization: Bearer YOUR_API_KEY"\n```\n\n> **Tip:** You can also view and edit agents directly in the [platform](https://octavus.ai), or use the [CLI](/docs/server-sdk/cli) (`octavus list`) for local workflows.\n\n## Create Agent\n\nCreate a new agent.\n\n```\nPOST /api/agents\n```\n\n### Request Body\n\n```json\n{\n "settings": {\n "slug": "support-chat",\n "name": "Support Chat",\n "description": "Customer support agent",\n "format": "interactive"\n },\n "protocol": "input:\\n COMPANY_NAME: { type: string }\\n...",\n "prompts": [\n {\n "name": "system",\n "content": "You are a support agent..."\n }\n ],\n "references": [\n {\n "name": "api-guidelines",\n "description": "API design guidelines and conventions",\n "content": "# API Guidelines\\n..."\n }\n ]\n}\n```\n\n| Field | Type | Required | Description |\n| ---------------------- | ------ | -------- | ------------------------------------------------ |\n| `settings.slug` | string | Yes | URL-safe identifier |\n| `settings.name` | string | Yes | Display name |\n| `settings.description` | string | No | Agent description |\n| `settings.format` | string | Yes | `interactive` or `worker` |\n| `protocol` | string | Yes | YAML protocol definition |\n| `prompts` | array | Yes | Prompt files |\n| `references` | array | No | Reference documents (name, description, content) |\n\n### Response\n\n```json\n{\n "agentId": "cm5xvz7k80001abcd",\n "message": "Agent created successfully"\n}\n```\n\n### Example\n\n```bash\ncurl -X POST https://octavus.ai/api/agents \\\n -H "Authorization: Bearer YOUR_API_KEY" \\\n -H "Content-Type: application/json" \\\n -d \'{\n "settings": {\n "slug": 
"my-agent",\n "name": "My Agent",\n "format": "interactive"\n },\n "protocol": "agent:\\n model: anthropic/claude-sonnet-4-5\\n system: system",\n "prompts": [\n { "name": "system", "content": "You are a helpful assistant." }\n ]\n }\'\n```\n\n## Update Agent\n\nUpdate an existing agent.\n\n```\nPATCH /api/agents/:id\n```\n\n### Request Body\n\n```json\n{\n "protocol": "input:\\n COMPANY_NAME: { type: string }\\n...",\n "prompts": [\n {\n "name": "system",\n "content": "Updated system prompt..."\n }\n ],\n "references": [\n {\n "name": "api-guidelines",\n "description": "Updated description",\n "content": "Updated content..."\n }\n ]\n}\n```\n\nAll fields are optional. Only provided fields are updated.\n\n### Response\n\n```json\n{\n "agentId": "cm5xvz7k80001abcd",\n "message": "Agent updated successfully"\n}\n```\n\n### Example\n\n```bash\ncurl -X PATCH https://octavus.ai/api/agents/:agentId \\\n -H "Authorization: Bearer YOUR_API_KEY" \\\n -H "Content-Type: application/json" \\\n -d \'{\n "protocol": "agent:\\n model: anthropic/claude-sonnet-4-5\\n system: system\\n thinking: high"\n }\'\n```\n\n## Archive Agent\n\nArchive an agent (soft delete). The agent is removed from the active agent list and its slug is freed for reuse. Session history is preserved.\n\n```\nDELETE /api/agents/:id\n```\n\nSupports `?by=slug` query parameter to look up by slug instead of ID.\n\n### Response\n\n```json\n{\n "agentId": "cm5xvz7k80001abcd",\n "message": "Agent archived successfully"\n}\n```\n\n### Example\n\n```bash\n# Archive by ID\ncurl -X DELETE https://octavus.ai/api/agents/:agentId \\\n -H "Authorization: Bearer YOUR_API_KEY"\n\n# Archive by slug\ncurl -X DELETE https://octavus.ai/api/agents/support-chat?by=slug \\\n -H "Authorization: Bearer YOUR_API_KEY"\n```\n\n## Creating and Managing Agents\n\nThere are two ways to manage agents:\n\n### Platform UI\n\nCreate and edit agents directly at [octavus.ai](https://octavus.ai). The web editor provides real-time validation and is the easiest way to get started. Copy the agent ID from the URL to use in your application.\n\n### CLI (Local Development)\n\nFor version-controlled agent definitions, use the [Octavus CLI](/docs/server-sdk/cli):\n\n```bash\noctavus sync ./agents/support-chat\n```\n\nThis creates the agent if it doesn\'t exist, or updates it if it does. The CLI outputs the agent ID which you should store in an environment variable.\n\nFor CI/CD integration, see the [CLI documentation](/docs/server-sdk/cli#cicd-integration).\n',
|
|
670
|
+
excerpt: "Agents API Manage agent definitions including protocols, prompts, and references. Permissions | Endpoint | Method | Permission Required | | ---------------------- | ------ |...",
|
|
662
671
|
order: 3
|
|
663
672
|
},
|
|
664
673
|
{
|
|
@@ -738,7 +747,7 @@ var sections_default = [
|
|
|
738
747
|
section: "server-sdk",
|
|
739
748
|
title: "Overview",
|
|
740
749
|
description: "Introduction to the Octavus Server SDK for backend integration.",
|
|
741
|
-
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.
|
|
750
|
+
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.12.0`\n\n## Installation\n\n```bash\nnpm install @octavus/server-sdk\n```\n\nFor agent management (sync, validate), install the CLI as a dev dependency:\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n```\n\n## Key Features\n\n### Agent Management\n\nAgent definitions are managed via the CLI. See the [CLI documentation](/docs/server-sdk/cli) for details.\n\n```bash\n# Sync agent from local files\noctavus sync ./agents/support-chat\n\n# Output: Created: support-chat\n# Agent ID: clxyz123abc456\n```\n\n### Session Management\n\nCreate and manage agent sessions using the agent ID:\n\n```typescript\n// Create a new session (use agent ID from CLI sync)\nconst sessionId = await client.agentSessions.create('clxyz123abc456', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n});\n\n// Get UI-ready session messages (for session restore)\nconst session = await client.agentSessions.getMessages(sessionId);\n```\n\n### Tool Handlers\n\nTools run on your server with your data:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-account': async (args) => {\n // Access your database, APIs, etc.\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n### Streaming\n\nAll responses stream in real-time:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() returns an async generator of events\nconst events = session.execute({\n type: 'trigger',\n triggerName: 'user-message',\n input: { USER_MESSAGE: 'Hello!' 
},\n});\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n### Workers\n\nExecute worker agents for task-based processing:\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(agentId, {\n TOPIC: 'AI safety',\n});\n\n// Streaming: observe events in real-time\nfor await (const event of client.workers.execute(agentId, input)) {\n // Handle stream events\n}\n```\n\n## API Reference\n\n### OctavusClient\n\nThe main entry point for interacting with Octavus.\n\n```typescript\ninterface OctavusClientConfig {\n baseUrl: string; // Octavus API URL\n apiKey?: string; // Your API key\n traceModelRequests?: boolean; // Enable model request tracing (default: false)\n}\n\nclass OctavusClient {\n readonly agents: AgentsApi;\n readonly agentSessions: AgentSessionsApi;\n readonly workers: WorkersApi;\n readonly files: FilesApi;\n\n constructor(config: OctavusClientConfig);\n}\n```\n\n### AgentSessionsApi\n\nManages agent sessions.\n\n```typescript\nclass AgentSessionsApi {\n // Create a new session\n async create(agentId: string, input?: Record<string, unknown>): Promise<string>;\n\n // Get full session state (for debugging/internal use)\n async get(sessionId: string): Promise<SessionState>;\n\n // Get UI-ready messages (for client display)\n async getMessages(sessionId: string): Promise<UISessionState>;\n\n // Attach to a session for triggering\n attach(sessionId: string, options?: SessionAttachOptions): AgentSession;\n}\n\n// Full session state (internal format)\ninterface SessionState {\n id: string;\n agentId: string;\n input: Record<string, unknown>;\n variables: Record<string, unknown>;\n resources: Record<string, unknown>;\n messages: ChatMessage[]; // Internal message format\n createdAt: string;\n updatedAt: string;\n}\n\n// UI-ready session state\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready messages for frontend\n}\n```\n\n### AgentSession\n\nHandles request execution and streaming for a specific session.\n\n```typescript\nclass AgentSession {\n // Execute a request and stream parsed events\n execute(request: SessionRequest, options?: TriggerOptions): AsyncGenerator<StreamEvent>;\n\n // Get the session ID\n getSessionId(): string;\n}\n\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n\n// Helper to convert events to SSE stream\nfunction toSSEStream(events: AsyncIterable<StreamEvent>): ReadableStream<Uint8Array>;\n```\n\n### FilesApi\n\nHandles file uploads for sessions.\n\n```typescript\nclass FilesApi {\n // Get presigned URLs for file uploads\n async getUploadUrls(sessionId: string, files: FileUploadRequest[]): Promise<UploadUrlsResponse>;\n}\n\ninterface FileUploadRequest {\n filename: string;\n mediaType: string;\n size: number;\n}\n\ninterface UploadUrlsResponse {\n files: {\n id: string; // File ID for references\n uploadUrl: string; // PUT to this URL\n downloadUrl: string; // GET URL after upload\n }[];\n}\n```\n\nThe client uploads files directly to S3 using the presigned upload URL. 
See [File Uploads](/docs/client-sdk/file-uploads) for the full integration pattern.\n\n## Next Steps\n\n- [Sessions](/docs/server-sdk/sessions) \u2014 Deep dive into session management\n- [Tools](/docs/server-sdk/tools) \u2014 Implementing tool handlers\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Workers](/docs/server-sdk/workers) \u2014 Executing worker agents\n- [Debugging](/docs/server-sdk/debugging) \u2014 Model request tracing and debugging\n",
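As one way to wire `getMessages()` into an application, here is a sketch of a session-restore route in the Fetch-handler style used elsewhere in these docs; the query-parameter name is an assumption.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Sketch: return UI-ready messages so the frontend can restore a session.
export async function GET(request: Request) {
  const sessionId = new URL(request.url).searchParams.get('sessionId');
  if (!sessionId) {
    return new Response('Missing sessionId', { status: 400 });
  }
  const { messages } = await client.agentSessions.getMessages(sessionId);
  return Response.json({ sessionId, messages });
}
```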
|
|
742
751
|
excerpt: "Server SDK Overview The package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation...",
|
|
743
752
|
order: 1
|
|
744
753
|
},
|
|
@@ -774,7 +783,7 @@ var sections_default = [
|
|
|
774
783
|
section: "server-sdk",
|
|
775
784
|
title: "CLI",
|
|
776
785
|
description: "Command-line interface for validating and syncing agent definitions.",
|
|
777
|
-
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.
|
|
786
|
+
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.12.0`\n\n## Installation\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Configuration\n\nThe CLI requires an API key with the **Agents** permission.\n\n### Environment Variables\n\n| Variable | Description |\n| --------------------- | ---------------------------------------------- |\n| `OCTAVUS_CLI_API_KEY` | API key with "Agents" permission (recommended) |\n| `OCTAVUS_API_KEY` | Fallback if `OCTAVUS_CLI_API_KEY` not set |\n| `OCTAVUS_API_URL` | Optional, defaults to `https://octavus.ai` |\n\n### Two-Key Strategy (Recommended)\n\nFor production deployments, use separate API keys with minimal permissions:\n\n```bash\n# CI/CD or .env.local (not committed)\nOCTAVUS_CLI_API_KEY=oct_sk_... # "Agents" permission only\n\n# Production .env\nOCTAVUS_API_KEY=oct_sk_... # "Sessions" permission only\n```\n\nThis ensures production servers only have session permissions (smaller blast radius if leaked), while agent management is restricted to development/CI environments.\n\n### Multiple Environments\n\nUse separate Octavus projects for staging and production, each with their own API keys. The `--env` flag lets you load different environment files:\n\n```bash\n# Local development (default: .env)\noctavus sync ./agents/my-agent\n\n# Staging project\noctavus --env .env.staging sync ./agents/my-agent\n\n# Production project\noctavus --env .env.production sync ./agents/my-agent\n```\n\nExample environment files:\n\n```bash\n# .env.staging (syncs to your staging project)\nOCTAVUS_CLI_API_KEY=oct_sk_staging_project_key...\n\n# .env.production (syncs to your production project)\nOCTAVUS_CLI_API_KEY=oct_sk_production_project_key...\n```\n\nEach project has its own agents, so you\'ll get different agent IDs per environment.\n\n## Global Options\n\n| Option | Description |\n| -------------- | ------------------------------------------------------- |\n| `--env <file>` | Load environment from a specific file (default: `.env`) |\n| `--help` | Show help |\n| `--version` | Show version |\n\n## Commands\n\n### `octavus sync <path>`\n\nSync an agent definition to the platform. Creates the agent if it doesn\'t exist, or updates it if it does.\n\n```bash\noctavus sync ./agents/my-agent\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Reading agent from ./agents/my-agent...\n\u2139 Syncing support-chat...\n\u2713 Created: support-chat\n Agent ID: clxyz123abc456\n```\n\n### `octavus validate <path>`\n\nValidate an agent definition without saving. 
Useful for CI/CD pipelines.\n\n```bash\noctavus validate ./agents/my-agent\n```\n\n**Exit codes:**\n\n- `0` \u2014 Validation passed\n- `1` \u2014 Validation errors\n- `2` \u2014 Configuration errors (missing API key, etc.)\n\n### `octavus list`\n\nList all agents in your project.\n\n```bash\noctavus list\n```\n\n**Example output:**\n\n```\nSLUG NAME FORMAT ID\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nsupport-chat Support Chat Agent interactive clxyz123abc456\n\n1 agent(s)\n```\n\n### `octavus get <slug>`\n\nGet details about a specific agent by its slug.\n\n```bash\noctavus get support-chat\n```\n\n### `octavus archive <slug>`\n\nArchive an agent by slug (soft delete). Archived agents are removed from the active agent list and their slug is freed for reuse.\n\n```bash\noctavus archive support-chat\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Archiving support-chat...\n\u2713 Archived: support-chat\n Agent ID: clxyz123abc456\n```\n\n## Agent Directory Structure\n\nThe CLI expects agent definitions in a specific directory structure:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json # Required: Agent metadata\n\u251C\u2500\u2500 protocol.yaml # Required: Agent protocol\n\u251C\u2500\u2500 prompts/ # Optional: Prompt templates\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u2514\u2500\u2500 user-message.md\n\u2514\u2500\u2500 references/ # Optional: Reference documents\n \u2514\u2500\u2500 api-guidelines.md\n```\n\n### references/\n\nReference files are markdown documents with YAML frontmatter containing a `description`. The agent can fetch these on demand during execution. See [References](/docs/protocol/references) for details.\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "A helpful assistant",\n "format": "interactive"\n}\n```\n\n### protocol.yaml\n\nSee the [Protocol documentation](/docs/protocol/overview) for details on protocol syntax.\n\n## CI/CD Integration\n\n### GitHub Actions\n\n```yaml\nname: Validate and Sync Agents\n\non:\n push:\n branches: [main]\n paths:\n - \'agents/**\'\n\njobs:\n sync:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - uses: actions/setup-node@v4\n with:\n node-version: \'22\'\n\n - run: npm install\n\n - name: Validate agent\n run: npx octavus validate ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n\n - name: Sync agent\n run: npx octavus sync ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n```\n\n### Package.json Scripts\n\nAdd sync scripts to your `package.json`:\n\n```json\n{\n "scripts": {\n "agents:validate": "octavus validate ./agents/my-agent",\n "agents:sync": "octavus sync ./agents/my-agent"\n },\n "devDependencies": {\n "@octavus/cli": "^2.12.0"\n }\n}\n```\n\n## Workflow\n\nThe recommended workflow for managing agents:\n\n1. **Define agent locally** \u2014 Create `settings.json`, `protocol.yaml`, and prompts\n2. 
**Validate** \u2014 Run `octavus validate ./my-agent` to check for errors\n3. **Sync** \u2014 Run `octavus sync ./my-agent` to push to platform\n4. **Store agent ID** \u2014 Save the output ID in an environment variable\n5. **Use in app** \u2014 Read the ID from env and pass to `client.agentSessions.create()`\n\n```bash\n# After syncing: octavus sync ./agents/support-chat\n# Output: Agent ID: clxyz123abc456\n\n# Add to your .env file\nOCTAVUS_SUPPORT_AGENT_ID=clxyz123abc456\n```\n\n```typescript\nconst agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID;\n\nconst sessionId = await client.agentSessions.create(agentId, {\n COMPANY_NAME: \'Acme Corp\',\n});\n```\n',
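The documented exit codes make `octavus validate` easy to script outside of GitHub Actions as well. Below is a sketch of a pre-deploy gate in TypeScript; the agent path is illustrative.

```typescript
import { spawnSync } from 'node:child_process';

// Sketch: gate a deploy on `octavus validate`, using the documented
// exit codes: 0 = passed, 1 = validation errors, 2 = configuration errors.
const result = spawnSync('npx', ['octavus', 'validate', './agents/support-chat'], {
  stdio: 'inherit',
  env: process.env,
});

switch (result.status) {
  case 0:
    console.log('Agent definition is valid');
    break;
  case 1:
    throw new Error('Validation failed: fix the agent definition before syncing');
  case 2:
    throw new Error('Configuration error: is OCTAVUS_CLI_API_KEY set?');
  default:
    throw new Error(`octavus validate exited unexpectedly (${result.status})`);
}
```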
|
|
778
787
|
excerpt: "Octavus CLI The package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform. Current version: Installation ...",
|
|
779
788
|
order: 5
|
|
780
789
|
},
|
|
@@ -783,8 +792,8 @@ var sections_default = [
|
|
|
783
792
|
section: "server-sdk",
|
|
784
793
|
title: "Workers",
|
|
785
794
|
description: "Executing worker agents with the Server SDK.",
|
|
786
|
-
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. 
The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Extracting Output\n\nTo get just the worker's output value:\n\n```typescript\nasync function executeWorker(\n client: OctavusClient,\n agentId: string,\n input: Record<string, unknown>,\n): Promise<unknown> {\n const events = client.workers.execute(agentId, input);\n\n for await (const event of events) {\n if (event.type === 'worker-result') {\n if (event.error) {\n throw new Error(event.error);\n }\n return event.output;\n }\n }\n\n return undefined;\n}\n\n// Usage\nconst analysis = await executeWorker(client, agentId, { TOPIC: 'AI' });\n```\n\n## Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n // Execute tools client-side\n const results = await executeClientTools(event.toolCalls);\n\n // Continue execution\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n## Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n## Cancellation\n\nUse an 
abort signal to cancel execution:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 30 seconds\nsetTimeout(() => controller.abort(), 30000);\n\nconst events = client.workers.execute(agentId, input, {\n signal: controller.signal,\n});\n\ntry {\n for await (const event of events) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Error Handling\n\nErrors can occur at different levels:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n // Stream-level error event\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n // Worker-level error in result\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\nError types include:\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Full Example\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\n// Run the worker\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
|
|
787
|
-
excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference
|
|
795
|
+
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\nconst { output, sessionId } = await client.workers.generate(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\nconsole.log('Result:', output);\nconsole.log(`Debug: ${client.baseUrl}/sessions/${sessionId}`);\n```\n\n## WorkersApi Reference\n\n### generate()\n\nExecute a worker and return the output directly.\n\n```typescript\nasync generate(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): Promise<WorkerGenerateResult>\n```\n\nRuns the worker to completion and returns the output value. This is the simplest way to execute a worker.\n\n**Returns:**\n\n```typescript\ninterface WorkerGenerateResult {\n /** The worker's output value */\n output: unknown;\n /** Session ID for debugging (usable for session URLs) */\n sessionId: string;\n}\n```\n\n**Throws:** `WorkerError` if the worker fails or completes without producing output.\n\n### execute()\n\nExecute a worker and stream the response. Use this when you need to observe intermediate events like text deltas, tool calls, or progress tracking.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n### Shared Options\n\nAll methods accept the same options:\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n return await searchWeb(args.query);\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Error Handling\n\n### WorkerError (generate)\n\n`generate()` throws a `WorkerError` on failure. 
The error includes an optional `sessionId` for constructing debug URLs:\n\n```typescript\nimport { OctavusClient, WorkerError } from '@octavus/server-sdk';\n\ntry {\n const { output } = await client.workers.generate(agentId, input);\n console.log('Result:', output);\n} catch (error) {\n if (error instanceof WorkerError) {\n console.error('Worker failed:', error.message);\n if (error.sessionId) {\n console.error(`Debug: ${client.baseUrl}/sessions/${error.sessionId}`);\n }\n }\n}\n```\n\n### Stream Errors (execute)\n\nWhen using `execute()`, errors appear as stream events:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\n### Error Types\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Cancellation\n\nUse an abort signal to cancel execution:\n\n```typescript\nconst { output } = await client.workers.generate(agentId, input, {\n signal: AbortSignal.timeout(30_000),\n});\n```\n\nWith `execute()` and a manual controller:\n\n```typescript\nconst controller = new AbortController();\nsetTimeout(() => controller.abort(), 30000);\n\ntry {\n for await (const event of client.workers.execute(agentId, input, {\n signal: controller.signal,\n })) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Streaming\n\nWhen you need real-time visibility into the worker's execution \u2014 text generation, tool calls, or progress \u2014 use `execute()` instead of `generate()`.\n\n### Basic Streaming\n\n```typescript\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n### Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n### Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n const results = await executeClientTools(event.toolCalls);\n\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: 
[{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n### Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n#### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n#### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Full Examples\n\n### generate()\n\n```typescript\nimport { OctavusClient, WorkerError } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\ntry {\n const { output, sessionId } = await client.workers.generate(\n 'research-assistant-id',\n {\n TOPIC: 'AI safety best practices',\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => await performWebSearch(query),\n },\n signal: AbortSignal.timeout(120_000),\n },\n );\n\n console.log('Result:', output);\n} catch (error) {\n if (error instanceof WorkerError) {\n console.error('Failed:', error.message);\n if (error.sessionId) {\n console.error(`Debug: ${client.baseUrl}/sessions/${error.sessionId}`);\n }\n }\n}\n```\n\n### execute()\n\nFor full control over streaming events and progress tracking:\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
|
|
796
|
+
excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference generate()...",
|
|
788
797
|
order: 6
|
|
789
798
|
},
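The entry above documents `generate()` for single-await usage and `execute()` for streaming, but does not show how to capture both the streamed text and the final output from one run. The sketch below is a minimal illustration assembled only from the event types documented in that entry (`text-delta`, `worker-result`, `error`); the helper name `executeWithTranscript` is hypothetical.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

// Minimal sketch: stream a worker with execute(), accumulate text-delta
// chunks into a transcript, and resolve with the worker-result output.
async function executeWithTranscript(
  client: OctavusClient,
  agentId: string,
  input: Record<string, unknown>,
): Promise<{ output: unknown; transcript: string }> {
  let transcript = '';
  let output: unknown;

  for await (const event of client.workers.execute(agentId, input)) {
    if (event.type === 'text-delta') {
      transcript += event.delta; // streamed text chunk
    } else if (event.type === 'worker-result') {
      if (event.error) throw new Error(event.error); // worker-level failure
      output = event.output;
    } else if (event.type === 'error') {
      throw new Error(event.message); // stream-level failure
    }
  }

  return { output, transcript };
}
```

Compared with `generate()`, this keeps a single awaited result while preserving the intermediate text for logging or progress display.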
|
|
790
799
|
{
|
|
@@ -809,7 +818,7 @@ var sections_default = [
|
|
|
809
818
|
section: "client-sdk",
|
|
810
819
|
title: "Overview",
|
|
811
820
|
description: "Introduction to the Octavus Client SDKs for building chat interfaces.",
|
|
812
|
-
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.9.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.9.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
|
|
821
|
+
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.12.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.12.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
|
|
813
822
|
excerpt: "Client SDK Overview Octavus provides two packages for frontend integration: | Package | Purpose | Use When | |...",
|
|
814
823
|
order: 1
|
|
815
824
|
},
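The `clientTools` option in the entry above is described only through interface comments: a function value runs automatically in the browser, while the string `'interactive'` parks the call in `pendingClientTools` until the user acts. Below is a minimal sketch of both modes, assuming hypothetical tool names (`get-local-time`, `confirm-order`) and the same `/api/trigger` endpoint as the surrounding examples.

```tsx
import { useMemo } from 'react';
import { useOctavusChat, createHttpTransport } from '@octavus/react';

function ChatWithClientTools({ sessionId }: { sessionId: string }) {
  const transport = useMemo(
    () =>
      createHttpTransport({
        request: (payload, options) =>
          fetch('/api/trigger', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ sessionId, ...payload }),
            signal: options?.signal,
          }),
      }),
    [sessionId],
  );

  const { status, pendingClientTools } = useOctavusChat({
    transport,
    clientTools: {
      // Function handler: runs in the browser automatically; the return
      // value is sent back as the tool result.
      'get-local-time': async () => ({ time: new Date().toISOString() }),
      // 'interactive': execution pauses, status becomes 'awaiting-input',
      // and the call waits in pendingClientTools for user input.
      'confirm-order': 'interactive',
    },
  });

  // Render a confirmation prompt while the interactive tool is pending
  if (status === 'awaiting-input' && pendingClientTools['confirm-order']?.length) {
    return <p>Please confirm your order to continue.</p>;
  }
  return null;
}
```

Resolving an interactive tool is covered on the Client Tools page linked from the entry.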
|
|
@@ -1236,7 +1245,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1236
1245
|
section: "client-sdk",
|
|
1237
1246
|
title: "File Uploads",
|
|
1238
1247
|
description: "Uploading images and files for vision models and document processing.",
|
|
1239
|
-
content: "\n# File Uploads\n\nThe Client SDK supports uploading images and documents that can be sent with messages. This enables vision model capabilities (analyzing images) and document processing.\n\n## Overview\n\nFile uploads follow a two-step flow:\n\n1. **Request upload URLs** from the platform via your backend\n2. **Upload files directly to S3** using presigned URLs\n3. **Send file references** with your message\n\nThis architecture keeps your API key secure on the server while enabling fast, direct uploads.\n\n## Setup\n\n### Backend: Upload URLs Endpoint\n\nCreate an endpoint that proxies upload URL requests to the Octavus platform:\n\n```typescript\n// app/api/upload-urls/route.ts (Next.js)\nimport { NextResponse } from 'next/server';\nimport { octavus } from '@/lib/octavus';\n\nexport async function POST(request: Request) {\n const { sessionId, files } = await request.json();\n\n // Get presigned URLs from Octavus\n const result = await octavus.files.getUploadUrls(sessionId, files);\n\n return NextResponse.json(result);\n}\n```\n\n### Client: Configure File Uploads\n\nPass `requestUploadUrls` to the chat hook:\n\n```tsx\nimport { useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n // Request upload URLs from your backend\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const response = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return response.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n // ...\n}\n```\n\n## Uploading Files\n\n### Method 1: Upload Before Sending\n\nFor the best UX (showing upload progress), upload files first, then send:\n\n```tsx\nimport { useState, useRef } from 'react';\nimport type { FileReference } from '@octavus/react';\n\nfunction ChatInput({ sessionId }: { sessionId: string }) {\n const [pendingFiles, setPendingFiles] = useState<FileReference[]>([]);\n const [uploading, setUploading] = useState(false);\n const fileInputRef = useRef<HTMLInputElement>(null);\n\n const { send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n async function handleFileSelect(event: React.ChangeEvent<HTMLInputElement>) {\n const files = event.target.files;\n if (!files?.length) return;\n\n setUploading(true);\n try {\n // Upload files with progress tracking\n const fileRefs = await uploadFiles(files, (fileIndex, progress) => {\n console.log(`File ${fileIndex}: ${progress}%`);\n });\n setPendingFiles((prev) => [...prev, ...fileRefs]);\n } finally {\n setUploading(false);\n }\n }\n\n async function handleSend(message: string) {\n await send(\n 'user-message',\n {\n USER_MESSAGE: message,\n FILES: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n {\n userMessage: {\n content: message,\n files: pendingFiles.length > 0 ? 
pendingFiles : undefined,\n },\n },\n );\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* File preview */}\n {pendingFiles.map((file) => (\n <img key={file.id} src={file.url} alt={file.filename} className=\"h-16\" />\n ))}\n\n <input\n ref={fileInputRef}\n type=\"file\"\n accept=\"image/*,.pdf\"\n multiple\n onChange={handleFileSelect}\n className=\"hidden\"\n />\n\n <button onClick={() => fileInputRef.current?.click()} disabled={uploading}>\n {uploading ? 'Uploading...' : 'Attach'}\n </button>\n </div>\n );\n}\n```\n\n### Method 2: Upload on Send (Automatic)\n\nFor simpler implementations, pass `File` objects directly:\n\n```tsx\nasync function handleSend(message: string, files?: File[]) {\n await send(\n 'user-message',\n { USER_MESSAGE: message, FILES: files },\n { userMessage: { content: message, files } },\n );\n}\n```\n\nThe SDK automatically uploads the files before sending. Note: This doesn't provide upload progress.\n\n## FileReference Type\n\nFile references contain metadata and URLs:\n\n```typescript\ninterface FileReference {\n /** Unique file ID (platform-generated) */\n id: string;\n /** IANA media type (e.g., 'image/png', 'application/pdf') */\n mediaType: string;\n /** Presigned download URL (S3) */\n url: string;\n /** Original filename */\n filename?: string;\n /** File size in bytes */\n size?: number;\n}\n```\n\n## Protocol Integration\n\nTo accept files in your agent protocol, use the `file[]` type:\n\n```yaml\ntriggers:\n user-message:\n input:\n USER_MESSAGE:\n type: string\n description: The user's message\n FILES:\n type: file[]\n optional: true\n description: User-attached images for vision analysis\n\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input:\n - USER_MESSAGE\n files:\n - FILES # Attach files to the message\n display: hidden\n\n Respond to user:\n block: next-message\n```\n\nThe `file` type is a built-in type representing uploaded files. 
Use `file[]` for arrays of files.\n\n## Supported File Types\n\n| Type | Media Types |\n| --------- | -------------------------------------------------------------------- |\n| Images | `image/jpeg`, `image/png`, `image/gif`, `image/webp` |\n| Documents | `application/pdf`, `text/plain`, `text/markdown`, `application/json` |\n\n## File Limits\n\n| Limit | Value |\n| --------------------- | ---------- |\n| Max file size | 10 MB |\n| Max total per request | 50 MB |\n| Max files per request | 20 |\n| Upload URL expiry | 15 minutes |\n| Download URL expiry | 24 hours |\n\n## Rendering User Files\n\nUser-uploaded files appear as `UIFilePart` in user messages:\n\n```tsx\nfunction UserMessage({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'file') {\n if (part.mediaType.startsWith('image/')) {\n return (\n <img\n key={i}\n src={part.url}\n alt={part.filename || 'Uploaded image'}\n className=\"max-h-48 rounded-lg\"\n />\n );\n }\n return (\n <a key={i} href={part.url} className=\"text-blue-500\">\n \u{1F4C4} {part.filename}\n </a>\n );\n }\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Server SDK: Files API\n\nThe Server SDK provides direct access to the Files API:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Get presigned upload URLs\nconst { files } = await client.files.getUploadUrls(sessionId, [\n { filename: 'photo.jpg', mediaType: 'image/jpeg', size: 102400 },\n { filename: 'doc.pdf', mediaType: 'application/pdf', size: 204800 },\n]);\n\n// files[0].id - Use in FileReference\n// files[0].uploadUrl - PUT to this URL to upload\n// files[0].downloadUrl - Use as FileReference.url\n```\n\n## Complete Example\n\nHere's a full chat input component with file upload:\n\n```tsx\n'use client';\n\nimport { useState, useRef, useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport, type FileReference } from '@octavus/react';\n\ninterface PendingFile {\n file: File;\n id: string;\n status: 'uploading' | 'done' | 'error';\n progress: number;\n fileRef?: FileReference;\n error?: string;\n}\n\nexport function Chat({ sessionId }: { sessionId: string }) {\n const [input, setInput] = useState('');\n const [pendingFiles, setPendingFiles] = useState<PendingFile[]>([]);\n const fileInputRef = useRef<HTMLInputElement>(null);\n const fileIdCounter = useRef(0);\n\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const res = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return res.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n const isUploading = pendingFiles.some((f) => f.status === 'uploading');\n const hasErrors = pendingFiles.some((f) => f.status === 'error');\n const allReady = pendingFiles.every((f) => f.status === 'done');\n\n async function 
handleFileSelect(e: React.ChangeEvent<HTMLInputElement>) {\n const files = Array.from(e.target.files ?? []);\n if (!files.length) return;\n e.target.value = '';\n\n const newPending: PendingFile[] = files.map((file) => ({\n file,\n id: `pending-${++fileIdCounter.current}`,\n status: 'uploading',\n progress: 0,\n }));\n\n setPendingFiles((prev) => [...prev, ...newPending]);\n\n for (const pending of newPending) {\n try {\n const [fileRef] = await uploadFiles([pending.file], (_, progress) => {\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, progress } : f)),\n );\n });\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, status: 'done', fileRef } : f)),\n );\n } catch (err) {\n setPendingFiles((prev) =>\n prev.map((f) =>\n f.id === pending.id ? { ...f, status: 'error', error: String(err) } : f,\n ),\n );\n }\n }\n }\n\n async function handleSubmit() {\n if ((!input.trim() && !pendingFiles.length) || !allReady) return;\n\n const fileRefs = pendingFiles.filter((f) => f.fileRef).map((f) => f.fileRef!);\n\n await send(\n 'user-message',\n {\n USER_MESSAGE: input,\n FILES: fileRefs.length > 0 ? fileRefs : undefined,\n },\n {\n userMessage: {\n content: input,\n files: fileRefs.length > 0 ? fileRefs : undefined,\n },\n },\n );\n\n setInput('');\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* Messages */}\n {messages.map((msg) => (\n <div key={msg.id}>{/* ... render message */}</div>\n ))}\n\n {/* Pending files */}\n {pendingFiles.length > 0 && (\n <div className=\"flex gap-2\">\n {pendingFiles.map((f) => (\n <div key={f.id} className=\"relative\">\n <img\n src={URL.createObjectURL(f.file)}\n alt={f.file.name}\n className=\"h-16 w-16 object-cover rounded\"\n />\n {f.status === 'uploading' && (\n <div className=\"absolute inset-0 flex items-center justify-center bg-black/50\">\n <span className=\"text-white text-xs\">{f.progress}%</span>\n </div>\n )}\n <button\n onClick={() => setPendingFiles((prev) => prev.filter((p) => p.id !== f.id))}\n className=\"absolute -top-2 -right-2 bg-red-500 text-white rounded-full w-5 h-5\"\n >\n \xD7\n </button>\n </div>\n ))}\n </div>\n )}\n\n {/* Input */}\n <div className=\"flex gap-2\">\n <input type=\"file\" ref={fileInputRef} onChange={handleFileSelect} hidden />\n <button onClick={() => fileInputRef.current?.click()}>\u{1F4CE}</button>\n <input\n value={input}\n onChange={(e) => setInput(e.target.value)}\n placeholder=\"Type a message...\"\n className=\"flex-1\"\n />\n <button onClick={handleSubmit} disabled={isUploading || hasErrors}>\n {isUploading ? 'Uploading...' : 'Send'}\n </button>\n </div>\n </div>\n );\n}\n```\n",
|
|
1248
|
+
content: "\n# File Uploads\n\nThe Client SDK supports uploading images and documents that can be sent with messages. This enables vision model capabilities (analyzing images) and document processing.\n\n## Overview\n\nFile uploads follow a two-step flow:\n\n1. **Request upload URLs** from the platform via your backend\n2. **Upload files directly to S3** using presigned URLs\n3. **Send file references** with your message\n\nThis architecture keeps your API key secure on the server while enabling fast, direct uploads.\n\n## Setup\n\n### Backend: Upload URLs Endpoint\n\nCreate an endpoint that proxies upload URL requests to the Octavus platform:\n\n```typescript\n// app/api/upload-urls/route.ts (Next.js)\nimport { NextResponse } from 'next/server';\nimport { octavus } from '@/lib/octavus';\n\nexport async function POST(request: Request) {\n const { sessionId, files } = await request.json();\n\n // Get presigned URLs from Octavus\n const result = await octavus.files.getUploadUrls(sessionId, files);\n\n return NextResponse.json(result);\n}\n```\n\n### Client: Configure File Uploads\n\nPass `requestUploadUrls` to the chat hook:\n\n```tsx\nimport { useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n // Request upload URLs from your backend\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const response = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return response.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n // Optional: configure upload timeout and retry behavior\n uploadOptions: {\n timeoutMs: 60_000, // Per-file timeout (default: 60s, set to 0 to disable)\n maxRetries: 2, // Retry attempts on transient failures (default: 2)\n retryDelayMs: 1_000, // Delay between retries (default: 1s)\n },\n });\n\n // ...\n}\n```\n\n## Uploading Files\n\n### Method 1: Upload Before Sending\n\nFor the best UX (showing upload progress), upload files first, then send:\n\n```tsx\nimport { useState, useRef } from 'react';\nimport type { FileReference } from '@octavus/react';\n\nfunction ChatInput({ sessionId }: { sessionId: string }) {\n const [pendingFiles, setPendingFiles] = useState<FileReference[]>([]);\n const [uploading, setUploading] = useState(false);\n const fileInputRef = useRef<HTMLInputElement>(null);\n\n const { send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n async function handleFileSelect(event: React.ChangeEvent<HTMLInputElement>) {\n const files = event.target.files;\n if (!files?.length) return;\n\n setUploading(true);\n try {\n // Upload files with progress tracking\n const fileRefs = await uploadFiles(files, (fileIndex, progress) => {\n console.log(`File ${fileIndex}: ${progress}%`);\n });\n setPendingFiles((prev) => [...prev, ...fileRefs]);\n } finally {\n setUploading(false);\n }\n }\n\n async function handleSend(message: string) {\n await send(\n 'user-message',\n {\n 
USER_MESSAGE: message,\n FILES: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n {\n userMessage: {\n content: message,\n files: pendingFiles.length > 0 ? pendingFiles : undefined,\n },\n },\n );\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* File preview */}\n {pendingFiles.map((file) => (\n <img key={file.id} src={file.url} alt={file.filename} className=\"h-16\" />\n ))}\n\n <input\n ref={fileInputRef}\n type=\"file\"\n accept=\"image/*,.pdf\"\n multiple\n onChange={handleFileSelect}\n className=\"hidden\"\n />\n\n <button onClick={() => fileInputRef.current?.click()} disabled={uploading}>\n {uploading ? 'Uploading...' : 'Attach'}\n </button>\n </div>\n );\n}\n```\n\n### Method 2: Upload on Send (Automatic)\n\nFor simpler implementations, pass `File` objects directly:\n\n```tsx\nasync function handleSend(message: string, files?: File[]) {\n await send(\n 'user-message',\n { USER_MESSAGE: message, FILES: files },\n { userMessage: { content: message, files } },\n );\n}\n```\n\nThe SDK automatically uploads the files before sending. Note: This doesn't provide upload progress.\n\n## Upload Reliability\n\nUploads include built-in timeout and retry logic for handling transient failures (network errors, server issues, mobile network switches).\n\n**Default behavior:**\n\n- **Timeout**: 60 seconds per file \u2014 prevents uploads from hanging on stalled connections\n- **Retries**: 2 automatic retries on transient failures (network errors, 5xx, 429)\n- **Retry delay**: 1 second between retries\n- **Non-retryable errors** (4xx like 403, 404) fail immediately without retrying\n\nOnly the S3 upload is retried \u2014 the presigned URL stays valid for 15 minutes. On retry, the progress callback resets to 0%.\n\nConfigure via `uploadOptions`:\n\n```typescript\nconst { send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n uploadOptions: {\n timeoutMs: 120_000, // 2 minutes for large files\n maxRetries: 3,\n retryDelayMs: 2_000,\n },\n});\n```\n\nTo disable timeout or retries:\n\n```typescript\nuploadOptions: {\n timeoutMs: 0, // No timeout\n maxRetries: 0, // No retries\n}\n```\n\n### Using `OctavusChat` Directly\n\nWhen using the `OctavusChat` class directly (without the React hook), pass `uploadOptions` in the constructor:\n\n```typescript\nconst chat = new OctavusChat({\n transport,\n requestUploadUrls,\n uploadOptions: { timeoutMs: 120_000, maxRetries: 3 },\n});\n```\n\n## FileReference Type\n\nFile references contain metadata and URLs:\n\n```typescript\ninterface FileReference {\n /** Unique file ID (platform-generated) */\n id: string;\n /** IANA media type (e.g., 'image/png', 'application/pdf') */\n mediaType: string;\n /** Presigned download URL (S3) */\n url: string;\n /** Original filename */\n filename?: string;\n /** File size in bytes */\n size?: number;\n}\n```\n\n## Protocol Integration\n\nTo accept files in your agent protocol, use the `file[]` type:\n\n```yaml\ntriggers:\n user-message:\n input:\n USER_MESSAGE:\n type: string\n description: The user's message\n FILES:\n type: file[]\n optional: true\n description: User-attached images for vision analysis\n\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input:\n - USER_MESSAGE\n files:\n - FILES # Attach files to the message\n display: hidden\n\n Respond to user:\n block: next-message\n```\n\nThe `file` type is a built-in type representing uploaded files. 
Use `file[]` for arrays of files.\n\n## Supported File Types\n\n| Type | Media Types |\n| --------- | -------------------------------------------------------------------- |\n| Images | `image/jpeg`, `image/png`, `image/gif`, `image/webp` |\n| Video | `video/mp4`, `video/webm`, `video/quicktime`, `video/mpeg` |\n| Documents | `application/pdf`, `text/plain`, `text/markdown`, `application/json` |\n\n## File Limits\n\n| Limit | Value |\n| --------------------- | ---------- |\n| Max file size | 100 MB |\n| Max total per request | 200 MB |\n| Upload URL expiry | 15 minutes |\n| Download URL expiry | 24 hours |\n\n## Rendering User Files\n\nUser-uploaded files appear as `UIFilePart` in user messages:\n\n```tsx\nfunction UserMessage({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'file') {\n if (part.mediaType.startsWith('image/')) {\n return (\n <img\n key={i}\n src={part.url}\n alt={part.filename || 'Uploaded image'}\n className=\"max-h-48 rounded-lg\"\n />\n );\n }\n return (\n <a key={i} href={part.url} className=\"text-blue-500\">\n \u{1F4C4} {part.filename}\n </a>\n );\n }\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Server SDK: Files API\n\nThe Server SDK provides direct access to the Files API:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Get presigned upload URLs\nconst { files } = await client.files.getUploadUrls(sessionId, [\n { filename: 'photo.jpg', mediaType: 'image/jpeg', size: 102400 },\n { filename: 'doc.pdf', mediaType: 'application/pdf', size: 204800 },\n]);\n\n// files[0].id - Use in FileReference\n// files[0].uploadUrl - PUT to this URL to upload\n// files[0].downloadUrl - Use as FileReference.url\n```\n\n## Complete Example\n\nHere's a full chat input component with file upload:\n\n```tsx\n'use client';\n\nimport { useState, useRef, useMemo, useCallback } from 'react';\nimport { useOctavusChat, createHttpTransport, type FileReference } from '@octavus/react';\n\ninterface PendingFile {\n file: File;\n id: string;\n status: 'uploading' | 'done' | 'error';\n progress: number;\n fileRef?: FileReference;\n error?: string;\n}\n\nexport function Chat({ sessionId }: { sessionId: string }) {\n const [input, setInput] = useState('');\n const [pendingFiles, setPendingFiles] = useState<PendingFile[]>([]);\n const fileInputRef = useRef<HTMLInputElement>(null);\n const fileIdCounter = useRef(0);\n\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const requestUploadUrls = useCallback(\n async (files: { filename: string; mediaType: string; size: number }[]) => {\n const res = await fetch('/api/upload-urls', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, files }),\n });\n return res.json();\n },\n [sessionId],\n );\n\n const { messages, status, send, uploadFiles } = useOctavusChat({\n transport,\n requestUploadUrls,\n });\n\n const isUploading = pendingFiles.some((f) => f.status === 'uploading');\n const hasErrors = pendingFiles.some((f) => f.status === 'error');\n const allReady = pendingFiles.every((f) => 
f.status === 'done');\n\n async function handleFileSelect(e: React.ChangeEvent<HTMLInputElement>) {\n const files = Array.from(e.target.files ?? []);\n if (!files.length) return;\n e.target.value = '';\n\n const newPending: PendingFile[] = files.map((file) => ({\n file,\n id: `pending-${++fileIdCounter.current}`,\n status: 'uploading',\n progress: 0,\n }));\n\n setPendingFiles((prev) => [...prev, ...newPending]);\n\n for (const pending of newPending) {\n try {\n const [fileRef] = await uploadFiles([pending.file], (_, progress) => {\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, progress } : f)),\n );\n });\n setPendingFiles((prev) =>\n prev.map((f) => (f.id === pending.id ? { ...f, status: 'done', fileRef } : f)),\n );\n } catch (err) {\n setPendingFiles((prev) =>\n prev.map((f) =>\n f.id === pending.id ? { ...f, status: 'error', error: String(err) } : f,\n ),\n );\n }\n }\n }\n\n async function handleSubmit() {\n if ((!input.trim() && !pendingFiles.length) || !allReady) return;\n\n const fileRefs = pendingFiles.filter((f) => f.fileRef).map((f) => f.fileRef!);\n\n await send(\n 'user-message',\n {\n USER_MESSAGE: input,\n FILES: fileRefs.length > 0 ? fileRefs : undefined,\n },\n {\n userMessage: {\n content: input,\n files: fileRefs.length > 0 ? fileRefs : undefined,\n },\n },\n );\n\n setInput('');\n setPendingFiles([]);\n }\n\n return (\n <div>\n {/* Messages */}\n {messages.map((msg) => (\n <div key={msg.id}>{/* ... render message */}</div>\n ))}\n\n {/* Pending files */}\n {pendingFiles.length > 0 && (\n <div className=\"flex gap-2\">\n {pendingFiles.map((f) => (\n <div key={f.id} className=\"relative\">\n <img\n src={URL.createObjectURL(f.file)}\n alt={f.file.name}\n className=\"h-16 w-16 object-cover rounded\"\n />\n {f.status === 'uploading' && (\n <div className=\"absolute inset-0 flex items-center justify-center bg-black/50\">\n <span className=\"text-white text-xs\">{f.progress}%</span>\n </div>\n )}\n <button\n onClick={() => setPendingFiles((prev) => prev.filter((p) => p.id !== f.id))}\n className=\"absolute -top-2 -right-2 bg-red-500 text-white rounded-full w-5 h-5\"\n >\n \xD7\n </button>\n </div>\n ))}\n </div>\n )}\n\n {/* Input */}\n <div className=\"flex gap-2\">\n <input type=\"file\" ref={fileInputRef} onChange={handleFileSelect} hidden />\n <button onClick={() => fileInputRef.current?.click()}>\u{1F4CE}</button>\n <input\n value={input}\n onChange={(e) => setInput(e.target.value)}\n placeholder=\"Type a message...\"\n className=\"flex-1\"\n />\n <button onClick={handleSubmit} disabled={isUploading || hasErrors}>\n {isUploading ? 'Uploading...' : 'Send'}\n </button>\n </div>\n </div>\n );\n}\n```\n",
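The file-upload examples above post to an application route (`/api/upload-urls`) that the page leaves unimplemented. A minimal sketch of that route using the documented `client.files.getUploadUrls` call and its `{ files }` result; the Next.js-style App Router handler and the environment-variable API key are assumptions, not part of the package:

```typescript
// Hypothetical backend for the /api/upload-urls route used in the client examples.
// Only files.getUploadUrls and its { files } result shape come from the docs above.
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!, // assumed env var
});

export async function POST(req: Request): Promise<Response> {
  const { sessionId, files } = (await req.json()) as {
    sessionId: string;
    files: { filename: string; mediaType: string; size: number }[];
  };

  // Each returned entry carries id, uploadUrl (PUT target), and downloadUrl.
  const result = await client.files.getUploadUrls(sessionId, files);
  return Response.json(result);
}
```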
1240  1249    excerpt: "File Uploads The Client SDK supports uploading images and documents that can be sent with messages. This enables vision model capabilities (analyzing images) and document processing. Overview File...",
1241  1250    order: 8
1242  1251    },
@@ -1271,7 +1280,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1271  1280    section: "protocol",
1272  1281    title: "Overview",
1273  1282    description: "Introduction to Octavus agent protocols.",
1274        -
content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\
      1283  +
content: '\n# Protocol Overview\n\nAgent protocols define how an AI agent behaves. They\'re written in YAML and specify inputs, triggers, tools, and execution handlers.\n\n## Why Protocols?\n\nProtocols provide:\n\n- **Declarative definition** \u2014 Define behavior, not implementation\n- **Portable agents** \u2014 Move agents between projects\n- **Versioning** \u2014 Track changes with git\n- **Validation** \u2014 Catch errors before runtime\n- **Visualization** \u2014 Debug execution flows\n\n## Agent Formats\n\nOctavus supports two agent formats:\n\n| Format | Use Case | Structure |\n| ------------- | ------------------------------ | --------------------------------- |\n| `interactive` | Chat and multi-turn dialogue | `triggers` + `handlers` + `agent` |\n| `worker` | Background tasks and pipelines | `steps` + `output` |\n\n**Interactive agents** handle conversations \u2014 they respond to triggers (like user messages) and maintain session state across interactions.\n\n**Worker agents** execute tasks \u2014 they run steps sequentially and return an output value. Workers can be called independently or composed into interactive agents.\n\nSee [Workers](/docs/protocol/workers) for the worker protocol reference.\n\n## Interactive Protocol Structure\n\n```yaml\n# Agent inputs (provided when creating a session)\ninput:\n COMPANY_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\n# Persistent resources the agent can read/write\nresources:\n CONVERSATION_SUMMARY:\n description: Summary for handoff\n default: \'\'\n\n# How the agent can be invoked\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n request-human:\n description: User clicks "Talk to Human"\n\n# Temporary variables for execution (with types)\nvariables:\n SUMMARY:\n type: string\n TICKET:\n type: unknown\n\n# Tools the agent can use\ntools:\n get-user-account:\n description: Looking up your account\n parameters:\n userId: { type: string }\n\n# Octavus skills (provider-agnostic code execution)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Agent configuration (model, tools, etc.)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account]\n skills: [qr-code] # Enable skills\n imageModel: google/gemini-2.5-flash-image # Enable image generation\n agentic: true # Allow multiple tool calls\n thinking: medium # Extended reasoning\n\n# What happens when triggers fire\nhandlers:\n user-message:\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond to user:\n block: next-message\n```\n\n## File Structure\n\nEach agent is a folder with:\n\n```\nmy-agent/\n\u251C\u2500\u2500 protocol.yaml # Main logic (required)\n\u251C\u2500\u2500 settings.json # Agent metadata (required)\n\u251C\u2500\u2500 prompts/ # Prompt templates (supports subdirectories)\n\u2502 \u251C\u2500\u2500 system.md\n\u2502 \u251C\u2500\u2500 user-message.md\n\u2502 \u2514\u2500\u2500 shared/\n\u2502 \u251C\u2500\u2500 company-info.md\n\u2502 \u2514\u2500\u2500 formatting-rules.md\n\u2514\u2500\u2500 references/ # On-demand context documents (optional)\n \u2514\u2500\u2500 api-guidelines.md\n```\n\nPrompts can be organized in subdirectories. In the protocol, reference nested prompts by their path relative to `prompts/` (without `.md`): `shared/company-info`.\n\nReferences are markdown files with YAML frontmatter that the agent can fetch on demand during execution. 
See [References](/docs/protocol/references).\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "What this agent does",\n "format": "interactive"\n}\n```\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------- |\n| `slug` | Yes | URL-safe identifier (lowercase, digits, dashes) |\n| `name` | Yes | Human-readable name |\n| `description` | No | Brief description |\n| `format` | Yes | `interactive` (chat) or `worker` (background) |\n\n## Naming Conventions\n\n- **Slugs**: `lowercase-with-dashes`\n- **Variables**: `UPPERCASE_SNAKE_CASE`\n- **Prompts**: `lowercase-with-dashes.md` (paths use `/` for subdirectories)\n- **Tools**: `lowercase-with-dashes`\n- **Triggers**: `lowercase-with-dashes`\n\n## Variables in Prompts\n\nReference variables with `{{VARIABLE_NAME}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a support agent for {{COMPANY_NAME}}.\n\nHelp users with their {{PRODUCT_NAME}} questions.\n\n## Support Policies\n\n{{SUPPORT_POLICIES}}\n```\n\nVariables are replaced with their values at runtime. If a variable is not provided, the placeholder is kept as-is.\n\n## Prompt Interpolation\n\nInclude other prompts inside a prompt with `{{@path.md}}`:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a customer support agent.\n\n{{@shared/company-info.md}}\n\n{{@shared/formatting-rules.md}}\n\nHelp users with their questions.\n```\n\nThe referenced prompt content is inserted before variable interpolation, so variables in included prompts work the same way. Circular references are not allowed and will be caught during validation.\n\n## Next Steps\n\n- [Input & Resources](/docs/protocol/input-resources) \u2014 Defining agent inputs\n- [Triggers](/docs/protocol/triggers) \u2014 How agents are invoked\n- [Tools](/docs/protocol/tools) \u2014 External capabilities\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [References](/docs/protocol/references) \u2014 On-demand context documents\n- [Handlers](/docs/protocol/handlers) \u2014 Execution blocks\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n- [Workers](/docs/protocol/workers) \u2014 Worker agent format\n- [Provider Options](/docs/protocol/provider-options) \u2014 Provider-specific features\n- [Types](/docs/protocol/types) \u2014 Custom type definitions\n',
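The overview above specifies the interpolation contract for prompts: `{{VARIABLE_NAME}}` placeholders are replaced at runtime, and placeholders without a provided variable are kept as-is. A small sketch of that contract (illustrative only, not the platform's implementation; the regex follows the documented `UPPERCASE_SNAKE_CASE` naming convention, and `{{@path.md}}` inclusion, which the docs say happens first, is not modeled):

```typescript
// Illustrative model of the documented interpolation rules: substitute known
// variables, keep unknown placeholders verbatim.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{([A-Z0-9_]+)\}\}/g, (placeholder, name: string) =>
    name in vars ? vars[name] : placeholder,
  );
}

interpolate('You are a support agent for {{COMPANY_NAME}}. {{TONE}}', {
  COMPANY_NAME: 'Acme Corp',
});
// => 'You are a support agent for Acme Corp. {{TONE}}'
```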
1275  1284    excerpt: "Protocol Overview Agent protocols define how an AI agent behaves. They're written in YAML and specify inputs, triggers, tools, and execution handlers. Why Protocols? Protocols provide: - Declarative...",
1276  1285    order: 1
1277  1286    },

@@ -1307,7 +1316,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list

1307  1316    section: "protocol",
1308  1317    title: "Skills",
1309  1318    description: "Using Octavus skills for code execution and specialized capabilities.",
1310        -
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code] # Skills available for this thread\n agentic: true\n```\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt 
injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. For long-running operations, you can configure a custom timeout using `sandboxTimeout` in the agent config:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
      1319  +
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available. 
Skills work in both interactive agents and workers.\n\n### Interactive Agents\n\nReference skills in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code]\n agentic: true\n```\n\n### Workers and Named Threads\n\nReference skills per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n maxSteps: 10\n```\n\nThis also works for named threads in interactive agents, allowing different threads to have different skills.\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... 
code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. 
Maximum: 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
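The updated skills page pins down the `sandboxTimeout` rules: a 5-minute default, thread-level values taking priority over agent-level, and a 1-hour maximum. A sketch of that precedence (illustrative; the docs don't state whether an over-limit value is clamped or rejected by validation, so clamping here is an assumption):

```typescript
// Illustrative resolution of the documented sandboxTimeout precedence:
// thread-level > agent-level > 5-minute default, up to the 1-hour maximum.
const DEFAULT_SANDBOX_TIMEOUT_MS = 5 * 60 * 1000; // 300_000
const MAX_SANDBOX_TIMEOUT_MS = 60 * 60 * 1000; // 3_600_000

function resolveSandboxTimeout(agentLevelMs?: number, threadLevelMs?: number): number {
  const requested = threadLevelMs ?? agentLevelMs ?? DEFAULT_SANDBOX_TIMEOUT_MS;
  return Math.min(requested, MAX_SANDBOX_TIMEOUT_MS); // clamping is an assumption
}

resolveSandboxTimeout(1_800_000, 3_600_000); // => 3_600_000 (thread-level wins)
```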
1311  1320    excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
1312  1321    order: 5
1313  1322    },

@@ -1316,7 +1325,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list

1316  1325    section: "protocol",
1317  1326    title: "Handlers",
1318  1327    description: "Defining execution handlers with blocks.",
1319        -
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: 
start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
      1328  +
content: "\n# Handlers\n\nHandlers define what happens when a trigger fires. They contain execution blocks that run in sequence.\n\n## Handler Structure\n\n```yaml\nhandlers:\n trigger-name:\n Block Name:\n block: block-kind\n # block-specific properties\n\n Another Block:\n block: another-kind\n # ...\n```\n\nEach block has a human-readable name (shown in debug UI) and a `block` field that determines its behavior.\n\n## Block Kinds\n\n### next-message\n\nGenerate a response from the LLM:\n\n```yaml\nhandlers:\n user-message:\n Respond to user:\n block: next-message\n # Uses main conversation thread by default\n # Display defaults to 'stream'\n```\n\nWith options:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary # Use named thread\n display: stream # Show streaming content\n independent: true # Don't add to main chat\n output: SUMMARY # Store output in variable\n description: Generating summary # Shown in UI\n```\n\nFor structured output (typed JSON response):\n\n```yaml\nRespond with suggestions:\n block: next-message\n responseType: ChatResponse # Type defined in types section\n output: RESPONSE # Stores the parsed object\n```\n\nWhen `responseType` is specified:\n\n- The LLM generates JSON matching the type schema\n- The `output` variable receives the parsed object (not plain text)\n- The client receives a `UIObjectPart` for custom rendering\n\nSee [Types](/docs/protocol/types#structured-output) for more details.\n\n### add-message\n\nAdd a message to the conversation:\n\n```yaml\nAdd user message:\n block: add-message\n role: user # user | assistant | system\n prompt: user-message # Reference to prompt file\n input: [USER_MESSAGE] # Variables to interpolate\n display: hidden # Don't show in UI\n```\n\nFor internal directives (LLM sees it, user doesn't):\n\n```yaml\nAdd internal directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS]\n visible: false # LLM sees this, user doesn't\n```\n\nFor structured user input (object shown in UI, prompt for LLM context):\n\n```yaml\nAdd user message:\n block: add-message\n role: user\n prompt: user-message # Rendered for LLM context (hidden from UI)\n input: [USER_INPUT]\n uiContent: USER_INPUT # Variable shown in UI (object \u2192 object part)\n display: hidden\n```\n\nWhen `uiContent` is set:\n\n- The variable value is shown in the UI (string \u2192 text part, object \u2192 object part)\n- The prompt text is hidden from the UI but kept for LLM context\n- Useful for rich UI interactions where the visual differs from the LLM context\n\n### tool-call\n\nCall a tool deterministically:\n\n```yaml\nCreate ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY # Variable reference\n priority: medium # Literal value\n output: TICKET # Store result\n```\n\n### set-resource\n\nUpdate a persistent resource:\n\n```yaml\nSave summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY # Variable to save\n display: name # Show block name\n```\n\n### start-thread\n\nCreate a named conversation thread:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary # Thread name\n model: anthropic/claude-sonnet-4-5 # Optional: different model\n thinking: low # Extended reasoning level\n maxSteps: 1 # Tool call limit\n system: escalation-summary # System prompt\n input: [COMPANY_NAME] # Variables for prompt\n skills: [qr-code] # Octavus skills for this thread\n sandboxTimeout: 600000 # Skill sandbox timeout (default: 5 min, max: 1 hour)\n 
imageModel: google/gemini-2.5-flash-image # Image generation model\n```\n\nThe `model` field can also reference a variable for dynamic model selection:\n\n```yaml\nStart summary thread:\n block: start-thread\n thread: summary\n model: SUMMARY_MODEL # Resolved from input variable\n system: escalation-summary\n```\n\n### serialize-thread\n\nConvert conversation to text:\n\n```yaml\nSerialize conversation:\n block: serialize-thread\n thread: main # Which thread (default: main)\n format: markdown # markdown | json\n output: CONVERSATION_TEXT # Variable to store result\n```\n\n### generate-image\n\nGenerate an image from a prompt variable:\n\n```yaml\nGenerate image:\n block: generate-image\n prompt: OPTIMIZED_PROMPT # Variable containing the prompt\n imageModel: google/gemini-2.5-flash-image # Required image model\n size: 1024x1024 # 1024x1024 | 1792x1024 | 1024x1792\n output: GENERATED_IMAGE # Store URL in variable\n description: Generating your image... # Shown in UI\n```\n\nEdit an existing image using reference images:\n\n```yaml\nEdit image:\n block: generate-image\n prompt: EDIT_INSTRUCTIONS # e.g., \"Remove the background\"\n referenceImages: [SOURCE_IMAGE_URL] # Variable(s) containing image URLs\n imageModel: google/gemini-2.5-flash-image\n output: EDITED_IMAGE\n description: Editing image...\n```\n\n| Field | Required | Description |\n| ----------------- | -------- | --------------------------------------------------------------- |\n| `prompt` | Yes | Variable name containing the image prompt or edit instructions |\n| `imageModel` | Yes | Image model identifier (e.g., `google/gemini-2.5-flash-image`) |\n| `size` | No | Image dimensions: `1024x1024`, `1792x1024`, or `1024x1792` |\n| `referenceImages` | No | Variable names containing image URLs for editing/transformation |\n| `output` | No | Variable name to store the generated image URL |\n| `thread` | No | Thread to associate the output file with |\n| `description` | No | Description shown in the UI during generation |\n\nThis block is for deterministic image generation pipelines where the prompt is constructed programmatically (e.g., via prompt engineering in a separate thread). 
When `referenceImages` are provided, the prompt describes how to modify those images.\n\nFor agentic image generation where the LLM decides when to generate, configure `imageModel` in the [agent config](/docs/protocol/agent-config#image-generation).\n\n## Display Modes\n\nEvery block has a `display` property:\n\n| Mode | Default For | Behavior |\n| ------------- | ------------------------- | ----------------- |\n| `hidden` | add-message | Not shown to user |\n| `name` | set-resource | Shows block name |\n| `description` | tool-call, generate-image | Shows description |\n| `stream` | next-message | Streams content |\n\n## Complete Example\n\n```yaml\nhandlers:\n user-message:\n # Add the user's message to conversation\n Add user message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n # Generate response (LLM may call tools)\n Respond to user:\n block: next-message\n # display: stream (default)\n\n request-human:\n # Step 1: Serialize conversation for summary\n Serialize conversation:\n block: serialize-thread\n format: markdown\n output: CONVERSATION_TEXT\n\n # Step 2: Create separate thread for summarization\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n thinking: low\n system: escalation-summary\n input: [COMPANY_NAME]\n\n # Step 3: Add request to summary thread\n Add summarize request:\n block: add-message\n thread: summary\n role: user\n prompt: summarize-request\n input:\n - CONVERSATION: CONVERSATION_TEXT\n\n # Step 4: Generate summary\n Generate summary:\n block: next-message\n thread: summary\n display: stream\n description: Summarizing your conversation\n independent: true\n output: SUMMARY\n\n # Step 5: Save to resource\n Save summary:\n block: set-resource\n resource: CONVERSATION_SUMMARY\n value: SUMMARY\n\n # Step 6: Create support ticket\n Create ticket:\n block: tool-call\n tool: create-support-ticket\n input:\n summary: SUMMARY\n priority: medium\n output: TICKET\n\n # Step 7: Add directive for response\n Add directive:\n block: add-message\n role: user\n prompt: ticket-directive\n input: [TICKET_DETAILS: TICKET]\n visible: false\n\n # Step 8: Respond to user\n Respond:\n block: next-message\n```\n\n## Block Input Mapping\n\nThe `input` field on blocks controls which variables are passed to the prompt. Only variables listed in `input` are available for interpolation.\n\nVariables can come from `protocol.input`, `protocol.resources`, `protocol.variables`, `trigger.input`, or outputs from prior blocks.\n\n```yaml\n# Array format (same name)\ninput: [USER_MESSAGE, COMPANY_NAME]\n\n# Array format (rename)\ninput:\n - CONVERSATION: CONVERSATION_TEXT # Prompt sees CONVERSATION, value comes from CONVERSATION_TEXT\n - TICKET_DETAILS: TICKET\n\n# Object format (rename)\ninput:\n CONVERSATION: CONVERSATION_TEXT\n TICKET_DETAILS: TICKET\n```\n\n## Independent Blocks\n\nUse `independent: true` for content that shouldn't go to the main chat:\n\n```yaml\nGenerate summary:\n block: next-message\n thread: summary\n independent: true # Output stored in variable, not main chat\n output: SUMMARY\n```\n\nThis is useful for:\n\n- Background processing\n- Summarization in separate threads\n- Generating content for tools\n",
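The handlers page above documents three equivalent `input` spellings: an array of names, an array of single-key rename objects, and a plain rename object. A sketch that normalizes all three into one `{ promptLabel: sourceVariable }` map (the type and function names are illustrative; the mapping semantics follow the documented examples):

```typescript
// Illustrative normalization of the three documented block `input` formats.
// Key = name the prompt sees; value = variable the value comes from.
type BlockInput =
  | Array<string | Record<string, string>> // ['USER_MESSAGE'] or [{ CONVERSATION: 'CONVERSATION_TEXT' }]
  | Record<string, string>; // { CONVERSATION: 'CONVERSATION_TEXT' }

function normalizeBlockInput(input: BlockInput): Record<string, string> {
  if (!Array.isArray(input)) return { ...input }; // object rename form
  const mapping: Record<string, string> = {};
  for (const entry of input) {
    if (typeof entry === 'string') mapping[entry] = entry; // same-name shorthand
    else Object.assign(mapping, entry); // array rename form
  }
  return mapping;
}

normalizeBlockInput(['USER_MESSAGE', { TICKET_DETAILS: 'TICKET' }]);
// => { USER_MESSAGE: 'USER_MESSAGE', TICKET_DETAILS: 'TICKET' }
```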
1320  1329    excerpt: "Handlers Handlers define what happens when a trigger fires. They contain execution blocks that run in sequence. Handler Structure Each block has a human-readable name (shown in debug UI) and a ...",
1321  1330    order: 6
1322  1331    },

@@ -1325,8 +1334,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list

1325  1334    section: "protocol",
1326  1335    title: "Agent Config",
1327  1336    description: "Configuring the agent model and behavior.",
1328        -
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. 
Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n 
description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
|
|
1329
|
-
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field
|
|
1337
|
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n references: [api-guidelines] # On-demand context documents\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ---------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to pass to the system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `references` | No | List of references the LLM can fetch on demand |\n| `sandboxTimeout` | No | Skill sandbox timeout in ms (default: 5 min, max: 1 hour) |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions. 
The `input` field controls which variables are available to the prompt \u2014 only variables listed in `input` are interpolated.\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nVariables in `input` can come from `protocol.input`, `protocol.resources`, or `protocol.variables`.\n\n### Input Mapping Formats\n\n```yaml\n# Array format (same name)\ninput:\n - COMPANY_NAME\n - PRODUCT_NAME\n\n# Array format (rename)\ninput:\n - CONTEXT: CONVERSATION_SUMMARY # Prompt sees CONTEXT, value comes from CONVERSATION_SUMMARY\n\n# Object format (rename)\ninput:\n CONTEXT: CONVERSATION_SUMMARY\n```\n\nThe left side (label) is what the prompt sees. The right side (source) is where the value comes from.\n\n### Example\n\n`prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## References\n\nEnable on-demand context loading via reference documents:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\nReferences are markdown files stored in the agent's `references/` directory. When enabled, the LLM can list available references and read their content using `octavus_reference_list` and `octavus_reference_read` tools.\n\nSee [References](/docs/protocol/references) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. The LLM can decide when to generate images based on user requests. 
The tool supports both text-to-image generation and image editing/transformation using reference images.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Image Editing with Reference Images\n\nBoth the agentic tool and the `generate-image` block support reference images for editing and transformation. When reference images are provided, the prompt describes how to modify or use those images.\n\n| Provider | Models | Reference Image Support |\n| -------- | -------------------------------- | ----------------------- |\n| OpenAI | `gpt-image-1` | Yes |\n| Google | Gemini native (`gemini-*-image`) | Yes |\n| Google | Imagen (`imagen-*`) | No |\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate or edit images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation or editing\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n skills: [data-analysis] # Thread-specific skills\n references: [escalation-policy] # Thread-specific references\n imageModel: google/gemini-2.5-flash-image # Thread-specific image model\n```\n\nEach thread can have its own skills, references, and image model. Skills must be defined in the protocol's `skills:` section. References must exist in the agent's `references/` directory. 
Workers use this same pattern since they don't have a global `agent:` section.\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n references: [support-policies] # On-demand context\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
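Editor's note: the page above says the dynamic model value is validated at runtime to be in `provider/model-id` format, but the check itself is internal to the platform. As an illustration only, a caller-side guard could mirror the same rule before session creation. A minimal sketch, assuming the provider list from the Supported Providers table; `assertModelId` and its error messages are hypothetical, not part of the SDK:

```typescript
// Hypothetical caller-side check mirroring the provider/model-id rule.
// The platform performs its own validation at runtime; this only fails fast.
const SUPPORTED_PROVIDERS = new Set(['anthropic', 'google', 'openai']);

function assertModelId(value: string): void {
  const slash = value.indexOf('/');
  // Require a non-empty provider segment and a non-empty model id segment.
  if (slash <= 0 || slash === value.length - 1) {
    throw new Error(`Expected "provider/model-id", got "${value}"`);
  }
  const provider = value.slice(0, slash);
  if (!SUPPORTED_PROVIDERS.has(provider)) {
    throw new Error(`Unsupported provider "${provider}"`);
  }
}

assertModelId('anthropic/claude-sonnet-4-5'); // passes
```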
|
|
1338
|
+
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
|
|
1330
1339
|
order: 7
|
|
1331
1340
|
},
|
|
1332
1341
|
{
|
|
@@ -1343,7 +1352,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1343
1352
|
section: "protocol",
|
|
1344
1353
|
title: "Skills Advanced Guide",
|
|
1345
1354
|
description: "Best practices and advanced patterns for using Octavus skills.",
|
|
1346
|
-
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n data-analysis:\n display: description\n description: Analyzing data\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\n### Match Skills to Use Cases\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread based on use case:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\n# Skills available for this chat thread (support use case)\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Skills available for this thread\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills`, but still define all available skills in the `skills:` section above.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\n# Sandbox not created until LLM calls a skill tool\nagent:\n skills: [qr-code] # Sandbox created on first use\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Sandbox reused for all skill calls in a trigger\n\n### Timeout Limits\n\nSandboxes have a 5-minute default timeout, which can be configured via `sandboxTimeout`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes for long-running analysis\n```\n\n`sandboxTimeout` Maximum: 1 hour (3,600,000 ms)\n\n**Timeout guidelines:**\n\n- **Short operations** (default 5 min): QR codes, simple calculations\n- **Medium operations** (10-30 min): Data analysis, report generation\n- **Long operations** (30+ min): Complex processing, large dataset analysis\n\n### Sandbox Lifecycle\n\nEach trigger execution gets a fresh sandbox:\n\n- **Clean state** - No leftover files from previous executions\n- **Isolated** - No interference between sessions\n- **Destroyed** - Sandbox cleaned up after trigger completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - 5-minute sandbox timeout\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
|
|
1355
|
+
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. 
Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
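Editor's note: to ground the "Combining Skills with Tools" pattern above on the backend, the `get-user-data` handler in that example is ordinary server code you supply in the tools map, using the same handler shape the Workers page shows for `client.workers.execute`. A sketch, assuming a `db` data-access layer of your own; the returned field names are illustrative:

```typescript
// Hypothetical tool handler backing the get-user-data tool above.
// It returns JSON-serializable data that the LLM can then feed into
// the data-analysis skill inside the sandbox.
type UserRecord = { id: string; plan: string; monthlyUsage: number };

declare const db: {
  users: { findById(id: string): Promise<UserRecord | null> };
}; // your own data layer

const tools = {
  'get-user-data': async (args: { userId: string }) => {
    const user = await db.users.findById(args.userId);
    if (!user) {
      // Returning an error object lets the LLM explain the failure or retry.
      return { error: `No user found for id "${args.userId}"` };
    }
    return user;
  },
};
```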
|
|
1347
1356
|
excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
|
|
1348
1357
|
order: 9
|
|
1349
1358
|
},
|
|
@@ -1361,9 +1370,18 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
1361
1370
|
section: "protocol",
|
|
1362
1371
|
title: "Workers",
|
|
1363
1372
|
description: "Defining worker agents for background and task-based execution.",
|
|
1364
|
-
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
|
|
1373
|
+
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, skills, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Octavus skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Skills and Image Generation\n\nWorkers can use Octavus skills and image generation, configured per-thread via `start-thread`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generate QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n imageModel: google/gemini-2.5-flash-image\n maxSteps: 10\n```\n\nWorkers define their own skills independently -- they don\'t inherit skills from a parent interactive agent. Each thread gets its own sandbox scoped to only its listed skills.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\n// Non-streaming: get the output directly\nconst { output } = await client.workers.generate(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n\n// Streaming: observe events in real-time\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => await searchWeb(args.query),\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. 
**Deterministically** \u2014 Using the `run-worker` block\n2. **Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
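The tool-handling and stream-event sections embedded in the workers page above pair naturally with a consumption loop. A minimal sketch, assuming the async-iterable shape shown in the SDK docs; the event field names (`delta`, `output`), the env var names, and the `searchWeb` helper are illustrative placeholders, not a verbatim SDK contract:

```typescript
import { OctavusClient } from '@octavus/server-sdk';

// Placeholder search integration -- swap in your own implementation.
async function searchWeb(query: string): Promise<unknown> {
  return [];
}

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!, // assumed env var name
});

const agentId = process.env.OCTAVUS_WORKER_AGENT_ID!; // assumed env var name

// Streaming execution: observe worker-start / worker-result alongside
// the standard events (text-delta, tool calls, ...).
const events = client.workers.execute(
  agentId,
  { TOPIC: 'AI safety' },
  { tools: { 'web-search': async (args: { query: string }) => searchWeb(args.query) } },
) as AsyncIterable<Record<string, any>>; // event shapes assumed for this sketch

for await (const event of events) {
  switch (event.type) {
    case 'worker-start':
      console.log('worker execution began');
      break;
    case 'text-delta':
      process.stdout.write(event.delta); // field name assumed
      break;
    case 'worker-result':
      console.log('\nworker output:', event.output); // completion includes the output
      break;
  }
}
```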
 excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
 order: 11
+},
+{
+slug: "protocol/references",
+section: "protocol",
+title: "References",
+description: "Using references for on-demand context loading in agents.",
+
content: "\n# References\n\nReferences are markdown documents that agents can fetch on demand. Instead of loading everything into the system prompt upfront, references let the agent decide what context it needs and load it when relevant.\n\n## Overview\n\nReferences are useful for:\n\n- **Large context** \u2014 Documents too long to include in every system prompt\n- **Selective loading** \u2014 Let the agent decide which context is relevant\n- **Shared knowledge** \u2014 Reusable documents across threads\n\n### How References Work\n\n1. **Definition**: Reference files live in the `references/` directory alongside your agent\n2. **Configuration**: List available references in `agent.references` or `start-thread.references`\n3. **Discovery**: The agent sees reference names and descriptions in its system prompt\n4. **Fetching**: The agent calls reference tools to read the full content when needed\n\n## Creating References\n\nEach reference is a markdown file with YAML frontmatter in the `references/` directory:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json\n\u251C\u2500\u2500 protocol.yaml\n\u251C\u2500\u2500 prompts/\n\u2502 \u2514\u2500\u2500 system.md\n\u2514\u2500\u2500 references/\n \u251C\u2500\u2500 api-guidelines.md\n \u2514\u2500\u2500 error-codes.md\n```\n\n### Reference Format\n\n```markdown\n---\ndescription: >\n API design guidelines including naming conventions,\n error handling patterns, and pagination standards.\n---\n\n# API Guidelines\n\n## Naming Conventions\n\nUse lowercase with dashes for URL paths...\n\n## Error Handling\n\nAll errors return a standard error envelope...\n```\n\nThe `description` field is required. It tells the agent what the reference contains so it can decide when to fetch it.\n\n### Naming Convention\n\nReference filenames use `lowercase-with-dashes`:\n\n- `api-guidelines.md`\n- `error-codes.md`\n- `coding-standards.md`\n\nThe filename (without `.md`) becomes the reference name used in the protocol.\n\n## Enabling References\n\nAfter creating reference files, specify which references are available in the protocol.\n\n### Interactive Agents\n\nList references in `agent.references`:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines, error-codes]\n agentic: true\n```\n\n### Workers and Named Threads\n\nList references per-thread in `start-thread.references`:\n\n```yaml\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n references: [api-guidelines]\n maxSteps: 10\n```\n\nDifferent threads can have different references.\n\n## Reference Tools\n\nWhen references are enabled, the agent has access to two tools:\n\n| Tool | Purpose |\n| ------------------------ | ----------------------------------------------- |\n| `octavus_reference_list` | List all available references with descriptions |\n| `octavus_reference_read` | Read the full content of a specific reference |\n\nThe agent also sees reference names and descriptions in its system prompt, so it knows what's available without calling `octavus_reference_list`.\n\n## Example\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [review-pull-request]\n references: [coding-standards, api-guidelines]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWith `references/coding-standards.md`:\n\n```markdown\n---\ndescription: >\n Team 
coding standards including naming conventions,\n code organization, and review checklist.\n---\n\n# Coding Standards\n\n## Naming Conventions\n\n- Files: kebab-case\n- Variables: camelCase\n- Constants: UPPER_SNAKE_CASE\n ...\n```\n\nWhen a user asks the agent to review code, the agent will:\n\n1. See \"coding-standards\" and \"api-guidelines\" in its system prompt\n2. Decide which references are relevant to the review\n3. Call `octavus_reference_read` to load the relevant reference\n4. Use the loaded context to provide an informed review\n\n## Validation\n\nThe CLI and platform validate references during sync and deployment:\n\n- **Undefined references** \u2014 Referencing a name that doesn't have a matching file in `references/`\n- **Unused references** \u2014 A reference file exists but isn't listed in any `agent.references` or `start-thread.references`\n- **Invalid names** \u2014 Names that don't follow the `lowercase-with-dashes` convention\n- **Missing description** \u2014 Reference files without the required `description` in frontmatter\n\n## References vs Skills\n\n| Aspect | References | Skills |\n| ------------- | ----------------------------- | ------------------------------- |\n| **Purpose** | On-demand context documents | Code execution and file output |\n| **Content** | Markdown text | Documentation + scripts |\n| **Execution** | Synchronous text retrieval | Sandboxed code execution (E2B) |\n| **Scope** | Per-agent (stored with agent) | Per-organization (shared) |\n| **Tools** | List and read (2 tools) | Read, list, run, code (6 tools) |\n\nUse **references** when the agent needs access to text-based knowledge. Use **skills** when the agent needs to execute code or generate files.\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring references in agent settings\n- [Skills](/docs/protocol/skills) \u2014 Code execution and knowledge packages\n- [Workers](/docs/protocol/workers) \u2014 Using references in worker agents\n",
+excerpt: "References References are markdown documents that agents can fetch on demand. Instead of loading everything into the system prompt upfront, references let the agent decide what context it needs and...",
+order: 12
 }
 ]
 },
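For the reference file format documented in the new page above, here is a minimal sketch of collecting `references/*.md` files into the `{ name, description, content }` shape used by the protocol and API. The regex-based frontmatter parsing is a deliberate simplification for illustration; a real implementation would use a proper YAML parser:

```typescript
import { readFileSync, readdirSync } from 'node:fs';
import { basename, join } from 'node:path';

interface ReferencePayload {
  name: string;        // filename without .md, lowercase-with-dashes
  description: string; // required frontmatter field
  content: string;     // markdown body
}

function loadReferences(dir: string): ReferencePayload[] {
  return readdirSync(dir)
    .filter((file) => file.endsWith('.md'))
    .map((file) => {
      const raw = readFileSync(join(dir, file), 'utf8');
      // Split the `---` frontmatter block from the markdown body.
      const parts = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
      if (!parts) throw new Error(`${file}: missing frontmatter`);
      const [, frontmatter, content] = parts;
      // Simplification: assumes description is the only frontmatter field.
      const desc = frontmatter.match(/description:\s*>?\s*([\s\S]*)/);
      if (!desc) throw new Error(`${file}: missing required description`);
      return {
        name: basename(file, '.md'),
        description: desc[1].trim().replace(/\s+/g, ' '), // fold like YAML `>`
        content: content.trim(),
      };
    });
}
```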
@@ -1396,8 +1414,8 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
 section: "api-reference",
 title: "Agents",
 description: "Agent management API endpoints.",
-content: '\n# Agents API\n\nManage agent definitions including protocols and
-excerpt: "Agents API Manage agent definitions including protocols and
+
content: '\n# Agents API\n\nManage agent definitions including protocols, prompts, and references.\n\n## Permissions\n\n| Endpoint | Method | Permission Required |\n| ---------------------- | ------ | ------------------- |\n| `/api/agents` | GET | Agents OR Sessions |\n| `/api/agents/:id` | GET | Agents OR Sessions |\n| `/api/agents` | POST | Agents |\n| `/api/agents/:id` | PATCH | Agents |\n| `/api/agents/:id` | DELETE | Agents |\n| `/api/agents/validate` | POST | Agents |\n\nRead endpoints work with either permission since both the CLI (for sync) and Server SDK (for sessions) need to read agent definitions.\n\n## List Agents\n\nGet all agents in the project.\n\n```\nGET /api/agents\n```\n\n### Response\n\n```json\n{\n "agents": [\n {\n "id": "cm5xvz7k80001abcd",\n "slug": "support-chat",\n "name": "Support Chat",\n "description": "Customer support agent",\n "format": "interactive",\n "createdAt": "2024-01-10T08:00:00Z",\n "updatedAt": "2024-01-15T10:00:00Z"\n }\n ]\n}\n```\n\n### Example\n\n```bash\ncurl https://octavus.ai/api/agents \\\n -H "Authorization: Bearer YOUR_API_KEY"\n```\n\n## Get Agent\n\nGet a single agent by ID.\n\n```\nGET /api/agents/:id\n```\n\n### Response\n\n```json\n{\n "id": "cm5xvz7k80001abcd",\n "settings": {\n "slug": "support-chat",\n "name": "Support Chat",\n "description": "Customer support agent",\n "format": "interactive"\n },\n "protocol": "input:\\n COMPANY_NAME: { type: string }\\n...",\n "prompts": [\n {\n "name": "system",\n "content": "You are a support agent for {{COMPANY_NAME}}..."\n },\n {\n "name": "user-message",\n "content": "{{USER_MESSAGE}}"\n }\n ],\n "references": [\n {\n "name": "api-guidelines",\n "description": "API design guidelines and conventions",\n "content": "# API Guidelines\\n\\nUse lowercase with dashes..."\n }\n ]\n}\n```\n\n### Example\n\n```bash\ncurl https://octavus.ai/api/agents/:agentId \\\n -H "Authorization: Bearer YOUR_API_KEY"\n```\n\n> **Tip:** You can also view and edit agents directly in the [platform](https://octavus.ai), or use the [CLI](/docs/server-sdk/cli) (`octavus list`) for local workflows.\n\n## Create Agent\n\nCreate a new agent.\n\n```\nPOST /api/agents\n```\n\n### Request Body\n\n```json\n{\n "settings": {\n "slug": "support-chat",\n "name": "Support Chat",\n "description": "Customer support agent",\n "format": "interactive"\n },\n "protocol": "input:\\n COMPANY_NAME: { type: string }\\n...",\n "prompts": [\n {\n "name": "system",\n "content": "You are a support agent..."\n }\n ],\n "references": [\n {\n "name": "api-guidelines",\n "description": "API design guidelines and conventions",\n "content": "# API Guidelines\\n..."\n }\n ]\n}\n```\n\n| Field | Type | Required | Description |\n| ---------------------- | ------ | -------- | ------------------------------------------------ |\n| `settings.slug` | string | Yes | URL-safe identifier |\n| `settings.name` | string | Yes | Display name |\n| `settings.description` | string | No | Agent description |\n| `settings.format` | string | Yes | `interactive` or `worker` |\n| `protocol` | string | Yes | YAML protocol definition |\n| `prompts` | array | Yes | Prompt files |\n| `references` | array | No | Reference documents (name, description, content) |\n\n### Response\n\n```json\n{\n "agentId": "cm5xvz7k80001abcd",\n "message": "Agent created successfully"\n}\n```\n\n### Example\n\n```bash\ncurl -X POST https://octavus.ai/api/agents \\\n -H "Authorization: Bearer YOUR_API_KEY" \\\n -H "Content-Type: application/json" \\\n -d \'{\n "settings": {\n "slug": 
"my-agent",\n "name": "My Agent",\n "format": "interactive"\n },\n "protocol": "agent:\\n model: anthropic/claude-sonnet-4-5\\n system: system",\n "prompts": [\n { "name": "system", "content": "You are a helpful assistant." }\n ]\n }\'\n```\n\n## Update Agent\n\nUpdate an existing agent.\n\n```\nPATCH /api/agents/:id\n```\n\n### Request Body\n\n```json\n{\n "protocol": "input:\\n COMPANY_NAME: { type: string }\\n...",\n "prompts": [\n {\n "name": "system",\n "content": "Updated system prompt..."\n }\n ],\n "references": [\n {\n "name": "api-guidelines",\n "description": "Updated description",\n "content": "Updated content..."\n }\n ]\n}\n```\n\nAll fields are optional. Only provided fields are updated.\n\n### Response\n\n```json\n{\n "agentId": "cm5xvz7k80001abcd",\n "message": "Agent updated successfully"\n}\n```\n\n### Example\n\n```bash\ncurl -X PATCH https://octavus.ai/api/agents/:agentId \\\n -H "Authorization: Bearer YOUR_API_KEY" \\\n -H "Content-Type: application/json" \\\n -d \'{\n "protocol": "agent:\\n model: anthropic/claude-sonnet-4-5\\n system: system\\n thinking: high"\n }\'\n```\n\n## Archive Agent\n\nArchive an agent (soft delete). The agent is removed from the active agent list and its slug is freed for reuse. Session history is preserved.\n\n```\nDELETE /api/agents/:id\n```\n\nSupports `?by=slug` query parameter to look up by slug instead of ID.\n\n### Response\n\n```json\n{\n "agentId": "cm5xvz7k80001abcd",\n "message": "Agent archived successfully"\n}\n```\n\n### Example\n\n```bash\n# Archive by ID\ncurl -X DELETE https://octavus.ai/api/agents/:agentId \\\n -H "Authorization: Bearer YOUR_API_KEY"\n\n# Archive by slug\ncurl -X DELETE https://octavus.ai/api/agents/support-chat?by=slug \\\n -H "Authorization: Bearer YOUR_API_KEY"\n```\n\n## Creating and Managing Agents\n\nThere are two ways to manage agents:\n\n### Platform UI\n\nCreate and edit agents directly at [octavus.ai](https://octavus.ai). The web editor provides real-time validation and is the easiest way to get started. Copy the agent ID from the URL to use in your application.\n\n### CLI (Local Development)\n\nFor version-controlled agent definitions, use the [Octavus CLI](/docs/server-sdk/cli):\n\n```bash\noctavus sync ./agents/support-chat\n```\n\nThis creates the agent if it doesn\'t exist, or updates it if it does. The CLI outputs the agent ID which you should store in an environment variable.\n\nFor CI/CD integration, see the [CLI documentation](/docs/server-sdk/cli#cicd-integration).\n',
+excerpt: "Agents API Manage agent definitions including protocols, prompts, and references. Permissions | Endpoint | Method | Permission Required | | ---------------------- | ------ |...",
 order: 3
 }
 ]
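The curl calls documented in the Agents API page above translate directly to code. A minimal TypeScript sketch of the Create Agent request, assuming Node 18+ (global `fetch`) and an `OCTAVUS_API_KEY` environment variable:

```typescript
// POST /api/agents with the documented request body shape.
const res = await fetch('https://octavus.ai/api/agents', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OCTAVUS_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    settings: { slug: 'my-agent', name: 'My Agent', format: 'interactive' },
    protocol: 'agent:\n  model: anthropic/claude-sonnet-4-5\n  system: system',
    prompts: [{ name: 'system', content: 'You are a helpful assistant.' }],
    // Optional: reference documents (name, description, content).
    references: [
      {
        name: 'api-guidelines',
        description: 'API design guidelines and conventions',
        content: '# API Guidelines\n\nUse lowercase with dashes...',
      },
    ],
  }),
});
if (!res.ok) throw new Error(`Create failed: ${res.status}`);
const { agentId } = await res.json(); // e.g. "cm5xvz7k80001abcd"
```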
@@ -1486,4 +1504,4 @@ export {
 getDocSlugs,
 getSectionBySlug
 };
-//# sourceMappingURL=chunk-
+//# sourceMappingURL=chunk-SAB5XUB6.js.map