@octavus/docs 2.3.0 → 2.5.0
- package/content/02-server-sdk/02-sessions.md +3 -1
- package/content/04-protocol/07-agent-config.md +6 -6
- package/content/07-migration/01-v1-to-v2.md +1 -1
- package/dist/{chunk-ELQKUF7H.js → chunk-G5OOF4JJ.js} +15 -15
- package/dist/chunk-G5OOF4JJ.js.map +1 -0
- package/dist/{chunk-CYA6VJUI.js → chunk-RI24MRSN.js} +17 -17
- package/dist/chunk-RI24MRSN.js.map +1 -0
- package/dist/{chunk-NXHCNS7O.js → chunk-SX6AIMRO.js} +11 -11
- package/dist/chunk-SX6AIMRO.js.map +1 -0
- package/dist/{chunk-LACIU65C.js → chunk-WQ7BTD5T.js} +3 -3
- package/dist/chunk-WQ7BTD5T.js.map +1 -0
- package/dist/content.js +1 -1
- package/dist/docs.json +6 -6
- package/dist/index.js +1 -1
- package/dist/search-index.json +1 -1
- package/dist/search.js +1 -1
- package/dist/search.js.map +1 -1
- package/dist/sections.json +6 -6
- package/package.json +3 -3
- package/dist/chunk-3ER2T7S7.js +0 -663
- package/dist/chunk-3ER2T7S7.js.map +0 -1
- package/dist/chunk-CYA6VJUI.js.map +0 -1
- package/dist/chunk-ELQKUF7H.js.map +0 -1
- package/dist/chunk-GI574O6S.js +0 -1435
- package/dist/chunk-GI574O6S.js.map +0 -1
- package/dist/chunk-HFF2TVGV.js +0 -663
- package/dist/chunk-HFF2TVGV.js.map +0 -1
- package/dist/chunk-JGWOMZWD.js +0 -1435
- package/dist/chunk-JGWOMZWD.js.map +0 -1
- package/dist/chunk-KUB6BGPR.js +0 -1435
- package/dist/chunk-KUB6BGPR.js.map +0 -1
- package/dist/chunk-LACIU65C.js.map +0 -1
- package/dist/chunk-NXHCNS7O.js.map +0 -1
- package/dist/chunk-OVK6L7X4.js +0 -1435
- package/dist/chunk-OVK6L7X4.js.map +0 -1
- package/dist/chunk-S5JUVAKE.js +0 -1409
- package/dist/chunk-S5JUVAKE.js.map +0 -1
- package/dist/chunk-TMJG4CJH.js +0 -1409
- package/dist/chunk-TMJG4CJH.js.map +0 -1
- package/dist/chunk-WBR3NHBM.js +0 -1471
- package/dist/chunk-WBR3NHBM.js.map +0 -1
- package/dist/chunk-WKCT4ABS.js +0 -1435
- package/dist/chunk-WKCT4ABS.js.map +0 -1
- package/dist/chunk-YJPO6KOJ.js +0 -1435
- package/dist/chunk-YJPO6KOJ.js.map +0 -1
- package/dist/chunk-ZSCRYD5P.js +0 -1409
- package/dist/chunk-ZSCRYD5P.js.map +0 -1

@@ -23,7 +23,7 @@ var docs_default = [
section: "server-sdk",
title: "Overview",
description: "Introduction to the Octavus Server SDK for backend integration.",
-
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.
+
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.5.0`\n\n## Installation\n\n```bash\nnpm install @octavus/server-sdk\n```\n\nFor agent management (sync, validate), install the CLI as a dev dependency:\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n```\n\n## Key Features\n\n### Agent Management\n\nAgent definitions are managed via the CLI. See the [CLI documentation](/docs/server-sdk/cli) for details.\n\n```bash\n# Sync agent from local files\noctavus sync ./agents/support-chat\n\n# Output: Created: support-chat\n# Agent ID: clxyz123abc456\n```\n\n### Session Management\n\nCreate and manage agent sessions using the agent ID:\n\n```typescript\n// Create a new session (use agent ID from CLI sync)\nconst sessionId = await client.agentSessions.create('clxyz123abc456', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n});\n\n// Get UI-ready session messages (for session restore)\nconst session = await client.agentSessions.getMessages(sessionId);\n```\n\n### Tool Handlers\n\nTools run on your server with your data:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-account': async (args) => {\n // Access your database, APIs, etc.\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n### Streaming\n\nAll responses stream in real-time:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() returns an async generator of events\nconst events = session.execute({\n type: 'trigger',\n triggerName: 'user-message',\n input: { USER_MESSAGE: 'Hello!' 
},\n});\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n## API Reference\n\n### OctavusClient\n\nThe main entry point for interacting with Octavus.\n\n```typescript\ninterface OctavusClientConfig {\n baseUrl: string; // Octavus API URL\n apiKey?: string; // Your API key\n}\n\nclass OctavusClient {\n readonly agents: AgentsApi;\n readonly agentSessions: AgentSessionsApi;\n readonly workers: WorkersApi;\n readonly files: FilesApi;\n\n constructor(config: OctavusClientConfig);\n}\n```\n\n### AgentSessionsApi\n\nManages agent sessions.\n\n```typescript\nclass AgentSessionsApi {\n // Create a new session\n async create(agentId: string, input?: Record<string, unknown>): Promise<string>;\n\n // Get full session state (for debugging/internal use)\n async get(sessionId: string): Promise<SessionState>;\n\n // Get UI-ready messages (for client display)\n async getMessages(sessionId: string): Promise<UISessionState>;\n\n // Attach to a session for triggering\n attach(sessionId: string, options?: SessionAttachOptions): AgentSession;\n}\n\n// Full session state (internal format)\ninterface SessionState {\n id: string;\n agentId: string;\n input: Record<string, unknown>;\n variables: Record<string, unknown>;\n resources: Record<string, unknown>;\n messages: ChatMessage[]; // Internal message format\n createdAt: string;\n updatedAt: string;\n}\n\n// UI-ready session state\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready messages for frontend\n}\n```\n\n### AgentSession\n\nHandles request execution and streaming for a specific session.\n\n```typescript\nclass AgentSession {\n // Execute a request and stream parsed events\n execute(request: SessionRequest, options?: TriggerOptions): AsyncGenerator<StreamEvent>;\n\n // Get the session ID\n getSessionId(): string;\n}\n\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n\n// Helper to convert events to SSE stream\nfunction toSSEStream(events: AsyncIterable<StreamEvent>): ReadableStream<Uint8Array>;\n```\n\n### FilesApi\n\nHandles file uploads for sessions.\n\n```typescript\nclass FilesApi {\n // Get presigned URLs for file uploads\n async getUploadUrls(sessionId: string, files: FileUploadRequest[]): Promise<UploadUrlsResponse>;\n}\n\ninterface FileUploadRequest {\n filename: string;\n mediaType: string;\n size: number;\n}\n\ninterface UploadUrlsResponse {\n files: {\n id: string; // File ID for references\n uploadUrl: string; // PUT to this URL\n downloadUrl: string; // GET URL after upload\n }[];\n}\n```\n\nThe client uploads files directly to S3 using the presigned upload URL. See [File Uploads](/docs/client-sdk/file-uploads) for the full integration pattern.\n\n## Next Steps\n\n- [Sessions](/docs/server-sdk/sessions) \u2014 Deep dive into session management\n- [Tools](/docs/server-sdk/tools) \u2014 Implementing tool handlers\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Workers](/docs/server-sdk/workers) \u2014 Executing worker agents\n",
excerpt: "Server SDK Overview The package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation...",
order: 1
},
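
The rewritten overview shown in this hunk documents an agent-ID-based flow: sync the agent with the CLI, pass the printed ID to `agentSessions.create()`, attach tool handlers, and stream the trigger back as SSE. A minimal sketch stitching those documented snippets into one route handler follows; the route shape, the `OCTAVUS_SUPPORT_AGENT_ID` variable, and the `lookupUser` helper are illustrative assumptions rather than part of the package.

```typescript
// Sketch only: combines the create/attach/execute/toSSEStream snippets from the 2.5.0 overview.
// Env var names, route shape, and lookupUser are assumptions for illustration.
import { OctavusClient, toSSEStream } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: process.env.OCTAVUS_API_URL ?? 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Agent ID printed by `octavus sync` (e.g. clxyz123abc456), stored in the environment.
const agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID!;

export async function POST(request: Request) {
  const { sessionId, message } = await request.json();

  // Create a session on first contact, otherwise reuse the one the client sent.
  const id: string =
    sessionId ?? (await client.agentSessions.create(agentId, { COMPANY_NAME: 'Acme Corp' }));

  const session = client.agentSessions.attach(id, {
    tools: {
      // Server-side tool handler with access to your own data.
      'get-user-account': async (args) => lookupUser(args),
    },
  });

  const events = session.execute(
    { type: 'trigger', triggerName: 'user-message', input: { USER_MESSAGE: message } },
    { signal: request.signal },
  );

  return new Response(toSSEStream(events), {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}

// Placeholder for whatever backs the tool handler in your application.
declare function lookupUser(args: Record<string, unknown>): Promise<unknown>;
```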

@@ -32,7 +32,7 @@ var docs_default = [
section: "server-sdk",
title: "Sessions",
description: "Managing agent sessions with the Server SDK.",
-
content: "\n# Sessions\n\nSessions represent conversations with an agent. They store conversation history, track resources and variables, and enable stateful interactions.\n\n## Creating Sessions\n\nCreate a session by specifying the agent ID and initial input variables:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\n// Create a session with the support-chat agent\nconst sessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n USER_ID: 'user-123', // Optional inputs\n});\n\nconsole.log('Session created:', sessionId);\n```\n\n## Getting Session Messages\n\nTo restore a conversation on page load, use `getMessages()` to retrieve UI-ready messages:\n\n```typescript\nconst session = await client.agentSessions.getMessages(sessionId);\n\nconsole.log({\n sessionId: session.sessionId,\n agentId: session.agentId,\n messages: session.messages.length, // UIMessage[] ready for frontend\n});\n```\n\nThe returned messages can be passed directly to the client SDK's `initialMessages` option.\n\n### UISessionState Interface\n\n```typescript\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready conversation history\n}\n```\n\n## Full Session State (Debug)\n\nFor debugging or internal use, you can retrieve the complete session state including all variables and internal message format:\n\n```typescript\nconst state = await client.agentSessions.get(sessionId);\n\nconsole.log({\n id: state.id,\n agentId: state.agentId,\n messages: state.messages.length, // ChatMessage[] (internal format)\n resources: state.resources,\n variables: state.variables,\n createdAt: state.createdAt,\n updatedAt: state.updatedAt,\n});\n```\n\n> **Note**: Use `getMessages()` for client-facing code. The `get()` method returns internal message format that includes hidden content not intended for end users.\n\n## Attaching to Sessions\n\nTo trigger actions on a session, you need to attach to it first:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n // Tool handlers (see Tools documentation)\n },\n resources: [\n // Resource watchers (optional)\n ],\n});\n```\n\n## Executing Requests\n\nOnce attached, execute requests on the session using `execute()`:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() handles both triggers and client tool continuations\nconst events = session.execute(\n { type: 'trigger', triggerName: 'user-message', input: { USER_MESSAGE: 'Hello!' 
} },\n { signal: request.signal },\n);\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n### Request Types\n\nThe `execute()` method accepts a discriminated union:\n\n```typescript\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\n// Start a new conversation turn\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\n// Continue after client-side tool handling\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n```\n\nThis makes it easy to pass requests through from the client:\n\n```typescript\n// Simple passthrough from HTTP request body\nexport async function POST(request: Request) {\n const body = await request.json();\n const { sessionId, ...payload } = body;\n\n const session = client.agentSessions.attach(sessionId, {\n tools: {\n /* ... */\n },\n });\n const events = session.execute(payload, { signal: request.signal });\n\n return new Response(toSSEStream(events));\n}\n```\n\n### Stop Support\n\nPass an abort signal to allow clients to stop generation:\n\n```typescript\nconst events = session.execute(request, {\n signal: request.signal, // Forward the client's abort signal\n});\n```\n\nWhen the client aborts the request, the signal propagates through to the LLM provider, stopping generation immediately. Any partial content is preserved.\n\n## WebSocket Handling\n\nFor WebSocket integrations, use `handleSocketMessage()` which manages abort controller lifecycle internally:\n\n```typescript\nimport type { SocketMessage } from '@octavus/server-sdk';\n\n// In your socket handler\nconn.on('data', async (rawData: string) => {\n const msg = JSON.parse(rawData);\n\n if (msg.type === 'trigger' || msg.type === 'continue' || msg.type === 'stop') {\n await session.handleSocketMessage(msg as SocketMessage, {\n onEvent: (event) => conn.write(JSON.stringify(event)),\n onFinish: () => sendMessagesUpdate(), // Optional callback after streaming\n });\n }\n});\n```\n\nThe `handleSocketMessage()` method:\n\n- Handles `trigger`, `continue`, and `stop` messages\n- Automatically aborts previous requests when a new one arrives\n- Streams events via the `onEvent` callback\n- Calls `onFinish` after streaming completes (not called if aborted)\n\nSee [Socket Chat Example](/docs/examples/socket-chat) for a complete implementation.\n\n## Session Lifecycle\n\n```mermaid\nflowchart TD\n A[1. CREATE] --> B[2. ATTACH]\n B --> C[3. TRIGGER]\n C --> C\n C --> D[4. RETRIEVE]\n D --> C\n C --> E[5. EXPIRE]\n E --> F{6. RESTORE?}\n F -->|Yes| C\n F -->|No| A\n\n A -.- A1[\"`**client.agentSessions.create()**\n Returns sessionId\n Initializes state`\"]\n\n B -.- B1[\"`**client.agentSessions.attach()**\n Configure tool handlers\n Configure resource watchers`\"]\n\n C -.- C1[\"`**session.execute()**\n Execute request\n Stream events\n Update state`\"]\n\n D -.- D1[\"`**client.agentSessions.getMessages()**\n Get UI-ready messages\n Check session status`\"]\n\n E -.- E1[\"`Sessions expire after\n 24 hours (configurable)`\"]\n\n F -.- F1[\"`**client.agentSessions.restore()**\n Restore from stored messages\n Or create new session`\"]\n```\n\n## Session Expiration\n\nSessions expire after a period of inactivity (default: 24 hours). 
When you call `getMessages()` or `get()`, the response includes a `status` field:\n\n```typescript\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'expired') {\n // Session has expired - restore or create new\n console.log('Session expired:', result.sessionId);\n} else {\n // Session is active\n console.log('Messages:', result.messages.length);\n}\n```\n\n### Response Types\n\n| Status | Type | Description |\n| --------- | --------------------- | ------------------------------------------------------------- |\n| `active` | `UISessionState` | Session is active, includes `messages` array |\n| `expired` | `ExpiredSessionState` | Session expired, includes `sessionId`, `agentId`, `createdAt` |\n\n## Persisting Chat History\n\nTo enable session restoration, store the chat messages in your own database after each interaction:\n\n```typescript\n// After each trigger completes, save messages\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'active') {\n // Store in your database\n await db.chats.update({\n where: { id: chatId },\n data: {\n sessionId: result.sessionId,\n messages: result.messages, // Store UIMessage[] as JSON\n },\n });\n}\n```\n\n> **Best Practice**: Store the full `UIMessage[]` array. This preserves all message parts (text, tool calls, files, etc.) needed for accurate restoration.\n\n## Restoring Sessions\n\nWhen a user returns to your app:\n\n```typescript\n// 1. Load stored data from your database\nconst chat = await db.chats.findUnique({ where: { id: chatId } });\n\n// 2. Check if session is still active\nconst result = await client.agentSessions.getMessages(chat.sessionId);\n\nif (result.status === 'active') {\n // Session is active - use it directly\n return {\n sessionId: result.sessionId,\n messages: result.messages,\n };\n}\n\n// 3. Session expired - restore from stored messages\nif (chat.messages && chat.messages.length > 0) {\n const restored = await client.agentSessions.restore(\n chat.sessionId,\n chat.messages,\n { COMPANY_NAME: 'Acme Corp' }, // Optional: same input as create()\n );\n\n if (restored.restored) {\n // Session restored successfully\n return {\n sessionId: restored.sessionId,\n messages: chat.messages,\n };\n }\n}\n\n// 4. 
Cannot restore - create new session\nconst newSessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n});\n\nreturn {\n sessionId: newSessionId,\n messages: [],\n};\n```\n\n### Restore Response\n\n```typescript\ninterface RestoreSessionResult {\n sessionId: string;\n restored: boolean; // true if restored, false if session was already active\n}\n```\n\n## Complete Example\n\nHere's a complete session management flow:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function getOrCreateSession(chatId: string, agentId: string, input: Record<string, unknown>) {\n // Load existing chat data\n const chat = await db.chats.findUnique({ where: { id: chatId } });\n\n if (chat?.sessionId) {\n // Check session status\n const result = await client.agentSessions.getMessages(chat.sessionId);\n\n if (result.status === 'active') {\n return { sessionId: result.sessionId, messages: result.messages };\n }\n\n // Try to restore expired session\n if (chat.messages?.length > 0) {\n const restored = await client.agentSessions.restore(chat.sessionId, chat.messages, input);\n if (restored.restored) {\n return { sessionId: restored.sessionId, messages: chat.messages };\n }\n }\n }\n\n // Create new session\n const sessionId = await client.agentSessions.create(agentId, input);\n\n // Save to database\n await db.chats.upsert({\n where: { id: chatId },\n create: { id: chatId, sessionId, messages: [] },\n update: { sessionId, messages: [] },\n });\n\n return { sessionId, messages: [] };\n}\n```\n\n## Error Handling\n\n```typescript\nimport { ApiError } from '@octavus/server-sdk';\n\ntry {\n const session = await client.agentSessions.getMessages(sessionId);\n} catch (error) {\n if (error instanceof ApiError) {\n if (error.status === 404) {\n // Session not found or expired\n console.log('Session expired, create a new one');\n } else {\n console.error('API Error:', error.message);\n }\n }\n throw error;\n}\n```\n",
+
content: "\n# Sessions\n\nSessions represent conversations with an agent. They store conversation history, track resources and variables, and enable stateful interactions.\n\n## Creating Sessions\n\nCreate a session by specifying the agent ID and initial input variables:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\n// Create a session with the support-chat agent\nconst sessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n USER_ID: 'user-123', // Optional inputs\n});\n\nconsole.log('Session created:', sessionId);\n```\n\n## Getting Session Messages\n\nTo restore a conversation on page load, use `getMessages()` to retrieve UI-ready messages:\n\n```typescript\nconst session = await client.agentSessions.getMessages(sessionId);\n\nconsole.log({\n sessionId: session.sessionId,\n agentId: session.agentId,\n messages: session.messages.length, // UIMessage[] ready for frontend\n});\n```\n\nThe returned messages can be passed directly to the client SDK's `initialMessages` option.\n\n### UISessionState Interface\n\n```typescript\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready conversation history\n}\n```\n\n## Full Session State (Debug)\n\nFor debugging or internal use, you can retrieve the complete session state including all variables and internal message format:\n\n```typescript\nconst state = await client.agentSessions.get(sessionId);\n\nconsole.log({\n id: state.id,\n agentId: state.agentId,\n messages: state.messages.length, // ChatMessage[] (internal format)\n resources: state.resources,\n variables: state.variables,\n createdAt: state.createdAt,\n updatedAt: state.updatedAt,\n});\n```\n\n> **Note**: Use `getMessages()` for client-facing code. The `get()` method returns internal message format that includes hidden content not intended for end users.\n\n## Attaching to Sessions\n\nTo trigger actions on a session, you need to attach to it first:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n // Tool handlers (see Tools documentation)\n },\n resources: [\n // Resource watchers (optional)\n ],\n});\n```\n\n## Executing Requests\n\nOnce attached, execute requests on the session using `execute()`:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() handles both triggers and client tool continuations\nconst events = session.execute(\n { type: 'trigger', triggerName: 'user-message', input: { USER_MESSAGE: 'Hello!' 
} },\n { signal: request.signal },\n);\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n### Request Types\n\nThe `execute()` method accepts a discriminated union:\n\n```typescript\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\n// Start a new conversation turn\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\n// Continue after client-side tool handling\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n```\n\nThis makes it easy to pass requests through from the client:\n\n```typescript\n// Simple passthrough from HTTP request body\nexport async function POST(request: Request) {\n const body = await request.json();\n const { sessionId, ...payload } = body;\n\n const session = client.agentSessions.attach(sessionId, {\n tools: {\n /* ... */\n },\n });\n const events = session.execute(payload, { signal: request.signal });\n\n return new Response(toSSEStream(events));\n}\n```\n\n### Stop Support\n\nPass an abort signal to allow clients to stop generation:\n\n```typescript\nconst events = session.execute(request, {\n signal: request.signal, // Forward the client's abort signal\n});\n```\n\nWhen the client aborts the request, the signal propagates through to the LLM provider, stopping generation immediately. Any partial content is preserved.\n\n## WebSocket Handling\n\nFor WebSocket integrations, use `handleSocketMessage()` which manages abort controller lifecycle internally:\n\n```typescript\nimport type { SocketMessage } from '@octavus/server-sdk';\n\n// In your socket handler\nconn.on('data', async (rawData: string) => {\n const msg = JSON.parse(rawData);\n\n if (msg.type === 'trigger' || msg.type === 'continue' || msg.type === 'stop') {\n await session.handleSocketMessage(msg as SocketMessage, {\n onEvent: (event) => conn.write(JSON.stringify(event)),\n onFinish: async () => {\n // Fetch and persist messages to your database for restoration\n },\n });\n }\n});\n```\n\nThe `handleSocketMessage()` method:\n\n- Handles `trigger`, `continue`, and `stop` messages\n- Automatically aborts previous requests when a new one arrives\n- Streams events via the `onEvent` callback\n- Calls `onFinish` after streaming completes (not called if aborted)\n\nSee [Socket Chat Example](/docs/examples/socket-chat) for a complete implementation.\n\n## Session Lifecycle\n\n```mermaid\nflowchart TD\n A[1. CREATE] --> B[2. ATTACH]\n B --> C[3. TRIGGER]\n C --> C\n C --> D[4. RETRIEVE]\n D --> C\n C --> E[5. EXPIRE]\n E --> F{6. RESTORE?}\n F -->|Yes| C\n F -->|No| A\n\n A -.- A1[\"`**client.agentSessions.create()**\n Returns sessionId\n Initializes state`\"]\n\n B -.- B1[\"`**client.agentSessions.attach()**\n Configure tool handlers\n Configure resource watchers`\"]\n\n C -.- C1[\"`**session.execute()**\n Execute request\n Stream events\n Update state`\"]\n\n D -.- D1[\"`**client.agentSessions.getMessages()**\n Get UI-ready messages\n Check session status`\"]\n\n E -.- E1[\"`Sessions expire after\n 24 hours (configurable)`\"]\n\n F -.- F1[\"`**client.agentSessions.restore()**\n Restore from stored messages\n Or create new session`\"]\n```\n\n## Session Expiration\n\nSessions expire after a period of inactivity (default: 24 hours). 
When you call `getMessages()` or `get()`, the response includes a `status` field:\n\n```typescript\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'expired') {\n // Session has expired - restore or create new\n console.log('Session expired:', result.sessionId);\n} else {\n // Session is active\n console.log('Messages:', result.messages.length);\n}\n```\n\n### Response Types\n\n| Status | Type | Description |\n| --------- | --------------------- | ------------------------------------------------------------- |\n| `active` | `UISessionState` | Session is active, includes `messages` array |\n| `expired` | `ExpiredSessionState` | Session expired, includes `sessionId`, `agentId`, `createdAt` |\n\n## Persisting Chat History\n\nTo enable session restoration, store the chat messages in your own database after each interaction:\n\n```typescript\n// After each trigger completes, save messages\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'active') {\n // Store in your database\n await db.chats.update({\n where: { id: chatId },\n data: {\n sessionId: result.sessionId,\n messages: result.messages, // Store UIMessage[] as JSON\n },\n });\n}\n```\n\n> **Best Practice**: Store the full `UIMessage[]` array. This preserves all message parts (text, tool calls, files, etc.) needed for accurate restoration.\n\n## Restoring Sessions\n\nWhen a user returns to your app:\n\n```typescript\n// 1. Load stored data from your database\nconst chat = await db.chats.findUnique({ where: { id: chatId } });\n\n// 2. Check if session is still active\nconst result = await client.agentSessions.getMessages(chat.sessionId);\n\nif (result.status === 'active') {\n // Session is active - use it directly\n return {\n sessionId: result.sessionId,\n messages: result.messages,\n };\n}\n\n// 3. Session expired - restore from stored messages\nif (chat.messages && chat.messages.length > 0) {\n const restored = await client.agentSessions.restore(\n chat.sessionId,\n chat.messages,\n { COMPANY_NAME: 'Acme Corp' }, // Optional: same input as create()\n );\n\n if (restored.restored) {\n // Session restored successfully\n return {\n sessionId: restored.sessionId,\n messages: chat.messages,\n };\n }\n}\n\n// 4. 
Cannot restore - create new session\nconst newSessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n});\n\nreturn {\n sessionId: newSessionId,\n messages: [],\n};\n```\n\n### Restore Response\n\n```typescript\ninterface RestoreSessionResult {\n sessionId: string;\n restored: boolean; // true if restored, false if session was already active\n}\n```\n\n## Complete Example\n\nHere's a complete session management flow:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function getOrCreateSession(chatId: string, agentId: string, input: Record<string, unknown>) {\n // Load existing chat data\n const chat = await db.chats.findUnique({ where: { id: chatId } });\n\n if (chat?.sessionId) {\n // Check session status\n const result = await client.agentSessions.getMessages(chat.sessionId);\n\n if (result.status === 'active') {\n return { sessionId: result.sessionId, messages: result.messages };\n }\n\n // Try to restore expired session\n if (chat.messages?.length > 0) {\n const restored = await client.agentSessions.restore(chat.sessionId, chat.messages, input);\n if (restored.restored) {\n return { sessionId: restored.sessionId, messages: chat.messages };\n }\n }\n }\n\n // Create new session\n const sessionId = await client.agentSessions.create(agentId, input);\n\n // Save to database\n await db.chats.upsert({\n where: { id: chatId },\n create: { id: chatId, sessionId, messages: [] },\n update: { sessionId, messages: [] },\n });\n\n return { sessionId, messages: [] };\n}\n```\n\n## Error Handling\n\n```typescript\nimport { ApiError } from '@octavus/server-sdk';\n\ntry {\n const session = await client.agentSessions.getMessages(sessionId);\n} catch (error) {\n if (error instanceof ApiError) {\n if (error.status === 404) {\n // Session not found or expired\n console.log('Session expired, create a new one');\n } else {\n console.error('API Error:', error.message);\n }\n }\n throw error;\n}\n```\n",
excerpt: "Sessions Sessions represent conversations with an agent. They store conversation history, track resources and variables, and enable stateful interactions. Creating Sessions Create a session by...",
order: 2
},
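
The substantive change in the Sessions page is the `handleSocketMessage()` example: its `onFinish` callback now fetches and persists messages for later restoration instead of calling a generic `sendMessagesUpdate()`. A minimal sketch of that pattern, assuming only the API documented above; the socket interface and `saveMessages` store are placeholders.

```typescript
// Sketch of the revised onFinish pattern; the socket interface and saveMessages are placeholders.
import type { OctavusClient, SocketMessage } from '@octavus/server-sdk';

type AttachedSession = ReturnType<OctavusClient['agentSessions']['attach']>;

export function wireSocket(
  conn: { on(event: 'data', cb: (raw: string) => void): void; write(data: string): void },
  session: AttachedSession,
  client: OctavusClient,
  saveMessages: (sessionId: string, messages: unknown[]) => Promise<void>,
) {
  conn.on('data', async (rawData: string) => {
    const msg = JSON.parse(rawData);

    if (msg.type === 'trigger' || msg.type === 'continue' || msg.type === 'stop') {
      await session.handleSocketMessage(msg as SocketMessage, {
        onEvent: (event) => conn.write(JSON.stringify(event)),
        onFinish: async () => {
          // Per the updated example: fetch UI-ready messages and persist them for restoration.
          const result = await client.agentSessions.getMessages(session.getSessionId());
          if (result.status === 'active') {
            await saveMessages(result.sessionId, result.messages);
          }
        },
      });
    }
  });
}
```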

@@ -59,7 +59,7 @@ var docs_default = [
section: "server-sdk",
title: "CLI",
description: "Command-line interface for validating and syncing agent definitions.",
-
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.
+
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.5.0`\n\n## Installation\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Configuration\n\nThe CLI requires an API key with the **Agents** permission.\n\n### Environment Variables\n\n| Variable | Description |\n| --------------------- | ---------------------------------------------- |\n| `OCTAVUS_CLI_API_KEY` | API key with "Agents" permission (recommended) |\n| `OCTAVUS_API_KEY` | Fallback if `OCTAVUS_CLI_API_KEY` not set |\n| `OCTAVUS_API_URL` | Optional, defaults to `https://octavus.ai` |\n\n### Two-Key Strategy (Recommended)\n\nFor production deployments, use separate API keys with minimal permissions:\n\n```bash\n# CI/CD or .env.local (not committed)\nOCTAVUS_CLI_API_KEY=oct_sk_... # "Agents" permission only\n\n# Production .env\nOCTAVUS_API_KEY=oct_sk_... # "Sessions" permission only\n```\n\nThis ensures production servers only have session permissions (smaller blast radius if leaked), while agent management is restricted to development/CI environments.\n\n### Multiple Environments\n\nUse separate Octavus projects for staging and production, each with their own API keys. The `--env` flag lets you load different environment files:\n\n```bash\n# Local development (default: .env)\noctavus sync ./agents/my-agent\n\n# Staging project\noctavus --env .env.staging sync ./agents/my-agent\n\n# Production project\noctavus --env .env.production sync ./agents/my-agent\n```\n\nExample environment files:\n\n```bash\n# .env.staging (syncs to your staging project)\nOCTAVUS_CLI_API_KEY=oct_sk_staging_project_key...\n\n# .env.production (syncs to your production project)\nOCTAVUS_CLI_API_KEY=oct_sk_production_project_key...\n```\n\nEach project has its own agents, so you\'ll get different agent IDs per environment.\n\n## Global Options\n\n| Option | Description |\n| -------------- | ------------------------------------------------------- |\n| `--env <file>` | Load environment from a specific file (default: `.env`) |\n| `--help` | Show help |\n| `--version` | Show version |\n\n## Commands\n\n### `octavus sync <path>`\n\nSync an agent definition to the platform. Creates the agent if it doesn\'t exist, or updates it if it does.\n\n```bash\noctavus sync ./agents/my-agent\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Reading agent from ./agents/my-agent...\n\u2139 Syncing support-chat...\n\u2713 Created: support-chat\n Agent ID: clxyz123abc456\n```\n\n### `octavus validate <path>`\n\nValidate an agent definition without saving. 
Useful for CI/CD pipelines.\n\n```bash\noctavus validate ./agents/my-agent\n```\n\n**Exit codes:**\n\n- `0` \u2014 Validation passed\n- `1` \u2014 Validation errors\n- `2` \u2014 Configuration errors (missing API key, etc.)\n\n### `octavus list`\n\nList all agents in your project.\n\n```bash\noctavus list\n```\n\n**Example output:**\n\n```\nSLUG NAME FORMAT ID\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nsupport-chat Support Chat Agent interactive clxyz123abc456\n\n1 agent(s)\n```\n\n### `octavus get <slug>`\n\nGet details about a specific agent by its slug.\n\n```bash\noctavus get support-chat\n```\n\n## Agent Directory Structure\n\nThe CLI expects agent definitions in a specific directory structure:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json # Required: Agent metadata\n\u251C\u2500\u2500 protocol.yaml # Required: Agent protocol\n\u2514\u2500\u2500 prompts/ # Optional: Prompt templates\n \u251C\u2500\u2500 system.md\n \u2514\u2500\u2500 user-message.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "A helpful assistant",\n "format": "interactive"\n}\n```\n\n### protocol.yaml\n\nSee the [Protocol documentation](/docs/protocol/overview) for details on protocol syntax.\n\n## CI/CD Integration\n\n### GitHub Actions\n\n```yaml\nname: Validate and Sync Agents\n\non:\n push:\n branches: [main]\n paths:\n - \'agents/**\'\n\njobs:\n sync:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - uses: actions/setup-node@v4\n with:\n node-version: \'22\'\n\n - run: npm install\n\n - name: Validate agent\n run: npx octavus validate ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n\n - name: Sync agent\n run: npx octavus sync ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n```\n\n### Package.json Scripts\n\nAdd sync scripts to your `package.json`:\n\n```json\n{\n "scripts": {\n "agents:validate": "octavus validate ./agents/my-agent",\n "agents:sync": "octavus sync ./agents/my-agent"\n },\n "devDependencies": {\n "@octavus/cli": "^0.1.0"\n }\n}\n```\n\n## Workflow\n\nThe recommended workflow for managing agents:\n\n1. **Define agent locally** \u2014 Create `settings.json`, `protocol.yaml`, and prompts\n2. **Validate** \u2014 Run `octavus validate ./my-agent` to check for errors\n3. **Sync** \u2014 Run `octavus sync ./my-agent` to push to platform\n4. **Store agent ID** \u2014 Save the output ID in an environment variable\n5. **Use in app** \u2014 Read the ID from env and pass to `client.agentSessions.create()`\n\n```bash\n# After syncing: octavus sync ./agents/support-chat\n# Output: Agent ID: clxyz123abc456\n\n# Add to your .env file\nOCTAVUS_SUPPORT_AGENT_ID=clxyz123abc456\n```\n\n```typescript\nconst agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID;\n\nconst sessionId = await client.agentSessions.create(agentId, {\n COMPANY_NAME: \'Acme Corp\',\n});\n```\n',
excerpt: "Octavus CLI The package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform. Current version: Installation ...",
order: 5
},
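
The CLI page documents fixed exit codes for `octavus validate` (0 passed, 1 validation errors, 2 configuration errors), which makes it straightforward to gate CI before syncing. An illustrative wrapper, assuming `@octavus/cli` is installed as a dev dependency so `npx octavus` resolves locally:

```typescript
// Illustrative CI helper mapping the documented `octavus validate` exit codes.
// Assumes @octavus/cli is installed as a devDependency so `npx octavus` resolves locally.
import { spawnSync } from 'node:child_process';

function validateAgent(agentPath: string): void {
  const result = spawnSync('npx', ['octavus', 'validate', agentPath], { stdio: 'inherit' });

  switch (result.status) {
    case 0:
      console.log(`Validation passed for ${agentPath}`);
      return;
    case 1:
      throw new Error(`Validation errors in ${agentPath}`);
    case 2:
      throw new Error('Configuration error (for example, OCTAVUS_CLI_API_KEY is missing)');
    default:
      throw new Error(`octavus validate exited with ${String(result.status)}`);
  }
}

validateAgent('./agents/support-chat');
```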

@@ -68,7 +68,7 @@ var docs_default = [
section: "server-sdk",
title: "Workers",
description: "Executing worker agents with the Server SDK.",
-
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string,
+
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. 
The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Extracting Output\n\nTo get just the worker's output value:\n\n```typescript\nasync function executeWorker(\n client: OctavusClient,\n agentId: string,\n input: Record<string, unknown>,\n): Promise<unknown> {\n const events = client.workers.execute(agentId, input);\n\n for await (const event of events) {\n if (event.type === 'worker-result') {\n if (event.error) {\n throw new Error(event.error);\n }\n return event.output;\n }\n }\n\n return undefined;\n}\n\n// Usage\nconst analysis = await executeWorker(client, agentId, { TOPIC: 'AI' });\n```\n\n## Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n // Execute tools client-side\n const results = await executeClientTools(event.toolCalls);\n\n // Continue execution\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n## Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n## Cancellation\n\nUse an 
abort signal to cancel execution:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 30 seconds\nsetTimeout(() => controller.abort(), 30000);\n\nconst events = client.workers.execute(agentId, input, {\n signal: controller.signal,\n});\n\ntry {\n for await (const event of events) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Error Handling\n\nErrors can occur at different levels:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n // Stream-level error event\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n // Worker-level error in result\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\nError types include:\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Full Example\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\n// Run the worker\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference execute()...",
order: 6
},
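
The expanded Workers page documents both extracting the `worker-result` output and resuming after a `client-tool-request` pause. The sketch below combines those two documented patterns; the `runClientTools` callback is an assumption, and its result type is derived from the SDK's `continue()` signature rather than a documented export.

```typescript
// Sketch combining the documented output-extraction and client-tool continuation patterns.
// runClientTools is a placeholder for however your app executes tools without server handlers.
import { OctavusClient, type StreamEvent } from '@octavus/server-sdk';

type ToolResults = Parameters<OctavusClient['workers']['continue']>[2];
type ClientToolCall = { toolCallId: string; toolName: string; args: Record<string, unknown> };

export async function runWorker(
  client: OctavusClient,
  agentId: string,
  input: Record<string, unknown>,
  runClientTools: (toolCalls: ClientToolCall[]) => Promise<ToolResults>,
): Promise<unknown> {
  let events: AsyncGenerator<StreamEvent> = client.workers.execute(agentId, input);

  while (true) {
    let resumed = false;

    for await (const event of events) {
      if (event.type === 'worker-start') {
        console.log(`Worker ${event.workerSlug} started`);
      } else if (event.type === 'client-tool-request') {
        // Execute the requested tools, then resume the paused execution.
        const toolResults = await runClientTools(event.toolCalls);
        events = client.workers.continue(agentId, event.executionId, toolResults);
        resumed = true;
        break;
      } else if (event.type === 'worker-result') {
        if (event.error) throw new Error(event.error);
        return event.output;
      } else if (event.type === 'error') {
        throw new Error(event.message);
      }
    }

    if (!resumed) return undefined;
  }
}
```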

@@ -77,7 +77,7 @@ var docs_default = [
section: "client-sdk",
title: "Overview",
description: "Introduction to the Octavus Client SDKs for building chat interfaces.",
-
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.2.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.2.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
|
|
80
|
+
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.5.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.5.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
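The framework-agnostic reference above can be wired up without React. Below is a minimal vanilla-TypeScript sketch combining `OctavusChat`, an HTTP transport, and an interactive client tool resolved through `pendingClientTools`. The `/api/trigger` endpoint, the session ID, the `request-confirmation` tool name, the `chat` element ID, and the `message` argument field are illustrative assumptions, not SDK fixtures.

```typescript
import { OctavusChat, createHttpTransport } from '@octavus/client-sdk';

// Sketch only: the endpoint, session ID, tool name, element ID, and the
// `message` argument field below are placeholders for your own application.
const sessionId = 'your-session-id';

const transport = createHttpTransport({
  request: (payload, options) =>
    fetch('/api/trigger', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sessionId, ...payload }),
      signal: options?.signal,
    }),
});

const chat = new OctavusChat({
  transport,
  clientTools: {
    // Interactive handler: calls surface in pendingClientTools until resolved
    'request-confirmation': 'interactive',
  },
});

const unsubscribe = chat.subscribe(() => {
  // Re-render plain-text message content on every state change
  const el = document.getElementById('chat');
  if (el) {
    el.textContent = chat.messages
      .map((m) => m.parts.map((p) => (p.type === 'text' ? p.text : '')).join(''))
      .join('\n');
  }

  // Resolve pending interactive confirmations with a browser dialog
  for (const tool of chat.pendingClientTools['request-confirmation'] ?? []) {
    if (window.confirm(String(tool.args.message ?? 'Proceed?'))) {
      tool.submit({ confirmed: true });
    } else {
      tool.cancel('User declined');
    }
  }
});

await chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });

// Later, when tearing the UI down:
unsubscribe();
```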
|
|
81
81
|
excerpt: "Client SDK Overview Octavus provides two packages for frontend integration: | Package | Purpose | Use When | |...",
|
|
82
82
|
order: 1
|
|
83
83
|
},
|
|
@@ -585,7 +585,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
585
585
|
section: "protocol",
|
|
586
586
|
title: "Agent Config",
|
|
587
587
|
description: "Configuring the agent model and behavior.",
|
|
588
|
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | ------------------------------------------------------------ |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
|
|
588
|
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
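As a rough illustration of the dynamic model selection notes above (A/B testing, user preferences), the sketch below picks a `provider/model-id` per user before creating the session. Only `OctavusClient` and `agentSessions.create` with a declared `MODEL` input come from this page; the agent ID, the bucketing helper, and the 10% split are assumptions.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

// Sketch: route a fraction of sessions to a different provider for A/B testing.
// 'my-agent', the env var, and the bucketing scheme are illustrative; MODEL must
// be declared as an input in the agent protocol, as shown above.
const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Stable per-user bucket from character codes (demo-grade, not cryptographic)
function pickModel(userId: string): string {
  const bucket = [...userId].reduce((sum, ch) => sum + ch.charCodeAt(0), 0) % 100;
  return bucket < 10 ? 'openai/gpt-5' : 'anthropic/claude-sonnet-4-5';
}

const sessionId = await client.agentSessions.create('my-agent', {
  MODEL: pickModel('user-123'),
});
```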
|
|
589
589
|
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
|
|
590
590
|
order: 7
|
|
591
591
|
},
|
|
@@ -621,7 +621,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
621
621
|
section: "protocol",
|
|
622
622
|
title: "Workers",
|
|
623
623
|
description: "Defining worker agents for background and task-based execution.",
|
|
624
|
-
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n agentic: true\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n agentic: true\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------- |\n| `thread` | Thread name (required) |\n| `model` | LLM model (required) |\n| `system` | System prompt filename |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `agentic` | Allow multiple tool call cycles |\n| `maxSteps` | Maximum agentic steps |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic 
behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n agentic: true\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
|
|
624
|
+
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
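To make the tool-handling and stream-event notes above concrete, here is a hedged sketch of executing a worker from the Server SDK and reading its return value from the `worker-result` event. The worker ID, the `searchWeb` helper, the async-iterability of the returned events, and the exact `output` field name are assumptions; `client.workers.execute` and the event names come from this page.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

// Sketch: run the research worker and collect its output value.
const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Hypothetical helper standing in for a real web-search integration
async function searchWeb(query: string): Promise<string> {
  return `Results for: ${query}`;
}

const events = client.workers.execute(
  'worker-agent-id', // placeholder worker ID
  { TOPIC: 'AI safety', DEPTH: 'medium' },
  {
    tools: {
      // Server tool handler backing the worker's `web-search` tool
      'web-search': async (args) => searchWeb(String(args.query)),
    },
  },
);

let analysis: unknown;
for await (const event of events) {
  // Standard events (text-delta, tool calls, ...) stream through here as well
  if (event.type === 'worker-result') {
    analysis = (event as { output?: unknown }).output; // field name assumed
  }
}

console.log('Worker output:', analysis);
```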
|
|
625
625
|
excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
|
|
626
626
|
order: 11
|
|
627
627
|
},
|
|
@@ -684,7 +684,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
|
|
|
684
684
|
section: "migration",
|
|
685
685
|
title: "Migrating from v1 to v2",
|
|
686
686
|
description: "Complete guide for upgrading from Octavus SDK v1.x to v2.0.",
|
|
687
|
-
content: "\n# Migrating from v1 to v2\n\nThis guide covers all breaking changes and new features in Octavus SDK v2.0.0.\n\n## Overview\n\nVersion 2.0.0 introduces **client-side tool execution**, allowing tools to run in the browser with support for both automatic execution and interactive user input. This required changes to the transport layer and streaming protocol.\n\n## Breaking Changes\n\n### HTTP Transport Configuration\n\nThe `triggerRequest` option has been renamed to `request` and the function signature changed to support both trigger and continuation requests.\n\n**Before (v1):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n triggerRequest: (triggerName, input, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, triggerName, input }),\n signal: options?.signal,\n }),\n});\n```\n\n**After (v2):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\nThe `payload` parameter is a discriminated union with `type: 'trigger' | 'continue'`:\n\n- **Trigger request:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue request:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Server SDK Route Handler\n\nServer-side route handlers should use the new `execute()` method which handles both triggers and continuations via the request type.\n\n**Before (v1):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { sessionId, triggerName, input } = await request.json();\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.trigger(triggerName, input, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n**After (v2):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream, type SessionRequest } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const body = (await request.json()) as { sessionId: string } & SessionRequest;\n const { sessionId, ...sessionRequest } = body;\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.execute(sessionRequest, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\nThe `SessionRequest` type is a discriminated union:\n\n- **Trigger:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Export Changes\n\nThe SDKs no longer use star exports (`export * from '@octavus/core'`). Instead, they use explicit type exports and value exports. 
This improves tree-shaking but may require import adjustments.\n\n**If you were importing schemas:**\n\nSome Zod schemas are no longer re-exported from `@octavus/server-sdk`. If you need validation schemas, import them directly from `@octavus/core`:\n\n```typescript\n// Before (v1) - some schemas were re-exported\nimport { fileUploadRequestSchema } from '@octavus/server-sdk';\n\n// After (v2) - import from core if needed\nimport { fileReferenceSchema } from '@octavus/core';\n```\n\n**Type imports remain the same:**\n\n```typescript\n// Types are still available\nimport type { StreamEvent, UIMessage, FileReference } from '@octavus/client-sdk';\n```\n\n### Custom Transport Interface\n\nCustom transport implementations must now implement the `continueWithToolResults()` method:\n\n```typescript\ninterface Transport {\n trigger(triggerName: string, input?: Record<string, unknown>): AsyncIterable<StreamEvent>;\n continueWithToolResults(executionId: string, results: ToolResult[]): AsyncIterable<StreamEvent>;\n stop(): void;\n}\n```\n\n### New ChatStatus Value\n\nThe `ChatStatus` type now includes `'awaiting-input'`:\n\n```typescript\n// v1\ntype ChatStatus = 'idle' | 'streaming' | 'error';\n\n// v2\ntype ChatStatus = 'idle' | 'streaming' | 'error' | 'awaiting-input';\n```\n\nUpdate any status checks that assume only three values.\n\n### Stream Event Changes\n\nTwo stream events have been modified:\n\n**StartEvent and FinishEvent now include executionId:**\n\n```typescript\ninterface StartEvent {\n type: 'start';\n messageId?: string;\n executionId?: string; // New in v2\n}\n\ninterface FinishEvent {\n type: 'finish';\n finishReason: FinishReason;\n executionId?: string; // New in v2\n}\n```\n\n**New FinishReason:**\n\n```typescript\n// Added 'client-tool-calls' to FinishReason\ntype FinishReason =\n | 'stop'\n | 'tool-calls'\n | 'client-tool-calls' // New in v2\n | 'length'\n | 'content-filter'\n | 'error'\n | 'other';\n```\n\n**New ClientToolRequestEvent:**\n\n```typescript\ninterface ClientToolRequestEvent {\n type: 'client-tool-request';\n executionId: string;\n toolCalls: PendingToolCall[];\n serverToolResults?: ToolResult[];\n}\n```\n\n## New Features\n\n### Client-Side Tools\n\nv2 introduces client-side tool execution, allowing tools to run in the browser. This is useful for:\n\n- Accessing browser APIs (geolocation, clipboard, etc.)\n- Interactive tools requiring user input (confirmations, forms, selections)\n- Tools that need access to client-side state\n\nSee the [Client Tools guide](/docs/client-sdk/client-tools) for full documentation.\n\n#### Automatic Client Tools\n\nTools that execute automatically without user interaction:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'get-browser-location': async (args, ctx) => {\n const pos = await new Promise((resolve, reject) => {\n navigator.geolocation.getCurrentPosition(resolve, reject);\n });\n return { lat: pos.coords.latitude, lng: pos.coords.longitude };\n },\n 'get-clipboard': async () => {\n return await navigator.clipboard.readText();\n },\n },\n });\n\n // ... 
rest of component\n}\n```\n\n#### Interactive Client Tools\n\nTools that require user interaction before completing:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status, pendingClientTools } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'request-confirmation': 'interactive',\n 'select-option': 'interactive',\n },\n });\n\n // Render UI for pending interactive tools\n const confirmationTools = pendingClientTools['request-confirmation'] ?? [];\n\n return (\n <div>\n {/* Chat messages */}\n\n {/* Interactive tool UI */}\n {confirmationTools.map((tool) => (\n <ConfirmationModal\n key={tool.toolCallId}\n message={tool.args.message as string}\n onConfirm={() => tool.submit({ confirmed: true })}\n onCancel={() => tool.cancel('User declined')}\n />\n ))}\n </div>\n );\n}\n```\n\n### New Types for Client Tools\n\n```typescript\n// Context provided to automatic tool handlers\ninterface ClientToolContext {\n toolCallId: string;\n toolName: string;\n signal: AbortSignal;\n}\n\n// Handler types\ntype ClientToolHandler =\n | ((args: Record<string, unknown>, ctx: ClientToolContext) => Promise<unknown>)\n | 'interactive';\n\n// Interactive tool with bound submit/cancel methods\ninterface InteractiveTool {\n toolCallId: string;\n toolName: string;\n args: Record<string, unknown>;\n submit: (result: unknown) => void;\n cancel: (reason?: string) => void;\n}\n```\n\n### pendingClientTools Property\n\nBoth `OctavusChat` and `useOctavusChat` now expose `pendingClientTools`:\n\n```typescript\n// Record keyed by tool name, with array of pending tools\nconst pendingClientTools: Record<string, InteractiveTool[]>;\n```\n\nThis allows multiple simultaneous calls to the same tool (e.g., batch confirmations).\n\n## Migration Checklist\n\nUse this checklist to ensure you've addressed all breaking changes:\n\n- [ ] Update all Octavus packages to v2.0.0\n- [ ] Update HTTP transport from `triggerRequest` to `request` with new signature\n- [ ] Update server route handlers to use `execute()` with `SessionRequest` type\n- [ ] Update any status checks to handle `'awaiting-input'`\n- [ ] Update custom transport implementations to include `continueWithToolResults()`\n- [ ] Review any direct schema imports and update if needed\n- [ ] (Optional) Add client-side tools for browser-based functionality\n\n## Package Versions\n\nAll packages should be updated together to ensure compatibility:\n\n```json\n{\n \"dependencies\": {\n \"@octavus/core\": \"^2.0.0\",\n \"@octavus/client-sdk\": \"^2.0.0\",\n \"@octavus/server-sdk\": \"^2.0.0\",\n \"@octavus/react\": \"^2.0.0\"\n }\n}\n```\n\n## Need Help?\n\nIf you encounter issues during migration:\n\n- Check the [Client Tools documentation](/docs/client-sdk/client-tools) for the new features\n- Review the [HTTP Transport](/docs/client-sdk/http-transport) and [Server SDK Sessions](/docs/server-sdk/sessions) docs for updated APIs\n- [Open an issue](https://github.com/octavus-ai/js-sdk/issues) on GitHub if you find a bug\n",
687 | +
content: "\n# Migrating from v1 to v2\n\nThis guide covers all breaking changes and new features in Octavus SDK v2.0.0.\n\n## Overview\n\nVersion 2.0.0 introduces **client-side tool execution**, allowing tools to run in the browser with support for both automatic execution and interactive user input. This required changes to the transport layer and streaming protocol.\n\n## Breaking Changes\n\n### HTTP Transport Configuration\n\nThe `triggerRequest` option has been renamed to `request` and the function signature changed to support both trigger and continuation requests.\n\n**Before (v1):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n triggerRequest: (triggerName, input, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, triggerName, input }),\n signal: options?.signal,\n }),\n});\n```\n\n**After (v2):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\nThe `payload` parameter is a discriminated union with `type: 'trigger' | 'continue'`:\n\n- **Trigger request:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue request:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Server SDK Route Handler\n\nServer-side route handlers should use the new `execute()` method which handles both triggers and continuations via the request type.\n\n**Before (v1):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { sessionId, triggerName, input } = await request.json();\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.trigger(triggerName, input, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n**After (v2):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream, type SessionRequest } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const body = (await request.json()) as { sessionId: string } & SessionRequest;\n const { sessionId, ...sessionRequest } = body;\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.execute(sessionRequest, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\nThe `SessionRequest` type is a discriminated union:\n\n- **Trigger:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Export Changes\n\nThe SDKs no longer use star exports (`export * from '@octavus/core'`). Instead, they use explicit type exports and value exports. 
This improves tree-shaking but may require import adjustments.\n\n**If you were importing schemas:**\n\nSome Zod schemas are no longer re-exported from `@octavus/server-sdk`. If you need validation schemas, import them directly from `@octavus/core`:\n\n```typescript\n// Before (v1) - some schemas were re-exported\nimport { fileUploadRequestSchema } from '@octavus/server-sdk';\n\n// After (v2) - import from core if needed\nimport { fileReferenceSchema } from '@octavus/core';\n```\n\n**Type imports remain the same:**\n\n```typescript\n// Types are still available\nimport type { StreamEvent, UIMessage, FileReference } from '@octavus/client-sdk';\n```\n\n### Custom Transport Interface\n\nCustom transport implementations must now implement the `continueWithToolResults()` method:\n\n```typescript\ninterface Transport {\n trigger(triggerName: string, input?: Record<string, unknown>): AsyncIterable<StreamEvent>;\n continueWithToolResults(executionId: string, results: ToolResult[]): AsyncIterable<StreamEvent>;\n stop(): void;\n}\n```\n\n### New ChatStatus Value\n\nThe `ChatStatus` type now includes `'awaiting-input'`:\n\n```typescript\n// v1\ntype ChatStatus = 'idle' | 'streaming' | 'error';\n\n// v2\ntype ChatStatus = 'idle' | 'streaming' | 'error' | 'awaiting-input';\n```\n\nUpdate any status checks that assume only three values.\n\n### Stream Event Changes\n\nTwo stream events have been modified:\n\n**StartEvent and FinishEvent now include executionId:**\n\n```typescript\ninterface StartEvent {\n type: 'start';\n messageId?: string;\n executionId?: string; // New in v2\n}\n\ninterface FinishEvent {\n type: 'finish';\n finishReason: FinishReason;\n executionId?: string; // New in v2\n}\n```\n\n**New FinishReason:**\n\n```typescript\n// Added 'client-tool-calls' to FinishReason\ntype FinishReason =\n | 'stop'\n | 'tool-calls'\n | 'client-tool-calls' // New in v2\n | 'length'\n | 'content-filter'\n | 'error'\n | 'other';\n```\n\n**New ClientToolRequestEvent:**\n\n```typescript\ninterface ClientToolRequestEvent {\n type: 'client-tool-request';\n executionId: string;\n toolCalls: PendingToolCall[];\n serverToolResults?: ToolResult[];\n}\n```\n\n## New Features\n\n### Client-Side Tools\n\nv2 introduces client-side tool execution, allowing tools to run in the browser. This is useful for:\n\n- Accessing browser APIs (geolocation, clipboard, etc.)\n- Interactive tools requiring user input (confirmations, forms, selections)\n- Tools that need access to client-side state\n\nSee the [Client Tools guide](/docs/client-sdk/client-tools) for full documentation.\n\n#### Automatic Client Tools\n\nTools that execute automatically without user interaction:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'get-browser-location': async (args, ctx) => {\n const pos = await new Promise((resolve, reject) => {\n navigator.geolocation.getCurrentPosition(resolve, reject);\n });\n return { lat: pos.coords.latitude, lng: pos.coords.longitude };\n },\n 'get-clipboard': async () => {\n return await navigator.clipboard.readText();\n },\n },\n });\n\n // ... 
rest of component\n}\n```\n\n#### Interactive Client Tools\n\nTools that require user interaction before completing:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status, pendingClientTools } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'request-confirmation': 'interactive',\n 'select-option': 'interactive',\n },\n });\n\n // Render UI for pending interactive tools\n const confirmationTools = pendingClientTools['request-confirmation'] ?? [];\n\n return (\n <div>\n {/* Chat messages */}\n\n {/* Interactive tool UI */}\n {confirmationTools.map((tool) => (\n <ConfirmationModal\n key={tool.toolCallId}\n message={tool.args.message as string}\n onConfirm={() => tool.submit({ confirmed: true })}\n onCancel={() => tool.cancel('User declined')}\n />\n ))}\n </div>\n );\n}\n```\n\n### New Types for Client Tools\n\n```typescript\n// Context provided to automatic tool handlers\ninterface ClientToolContext {\n toolCallId: string;\n toolName: string;\n signal: AbortSignal;\n}\n\n// Handler types\ntype ClientToolHandler =\n | ((args: Record<string, unknown>, ctx: ClientToolContext) => Promise<unknown>)\n | 'interactive';\n\n// Interactive tool with bound submit/cancel methods\ninterface InteractiveTool {\n toolCallId: string;\n toolName: string;\n args: Record<string, unknown>;\n submit: (result: unknown) => void;\n cancel: (reason?: string) => void;\n}\n```\n\n### pendingClientTools Property\n\nBoth `OctavusChat` and `useOctavusChat` now expose `pendingClientTools`:\n\n```typescript\n// Record keyed by tool name, with array of pending tools\nconst pendingClientTools: Record<string, InteractiveTool[]>;\n```\n\nThis allows multiple simultaneous calls to the same tool (e.g., batch confirmations).\n\n## Migration Checklist\n\nUse this checklist to ensure you've addressed all breaking changes:\n\n- [ ] Update all Octavus packages to v2.0.0\n- [ ] Update HTTP transport from `triggerRequest` to `request` with new signature\n- [ ] Update server route handlers to use `execute()` with `SessionRequest` type\n- [ ] Update any status checks to handle `'awaiting-input'`\n- [ ] Update custom transport implementations to include `continueWithToolResults()`\n- [ ] Review any direct schema imports and update if needed\n- [ ] (Optional) Add client-side tools for browser-based functionality\n\n## Package Versions\n\nAll packages should be updated together to ensure compatibility:\n\n```json\n{\n \"dependencies\": {\n \"@octavus/core\": \"^2.0.0\",\n \"@octavus/client-sdk\": \"^2.0.0\",\n \"@octavus/server-sdk\": \"^2.0.0\",\n \"@octavus/react\": \"^2.0.0\"\n }\n}\n```\n\n## Need Help?\n\nIf you encounter issues during migration:\n\n- Check the [Client Tools documentation](/docs/client-sdk/client-tools) for the new features\n- Review the [HTTP Transport](/docs/client-sdk/http-transport) and [Server SDK Sessions](/docs/server-sdk/sessions) docs for updated APIs\n- [Open an issue](https://github.com/octavus-ai/agent-sdk/issues) on GitHub if you find a bug\n",
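The guide above tells readers to update status checks for the new `'awaiting-input'` value but stops short of showing one. A minimal sketch, assuming `ChatStatus` is exported from `@octavus/client-sdk`; the `statusLabel` helper and its labels are illustrative only:

```typescript
import type { ChatStatus } from '@octavus/client-sdk'; // assumption: exported here

// Illustrative helper: map every v2 status to a UI label.
function statusLabel(status: ChatStatus): string {
  switch (status) {
    case 'idle':
      return 'Ready';
    case 'streaming':
      return 'Responding…';
    case 'awaiting-input': // new in v2
      return 'Waiting for your input';
    case 'error':
      return 'Something went wrong';
  }
}
```

An exhaustive `switch` like this lets the compiler flag any status values added in future versions.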
688 688 |
excerpt: "Migrating from v1 to v2 This guide covers all breaking changes and new features in Octavus SDK v2.0.0. Overview Version 2.0.0 introduces client-side tool execution, allowing tools to run in the...",
689 689 |
order: 1
690 690 |
}
@@ -729,7 +729,7 @@ var sections_default = [
729 729 |
section: "server-sdk",
730 730 |
title: "Overview",
731 731 |
description: "Introduction to the Octavus Server SDK for backend integration.",
732 | -
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.
732 | +
content: "\n# Server SDK Overview\n\nThe `@octavus/server-sdk` package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation loop.\n\n**Current version:** `2.5.0`\n\n## Installation\n\n```bash\nnpm install @octavus/server-sdk\n```\n\nFor agent management (sync, validate), install the CLI as a dev dependency:\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n```\n\n## Key Features\n\n### Agent Management\n\nAgent definitions are managed via the CLI. See the [CLI documentation](/docs/server-sdk/cli) for details.\n\n```bash\n# Sync agent from local files\noctavus sync ./agents/support-chat\n\n# Output: Created: support-chat\n# Agent ID: clxyz123abc456\n```\n\n### Session Management\n\nCreate and manage agent sessions using the agent ID:\n\n```typescript\n// Create a new session (use agent ID from CLI sync)\nconst sessionId = await client.agentSessions.create('clxyz123abc456', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n});\n\n// Get UI-ready session messages (for session restore)\nconst session = await client.agentSessions.getMessages(sessionId);\n```\n\n### Tool Handlers\n\nTools run on your server with your data:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n 'get-user-account': async (args) => {\n // Access your database, APIs, etc.\n return await db.users.findById(args.userId);\n },\n },\n});\n```\n\n### Streaming\n\nAll responses stream in real-time:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() returns an async generator of events\nconst events = session.execute({\n type: 'trigger',\n triggerName: 'user-message',\n input: { USER_MESSAGE: 'Hello!' 
},\n});\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n## API Reference\n\n### OctavusClient\n\nThe main entry point for interacting with Octavus.\n\n```typescript\ninterface OctavusClientConfig {\n baseUrl: string; // Octavus API URL\n apiKey?: string; // Your API key\n}\n\nclass OctavusClient {\n readonly agents: AgentsApi;\n readonly agentSessions: AgentSessionsApi;\n readonly workers: WorkersApi;\n readonly files: FilesApi;\n\n constructor(config: OctavusClientConfig);\n}\n```\n\n### AgentSessionsApi\n\nManages agent sessions.\n\n```typescript\nclass AgentSessionsApi {\n // Create a new session\n async create(agentId: string, input?: Record<string, unknown>): Promise<string>;\n\n // Get full session state (for debugging/internal use)\n async get(sessionId: string): Promise<SessionState>;\n\n // Get UI-ready messages (for client display)\n async getMessages(sessionId: string): Promise<UISessionState>;\n\n // Attach to a session for triggering\n attach(sessionId: string, options?: SessionAttachOptions): AgentSession;\n}\n\n// Full session state (internal format)\ninterface SessionState {\n id: string;\n agentId: string;\n input: Record<string, unknown>;\n variables: Record<string, unknown>;\n resources: Record<string, unknown>;\n messages: ChatMessage[]; // Internal message format\n createdAt: string;\n updatedAt: string;\n}\n\n// UI-ready session state\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready messages for frontend\n}\n```\n\n### AgentSession\n\nHandles request execution and streaming for a specific session.\n\n```typescript\nclass AgentSession {\n // Execute a request and stream parsed events\n execute(request: SessionRequest, options?: TriggerOptions): AsyncGenerator<StreamEvent>;\n\n // Get the session ID\n getSessionId(): string;\n}\n\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n\n// Helper to convert events to SSE stream\nfunction toSSEStream(events: AsyncIterable<StreamEvent>): ReadableStream<Uint8Array>;\n```\n\n### FilesApi\n\nHandles file uploads for sessions.\n\n```typescript\nclass FilesApi {\n // Get presigned URLs for file uploads\n async getUploadUrls(sessionId: string, files: FileUploadRequest[]): Promise<UploadUrlsResponse>;\n}\n\ninterface FileUploadRequest {\n filename: string;\n mediaType: string;\n size: number;\n}\n\ninterface UploadUrlsResponse {\n files: {\n id: string; // File ID for references\n uploadUrl: string; // PUT to this URL\n downloadUrl: string; // GET URL after upload\n }[];\n}\n```\n\nThe client uploads files directly to S3 using the presigned upload URL. See [File Uploads](/docs/client-sdk/file-uploads) for the full integration pattern.\n\n## Next Steps\n\n- [Sessions](/docs/server-sdk/sessions) \u2014 Deep dive into session management\n- [Tools](/docs/server-sdk/tools) \u2014 Implementing tool handlers\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Workers](/docs/server-sdk/workers) \u2014 Executing worker agents\n",
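The FilesApi section describes the presigned-upload flow but leaves the HTTP call to the reader. A minimal sketch, assuming the file's name, media type, and bytes come from your own request handling, and that the presigned URL accepts a plain `PUT` (exact header requirements depend on how the URL was signed):

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Sketch: request one presigned URL, then PUT the bytes straight to storage.
async function uploadSessionFile(
  sessionId: string,
  name: string,
  type: string,
  bytes: Uint8Array,
) {
  const { files } = await client.files.getUploadUrls(sessionId, [
    { filename: name, mediaType: type, size: bytes.byteLength },
  ]);
  const target = files[0];

  // Upload directly via the presigned URL.
  await fetch(target.uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': type },
    body: bytes,
  });

  // `target.id` can then be used in file references; `target.downloadUrl` serves the content.
  return target;
}
```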
733 733 |
excerpt: "Server SDK Overview The package provides a Node.js SDK for integrating Octavus agents into your backend application. It handles session management, streaming, and the tool execution continuation...",
734 734 |
order: 1
735 735 |
},
@@ -738,7 +738,7 @@ var sections_default = [
738 738 |
section: "server-sdk",
739 739 |
title: "Sessions",
740 740 |
description: "Managing agent sessions with the Server SDK.",
741 | -
content: "\n# Sessions\n\nSessions represent conversations with an agent. They store conversation history, track resources and variables, and enable stateful interactions.\n\n## Creating Sessions\n\nCreate a session by specifying the agent ID and initial input variables:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\n// Create a session with the support-chat agent\nconst sessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n USER_ID: 'user-123', // Optional inputs\n});\n\nconsole.log('Session created:', sessionId);\n```\n\n## Getting Session Messages\n\nTo restore a conversation on page load, use `getMessages()` to retrieve UI-ready messages:\n\n```typescript\nconst session = await client.agentSessions.getMessages(sessionId);\n\nconsole.log({\n sessionId: session.sessionId,\n agentId: session.agentId,\n messages: session.messages.length, // UIMessage[] ready for frontend\n});\n```\n\nThe returned messages can be passed directly to the client SDK's `initialMessages` option.\n\n### UISessionState Interface\n\n```typescript\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready conversation history\n}\n```\n\n## Full Session State (Debug)\n\nFor debugging or internal use, you can retrieve the complete session state including all variables and internal message format:\n\n```typescript\nconst state = await client.agentSessions.get(sessionId);\n\nconsole.log({\n id: state.id,\n agentId: state.agentId,\n messages: state.messages.length, // ChatMessage[] (internal format)\n resources: state.resources,\n variables: state.variables,\n createdAt: state.createdAt,\n updatedAt: state.updatedAt,\n});\n```\n\n> **Note**: Use `getMessages()` for client-facing code. The `get()` method returns internal message format that includes hidden content not intended for end users.\n\n## Attaching to Sessions\n\nTo trigger actions on a session, you need to attach to it first:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n // Tool handlers (see Tools documentation)\n },\n resources: [\n // Resource watchers (optional)\n ],\n});\n```\n\n## Executing Requests\n\nOnce attached, execute requests on the session using `execute()`:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() handles both triggers and client tool continuations\nconst events = session.execute(\n { type: 'trigger', triggerName: 'user-message', input: { USER_MESSAGE: 'Hello!' 
} },\n { signal: request.signal },\n);\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n### Request Types\n\nThe `execute()` method accepts a discriminated union:\n\n```typescript\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\n// Start a new conversation turn\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\n// Continue after client-side tool handling\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n```\n\nThis makes it easy to pass requests through from the client:\n\n```typescript\n// Simple passthrough from HTTP request body\nexport async function POST(request: Request) {\n const body = await request.json();\n const { sessionId, ...payload } = body;\n\n const session = client.agentSessions.attach(sessionId, {\n tools: {\n /* ... */\n },\n });\n const events = session.execute(payload, { signal: request.signal });\n\n return new Response(toSSEStream(events));\n}\n```\n\n### Stop Support\n\nPass an abort signal to allow clients to stop generation:\n\n```typescript\nconst events = session.execute(request, {\n signal: request.signal, // Forward the client's abort signal\n});\n```\n\nWhen the client aborts the request, the signal propagates through to the LLM provider, stopping generation immediately. Any partial content is preserved.\n\n## WebSocket Handling\n\nFor WebSocket integrations, use `handleSocketMessage()` which manages abort controller lifecycle internally:\n\n```typescript\nimport type { SocketMessage } from '@octavus/server-sdk';\n\n// In your socket handler\nconn.on('data', async (rawData: string) => {\n const msg = JSON.parse(rawData);\n\n if (msg.type === 'trigger' || msg.type === 'continue' || msg.type === 'stop') {\n await session.handleSocketMessage(msg as SocketMessage, {\n onEvent: (event) => conn.write(JSON.stringify(event)),\n onFinish: () => sendMessagesUpdate(), // Optional callback after streaming\n });\n }\n});\n```\n\nThe `handleSocketMessage()` method:\n\n- Handles `trigger`, `continue`, and `stop` messages\n- Automatically aborts previous requests when a new one arrives\n- Streams events via the `onEvent` callback\n- Calls `onFinish` after streaming completes (not called if aborted)\n\nSee [Socket Chat Example](/docs/examples/socket-chat) for a complete implementation.\n\n## Session Lifecycle\n\n```mermaid\nflowchart TD\n A[1. CREATE] --> B[2. ATTACH]\n B --> C[3. TRIGGER]\n C --> C\n C --> D[4. RETRIEVE]\n D --> C\n C --> E[5. EXPIRE]\n E --> F{6. RESTORE?}\n F -->|Yes| C\n F -->|No| A\n\n A -.- A1[\"`**client.agentSessions.create()**\n Returns sessionId\n Initializes state`\"]\n\n B -.- B1[\"`**client.agentSessions.attach()**\n Configure tool handlers\n Configure resource watchers`\"]\n\n C -.- C1[\"`**session.execute()**\n Execute request\n Stream events\n Update state`\"]\n\n D -.- D1[\"`**client.agentSessions.getMessages()**\n Get UI-ready messages\n Check session status`\"]\n\n E -.- E1[\"`Sessions expire after\n 24 hours (configurable)`\"]\n\n F -.- F1[\"`**client.agentSessions.restore()**\n Restore from stored messages\n Or create new session`\"]\n```\n\n## Session Expiration\n\nSessions expire after a period of inactivity (default: 24 hours). 
When you call `getMessages()` or `get()`, the response includes a `status` field:\n\n```typescript\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'expired') {\n // Session has expired - restore or create new\n console.log('Session expired:', result.sessionId);\n} else {\n // Session is active\n console.log('Messages:', result.messages.length);\n}\n```\n\n### Response Types\n\n| Status | Type | Description |\n| --------- | --------------------- | ------------------------------------------------------------- |\n| `active` | `UISessionState` | Session is active, includes `messages` array |\n| `expired` | `ExpiredSessionState` | Session expired, includes `sessionId`, `agentId`, `createdAt` |\n\n## Persisting Chat History\n\nTo enable session restoration, store the chat messages in your own database after each interaction:\n\n```typescript\n// After each trigger completes, save messages\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'active') {\n // Store in your database\n await db.chats.update({\n where: { id: chatId },\n data: {\n sessionId: result.sessionId,\n messages: result.messages, // Store UIMessage[] as JSON\n },\n });\n}\n```\n\n> **Best Practice**: Store the full `UIMessage[]` array. This preserves all message parts (text, tool calls, files, etc.) needed for accurate restoration.\n\n## Restoring Sessions\n\nWhen a user returns to your app:\n\n```typescript\n// 1. Load stored data from your database\nconst chat = await db.chats.findUnique({ where: { id: chatId } });\n\n// 2. Check if session is still active\nconst result = await client.agentSessions.getMessages(chat.sessionId);\n\nif (result.status === 'active') {\n // Session is active - use it directly\n return {\n sessionId: result.sessionId,\n messages: result.messages,\n };\n}\n\n// 3. Session expired - restore from stored messages\nif (chat.messages && chat.messages.length > 0) {\n const restored = await client.agentSessions.restore(\n chat.sessionId,\n chat.messages,\n { COMPANY_NAME: 'Acme Corp' }, // Optional: same input as create()\n );\n\n if (restored.restored) {\n // Session restored successfully\n return {\n sessionId: restored.sessionId,\n messages: chat.messages,\n };\n }\n}\n\n// 4. 
Cannot restore - create new session\nconst newSessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n});\n\nreturn {\n sessionId: newSessionId,\n messages: [],\n};\n```\n\n### Restore Response\n\n```typescript\ninterface RestoreSessionResult {\n sessionId: string;\n restored: boolean; // true if restored, false if session was already active\n}\n```\n\n## Complete Example\n\nHere's a complete session management flow:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function getOrCreateSession(chatId: string, agentId: string, input: Record<string, unknown>) {\n // Load existing chat data\n const chat = await db.chats.findUnique({ where: { id: chatId } });\n\n if (chat?.sessionId) {\n // Check session status\n const result = await client.agentSessions.getMessages(chat.sessionId);\n\n if (result.status === 'active') {\n return { sessionId: result.sessionId, messages: result.messages };\n }\n\n // Try to restore expired session\n if (chat.messages?.length > 0) {\n const restored = await client.agentSessions.restore(chat.sessionId, chat.messages, input);\n if (restored.restored) {\n return { sessionId: restored.sessionId, messages: chat.messages };\n }\n }\n }\n\n // Create new session\n const sessionId = await client.agentSessions.create(agentId, input);\n\n // Save to database\n await db.chats.upsert({\n where: { id: chatId },\n create: { id: chatId, sessionId, messages: [] },\n update: { sessionId, messages: [] },\n });\n\n return { sessionId, messages: [] };\n}\n```\n\n## Error Handling\n\n```typescript\nimport { ApiError } from '@octavus/server-sdk';\n\ntry {\n const session = await client.agentSessions.getMessages(sessionId);\n} catch (error) {\n if (error instanceof ApiError) {\n if (error.status === 404) {\n // Session not found or expired\n console.log('Session expired, create a new one');\n } else {\n console.error('API Error:', error.message);\n }\n }\n throw error;\n}\n```\n",
741 | +
content: "\n# Sessions\n\nSessions represent conversations with an agent. They store conversation history, track resources and variables, and enable stateful interactions.\n\n## Creating Sessions\n\nCreate a session by specifying the agent ID and initial input variables:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\n// Create a session with the support-chat agent\nconst sessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n PRODUCT_NAME: 'Widget Pro',\n USER_ID: 'user-123', // Optional inputs\n});\n\nconsole.log('Session created:', sessionId);\n```\n\n## Getting Session Messages\n\nTo restore a conversation on page load, use `getMessages()` to retrieve UI-ready messages:\n\n```typescript\nconst session = await client.agentSessions.getMessages(sessionId);\n\nconsole.log({\n sessionId: session.sessionId,\n agentId: session.agentId,\n messages: session.messages.length, // UIMessage[] ready for frontend\n});\n```\n\nThe returned messages can be passed directly to the client SDK's `initialMessages` option.\n\n### UISessionState Interface\n\n```typescript\ninterface UISessionState {\n sessionId: string;\n agentId: string;\n messages: UIMessage[]; // UI-ready conversation history\n}\n```\n\n## Full Session State (Debug)\n\nFor debugging or internal use, you can retrieve the complete session state including all variables and internal message format:\n\n```typescript\nconst state = await client.agentSessions.get(sessionId);\n\nconsole.log({\n id: state.id,\n agentId: state.agentId,\n messages: state.messages.length, // ChatMessage[] (internal format)\n resources: state.resources,\n variables: state.variables,\n createdAt: state.createdAt,\n updatedAt: state.updatedAt,\n});\n```\n\n> **Note**: Use `getMessages()` for client-facing code. The `get()` method returns internal message format that includes hidden content not intended for end users.\n\n## Attaching to Sessions\n\nTo trigger actions on a session, you need to attach to it first:\n\n```typescript\nconst session = client.agentSessions.attach(sessionId, {\n tools: {\n // Tool handlers (see Tools documentation)\n },\n resources: [\n // Resource watchers (optional)\n ],\n});\n```\n\n## Executing Requests\n\nOnce attached, execute requests on the session using `execute()`:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\n// execute() handles both triggers and client tool continuations\nconst events = session.execute(\n { type: 'trigger', triggerName: 'user-message', input: { USER_MESSAGE: 'Hello!' 
} },\n { signal: request.signal },\n);\n\n// Convert to SSE stream for HTTP responses\nreturn new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n});\n```\n\n### Request Types\n\nThe `execute()` method accepts a discriminated union:\n\n```typescript\ntype SessionRequest = TriggerRequest | ContinueRequest;\n\n// Start a new conversation turn\ninterface TriggerRequest {\n type: 'trigger';\n triggerName: string;\n input?: Record<string, unknown>;\n}\n\n// Continue after client-side tool handling\ninterface ContinueRequest {\n type: 'continue';\n executionId: string;\n toolResults: ToolResult[];\n}\n```\n\nThis makes it easy to pass requests through from the client:\n\n```typescript\n// Simple passthrough from HTTP request body\nexport async function POST(request: Request) {\n const body = await request.json();\n const { sessionId, ...payload } = body;\n\n const session = client.agentSessions.attach(sessionId, {\n tools: {\n /* ... */\n },\n });\n const events = session.execute(payload, { signal: request.signal });\n\n return new Response(toSSEStream(events));\n}\n```\n\n### Stop Support\n\nPass an abort signal to allow clients to stop generation:\n\n```typescript\nconst events = session.execute(request, {\n signal: request.signal, // Forward the client's abort signal\n});\n```\n\nWhen the client aborts the request, the signal propagates through to the LLM provider, stopping generation immediately. Any partial content is preserved.\n\n## WebSocket Handling\n\nFor WebSocket integrations, use `handleSocketMessage()` which manages abort controller lifecycle internally:\n\n```typescript\nimport type { SocketMessage } from '@octavus/server-sdk';\n\n// In your socket handler\nconn.on('data', async (rawData: string) => {\n const msg = JSON.parse(rawData);\n\n if (msg.type === 'trigger' || msg.type === 'continue' || msg.type === 'stop') {\n await session.handleSocketMessage(msg as SocketMessage, {\n onEvent: (event) => conn.write(JSON.stringify(event)),\n onFinish: async () => {\n // Fetch and persist messages to your database for restoration\n },\n });\n }\n});\n```\n\nThe `handleSocketMessage()` method:\n\n- Handles `trigger`, `continue`, and `stop` messages\n- Automatically aborts previous requests when a new one arrives\n- Streams events via the `onEvent` callback\n- Calls `onFinish` after streaming completes (not called if aborted)\n\nSee [Socket Chat Example](/docs/examples/socket-chat) for a complete implementation.\n\n## Session Lifecycle\n\n```mermaid\nflowchart TD\n A[1. CREATE] --> B[2. ATTACH]\n B --> C[3. TRIGGER]\n C --> C\n C --> D[4. RETRIEVE]\n D --> C\n C --> E[5. EXPIRE]\n E --> F{6. RESTORE?}\n F -->|Yes| C\n F -->|No| A\n\n A -.- A1[\"`**client.agentSessions.create()**\n Returns sessionId\n Initializes state`\"]\n\n B -.- B1[\"`**client.agentSessions.attach()**\n Configure tool handlers\n Configure resource watchers`\"]\n\n C -.- C1[\"`**session.execute()**\n Execute request\n Stream events\n Update state`\"]\n\n D -.- D1[\"`**client.agentSessions.getMessages()**\n Get UI-ready messages\n Check session status`\"]\n\n E -.- E1[\"`Sessions expire after\n 24 hours (configurable)`\"]\n\n F -.- F1[\"`**client.agentSessions.restore()**\n Restore from stored messages\n Or create new session`\"]\n```\n\n## Session Expiration\n\nSessions expire after a period of inactivity (default: 24 hours). 
When you call `getMessages()` or `get()`, the response includes a `status` field:\n\n```typescript\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'expired') {\n // Session has expired - restore or create new\n console.log('Session expired:', result.sessionId);\n} else {\n // Session is active\n console.log('Messages:', result.messages.length);\n}\n```\n\n### Response Types\n\n| Status | Type | Description |\n| --------- | --------------------- | ------------------------------------------------------------- |\n| `active` | `UISessionState` | Session is active, includes `messages` array |\n| `expired` | `ExpiredSessionState` | Session expired, includes `sessionId`, `agentId`, `createdAt` |\n\n## Persisting Chat History\n\nTo enable session restoration, store the chat messages in your own database after each interaction:\n\n```typescript\n// After each trigger completes, save messages\nconst result = await client.agentSessions.getMessages(sessionId);\n\nif (result.status === 'active') {\n // Store in your database\n await db.chats.update({\n where: { id: chatId },\n data: {\n sessionId: result.sessionId,\n messages: result.messages, // Store UIMessage[] as JSON\n },\n });\n}\n```\n\n> **Best Practice**: Store the full `UIMessage[]` array. This preserves all message parts (text, tool calls, files, etc.) needed for accurate restoration.\n\n## Restoring Sessions\n\nWhen a user returns to your app:\n\n```typescript\n// 1. Load stored data from your database\nconst chat = await db.chats.findUnique({ where: { id: chatId } });\n\n// 2. Check if session is still active\nconst result = await client.agentSessions.getMessages(chat.sessionId);\n\nif (result.status === 'active') {\n // Session is active - use it directly\n return {\n sessionId: result.sessionId,\n messages: result.messages,\n };\n}\n\n// 3. Session expired - restore from stored messages\nif (chat.messages && chat.messages.length > 0) {\n const restored = await client.agentSessions.restore(\n chat.sessionId,\n chat.messages,\n { COMPANY_NAME: 'Acme Corp' }, // Optional: same input as create()\n );\n\n if (restored.restored) {\n // Session restored successfully\n return {\n sessionId: restored.sessionId,\n messages: chat.messages,\n };\n }\n}\n\n// 4. 
Cannot restore - create new session\nconst newSessionId = await client.agentSessions.create('support-chat', {\n COMPANY_NAME: 'Acme Corp',\n});\n\nreturn {\n sessionId: newSessionId,\n messages: [],\n};\n```\n\n### Restore Response\n\n```typescript\ninterface RestoreSessionResult {\n sessionId: string;\n restored: boolean; // true if restored, false if session was already active\n}\n```\n\n## Complete Example\n\nHere's a complete session management flow:\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: process.env.OCTAVUS_API_URL!,\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function getOrCreateSession(chatId: string, agentId: string, input: Record<string, unknown>) {\n // Load existing chat data\n const chat = await db.chats.findUnique({ where: { id: chatId } });\n\n if (chat?.sessionId) {\n // Check session status\n const result = await client.agentSessions.getMessages(chat.sessionId);\n\n if (result.status === 'active') {\n return { sessionId: result.sessionId, messages: result.messages };\n }\n\n // Try to restore expired session\n if (chat.messages?.length > 0) {\n const restored = await client.agentSessions.restore(chat.sessionId, chat.messages, input);\n if (restored.restored) {\n return { sessionId: restored.sessionId, messages: chat.messages };\n }\n }\n }\n\n // Create new session\n const sessionId = await client.agentSessions.create(agentId, input);\n\n // Save to database\n await db.chats.upsert({\n where: { id: chatId },\n create: { id: chatId, sessionId, messages: [] },\n update: { sessionId, messages: [] },\n });\n\n return { sessionId, messages: [] };\n}\n```\n\n## Error Handling\n\n```typescript\nimport { ApiError } from '@octavus/server-sdk';\n\ntry {\n const session = await client.agentSessions.getMessages(sessionId);\n} catch (error) {\n if (error instanceof ApiError) {\n if (error.status === 404) {\n // Session not found or expired\n console.log('Session expired, create a new one');\n } else {\n console.error('API Error:', error.message);\n }\n }\n throw error;\n}\n```\n",
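The updated `onFinish` callback in the WebSocket section only hints at the persistence step. One possible shape for it, reusing the `getMessages()` pattern from the Persisting Chat History section of the same page (`db` stands in for your own persistence layer, as in the page's other examples):

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: process.env.OCTAVUS_API_URL!,
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Placeholder persistence layer, matching the other examples on this page.
declare const db: { chats: { update(args: unknown): Promise<unknown> } };

// One possible body for the onFinish callback shown above.
async function persistMessages(chatId: string, sessionId: string) {
  const result = await client.agentSessions.getMessages(sessionId);

  if (result.status === 'active') {
    await db.chats.update({
      where: { id: chatId },
      data: { sessionId: result.sessionId, messages: result.messages },
    });
  }
}
```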
742 742 |
excerpt: "Sessions Sessions represent conversations with an agent. They store conversation history, track resources and variables, and enable stateful interactions. Creating Sessions Create a session by...",
743 743 |
order: 2
744 744 |
},
@@ -765,7 +765,7 @@ var sections_default = [
765 765 |
section: "server-sdk",
766 766 |
title: "CLI",
767 767 |
description: "Command-line interface for validating and syncing agent definitions.",
768 | -
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.
768 | +
content: '\n# Octavus CLI\n\nThe `@octavus/cli` package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform.\n\n**Current version:** `2.5.0`\n\n## Installation\n\n```bash\nnpm install --save-dev @octavus/cli\n```\n\n## Configuration\n\nThe CLI requires an API key with the **Agents** permission.\n\n### Environment Variables\n\n| Variable | Description |\n| --------------------- | ---------------------------------------------- |\n| `OCTAVUS_CLI_API_KEY` | API key with "Agents" permission (recommended) |\n| `OCTAVUS_API_KEY` | Fallback if `OCTAVUS_CLI_API_KEY` not set |\n| `OCTAVUS_API_URL` | Optional, defaults to `https://octavus.ai` |\n\n### Two-Key Strategy (Recommended)\n\nFor production deployments, use separate API keys with minimal permissions:\n\n```bash\n# CI/CD or .env.local (not committed)\nOCTAVUS_CLI_API_KEY=oct_sk_... # "Agents" permission only\n\n# Production .env\nOCTAVUS_API_KEY=oct_sk_... # "Sessions" permission only\n```\n\nThis ensures production servers only have session permissions (smaller blast radius if leaked), while agent management is restricted to development/CI environments.\n\n### Multiple Environments\n\nUse separate Octavus projects for staging and production, each with their own API keys. The `--env` flag lets you load different environment files:\n\n```bash\n# Local development (default: .env)\noctavus sync ./agents/my-agent\n\n# Staging project\noctavus --env .env.staging sync ./agents/my-agent\n\n# Production project\noctavus --env .env.production sync ./agents/my-agent\n```\n\nExample environment files:\n\n```bash\n# .env.staging (syncs to your staging project)\nOCTAVUS_CLI_API_KEY=oct_sk_staging_project_key...\n\n# .env.production (syncs to your production project)\nOCTAVUS_CLI_API_KEY=oct_sk_production_project_key...\n```\n\nEach project has its own agents, so you\'ll get different agent IDs per environment.\n\n## Global Options\n\n| Option | Description |\n| -------------- | ------------------------------------------------------- |\n| `--env <file>` | Load environment from a specific file (default: `.env`) |\n| `--help` | Show help |\n| `--version` | Show version |\n\n## Commands\n\n### `octavus sync <path>`\n\nSync an agent definition to the platform. Creates the agent if it doesn\'t exist, or updates it if it does.\n\n```bash\noctavus sync ./agents/my-agent\n```\n\n**Options:**\n\n- `--json` \u2014 Output as JSON (for CI/CD parsing)\n- `--quiet` \u2014 Suppress non-essential output\n\n**Example output:**\n\n```\n\u2139 Reading agent from ./agents/my-agent...\n\u2139 Syncing support-chat...\n\u2713 Created: support-chat\n Agent ID: clxyz123abc456\n```\n\n### `octavus validate <path>`\n\nValidate an agent definition without saving. 
Useful for CI/CD pipelines.\n\n```bash\noctavus validate ./agents/my-agent\n```\n\n**Exit codes:**\n\n- `0` \u2014 Validation passed\n- `1` \u2014 Validation errors\n- `2` \u2014 Configuration errors (missing API key, etc.)\n\n### `octavus list`\n\nList all agents in your project.\n\n```bash\noctavus list\n```\n\n**Example output:**\n\n```\nSLUG NAME FORMAT ID\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\nsupport-chat Support Chat Agent interactive clxyz123abc456\n\n1 agent(s)\n```\n\n### `octavus get <slug>`\n\nGet details about a specific agent by its slug.\n\n```bash\noctavus get support-chat\n```\n\n## Agent Directory Structure\n\nThe CLI expects agent definitions in a specific directory structure:\n\n```\nmy-agent/\n\u251C\u2500\u2500 settings.json # Required: Agent metadata\n\u251C\u2500\u2500 protocol.yaml # Required: Agent protocol\n\u2514\u2500\u2500 prompts/ # Optional: Prompt templates\n \u251C\u2500\u2500 system.md\n \u2514\u2500\u2500 user-message.md\n```\n\n### settings.json\n\n```json\n{\n "slug": "my-agent",\n "name": "My Agent",\n "description": "A helpful assistant",\n "format": "interactive"\n}\n```\n\n### protocol.yaml\n\nSee the [Protocol documentation](/docs/protocol/overview) for details on protocol syntax.\n\n## CI/CD Integration\n\n### GitHub Actions\n\n```yaml\nname: Validate and Sync Agents\n\non:\n push:\n branches: [main]\n paths:\n - \'agents/**\'\n\njobs:\n sync:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n\n - uses: actions/setup-node@v4\n with:\n node-version: \'22\'\n\n - run: npm install\n\n - name: Validate agent\n run: npx octavus validate ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n\n - name: Sync agent\n run: npx octavus sync ./agents/support-chat\n env:\n OCTAVUS_CLI_API_KEY: ${{ secrets.OCTAVUS_CLI_API_KEY }}\n```\n\n### Package.json Scripts\n\nAdd sync scripts to your `package.json`:\n\n```json\n{\n "scripts": {\n "agents:validate": "octavus validate ./agents/my-agent",\n "agents:sync": "octavus sync ./agents/my-agent"\n },\n "devDependencies": {\n "@octavus/cli": "^0.1.0"\n }\n}\n```\n\n## Workflow\n\nThe recommended workflow for managing agents:\n\n1. **Define agent locally** \u2014 Create `settings.json`, `protocol.yaml`, and prompts\n2. **Validate** \u2014 Run `octavus validate ./my-agent` to check for errors\n3. **Sync** \u2014 Run `octavus sync ./my-agent` to push to platform\n4. **Store agent ID** \u2014 Save the output ID in an environment variable\n5. **Use in app** \u2014 Read the ID from env and pass to `client.agentSessions.create()`\n\n```bash\n# After syncing: octavus sync ./agents/support-chat\n# Output: Agent ID: clxyz123abc456\n\n# Add to your .env file\nOCTAVUS_SUPPORT_AGENT_ID=clxyz123abc456\n```\n\n```typescript\nconst agentId = process.env.OCTAVUS_SUPPORT_AGENT_ID;\n\nconst sessionId = await client.agentSessions.create(agentId, {\n COMPANY_NAME: \'Acme Corp\',\n});\n```\n',
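The CLI page documents exit codes for `octavus validate` but shows only a YAML pipeline. A small Node-based CI gate built on those documented exit codes; the log messages are illustrative and no particular CLI output format is assumed:

```typescript
import { spawnSync } from 'node:child_process';

// CI gate around `octavus validate`, relying only on the documented exit codes:
// 0 = validation passed, 1 = validation errors, 2 = configuration errors.
const result = spawnSync('npx', ['octavus', 'validate', './agents/support-chat'], {
  stdio: 'inherit',
  env: process.env,
});

if (result.status === 1) {
  console.error('Agent definition failed validation.');
  process.exit(1);
}
if (result.status === 2) {
  console.error('CLI is misconfigured (check OCTAVUS_CLI_API_KEY).');
  process.exit(1);
}
```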
769 769 |
excerpt: "Octavus CLI The package provides a command-line interface for validating and syncing agent definitions from your local filesystem to the Octavus platform. Current version: Installation ...",
770 770 |
order: 5
771 771 |
},
@@ -774,7 +774,7 @@ var sections_default = [
774 774 |
section: "server-sdk",
775 775 |
title: "Workers",
776 776 |
description: "Executing worker agents with the Server SDK.",
777 | -
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string,
777 | +
content: "\n# Workers API\n\nThe `WorkersApi` enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value.\n\n## Basic Usage\n\n```typescript\nimport { OctavusClient } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: 'your-api-key',\n});\n\n// Execute a worker\nconst events = client.workers.execute(agentId, {\n TOPIC: 'AI safety',\n DEPTH: 'detailed',\n});\n\n// Process events\nfor await (const event of events) {\n if (event.type === 'worker-start') {\n console.log(`Worker ${event.workerSlug} started`);\n }\n if (event.type === 'text-delta') {\n process.stdout.write(event.delta);\n }\n if (event.type === 'worker-result') {\n console.log('Output:', event.output);\n }\n}\n```\n\n## WorkersApi Reference\n\n### execute()\n\nExecute a worker and stream the response.\n\n```typescript\nasync *execute(\n agentId: string,\n input: Record<string, unknown>,\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\n**Parameters:**\n\n| Parameter | Type | Description |\n| --------- | ------------------------- | --------------------------- |\n| `agentId` | `string` | The worker agent ID |\n| `input` | `Record<string, unknown>` | Input values for the worker |\n| `options` | `WorkerExecuteOptions` | Optional configuration |\n\n**Options:**\n\n```typescript\ninterface WorkerExecuteOptions {\n /** Tool handlers for server-side tool execution */\n tools?: ToolHandlers;\n /** Abort signal to cancel the execution */\n signal?: AbortSignal;\n}\n```\n\n### continue()\n\nContinue execution after client-side tool handling.\n\n```typescript\nasync *continue(\n agentId: string,\n executionId: string,\n toolResults: ToolResult[],\n options?: WorkerExecuteOptions\n): AsyncGenerator<StreamEvent>\n```\n\nUse this when the worker has tools without server-side handlers. 
The execution pauses with a `client-tool-request` event, you execute the tools, then call `continue()` to resume.\n\n## Tool Handlers\n\nProvide tool handlers to execute tools server-side:\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: 'AI safety' },\n {\n tools: {\n 'web-search': async (args) => {\n const results = await searchWeb(args.query);\n return results;\n },\n 'get-user-data': async (args) => {\n return await db.users.findById(args.userId);\n },\n },\n },\n);\n```\n\nTools defined in the worker protocol but not provided as handlers become client tools \u2014 the execution pauses and emits a `client-tool-request` event.\n\n## Stream Events\n\nWorkers emit standard stream events plus worker-specific events.\n\n### Worker Events\n\n```typescript\n// Worker started\n{\n type: 'worker-start',\n workerId: string, // Unique ID (also used as session ID for debug)\n workerSlug: string, // The worker's slug\n description?: string, // Display description for UI\n}\n\n// Worker completed\n{\n type: 'worker-result',\n workerId: string,\n output?: unknown, // The worker's output value\n error?: string, // Error message if worker failed\n}\n```\n\n### Common Events\n\n| Event | Description |\n| ----------------------- | --------------------------- |\n| `start` | Execution started |\n| `finish` | Execution completed |\n| `text-start` | Text generation started |\n| `text-delta` | Text chunk received |\n| `text-end` | Text generation ended |\n| `block-start` | Step started |\n| `block-end` | Step completed |\n| `tool-input-available` | Tool arguments ready |\n| `tool-output-available` | Tool result ready |\n| `client-tool-request` | Client tools need execution |\n| `error` | Error occurred |\n\n## Extracting Output\n\nTo get just the worker's output value:\n\n```typescript\nasync function executeWorker(\n client: OctavusClient,\n agentId: string,\n input: Record<string, unknown>,\n): Promise<unknown> {\n const events = client.workers.execute(agentId, input);\n\n for await (const event of events) {\n if (event.type === 'worker-result') {\n if (event.error) {\n throw new Error(event.error);\n }\n return event.output;\n }\n }\n\n return undefined;\n}\n\n// Usage\nconst analysis = await executeWorker(client, agentId, { TOPIC: 'AI' });\n```\n\n## Client Tool Continuation\n\nWhen workers have tools without handlers, execution pauses:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n if (event.type === 'client-tool-request') {\n // Execute tools client-side\n const results = await executeClientTools(event.toolCalls);\n\n // Continue execution\n for await (const ev of client.workers.continue(agentId, event.executionId, results)) {\n // Handle remaining events\n }\n break;\n }\n}\n```\n\nThe `client-tool-request` event includes:\n\n```typescript\n{\n type: 'client-tool-request',\n executionId: string, // Pass to continue()\n toolCalls: [{\n toolCallId: string,\n toolName: string,\n args: Record<string, unknown>,\n }],\n}\n```\n\n## Streaming to HTTP Response\n\nConvert worker events to an SSE stream:\n\n```typescript\nimport { toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { agentId, input } = await request.json();\n\n const events = client.workers.execute(agentId, input, {\n tools: {\n search: async (args) => await search(args.query),\n },\n });\n\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n## Cancellation\n\nUse an 
abort signal to cancel execution:\n\n```typescript\nconst controller = new AbortController();\n\n// Cancel after 30 seconds\nsetTimeout(() => controller.abort(), 30000);\n\nconst events = client.workers.execute(agentId, input, {\n signal: controller.signal,\n});\n\ntry {\n for await (const event of events) {\n // Process events\n }\n} catch (error) {\n if (error.name === 'AbortError') {\n console.log('Worker cancelled');\n }\n}\n```\n\n## Error Handling\n\nErrors can occur at different levels:\n\n```typescript\nfor await (const event of client.workers.execute(agentId, input)) {\n // Stream-level error event\n if (event.type === 'error') {\n console.error(`Error: ${event.message}`);\n console.error(`Type: ${event.errorType}`);\n console.error(`Retryable: ${event.retryable}`);\n }\n\n // Worker-level error in result\n if (event.type === 'worker-result' && event.error) {\n console.error(`Worker failed: ${event.error}`);\n }\n}\n```\n\nError types include:\n\n| Type | Description |\n| ------------------ | --------------------- |\n| `validation_error` | Invalid input |\n| `not_found_error` | Worker not found |\n| `provider_error` | LLM provider error |\n| `tool_error` | Tool execution failed |\n| `execution_error` | Worker step failed |\n\n## Full Example\n\n```typescript\nimport { OctavusClient, type StreamEvent } from '@octavus/server-sdk';\n\nconst client = new OctavusClient({\n baseUrl: 'https://octavus.ai',\n apiKey: process.env.OCTAVUS_API_KEY!,\n});\n\nasync function runResearchWorker(topic: string) {\n console.log(`Researching: ${topic}\\n`);\n\n const events = client.workers.execute(\n 'research-assistant-id',\n {\n TOPIC: topic,\n DEPTH: 'detailed',\n },\n {\n tools: {\n 'web-search': async ({ query }) => {\n console.log(`Searching: ${query}`);\n return await performWebSearch(query);\n },\n },\n },\n );\n\n let output: unknown;\n\n for await (const event of events) {\n switch (event.type) {\n case 'worker-start':\n console.log(`Started: ${event.workerSlug}`);\n break;\n\n case 'block-start':\n console.log(`Step: ${event.blockName}`);\n break;\n\n case 'text-delta':\n process.stdout.write(event.delta);\n break;\n\n case 'worker-result':\n if (event.error) {\n throw new Error(event.error);\n }\n output = event.output;\n break;\n\n case 'error':\n throw new Error(event.message);\n }\n }\n\n console.log('\\n\\nResearch complete!');\n return output;\n}\n\n// Run the worker\nconst result = await runResearchWorker('AI safety best practices');\nconsole.log('Result:', result);\n```\n\n## Next Steps\n\n- [Workers Protocol](/docs/protocol/workers) \u2014 Worker protocol reference\n- [Streaming](/docs/server-sdk/streaming) \u2014 Understanding stream events\n- [Tools](/docs/server-sdk/tools) \u2014 Tool handler patterns\n",
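The Workers page demonstrates `execute()` and `continue()` separately; they can be folded into one loop that resumes across client-tool pauses. A sketch built only on the documented signatures and event shapes; the helper itself is not part of the SDK, and the `PendingToolCall` and `ToolResult` import locations are assumptions:

```typescript
import {
  OctavusClient,
  type PendingToolCall, // assumption: exported from the server SDK
  type ToolResult, // assumption: exported from the server SDK
} from '@octavus/server-sdk';

// Sketch: run a worker to completion, resuming over any client-tool pauses,
// and return the final output value.
async function runWorkerToCompletion(
  client: OctavusClient,
  agentId: string,
  input: Record<string, unknown>,
  runClientTools: (toolCalls: PendingToolCall[]) => Promise<ToolResult[]>,
): Promise<unknown> {
  let events = client.workers.execute(agentId, input);

  while (true) {
    let continued = false;

    for await (const event of events) {
      if (event.type === 'client-tool-request') {
        // Execute the requested tools, then resume from the paused execution.
        const results = await runClientTools(event.toolCalls);
        events = client.workers.continue(agentId, event.executionId, results);
        continued = true;
        break; // switch to iterating the continuation stream
      }
      if (event.type === 'worker-result') {
        if (event.error) throw new Error(event.error);
        return event.output;
      }
      if (event.type === 'error') {
        throw new Error(event.message);
      }
    }

    if (!continued) return undefined; // stream ended without a worker-result
  }
}
```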
excerpt: "Workers API The enables executing worker agents from your server. Workers are task-based agents that run steps sequentially and return an output value. Basic Usage WorkersApi Reference execute()...",
order: 6
}
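The continuation pattern documented in the Workers API page above can be folded into one loop. The sketch below assumes the `client.workers.execute()` / `continue()` API described there; `runWorkerToCompletion` and `runClientTool` are hypothetical helpers, and the shape of the results array passed to `continue()` (objects keyed by `toolCallId`) is an assumption to check against the SDK types.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Hypothetical dispatcher for tools that must run outside the worker.
async function runClientTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  throw new Error(`No client-side handler registered for "${name}"`);
}

// Run a worker to completion, resuming automatically whenever it pauses on a
// client-tool-request, and return the worker's output value.
async function runWorkerToCompletion(
  agentId: string,
  input: Record<string, unknown>,
): Promise<unknown> {
  let events = client.workers.execute(agentId, input);

  while (true) {
    let resumed = false;

    for await (const event of events) {
      if (event.type === 'client-tool-request') {
        // Execute each requested tool; the result shape here is an assumption
        // (toolCallId plus output), not a documented contract.
        const results = await Promise.all(
          event.toolCalls.map(async (call) => ({
            toolCallId: call.toolCallId,
            output: await runClientTool(call.toolName, call.args),
          })),
        );
        events = client.workers.continue(agentId, event.executionId, results);
        resumed = true;
        break;
      }

      if (event.type === 'error') throw new Error(event.message);

      if (event.type === 'worker-result') {
        if (event.error) throw new Error(event.error);
        return event.output;
      }
    }

    if (!resumed) return undefined;
  }
}
```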
@@ -791,7 +791,7 @@ var sections_default = [
section: "client-sdk",
title: "Overview",
description: "Introduction to the Octavus Client SDKs for building chat interfaces.",
-
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.2.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.2.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
+
content: "\n# Client SDK Overview\n\nOctavus provides two packages for frontend integration:\n\n| Package | Purpose | Use When |\n| --------------------- | ------------------------ | ----------------------------------------------------- |\n| `@octavus/react` | React hooks and bindings | Building React applications |\n| `@octavus/client-sdk` | Framework-agnostic core | Using Vue, Svelte, vanilla JS, or custom integrations |\n\n**Most users should install `@octavus/react`** \u2014 it includes everything from `@octavus/client-sdk` plus React-specific hooks.\n\n## Installation\n\n### React Applications\n\n```bash\nnpm install @octavus/react\n```\n\n**Current version:** `2.5.0`\n\n### Other Frameworks\n\n```bash\nnpm install @octavus/client-sdk\n```\n\n**Current version:** `2.5.0`\n\n## Transport Pattern\n\nThe Client SDK uses a **transport abstraction** to handle communication with your backend. This gives you flexibility in how events are delivered:\n\n| Transport | Use Case | Docs |\n| ----------------------- | -------------------------------------------- | ----------------------------------------------------- |\n| `createHttpTransport` | HTTP/SSE (Next.js, Express, etc.) | [HTTP Transport](/docs/client-sdk/http-transport) |\n| `createSocketTransport` | WebSocket, SockJS, or other socket protocols | [Socket Transport](/docs/client-sdk/socket-transport) |\n\nWhen the transport changes (e.g., when `sessionId` changes), the `useOctavusChat` hook automatically reinitializes with the new transport.\n\n> **Recommendation**: Use HTTP transport unless you specifically need WebSocket features (custom real-time events, Meteor/Phoenix, etc.).\n\n## React Usage\n\nThe `useOctavusChat` hook provides state management and streaming for React applications:\n\n```tsx\nimport { useMemo } from 'react';\nimport { useOctavusChat, createHttpTransport, type UIMessage } from '@octavus/react';\n\nfunction Chat({ sessionId }: { sessionId: string }) {\n // Create a stable transport instance (memoized on sessionId)\n const transport = useMemo(\n () =>\n createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n [sessionId],\n );\n\n const { messages, status, send } = useOctavusChat({ transport });\n\n const sendMessage = async (text: string) => {\n await send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n };\n\n return (\n <div>\n {messages.map((msg) => (\n <MessageBubble key={msg.id} message={msg} />\n ))}\n </div>\n );\n}\n\nfunction MessageBubble({ message }: { message: UIMessage }) {\n return (\n <div>\n {message.parts.map((part, i) => {\n if (part.type === 'text') {\n return <p key={i}>{part.text}</p>;\n }\n return null;\n })}\n </div>\n );\n}\n```\n\n## Framework-Agnostic Usage\n\nThe `OctavusChat` class can be used with any framework or vanilla JavaScript:\n\n```typescript\nimport { OctavusChat, createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n\nconst chat = new OctavusChat({ transport });\n\n// Subscribe to state changes\nconst unsubscribe = chat.subscribe(() => {\n console.log('Messages:', chat.messages);\n console.log('Status:', 
chat.status);\n // Update your UI here\n});\n\n// Send a message\nawait chat.send('user-message', { USER_MESSAGE: 'Hello' }, { userMessage: { content: 'Hello' } });\n\n// Cleanup when done\nunsubscribe();\n```\n\n## Key Features\n\n### Unified Send Function\n\nThe `send` function handles both user message display and agent triggering in one call:\n\n```tsx\nconst { send } = useOctavusChat({ transport });\n\n// Add user message to UI and trigger agent\nawait send('user-message', { USER_MESSAGE: text }, { userMessage: { content: text } });\n\n// Trigger without adding a user message (e.g., button click)\nawait send('request-human');\n```\n\n### Message Parts\n\nMessages contain ordered `parts` for rich content:\n\n```tsx\nconst { messages } = useOctavusChat({ transport });\n\n// Each message has typed parts\nmessage.parts.map((part) => {\n switch (part.type) {\n case 'text': // Text content\n case 'reasoning': // Extended reasoning/thinking\n case 'tool-call': // Tool execution\n case 'operation': // Internal operations (set-resource, etc.)\n }\n});\n```\n\n### Status Tracking\n\n```tsx\nconst { status } = useOctavusChat({ transport });\n\n// status: 'idle' | 'streaming' | 'error' | 'awaiting-input'\n// 'awaiting-input' occurs when interactive client tools need user action\n```\n\n### Stop Streaming\n\n```tsx\nconst { stop } = useOctavusChat({ transport });\n\n// Stop current stream and finalize message\nstop();\n```\n\n## Hook Reference (React)\n\n### useOctavusChat\n\n```typescript\nfunction useOctavusChat(options: OctavusChatOptions): UseOctavusChatReturn;\n\ninterface OctavusChatOptions {\n // Required: Transport for streaming events\n transport: Transport;\n\n // Optional: Function to request upload URLs for file uploads\n requestUploadUrls?: (\n files: { filename: string; mediaType: string; size: number }[],\n ) => Promise<UploadUrlsResponse>;\n\n // Optional: Client-side tool handlers\n // - Function: executes automatically and returns result\n // - 'interactive': appears in pendingClientTools for user input\n clientTools?: Record<string, ClientToolHandler>;\n\n // Optional: Pre-populate with existing messages (session restore)\n initialMessages?: UIMessage[];\n\n // Optional: Callbacks\n onError?: (error: OctavusError) => void; // Structured error with type, source, retryable\n onFinish?: () => void;\n onStop?: () => void; // Called when user stops generation\n onResourceUpdate?: (name: string, value: unknown) => void;\n}\n\ninterface UseOctavusChatReturn {\n // State\n messages: UIMessage[];\n status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n error: OctavusError | null; // Structured error with type, source, retryable\n\n // Connection (socket transport only - undefined for HTTP)\n connectionState: ConnectionState | undefined; // 'disconnected' | 'connecting' | 'connected' | 'error'\n connectionError: Error | undefined;\n\n // Client tools (interactive tools awaiting user input)\n pendingClientTools: Record<string, InteractiveTool[]>; // Keyed by tool name\n\n // Actions\n send: (\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ) => Promise<void>;\n stop: () => void;\n\n // Connection management (socket transport only - undefined for HTTP)\n connect: (() => Promise<void>) | undefined;\n disconnect: (() => void) | undefined;\n\n // File uploads (requires requestUploadUrls)\n uploadFiles: (\n files: FileList | File[],\n onProgress?: (fileIndex: number, progress: number) => void,\n ) => 
Promise<FileReference[]>;\n}\n\ninterface UserMessageInput {\n content?: string;\n files?: FileList | File[] | FileReference[];\n}\n```\n\n## Transport Reference\n\n### createHttpTransport\n\nCreates an HTTP/SSE transport using native `fetch()`:\n\n```typescript\nimport { createHttpTransport } from '@octavus/react';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\n### createSocketTransport\n\nCreates a WebSocket/SockJS transport for real-time connections:\n\n```typescript\nimport { createSocketTransport } from '@octavus/react';\n\nconst transport = createSocketTransport({\n connect: () =>\n new Promise((resolve, reject) => {\n const ws = new WebSocket(`wss://api.example.com/stream?sessionId=${sessionId}`);\n ws.onopen = () => resolve(ws);\n ws.onerror = () => reject(new Error('Connection failed'));\n }),\n});\n```\n\nSocket transport provides additional connection management:\n\n```typescript\n// Access connection state directly\ntransport.connectionState; // 'disconnected' | 'connecting' | 'connected' | 'error'\n\n// Subscribe to state changes\ntransport.onConnectionStateChange((state, error) => {\n /* ... */\n});\n\n// Eager connection (instead of lazy on first send)\nawait transport.connect();\n\n// Manual disconnect\ntransport.disconnect();\n```\n\nFor detailed WebSocket/SockJS usage including custom events, reconnection patterns, and server-side implementation, see [Socket Transport](/docs/client-sdk/socket-transport).\n\n## Class Reference (Framework-Agnostic)\n\n### OctavusChat\n\n```typescript\nclass OctavusChat {\n constructor(options: OctavusChatOptions);\n\n // State (read-only)\n readonly messages: UIMessage[];\n readonly status: ChatStatus; // 'idle' | 'streaming' | 'error' | 'awaiting-input'\n readonly error: OctavusError | null; // Structured error\n readonly pendingClientTools: Record<string, InteractiveTool[]>; // Interactive tools\n\n // Actions\n send(\n triggerName: string,\n input?: Record<string, unknown>,\n options?: { userMessage?: UserMessageInput },\n ): Promise<void>;\n stop(): void;\n\n // Subscription\n subscribe(callback: () => void): () => void; // Returns unsubscribe function\n}\n```\n\n## Next Steps\n\n- [HTTP Transport](/docs/client-sdk/http-transport) \u2014 HTTP/SSE integration (recommended)\n- [Socket Transport](/docs/client-sdk/socket-transport) \u2014 WebSocket and SockJS integration\n- [Messages](/docs/client-sdk/messages) \u2014 Working with message state\n- [Streaming](/docs/client-sdk/streaming) \u2014 Building streaming UIs\n- [Client Tools](/docs/client-sdk/client-tools) \u2014 Interactive browser-side tool handling\n- [Operations](/docs/client-sdk/execution-blocks) \u2014 Showing agent progress\n- [Error Handling](/docs/client-sdk/error-handling) \u2014 Handling errors with type guards\n- [File Uploads](/docs/client-sdk/file-uploads) \u2014 Uploading images and documents\n- [Examples](/docs/examples/overview) \u2014 Complete working examples\n",
excerpt: "Client SDK Overview Octavus provides two packages for frontend integration: | Package | Purpose | Use When | |...",
order: 1
},
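Both transport examples in the Client SDK overview above post to an application route such as `/api/trigger`. A minimal sketch of that route using the Server SDK's `agentSessions.attach()` and `toSSEStream()` follows; the route path, the body shape forwarded by the transport (`sessionId`, `triggerName`, `input`), and the `lookupAccount` helper are assumptions to adapt to your application.

```typescript
import { OctavusClient, toSSEStream } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Hypothetical data-access helper; replace with your own.
async function lookupAccount(userId: string): Promise<unknown> {
  return { userId, plan: 'pro' };
}

// Assumed request body: { sessionId, triggerName, input } forwarded by the HTTP transport.
export async function POST(request: Request) {
  const { sessionId, triggerName, input } = await request.json();

  // Attach to the session with server-side tool handlers.
  const session = client.agentSessions.attach(sessionId, {
    tools: {
      'get-user-account': async (args) => lookupAccount(String(args.userId)),
    },
  });

  // execute() yields stream events; toSSEStream() turns them into an SSE response body.
  const events = session.execute({
    type: 'trigger',
    triggerName,
    input,
  });

  return new Response(toSSEStream(events), {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```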
@@ -1307,7 +1307,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Agent Config",
description: "Configuring the agent model and behavior.",
-
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | ------------------------------------------------------------ |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
+
content: "\n# Agent Config\n\nThe `agent` section configures the LLM model, system prompt, tools, and behavior.\n\n## Basic Configuration\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system # References prompts/system.md\n tools: [get-user-account] # Available tools\n skills: [qr-code] # Available skills\n```\n\n## Configuration Options\n\n| Field | Required | Description |\n| ------------- | -------- | --------------------------------------------------------- |\n| `model` | Yes | Model identifier or variable reference |\n| `system` | Yes | System prompt filename (without .md) |\n| `input` | No | Variables to interpolate in system prompt |\n| `tools` | No | List of tools the LLM can call |\n| `skills` | No | List of Octavus skills the LLM can use |\n| `imageModel` | No | Image generation model (enables agentic image generation) |\n| `agentic` | No | Allow multiple tool call cycles |\n| `maxSteps` | No | Maximum agentic steps (default: 10) |\n| `temperature` | No | Model temperature (0-2) |\n| `thinking` | No | Extended reasoning level |\n| `anthropic` | No | Anthropic-specific options (tools, skills) |\n\n## Models\n\nSpecify models in `provider/model-id` format. Any model supported by the provider's SDK will work.\n\n### Supported Providers\n\n| Provider | Format | Examples |\n| --------- | ---------------------- | -------------------------------------------------------------------- |\n| Anthropic | `anthropic/{model-id}` | `claude-opus-4-5`, `claude-sonnet-4-5`, `claude-haiku-4-5` |\n| Google | `google/{model-id}` | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-flash` |\n| OpenAI | `openai/{model-id}` | `gpt-5`, `gpt-4o`, `o4-mini`, `o3`, `o3-mini`, `o1` |\n\n### Examples\n\n```yaml\n# Anthropic Claude 4.5\nagent:\n model: anthropic/claude-sonnet-4-5\n\n# Google Gemini 3\nagent:\n model: google/gemini-3-flash-preview\n\n# OpenAI GPT-5\nagent:\n model: openai/gpt-5\n\n# OpenAI reasoning models\nagent:\n model: openai/o3-mini\n```\n\n> **Note**: Model IDs are passed directly to the provider SDK. 
Check the provider's documentation for the latest available models.\n\n### Dynamic Model Selection\n\nThe model field can also reference an input variable, allowing consumers to choose the model when creating a session:\n\n```yaml\ninput:\n MODEL:\n type: string\n description: The LLM model to use\n\nagent:\n model: MODEL # Resolved from session input\n system: system\n```\n\nWhen creating a session, pass the model:\n\n```typescript\nconst sessionId = await client.agentSessions.create('my-agent', {\n MODEL: 'anthropic/claude-sonnet-4-5',\n});\n```\n\nThis enables:\n\n- **Multi-provider support** \u2014 Same agent works with different providers\n- **A/B testing** \u2014 Test different models without protocol changes\n- **User preferences** \u2014 Let users choose their preferred model\n\nThe model value is validated at runtime to ensure it's in the correct `provider/model-id` format.\n\n> **Note**: When using dynamic models, provider-specific options (like `anthropic:`) may not apply if the model resolves to a different provider.\n\n## System Prompt\n\nThe system prompt sets the agent's persona and instructions:\n\n```yaml\nagent:\n system: system # Uses prompts/system.md\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n```\n\nExample `prompts/system.md`:\n\n```markdown\nYou are a friendly support agent for {{COMPANY_NAME}}.\n\n## Your Role\n\nHelp users with questions about {{PRODUCT_NAME}}.\n\n## Guidelines\n\n- Be helpful and professional\n- If you can't help, offer to escalate\n- Never share internal information\n```\n\n## Agentic Mode\n\nEnable multi-step tool calling:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account, search-docs, create-ticket]\n agentic: true # LLM can call multiple tools\n maxSteps: 10 # Limit cycles to prevent runaway\n```\n\n**How it works:**\n\n1. LLM receives user message\n2. LLM decides to call a tool\n3. Tool executes, result returned to LLM\n4. LLM decides if more tools needed\n5. Repeat until LLM responds or maxSteps reached\n\n## Extended Thinking\n\nEnable extended reasoning for complex tasks:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n thinking: medium # low | medium | high\n```\n\n| Level | Token Budget | Use Case |\n| -------- | ------------ | ------------------- |\n| `low` | ~5,000 | Simple reasoning |\n| `medium` | ~10,000 | Moderate complexity |\n| `high` | ~20,000 | Complex analysis |\n\nThinking content streams to the UI and can be displayed to users.\n\n## Skills\n\nEnable Octavus skills for code execution and file generation:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code] # Enable skills\n agentic: true\n```\n\nSkills provide provider-agnostic code execution in isolated sandboxes. When enabled, the LLM can execute Python/Bash code, run skill scripts, and generate files.\n\nSee [Skills](/docs/protocol/skills) for full documentation.\n\n## Image Generation\n\nEnable the LLM to generate images autonomously:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n imageModel: google/gemini-2.5-flash-image\n agentic: true\n```\n\nWhen `imageModel` is configured, the `octavus_generate_image` tool becomes available. 
The LLM can decide when to generate images based on user requests.\n\n### Supported Image Providers\n\n| Provider | Model Types | Examples |\n| -------- | --------------------------------------- | --------------------------------------------------------- |\n| OpenAI | Dedicated image models | `gpt-image-1` |\n| Google | Gemini native (contains \"image\") | `gemini-2.5-flash-image`, `gemini-3-flash-image-generate` |\n| Google | Imagen dedicated (starts with \"imagen\") | `imagen-4.0-generate-001` |\n\n> **Note**: Google has two image generation approaches. Gemini \"native\" models (containing \"image\" in the ID) generate images using the language model API with `responseModalities`. Imagen models (starting with \"imagen\") use a dedicated image generation API.\n\n### Image Sizes\n\nThe tool supports three image sizes:\n\n- `1024x1024` (default) \u2014 Square\n- `1792x1024` \u2014 Landscape (16:9)\n- `1024x1792` \u2014 Portrait (9:16)\n\n### Agentic vs Deterministic\n\nUse `imageModel` in agent config when:\n\n- The LLM should decide when to generate images\n- Users ask for images in natural language\n\nUse `generate-image` block (see [Handlers](/docs/protocol/handlers#generate-image)) when:\n\n- You want explicit control over image generation\n- Building prompt engineering pipelines\n- Images are generated at specific handler steps\n\n## Temperature\n\nControl response randomness:\n\n```yaml\nagent:\n model: openai/gpt-4o\n temperature: 0.7 # 0 = deterministic, 2 = creative\n```\n\n**Guidelines:**\n\n- `0 - 0.3`: Factual, consistent responses\n- `0.4 - 0.7`: Balanced (good default)\n- `0.8 - 1.2`: Creative, varied responses\n- `> 1.2`: Very creative (may be inconsistent)\n\n## Provider Options\n\nEnable provider-specific features like Anthropic's built-in tools and skills:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n anthropic:\n tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n```\n\nProvider options are validated against the model\u2014using `anthropic:` with a non-Anthropic model will fail validation.\n\nSee [Provider Options](/docs/protocol/provider-options) for full documentation.\n\n## Thread-Specific Config\n\nOverride config for named threads:\n\n```yaml\nhandlers:\n request-human:\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5 # Different model\n thinking: low # Different thinking\n maxSteps: 1 # Limit tool calls\n system: escalation-summary # Different prompt\n```\n\n## Full Example\n\n```yaml\ninput:\n COMPANY_NAME: { type: string }\n PRODUCT_NAME: { type: string }\n USER_ID: { type: string, optional: true }\n\nresources:\n CONVERSATION_SUMMARY:\n type: string\n default: ''\n\ntools:\n get-user-account:\n description: Look up user account\n parameters:\n userId: { type: string }\n\n search-docs:\n description: Search help documentation\n parameters:\n query: { type: string }\n\n create-support-ticket:\n description: Create a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string } # low, medium, high\n\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n input:\n - COMPANY_NAME\n - PRODUCT_NAME\n tools:\n - get-user-account\n - search-docs\n - create-support-ticket\n skills: [qr-code] # Octavus skills\n agentic: true\n maxSteps: 10\n thinking: medium\n # Anthropic-specific options\n anthropic:\n 
tools:\n web-search:\n display: description\n description: Searching the web\n skills:\n pdf:\n type: anthropic\n description: Processing PDF\n\ntriggers:\n user-message:\n input:\n USER_MESSAGE: { type: string }\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n display: hidden\n\n Respond:\n block: next-message\n```\n",
excerpt: "Agent Config The section configures the LLM model, system prompt, tools, and behavior. Basic Configuration Configuration Options | Field | Required | Description ...",
order: 7
},
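The Agent Config page above notes that a dynamic `model:` input is only validated for the `provider/model-id` format, so any allow-listing is left to the application. A small sketch of that pattern, assuming an agent protocol that declares a `MODEL` input variable as shown there; the `ALLOWED_MODELS` set and the fallback model are application-side assumptions.

```typescript
import { OctavusClient } from '@octavus/server-sdk';

const client = new OctavusClient({
  baseUrl: 'https://octavus.ai',
  apiKey: process.env.OCTAVUS_API_KEY!,
});

// Application-level allow-list: the protocol only checks the provider/model-id format.
const ALLOWED_MODELS = new Set([
  'anthropic/claude-sonnet-4-5',
  'google/gemini-3-flash-preview',
  'openai/gpt-5',
]);

const DEFAULT_MODEL = 'anthropic/claude-sonnet-4-5';

// Create a session for an agent whose protocol declares a MODEL input variable.
async function createSessionWithModel(agentId: string, requestedModel?: string) {
  const MODEL =
    requestedModel && ALLOWED_MODELS.has(requestedModel) ? requestedModel : DEFAULT_MODEL;

  return client.agentSessions.create(agentId, { MODEL });
}
```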
@@ -1343,7 +1343,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
section: "protocol",
title: "Workers",
description: "Defining worker agents for background and task-based execution.",
-
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n agentic: true\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n agentic: true\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------- |\n| `thread` | Thread name (required) |\n| `model` | LLM model (required) |\n| `system` | System prompt filename |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `skills` | Skills available in this thread |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `agentic` | Allow multiple tool call cycles |\n| `maxSteps` | Maximum agentic steps |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and agentic 
behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n agentic: true\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
+
content: '\n# Workers\n\nWorkers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value.\n\n## When to Use Workers\n\nWorkers are ideal for:\n\n- **Background processing** \u2014 Long-running tasks that don\'t need conversation\n- **Composable tasks** \u2014 Reusable units of work called by other agents\n- **Pipelines** \u2014 Multi-step processing with structured output\n- **Parallel execution** \u2014 Tasks that can run independently\n\nUse interactive agents instead when:\n\n- **Conversation is needed** \u2014 Multi-turn dialogue with users\n- **Persistence matters** \u2014 State should survive across interactions\n- **Session context** \u2014 User context needs to persist\n\n## Worker vs Interactive\n\n| Aspect | Interactive | Worker |\n| ---------- | ---------------------------------- | ----------------------------- |\n| Structure | `triggers` + `handlers` + `agent` | `steps` + `output` |\n| LLM Config | Global `agent:` section | Per-thread via `start-thread` |\n| Invocation | Fire a named trigger | Direct execution with input |\n| Session | Persists across triggers (24h TTL) | Single execution |\n| Result | Streaming chat | Streaming + output value |\n\n## Protocol Structure\n\nWorkers use a simpler protocol structure than interactive agents:\n\n```yaml\n# Input schema - provided when worker is executed\ninput:\n TOPIC:\n type: string\n description: Topic to research\n DEPTH:\n type: string\n optional: true\n default: medium\n\n# Variables for intermediate results\nvariables:\n RESEARCH_DATA:\n type: string\n ANALYSIS:\n type: string\n description: Final analysis result\n\n# Tools available to the worker\ntools:\n web-search:\n description: Search the web\n parameters:\n query: { type: string }\n\n# Sequential execution steps\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC, DEPTH]\n tools: [web-search]\n maxSteps: 5\n\n Add research request:\n block: add-message\n thread: research\n role: user\n prompt: research-prompt\n input: [TOPIC, DEPTH]\n\n Generate research:\n block: next-message\n thread: research\n output: RESEARCH_DATA\n\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: analysis-system\n\n Add analysis request:\n block: add-message\n thread: analysis\n role: user\n prompt: analysis-prompt\n input: [RESEARCH_DATA]\n\n Generate analysis:\n block: next-message\n thread: analysis\n output: ANALYSIS\n\n# Output variable - the worker\'s return value\noutput: ANALYSIS\n```\n\n## settings.json\n\nWorkers are identified by the `format` field:\n\n```json\n{\n "slug": "research-assistant",\n "name": "Research Assistant",\n "description": "Researches topics and returns structured analysis",\n "format": "worker"\n}\n```\n\n## Key Differences\n\n### No Global Agent Config\n\nInteractive agents have a global `agent:` section that configures a main thread. 
Workers don\'t have this \u2014 every thread must be explicitly created via `start-thread`:\n\n```yaml\n# Interactive agent: Global config\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [tool-a, tool-b]\n\n# Worker: Each thread configured independently\nsteps:\n Start thread A:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n tools: [tool-a]\n\n Start thread B:\n block: start-thread\n thread: analysis\n model: openai/gpt-4o\n tools: [tool-b]\n```\n\nThis gives workers flexibility to use different models, tools, and settings at different stages.\n\n### Steps Instead of Handlers\n\nWorkers use `steps:` instead of `handlers:`. Steps execute sequentially, like handler blocks:\n\n```yaml\n# Interactive: Handlers respond to triggers\nhandlers:\n user-message:\n Add message:\n block: add-message\n # ...\n\n# Worker: Steps execute in sequence\nsteps:\n Add message:\n block: add-message\n # ...\n```\n\n### Output Value\n\nWorkers can return an output value to the caller:\n\n```yaml\nvariables:\n RESULT:\n type: string\n\nsteps:\n # ... steps that populate RESULT ...\n\noutput: RESULT # Return this variable\'s value\n```\n\nThe `output` field references a variable declared in `variables:`. If omitted, the worker completes without returning a value.\n\n## Available Blocks\n\nWorkers support the same blocks as handlers:\n\n| Block | Purpose |\n| ------------------ | -------------------------------------------- |\n| `start-thread` | Create a named thread with LLM configuration |\n| `add-message` | Add a message to a thread |\n| `next-message` | Generate LLM response |\n| `tool-call` | Call a tool deterministically |\n| `set-resource` | Update a resource value |\n| `serialize-thread` | Convert thread to text |\n| `generate-image` | Generate an image from a prompt variable |\n\n### start-thread (Required for LLM)\n\nEvery thread must be initialized with `start-thread` before using `next-message`:\n\n```yaml\nsteps:\n Start research:\n block: start-thread\n thread: research\n model: anthropic/claude-sonnet-4-5\n system: research-system\n input: [TOPIC]\n tools: [web-search]\n thinking: medium\n maxSteps: 5\n```\n\nAll LLM configuration goes here:\n\n| Field | Description |\n| ------------- | ------------------------------------------------- |\n| `thread` | Thread name (defaults to block name) |\n| `model` | LLM model to use |\n| `system` | System prompt filename (required) |\n| `input` | Variables for system prompt |\n| `tools` | Tools available in this thread |\n| `workers` | Workers available to this thread (as LLM tools) |\n| `imageModel` | Image generation model |\n| `thinking` | Extended reasoning level |\n| `temperature` | Model temperature |\n| `maxSteps` | Maximum tool call cycles (enables agentic if > 1) |\n\n## Simple Example\n\nA worker that generates a title from a summary:\n\n```yaml\n# Input\ninput:\n CONVERSATION_SUMMARY:\n type: string\n description: Summary to generate a title for\n\n# Variables\nvariables:\n TITLE:\n type: string\n description: The generated title\n\n# Steps\nsteps:\n Start title thread:\n block: start-thread\n thread: title-gen\n model: anthropic/claude-sonnet-4-5\n system: title-system\n\n Add title request:\n block: add-message\n thread: title-gen\n role: user\n prompt: title-request\n input: [CONVERSATION_SUMMARY]\n\n Generate title:\n block: next-message\n thread: title-gen\n output: TITLE\n display: stream\n\n# Output\noutput: TITLE\n```\n\n## Advanced Example\n\nA worker with multiple threads, tools, and 
agentic behavior:\n\n```yaml\ninput:\n USER_MESSAGE:\n type: string\n description: The user\'s message to respond to\n USER_ID:\n type: string\n description: User ID for account lookups\n optional: true\n\ntools:\n get-user-account:\n description: Looking up account information\n parameters:\n userId: { type: string }\n create-support-ticket:\n description: Creating a support ticket\n parameters:\n summary: { type: string }\n priority: { type: string }\n\nvariables:\n ASSISTANT_RESPONSE:\n type: string\n CHAT_TRANSCRIPT:\n type: string\n CONVERSATION_SUMMARY:\n type: string\n\nsteps:\n # Thread 1: Chat with agentic tool calling\n Start chat thread:\n block: start-thread\n thread: chat\n model: anthropic/claude-sonnet-4-5\n system: chat-system\n input: [USER_ID]\n tools: [get-user-account, create-support-ticket]\n thinking: medium\n maxSteps: 5\n\n Add user message:\n block: add-message\n thread: chat\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Generate response:\n block: next-message\n thread: chat\n output: ASSISTANT_RESPONSE\n display: stream\n\n # Serialize for summary\n Save conversation:\n block: serialize-thread\n thread: chat\n output: CHAT_TRANSCRIPT\n\n # Thread 2: Summary generation\n Start summary thread:\n block: start-thread\n thread: summary\n model: anthropic/claude-sonnet-4-5\n system: summary-system\n thinking: low\n\n Add summary request:\n block: add-message\n thread: summary\n role: user\n prompt: summary-request\n input: [CHAT_TRANSCRIPT]\n\n Generate summary:\n block: next-message\n thread: summary\n output: CONVERSATION_SUMMARY\n display: stream\n\noutput: CONVERSATION_SUMMARY\n```\n\n## Tool Handling\n\nWorkers support the same tool handling as interactive agents:\n\n- **Server tools** \u2014 Handled by tool handlers you provide\n- **Client tools** \u2014 Pause execution, return tool request to caller\n\n```typescript\nconst events = client.workers.execute(\n agentId,\n { TOPIC: \'AI safety\' },\n {\n tools: {\n \'web-search\': async (args) => {\n return await searchWeb(args.query);\n },\n },\n },\n);\n```\n\nSee [Server SDK Workers](/docs/server-sdk/workers) for tool handling details.\n\n## Stream Events\n\nWorkers emit the same events as interactive agents, plus worker-specific events:\n\n| Event | Description |\n| --------------- | ---------------------------------- |\n| `worker-start` | Worker execution begins |\n| `worker-result` | Worker completes (includes output) |\n\nAll standard events (text-delta, tool calls, etc.) are also emitted.\n\n## Calling Workers from Interactive Agents\n\nInteractive agents can call workers in two ways:\n\n1. **Deterministically** \u2014 Using the `run-worker` block\n2. 
**Agentically** \u2014 LLM calls worker as a tool\n\n### Worker Declaration\n\nFirst, declare workers in your interactive agent\'s protocol:\n\n```yaml\nworkers:\n generate-title:\n description: Generating conversation title\n display: description\n research-assistant:\n description: Researching topic\n display: stream\n tools:\n search: web-search # Map worker tool \u2192 parent tool\n```\n\n### run-worker Block\n\nCall a worker deterministically from a handler:\n\n```yaml\nhandlers:\n request-human:\n Generate title:\n block: run-worker\n worker: generate-title\n input:\n CONVERSATION_SUMMARY: SUMMARY\n output: CONVERSATION_TITLE\n```\n\n### LLM Tool Invocation\n\nMake workers available to the LLM:\n\n```yaml\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n workers: [generate-title, research-assistant]\n agentic: true\n```\n\nThe LLM can then call workers as tools during conversation.\n\n### Display Modes\n\nControl how worker execution appears to users:\n\n| Mode | Behavior |\n| ------------- | --------------------------------- |\n| `hidden` | Worker runs silently |\n| `name` | Shows worker name |\n| `description` | Shows description text |\n| `stream` | Streams all worker events to user |\n\n### Tool Mapping\n\nMap parent tools to worker tools when the worker needs access to your tool handlers:\n\n```yaml\nworkers:\n research-assistant:\n description: Research topics\n tools:\n search: web-search # Worker\'s "search" \u2192 parent\'s "web-search"\n```\n\nWhen the worker calls its `search` tool, your `web-search` handler executes.\n\n## Next Steps\n\n- [Server SDK Workers](/docs/server-sdk/workers) \u2014 Executing workers from code\n- [Handlers](/docs/protocol/handlers) \u2014 Block reference for steps\n- [Agent Config](/docs/protocol/agent-config) \u2014 Model and settings\n',
1347 1347 | excerpt: "Workers Workers are agents designed for task-based execution. Unlike interactive agents that handle multi-turn conversations, workers execute a sequence of steps and return an output value. When to...",
1348 1348 | order: 11
1349 1349 | }
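The workers page above shows `client.workers.execute()` returning a stream that includes `worker-start` and `worker-result` events. A minimal consumption sketch in TypeScript, assuming the result event exposes the worker's output on an `output` field and text chunks arrive as `text-delta` events with a `delta` property (both field names are illustrative, not confirmed by the docs above):

```typescript
import type { OctavusClient } from '@octavus/server-sdk';

// Minimal event shape assumed for this sketch; the real StreamEvent union is richer.
type WorkerStreamEvent =
  | { type: 'text-delta'; delta: string }
  | { type: 'worker-result'; output: unknown }
  | { type: string };

declare const client: OctavusClient; // constructed elsewhere with your API key
declare function searchWeb(query: string): Promise<unknown>; // hypothetical search helper

// Run a worker and collect the value of its `output` variable.
async function runResearchWorker(workerAgentId: string, topic: string): Promise<unknown> {
  const events = client.workers.execute(workerAgentId, { TOPIC: topic }, {
    tools: {
      // Server-side handler for the worker's `web-search` tool.
      'web-search': async (args) => searchWeb(String(args.query)),
    },
  }) as AsyncIterable<WorkerStreamEvent>;

  let output: unknown;
  for await (const event of events) {
    if (event.type === 'text-delta') {
      process.stdout.write((event as { delta: string }).delta); // stream intermediate text
    } else if (event.type === 'worker-result') {
      output = (event as { output: unknown }).output; // the worker's declared output value
    }
  }
  return output;
}
```

If the worker declares client tools, the stream can instead pause and return a tool request to the caller, as noted in the tool handling section of the page above.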
@@ -1430,7 +1430,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1430 1430 | section: "migration",
1431 1431 | title: "Migrating from v1 to v2",
1432 1432 | description: "Complete guide for upgrading from Octavus SDK v1.x to v2.0.",
1433      | -
content: "\n# Migrating from v1 to v2\n\nThis guide covers all breaking changes and new features in Octavus SDK v2.0.0.\n\n## Overview\n\nVersion 2.0.0 introduces **client-side tool execution**, allowing tools to run in the browser with support for both automatic execution and interactive user input. This required changes to the transport layer and streaming protocol.\n\n## Breaking Changes\n\n### HTTP Transport Configuration\n\nThe `triggerRequest` option has been renamed to `request` and the function signature changed to support both trigger and continuation requests.\n\n**Before (v1):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n triggerRequest: (triggerName, input, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, triggerName, input }),\n signal: options?.signal,\n }),\n});\n```\n\n**After (v2):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\nThe `payload` parameter is a discriminated union with `type: 'trigger' | 'continue'`:\n\n- **Trigger request:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue request:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Server SDK Route Handler\n\nServer-side route handlers should use the new `execute()` method which handles both triggers and continuations via the request type.\n\n**Before (v1):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { sessionId, triggerName, input } = await request.json();\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.trigger(triggerName, input, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n**After (v2):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream, type SessionRequest } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const body = (await request.json()) as { sessionId: string } & SessionRequest;\n const { sessionId, ...sessionRequest } = body;\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.execute(sessionRequest, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\nThe `SessionRequest` type is a discriminated union:\n\n- **Trigger:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Export Changes\n\nThe SDKs no longer use star exports (`export * from '@octavus/core'`). Instead, they use explicit type exports and value exports. 
This improves tree-shaking but may require import adjustments.\n\n**If you were importing schemas:**\n\nSome Zod schemas are no longer re-exported from `@octavus/server-sdk`. If you need validation schemas, import them directly from `@octavus/core`:\n\n```typescript\n// Before (v1) - some schemas were re-exported\nimport { fileUploadRequestSchema } from '@octavus/server-sdk';\n\n// After (v2) - import from core if needed\nimport { fileReferenceSchema } from '@octavus/core';\n```\n\n**Type imports remain the same:**\n\n```typescript\n// Types are still available\nimport type { StreamEvent, UIMessage, FileReference } from '@octavus/client-sdk';\n```\n\n### Custom Transport Interface\n\nCustom transport implementations must now implement the `continueWithToolResults()` method:\n\n```typescript\ninterface Transport {\n trigger(triggerName: string, input?: Record<string, unknown>): AsyncIterable<StreamEvent>;\n continueWithToolResults(executionId: string, results: ToolResult[]): AsyncIterable<StreamEvent>;\n stop(): void;\n}\n```\n\n### New ChatStatus Value\n\nThe `ChatStatus` type now includes `'awaiting-input'`:\n\n```typescript\n// v1\ntype ChatStatus = 'idle' | 'streaming' | 'error';\n\n// v2\ntype ChatStatus = 'idle' | 'streaming' | 'error' | 'awaiting-input';\n```\n\nUpdate any status checks that assume only three values.\n\n### Stream Event Changes\n\nTwo stream events have been modified:\n\n**StartEvent and FinishEvent now include executionId:**\n\n```typescript\ninterface StartEvent {\n type: 'start';\n messageId?: string;\n executionId?: string; // New in v2\n}\n\ninterface FinishEvent {\n type: 'finish';\n finishReason: FinishReason;\n executionId?: string; // New in v2\n}\n```\n\n**New FinishReason:**\n\n```typescript\n// Added 'client-tool-calls' to FinishReason\ntype FinishReason =\n | 'stop'\n | 'tool-calls'\n | 'client-tool-calls' // New in v2\n | 'length'\n | 'content-filter'\n | 'error'\n | 'other';\n```\n\n**New ClientToolRequestEvent:**\n\n```typescript\ninterface ClientToolRequestEvent {\n type: 'client-tool-request';\n executionId: string;\n toolCalls: PendingToolCall[];\n serverToolResults?: ToolResult[];\n}\n```\n\n## New Features\n\n### Client-Side Tools\n\nv2 introduces client-side tool execution, allowing tools to run in the browser. This is useful for:\n\n- Accessing browser APIs (geolocation, clipboard, etc.)\n- Interactive tools requiring user input (confirmations, forms, selections)\n- Tools that need access to client-side state\n\nSee the [Client Tools guide](/docs/client-sdk/client-tools) for full documentation.\n\n#### Automatic Client Tools\n\nTools that execute automatically without user interaction:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'get-browser-location': async (args, ctx) => {\n const pos = await new Promise((resolve, reject) => {\n navigator.geolocation.getCurrentPosition(resolve, reject);\n });\n return { lat: pos.coords.latitude, lng: pos.coords.longitude };\n },\n 'get-clipboard': async () => {\n return await navigator.clipboard.readText();\n },\n },\n });\n\n // ... 
rest of component\n}\n```\n\n#### Interactive Client Tools\n\nTools that require user interaction before completing:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status, pendingClientTools } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'request-confirmation': 'interactive',\n 'select-option': 'interactive',\n },\n });\n\n // Render UI for pending interactive tools\n const confirmationTools = pendingClientTools['request-confirmation'] ?? [];\n\n return (\n <div>\n {/* Chat messages */}\n\n {/* Interactive tool UI */}\n {confirmationTools.map((tool) => (\n <ConfirmationModal\n key={tool.toolCallId}\n message={tool.args.message as string}\n onConfirm={() => tool.submit({ confirmed: true })}\n onCancel={() => tool.cancel('User declined')}\n />\n ))}\n </div>\n );\n}\n```\n\n### New Types for Client Tools\n\n```typescript\n// Context provided to automatic tool handlers\ninterface ClientToolContext {\n toolCallId: string;\n toolName: string;\n signal: AbortSignal;\n}\n\n// Handler types\ntype ClientToolHandler =\n | ((args: Record<string, unknown>, ctx: ClientToolContext) => Promise<unknown>)\n | 'interactive';\n\n// Interactive tool with bound submit/cancel methods\ninterface InteractiveTool {\n toolCallId: string;\n toolName: string;\n args: Record<string, unknown>;\n submit: (result: unknown) => void;\n cancel: (reason?: string) => void;\n}\n```\n\n### pendingClientTools Property\n\nBoth `OctavusChat` and `useOctavusChat` now expose `pendingClientTools`:\n\n```typescript\n// Record keyed by tool name, with array of pending tools\nconst pendingClientTools: Record<string, InteractiveTool[]>;\n```\n\nThis allows multiple simultaneous calls to the same tool (e.g., batch confirmations).\n\n## Migration Checklist\n\nUse this checklist to ensure you've addressed all breaking changes:\n\n- [ ] Update all Octavus packages to v2.0.0\n- [ ] Update HTTP transport from `triggerRequest` to `request` with new signature\n- [ ] Update server route handlers to use `execute()` with `SessionRequest` type\n- [ ] Update any status checks to handle `'awaiting-input'`\n- [ ] Update custom transport implementations to include `continueWithToolResults()`\n- [ ] Review any direct schema imports and update if needed\n- [ ] (Optional) Add client-side tools for browser-based functionality\n\n## Package Versions\n\nAll packages should be updated together to ensure compatibility:\n\n```json\n{\n \"dependencies\": {\n \"@octavus/core\": \"^2.0.0\",\n \"@octavus/client-sdk\": \"^2.0.0\",\n \"@octavus/server-sdk\": \"^2.0.0\",\n \"@octavus/react\": \"^2.0.0\"\n }\n}\n```\n\n## Need Help?\n\nIf you encounter issues during migration:\n\n- Check the [Client Tools documentation](/docs/client-sdk/client-tools) for the new features\n- Review the [HTTP Transport](/docs/client-sdk/http-transport) and [Server SDK Sessions](/docs/server-sdk/sessions) docs for updated APIs\n- [Open an issue](https://github.com/octavus-ai/js-sdk/issues) on GitHub if you find a bug\n",
     1433 | +
content: "\n# Migrating from v1 to v2\n\nThis guide covers all breaking changes and new features in Octavus SDK v2.0.0.\n\n## Overview\n\nVersion 2.0.0 introduces **client-side tool execution**, allowing tools to run in the browser with support for both automatic execution and interactive user input. This required changes to the transport layer and streaming protocol.\n\n## Breaking Changes\n\n### HTTP Transport Configuration\n\nThe `triggerRequest` option has been renamed to `request` and the function signature changed to support both trigger and continuation requests.\n\n**Before (v1):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n triggerRequest: (triggerName, input, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, triggerName, input }),\n signal: options?.signal,\n }),\n});\n```\n\n**After (v2):**\n\n```typescript\nimport { createHttpTransport } from '@octavus/client-sdk';\n\nconst transport = createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n});\n```\n\nThe `payload` parameter is a discriminated union with `type: 'trigger' | 'continue'`:\n\n- **Trigger request:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue request:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Server SDK Route Handler\n\nServer-side route handlers should use the new `execute()` method which handles both triggers and continuations via the request type.\n\n**Before (v1):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const { sessionId, triggerName, input } = await request.json();\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.trigger(triggerName, input, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\n**After (v2):**\n\n```typescript\n// app/api/trigger/route.ts\nimport { AgentSession, toSSEStream, type SessionRequest } from '@octavus/server-sdk';\n\nexport async function POST(request: Request) {\n const body = (await request.json()) as { sessionId: string } & SessionRequest;\n const { sessionId, ...sessionRequest } = body;\n\n const session = new AgentSession({\n sessionId,\n agentId: 'your-agent-id',\n apiKey: process.env.OCTAVUS_API_KEY!,\n });\n\n const events = session.execute(sessionRequest, { signal: request.signal });\n return new Response(toSSEStream(events), {\n headers: { 'Content-Type': 'text/event-stream' },\n });\n}\n```\n\nThe `SessionRequest` type is a discriminated union:\n\n- **Trigger:** `{ type: 'trigger', triggerName: string, input?: Record<string, unknown> }`\n- **Continue:** `{ type: 'continue', executionId: string, toolResults: ToolResult[] }`\n\n### Export Changes\n\nThe SDKs no longer use star exports (`export * from '@octavus/core'`). Instead, they use explicit type exports and value exports. 
This improves tree-shaking but may require import adjustments.\n\n**If you were importing schemas:**\n\nSome Zod schemas are no longer re-exported from `@octavus/server-sdk`. If you need validation schemas, import them directly from `@octavus/core`:\n\n```typescript\n// Before (v1) - some schemas were re-exported\nimport { fileUploadRequestSchema } from '@octavus/server-sdk';\n\n// After (v2) - import from core if needed\nimport { fileReferenceSchema } from '@octavus/core';\n```\n\n**Type imports remain the same:**\n\n```typescript\n// Types are still available\nimport type { StreamEvent, UIMessage, FileReference } from '@octavus/client-sdk';\n```\n\n### Custom Transport Interface\n\nCustom transport implementations must now implement the `continueWithToolResults()` method:\n\n```typescript\ninterface Transport {\n trigger(triggerName: string, input?: Record<string, unknown>): AsyncIterable<StreamEvent>;\n continueWithToolResults(executionId: string, results: ToolResult[]): AsyncIterable<StreamEvent>;\n stop(): void;\n}\n```\n\n### New ChatStatus Value\n\nThe `ChatStatus` type now includes `'awaiting-input'`:\n\n```typescript\n// v1\ntype ChatStatus = 'idle' | 'streaming' | 'error';\n\n// v2\ntype ChatStatus = 'idle' | 'streaming' | 'error' | 'awaiting-input';\n```\n\nUpdate any status checks that assume only three values.\n\n### Stream Event Changes\n\nTwo stream events have been modified:\n\n**StartEvent and FinishEvent now include executionId:**\n\n```typescript\ninterface StartEvent {\n type: 'start';\n messageId?: string;\n executionId?: string; // New in v2\n}\n\ninterface FinishEvent {\n type: 'finish';\n finishReason: FinishReason;\n executionId?: string; // New in v2\n}\n```\n\n**New FinishReason:**\n\n```typescript\n// Added 'client-tool-calls' to FinishReason\ntype FinishReason =\n | 'stop'\n | 'tool-calls'\n | 'client-tool-calls' // New in v2\n | 'length'\n | 'content-filter'\n | 'error'\n | 'other';\n```\n\n**New ClientToolRequestEvent:**\n\n```typescript\ninterface ClientToolRequestEvent {\n type: 'client-tool-request';\n executionId: string;\n toolCalls: PendingToolCall[];\n serverToolResults?: ToolResult[];\n}\n```\n\n## New Features\n\n### Client-Side Tools\n\nv2 introduces client-side tool execution, allowing tools to run in the browser. This is useful for:\n\n- Accessing browser APIs (geolocation, clipboard, etc.)\n- Interactive tools requiring user input (confirmations, forms, selections)\n- Tools that need access to client-side state\n\nSee the [Client Tools guide](/docs/client-sdk/client-tools) for full documentation.\n\n#### Automatic Client Tools\n\nTools that execute automatically without user interaction:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'get-browser-location': async (args, ctx) => {\n const pos = await new Promise((resolve, reject) => {\n navigator.geolocation.getCurrentPosition(resolve, reject);\n });\n return { lat: pos.coords.latitude, lng: pos.coords.longitude };\n },\n 'get-clipboard': async () => {\n return await navigator.clipboard.readText();\n },\n },\n });\n\n // ... 
rest of component\n}\n```\n\n#### Interactive Client Tools\n\nTools that require user interaction before completing:\n\n```typescript\nimport { useOctavusChat, createHttpTransport } from '@octavus/react';\n\nfunction Chat() {\n const { messages, send, status, pendingClientTools } = useOctavusChat({\n transport: createHttpTransport({\n request: (payload, options) =>\n fetch('/api/trigger', {\n method: 'POST',\n headers: { 'Content-Type': 'application/json' },\n body: JSON.stringify({ sessionId, ...payload }),\n signal: options?.signal,\n }),\n }),\n clientTools: {\n 'request-confirmation': 'interactive',\n 'select-option': 'interactive',\n },\n });\n\n // Render UI for pending interactive tools\n const confirmationTools = pendingClientTools['request-confirmation'] ?? [];\n\n return (\n <div>\n {/* Chat messages */}\n\n {/* Interactive tool UI */}\n {confirmationTools.map((tool) => (\n <ConfirmationModal\n key={tool.toolCallId}\n message={tool.args.message as string}\n onConfirm={() => tool.submit({ confirmed: true })}\n onCancel={() => tool.cancel('User declined')}\n />\n ))}\n </div>\n );\n}\n```\n\n### New Types for Client Tools\n\n```typescript\n// Context provided to automatic tool handlers\ninterface ClientToolContext {\n toolCallId: string;\n toolName: string;\n signal: AbortSignal;\n}\n\n// Handler types\ntype ClientToolHandler =\n | ((args: Record<string, unknown>, ctx: ClientToolContext) => Promise<unknown>)\n | 'interactive';\n\n// Interactive tool with bound submit/cancel methods\ninterface InteractiveTool {\n toolCallId: string;\n toolName: string;\n args: Record<string, unknown>;\n submit: (result: unknown) => void;\n cancel: (reason?: string) => void;\n}\n```\n\n### pendingClientTools Property\n\nBoth `OctavusChat` and `useOctavusChat` now expose `pendingClientTools`:\n\n```typescript\n// Record keyed by tool name, with array of pending tools\nconst pendingClientTools: Record<string, InteractiveTool[]>;\n```\n\nThis allows multiple simultaneous calls to the same tool (e.g., batch confirmations).\n\n## Migration Checklist\n\nUse this checklist to ensure you've addressed all breaking changes:\n\n- [ ] Update all Octavus packages to v2.0.0\n- [ ] Update HTTP transport from `triggerRequest` to `request` with new signature\n- [ ] Update server route handlers to use `execute()` with `SessionRequest` type\n- [ ] Update any status checks to handle `'awaiting-input'`\n- [ ] Update custom transport implementations to include `continueWithToolResults()`\n- [ ] Review any direct schema imports and update if needed\n- [ ] (Optional) Add client-side tools for browser-based functionality\n\n## Package Versions\n\nAll packages should be updated together to ensure compatibility:\n\n```json\n{\n \"dependencies\": {\n \"@octavus/core\": \"^2.0.0\",\n \"@octavus/client-sdk\": \"^2.0.0\",\n \"@octavus/server-sdk\": \"^2.0.0\",\n \"@octavus/react\": \"^2.0.0\"\n }\n}\n```\n\n## Need Help?\n\nIf you encounter issues during migration:\n\n- Check the [Client Tools documentation](/docs/client-sdk/client-tools) for the new features\n- Review the [HTTP Transport](/docs/client-sdk/http-transport) and [Server SDK Sessions](/docs/server-sdk/sessions) docs for updated APIs\n- [Open an issue](https://github.com/octavus-ai/agent-sdk/issues) on GitHub if you find a bug\n",
1434 1434 | excerpt: "Migrating from v1 to v2 This guide covers all breaking changes and new features in Octavus SDK v2.0.0. Overview Version 2.0.0 introduces client-side tool execution, allowing tools to run in the...",
1435 1435 | order: 1
1436 1436 | }
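A companion sketch for the transport change described in the migration entry above: because the v2 `request` callback receives the discriminated `payload`, trigger and continuation requests can be routed to separate endpoints if desired. The `/api/trigger` and `/api/continue` paths and the `sessionId` source are illustrative assumptions, not part of the SDK:

```typescript
import { createHttpTransport } from '@octavus/client-sdk';

declare const sessionId: string; // provided by your backend when the session is created

// v2 transport: one callback handles both request types from the discriminated union.
const transport = createHttpTransport({
  request: (payload, options) => {
    // payload.type is 'trigger' | 'continue'; the endpoint split is an assumption for this sketch.
    const url = payload.type === 'trigger' ? '/api/trigger' : '/api/continue';
    return fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sessionId, ...payload }),
      signal: options?.signal,
    });
  },
});
```

On the server, both endpoints can share the same handler shown in the migration guide, since `session.execute()` accepts either variant of `SessionRequest`.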
@@ -1468,4 +1468,4 @@ export {
1468 1468 | getDocSlugs,
1469 1469 | getSectionBySlug
1470 1470 | };
1471      | - //# sourceMappingURL=chunk-
     1471 | + //# sourceMappingURL=chunk-RI24MRSN.js.map