@octavus/docs 2.15.0 → 2.16.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/content/01-getting-started/02-quickstart.md +1 -0
- package/content/02-server-sdk/04-streaming.md +9 -0
- package/content/03-client-sdk/06-http-transport.md +2 -0
- package/content/04-protocol/05-skills.md +88 -8
- package/content/04-protocol/09-skills-advanced.md +89 -8
- package/content/06-examples/02-nextjs-chat.md +1 -0
- package/dist/{chunk-2UFDUNPK.js → chunk-4ABNR2ZK.js} +5 -5
- package/dist/chunk-4ABNR2ZK.js.map +1 -0
- package/dist/{chunk-JEOGYIRI.js → chunk-4WYG2JYA.js} +21 -17
- package/dist/chunk-4WYG2JYA.js.map +1 -0
- package/dist/chunk-NKLLG2WY.js +1513 -0
- package/dist/chunk-NKLLG2WY.js.map +1 -0
- package/dist/content.js +1 -1
- package/dist/docs.json +9 -9
- package/dist/index.js +1 -1
- package/dist/search-index.json +1 -1
- package/dist/search.js +1 -1
- package/dist/search.js.map +1 -1
- package/dist/sections.json +9 -9
- package/package.json +1 -1
- package/dist/chunk-2UFDUNPK.js.map +0 -1
- package/dist/chunk-JEOGYIRI.js.map +0 -1
@@ -27,6 +27,7 @@ return new Response(toSSEStream(events), {
       'Content-Type': 'text/event-stream',
       'Cache-Control': 'no-cache',
       Connection: 'keep-alive',
+      'X-Accel-Buffering': 'no',
     },
   });
 
@@ -36,6 +37,14 @@ for await (const event of events) {
 }
 ```
 
+The `X-Accel-Buffering: no` header disables proxy buffering on Nginx-based infrastructure (including Vercel), ensuring SSE events are forwarded immediately instead of being batched.
+
+### Heartbeat
+
+`toSSEStream` automatically sends SSE comment lines (`: heartbeat`) every 15 seconds during idle periods. This prevents proxies and load balancers from closing the connection due to inactivity — particularly important during multi-step executions where the stream may be silent while waiting for tool processing or LLM responses.
+
+Heartbeat comments are ignored by all SSE parsers per the spec. No client-side handling is needed.
+
 ## Event Types
 
 The stream emits various event types for lifecycle, text, reasoning, and tool interactions.
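As a minimal illustration of why no client-side handling is needed, the sketch below shows how an SSE line parser treats comment lines per the SSE specification. The parser here is a hypothetical simplification for illustration, not part of the Octavus SDK:

```python
def parse_sse_data(lines):
    """Minimal SSE field parser: lines starting with ':' are comments
    and are skipped, per the SSE spec; blank lines dispatch an event."""
    events = []
    data = []
    for line in lines:
        if line.startswith(":"):
            continue  # heartbeat comments like ': heartbeat' are ignored
        if line == "":
            if data:  # blank line: dispatch any accumulated data
                events.append("\n".join(data))
                data = []
        elif line.startswith("data:"):
            data.append(line[len("data:"):].lstrip(" "))
    return events

stream = ['data: {"type":"text-delta"}', "", ": heartbeat", "", "data: done", ""]
# the ': heartbeat' comment produces no event
```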
@@ -80,6 +80,7 @@ export async function POST(request: Request) {
       'Content-Type': 'text/event-stream',
       'Cache-Control': 'no-cache',
       Connection: 'keep-alive',
+      'X-Accel-Buffering': 'no',
     },
   });
 }
@@ -234,6 +235,7 @@ app.post('/api/trigger', async (req, res) => {
   res.setHeader('Content-Type', 'text/event-stream');
   res.setHeader('Cache-Control', 'no-cache');
   res.setHeader('Connection', 'keep-alive');
+  res.setHeader('X-Accel-Buffering', 'no');
 
   // Pipe the stream to the response
   const reader = stream.getReader();
@@ -107,17 +107,19 @@ This also works for named threads in interactive agents, allowing different thre
 
 When skills are enabled, the LLM has access to these tools:
 
-| Tool                 | Purpose                                 |
-| -------------------- | --------------------------------------- |
-| `octavus_skill_read` | Read skill documentation (SKILL.md)     |
-| `octavus_skill_list` | List available scripts in a skill       |
-| `octavus_skill_run`  | Execute a pre-built script from a skill |
-| `octavus_code_run`   | Execute arbitrary Python/Bash code      |
-| `octavus_file_write` | Create files in the sandbox             |
-| `octavus_file_read`  | Read files from the sandbox             |
+| Tool                 | Purpose                                 | Availability         |
+| -------------------- | --------------------------------------- | -------------------- |
+| `octavus_skill_read` | Read skill documentation (SKILL.md)     | All skills           |
+| `octavus_skill_list` | List available scripts in a skill       | All skills           |
+| `octavus_skill_run`  | Execute a pre-built script from a skill | All skills           |
+| `octavus_code_run`   | Execute arbitrary Python/Bash code      | Standard skills only |
+| `octavus_file_write` | Create files in the sandbox             | Standard skills only |
+| `octavus_file_read`  | Read files from the sandbox             | Standard skills only |
 
 The LLM learns about available skills through system prompt injection and can use these tools to interact with skills.
 
+Skills that have [secrets](#skill-secrets) configured run in **secure mode**, where only `octavus_skill_read`, `octavus_skill_list`, and `octavus_skill_run` are available. See [Skill Secrets](#skill-secrets) below.
+
 ## Example: QR Code Generation
 
 ```yaml
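The availability rules in the table above can be modeled with a small sketch. The `available_tools` helper and the `secure_mode` flag are illustrative assumptions, not SDK APIs; only the tool names come from the documentation:

```python
ALL_TOOLS = [
    "octavus_skill_read",
    "octavus_skill_list",
    "octavus_skill_run",
    "octavus_code_run",
    "octavus_file_write",
    "octavus_file_read",
]

# Tools still exposed when a skill runs in secure mode, per the table above
SECURE_MODE_TOOLS = {"octavus_skill_read", "octavus_skill_list", "octavus_skill_run"}

def available_tools(secure_mode: bool) -> list[str]:
    """Return the skill tools exposed to the LLM for the given mode."""
    if secure_mode:
        return [t for t in ALL_TOOLS if t in SECURE_MODE_TOOLS]
    return list(ALL_TOOLS)
```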
@@ -212,6 +214,17 @@ Main script for generating QR codes...
 
 ````
 
+### Frontmatter Fields
+
+| Field         | Required | Description                                        |
+| ------------- | -------- | -------------------------------------------------- |
+| `name`        | Yes      | Skill slug (lowercase, hyphens)                    |
+| `description` | Yes      | What the skill does (shown to the LLM)             |
+| `version`     | No       | Semantic version string                            |
+| `license`     | No       | License identifier                                 |
+| `author`      | No       | Skill author                                       |
+| `secrets`     | No       | Array of secret declarations (enables secure mode) |
+
 ## Best Practices
 
 ### 1. Clear Descriptions
@@ -337,6 +350,72 @@ steps:
 
 Thread-level `sandboxTimeout` takes priority over agent-level. Maximum: 1 hour (3,600,000 ms).
 
+## Skill Secrets
+
+Skills can declare secrets they need to function. When an organization configures those secrets, the skill runs in **secure mode** with additional isolation.
+
+### Declaring Secrets
+
+Add a `secrets` array to your SKILL.md frontmatter:
+
+```yaml
+---
+name: github
+description: >
+  Run GitHub CLI (gh) commands to manage repos, issues, PRs, and more.
+secrets:
+  - name: GITHUB_TOKEN
+    description: GitHub personal access token with repo access
+    required: true
+  - name: GITHUB_ORG
+    description: Default GitHub organization
+    required: false
+---
+```
+
+Each secret declaration has:
+
+| Field         | Required | Description                                                 |
+| ------------- | -------- | ----------------------------------------------------------- |
+| `name`        | Yes      | Environment variable name (uppercase, e.g., `GITHUB_TOKEN`) |
+| `description` | No       | Explains what this secret is for (shown in the UI)          |
+| `required`    | No       | Whether the secret is required (defaults to `true`)         |
+
+Secret names must match the pattern `^[A-Z_][A-Z0-9_]*$` (uppercase letters, digits, and underscores).
+
+### Configuring Secrets
+
+Organization admins configure secret values through the skill editor in the platform UI. Each organization maintains its own independent set of secrets for each skill.
+
+Secrets are encrypted at rest and only decrypted at execution time.
+
+### Secure Mode
+
+When a skill has secrets configured for the organization, it automatically runs in **secure mode**:
+
+- The skill gets its own **isolated sandbox** (separate from other skills)
+- Secrets are injected as **environment variables** available to all scripts
+- Only `octavus_skill_read`, `octavus_skill_list`, and `octavus_skill_run` are available — `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked
+- Scripts receive input as **JSON via stdin** (using the `input` parameter on `octavus_skill_run`) instead of CLI args
+- All output (stdout/stderr) is **automatically redacted** for secret values before being returned to the LLM
+
+### Writing Scripts for Secure Skills
+
+Scripts in secure skills read input from stdin as JSON and access secrets from environment variables:
+
+```python
+import json
+import os
+import sys
+
+input_data = json.load(sys.stdin)
+token = os.environ.get('GITHUB_TOKEN')
+
+# Use the token and input_data to perform the task
+```
+
+For standard skills (without secrets), scripts receive input as CLI arguments. For secure skills, always use stdin JSON.
+
 ## Security
 
 Skills run in isolated sandbox environments:
@@ -345,6 +424,7 @@ Skills run in isolated sandbox environments:
 - **No persistent storage** (sandbox destroyed after each `next-message` execution)
 - **File output only** via `/output/` directory
 - **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)
+- **Secret redaction** — output from secure skills is automatically scanned for secret values
 
 ## Next Steps
 
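The secret-name rule documented above (`^[A-Z_][A-Z0-9_]*$`) can be checked with a quick sketch. The `validate_secret_name` helper is illustrative, not an SDK function; only the pattern itself comes from the documentation:

```python
import re

# Pattern from the Skill Secrets docs: uppercase letters, digits, underscores,
# and the first character must not be a digit.
SECRET_NAME_RE = re.compile(r"^[A-Z_][A-Z0-9_]*$")

def validate_secret_name(name: str) -> bool:
    """Return True if the name is a valid secret declaration name."""
    return bool(SECRET_NAME_RE.match(name))
```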
@@ -307,6 +307,76 @@ Pattern:
 2. LLM uses skill to analyze/process the data
 3. Generate outputs (files, reports)
 
+## Secure Skills
+
+When a skill declares secrets and an organization configures them, the skill runs in secure mode with its own isolated sandbox.
+
+### Standard vs Secure Skills
+
+| Aspect              | Standard Skills                   | Secure Skills                                       |
+| ------------------- | --------------------------------- | --------------------------------------------------- |
+| **Sandbox**         | Shared with other standard skills | Isolated (one per skill)                            |
+| **Available tools** | All 6 skill tools                 | `skill_read`, `skill_list`, `skill_run` only        |
+| **Script input**    | CLI arguments via `args`          | JSON via stdin (use `input` parameter)              |
+| **Environment**     | No secrets                        | Secrets as env vars                                 |
+| **Output**          | Raw stdout/stderr                 | Redacted (secret values replaced with `[REDACTED]`) |
+
+### Writing Scripts for Secure Skills
+
+Secure skill scripts receive structured input via stdin (JSON) and access secrets from environment variables:
+
+```python
+#!/usr/bin/env python3
+import json
+import os
+import sys
+import subprocess
+
+input_data = json.load(sys.stdin)
+token = os.environ["GITHUB_TOKEN"]
+
+repo = input_data.get("repo", "")
+result = subprocess.run(
+    ["gh", "repo", "view", repo, "--json", "name,description"],
+    capture_output=True, text=True,
+    env={**os.environ, "GH_TOKEN": token}
+)
+
+print(result.stdout)
+```
+
+Key patterns:
+
+- **Read stdin**: `json.load(sys.stdin)` to get the `input` object from the `octavus_skill_run` call
+- **Access secrets**: `os.environ["SECRET_NAME"]` — secrets are injected as env vars
+- **Print output**: Write results to stdout — the LLM sees the (redacted) stdout
+- **Error handling**: Write errors to stderr and exit with non-zero code
+
+### Declaring Secrets in SKILL.md
+
+```yaml
+---
+name: github
+description: >
+  Run GitHub CLI (gh) commands to manage repos, issues, PRs, and more.
+secrets:
+  - name: GITHUB_TOKEN
+    description: GitHub personal access token with repo access
+    required: true
+  - name: GITHUB_ORG
+    description: Default GitHub organization
+    required: false
+---
+```
+
+### Testing Secure Skills Locally
+
+You can test scripts locally by piping JSON to stdin:
+
+```bash
+echo '{"repo": "octavus-ai/agent-sdk"}' | GITHUB_TOKEN=ghp_xxx python scripts/list-issues.py
+```
+
 ## Skill Development Tips
 
 ### Writing SKILL.md
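The key patterns above, including the stderr/exit-code convention, can be combined into a locally testable shape. The `run` helper and its `(stdout, stderr, exit_code)` return signature are an assumption for local testing, not part of the skill runtime; a real script would call it on `sys.stdin.read()` and `os.environ`, then exit with the returned code:

```python
import json

def run(stdin_text: str, env: dict) -> tuple[str, str, int]:
    """Handle one octavus_skill_run invocation: parse stdin JSON, check
    required secrets, and report failures via stderr + non-zero exit code."""
    try:
        input_data = json.loads(stdin_text)
    except json.JSONDecodeError as err:
        return "", f"invalid input JSON: {err}", 1
    if "GITHUB_TOKEN" not in env:
        return "", "missing required secret GITHUB_TOKEN", 1
    repo = input_data.get("repo", "")
    if not repo:
        return "", "missing 'repo' in input", 1
    # Real work (e.g. a gh subprocess call) would go here; stdout is what
    # the LLM sees after redaction.
    return json.dumps({"repo": repo, "ok": True}), "", 0
```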
@@ -373,6 +443,15 @@ The LLM sees these errors and can retry or explain to users.
 - **File output only** via `/output/` directory
 - **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)
 
+### Secret Protection
+
+For skills with configured secrets:
+
+- **Isolated sandbox** — each secure skill gets its own sandbox, preventing cross-skill secret leakage
+- **No arbitrary code** — `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked for secure skills, so only pre-built scripts can execute
+- **Output redaction** — all stdout and stderr are scanned for secret values before being returned to the LLM
+- **Encrypted at rest** — secrets are encrypted using AES-256-GCM and only decrypted at execution time
+
 ### Input Validation
 
 Skills should validate inputs:
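The output-redaction behavior described above can be approximated with a short sketch. This illustrates the idea of replacing secret values with `[REDACTED]` before output reaches the LLM; it is not the platform's actual implementation, and the longest-first ordering is a design assumption to keep overlapping values from leaking substrings:

```python
REDACTED = "[REDACTED]"

def redact(text: str, secret_values: list[str]) -> str:
    """Replace every occurrence of each secret value with a placeholder.
    Longer secrets are replaced first so a shorter secret that is a
    substring of a longer one cannot leave a partial value behind."""
    for value in sorted(secret_values, key=len, reverse=True):
        if value:  # never replace the empty string
            text = text.replace(value, REDACTED)
    return text
```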
@@ -455,14 +534,16 @@ Check execution logs in the platform debug view:
 
 ## Best Practices Summary
 
-1. **Enable only needed skills**
-2. **Choose appropriate display modes**
-3. **Write clear skill descriptions**
-4. **Handle errors gracefully**
-5. **Test skills locally**
-6. **Monitor execution**
-7. **Combine with tools**
-8. **Consider performance**
+1. **Enable only needed skills** — Don't overwhelm the LLM
+2. **Choose appropriate display modes** — Match user experience needs
+3. **Write clear skill descriptions** — Help LLM understand when to use
+4. **Handle errors gracefully** — Provide helpful error messages
+5. **Test skills locally** — Verify before uploading
+6. **Monitor execution** — Check logs for issues
+7. **Combine with tools** — Use tools for data, skills for processing
+8. **Consider performance** — Be aware of timeouts and limits
+9. **Use secrets for credentials** — Declare secrets in frontmatter instead of hardcoding tokens
+10. **Design scripts for stdin input** — Secure skills receive JSON via stdin, so plan for both input methods if the skill might be used in either mode
 
 ## Next Steps
 
@@ -577,7 +577,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
        section: "protocol",
        title: "Skills",
        description: "Using Octavus skills for code execution and specialized capabilities.",
-       content: "\n# Skills\n\n…",
+       content: "\n# Skills\n\n…",
Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. Maximum: 1 hour (3,600,000 ms).\n\n## Skill Secrets\n\nSkills can declare secrets they need to function. When an organization configures those secrets, the skill runs in **secure mode** with additional isolation.\n\n### Declaring Secrets\n\nAdd a `secrets` array to your SKILL.md frontmatter:\n\n```yaml\n---\nname: github\ndescription: >\n Run GitHub CLI (gh) commands to manage repos, issues, PRs, and more.\nsecrets:\n - name: GITHUB_TOKEN\n description: GitHub personal access token with repo access\n required: true\n - name: GITHUB_ORG\n description: Default GitHub organization\n required: false\n---\n```\n\nEach secret declaration has:\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------------------- |\n| `name` | Yes | Environment variable name (uppercase, e.g., `GITHUB_TOKEN`) |\n| `description` | No | Explains what this secret is for (shown in the UI) |\n| `required` | No | Whether the secret is required (defaults to `true`) |\n\nSecret names must match the pattern `^[A-Z_][A-Z0-9_]*$` (uppercase letters, digits, and underscores).\n\n### Configuring 
Secrets\n\nOrganization admins configure secret values through the skill editor in the platform UI. Each organization maintains its own independent set of secrets for each skill.\n\nSecrets are encrypted at rest and only decrypted at execution time.\n\n### Secure Mode\n\nWhen a skill has secrets configured for the organization, it automatically runs in **secure mode**:\n\n- The skill gets its own **isolated sandbox** (separate from other skills)\n- Secrets are injected as **environment variables** available to all scripts\n- Only `octavus_skill_read`, `octavus_skill_list`, and `octavus_skill_run` are available \u2014 `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked\n- Scripts receive input as **JSON via stdin** (using the `input` parameter on `octavus_skill_run`) instead of CLI args\n- All output (stdout/stderr) is **automatically redacted** for secret values before being returned to the LLM\n\n### Writing Scripts for Secure Skills\n\nScripts in secure skills read input from stdin as JSON and access secrets from environment variables:\n\n```python\nimport json\nimport os\nimport sys\n\ninput_data = json.load(sys.stdin)\ntoken = os.environ.get('GITHUB_TOKEN')\n\n# Use the token and input_data to perform the task\n```\n\nFor standard skills (without secrets), scripts receive input as CLI arguments. 
For secure skills, always use stdin JSON.\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n- **Secret redaction** \u2014 output from secure skills is automatically scanned for secret values\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
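The stdin-JSON pattern for secure-skill scripts described in the content above can be sketched as follows. This is an illustrative sketch, not a real skill: the `QR_API_TOKEN` secret name and the `text` input field are assumptions made up for the example.

```python
import json

def handle(stdin_text: str, env: dict) -> tuple[int, str]:
    """Parse the stdin JSON payload, check secrets, and build output.

    Returns (exit_code, output_line). In a real script the caller would
    print the output and exit with the code.
    """
    try:
        input_data = json.loads(stdin_text)  # the `input` object from octavus_skill_run
    except json.JSONDecodeError:
        return 1, "Error: expected a JSON object on stdin"

    token = env.get("QR_API_TOKEN")  # hypothetical secret, injected as an env var
    if not token:
        return 1, "Error: QR_API_TOKEN secret is not configured"

    text = input_data.get("text", "")  # hypothetical input field
    if not text:
        return 1, "Error: 'text' is required"

    # Whatever reaches stdout is what the LLM sees, after secret redaction.
    return 0, json.dumps({"ok": True, "length": len(text)})

# A real script would end with:
#   code, out = handle(sys.stdin.read(), dict(os.environ))
#   print(out); sys.exit(code)
```

Keeping the logic in a pure function like this also makes the local-testing step trivial: the same `handle` can be exercised with literal JSON strings before the script is ever wired to stdin.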
581 581 | excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
582 582 | order: 5
583 583 | },
@@ -613,7 +613,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
613 613 | section: "protocol",
614 614 | title: "Skills Advanced Guide",
615 615 | description: "Best practices and advanced patterns for using Octavus skills.",
616     | -
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. 
Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. 
LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. 
**Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
    616 | +
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. 
Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. 
LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Secure Skills\n\nWhen a skill declares secrets and an organization configures them, the skill runs in secure mode with its own isolated sandbox.\n\n### Standard vs Secure Skills\n\n| Aspect | Standard Skills | Secure Skills |\n| ------------------- | --------------------------------- | --------------------------------------------------- |\n| **Sandbox** | Shared with other standard skills | Isolated (one per skill) |\n| **Available tools** | All 6 skill tools | `skill_read`, `skill_list`, `skill_run` only |\n| **Script input** | CLI arguments via `args` | JSON via stdin (use `input` parameter) |\n| **Environment** | No secrets | Secrets as env vars |\n| **Output** | Raw stdout/stderr | Redacted (secret values replaced with `[REDACTED]`) |\n\n### Writing Scripts for Secure Skills\n\nSecure skill scripts receive structured input via stdin (JSON) and access secrets from environment variables:\n\n```python\n#!/usr/bin/env python3\nimport json\nimport os\nimport sys\nimport subprocess\n\ninput_data = json.load(sys.stdin)\ntoken = os.environ[\"GITHUB_TOKEN\"]\n\nrepo = input_data.get(\"repo\", \"\")\nresult = subprocess.run(\n [\"gh\", \"repo\", \"view\", repo, \"--json\", \"name,description\"],\n capture_output=True, text=True,\n env={**os.environ, \"GH_TOKEN\": token}\n)\n\nprint(result.stdout)\n```\n\nKey patterns:\n\n- **Read stdin**: `json.load(sys.stdin)` to get the `input` object from the `octavus_skill_run` call\n- **Access secrets**: `os.environ[\"SECRET_NAME\"]` \u2014 secrets are injected as env vars\n- **Print output**: Write results to stdout \u2014 the LLM sees the (redacted) stdout\n- **Error handling**: Write errors to stderr and exit with non-zero code\n\n### Declaring Secrets in SKILL.md\n\n```yaml\n---\nname: github\ndescription: >\n Run GitHub CLI (gh) commands to manage repos, issues, PRs, and more.\nsecrets:\n - name: GITHUB_TOKEN\n description: GitHub personal 
access token with repo access\n required: true\n - name: GITHUB_ORG\n description: Default GitHub organization\n required: false\n---\n```\n\n### Testing Secure Skills Locally\n\nYou can test scripts locally by piping JSON to stdin:\n\n```bash\necho '{\"repo\": \"octavus-ai/agent-sdk\"}' | GITHUB_TOKEN=ghp_xxx python scripts/list-issues.py\n```\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Secret Protection\n\nFor skills with configured secrets:\n\n- **Isolated sandbox** \u2014 each secure skill gets its own sandbox, preventing cross-skill secret leakage\n- **No arbitrary code** \u2014 `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked for secure skills, so only pre-built scripts can execute\n- **Output redaction** \u2014 all stdout and stderr are scanned for secret values before being returned to the LLM\n- **Encrypted at rest** \u2014 secrets are encrypted using AES-256-GCM and only decrypted at execution time\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error 
messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** \u2014 Don't overwhelm the LLM\n2. **Choose appropriate display modes** \u2014 Match user experience needs\n3. **Write clear skill descriptions** \u2014 Help LLM understand when to use\n4. **Handle errors gracefully** \u2014 Provide helpful error messages\n5. **Test skills locally** \u2014 Verify before uploading\n6. **Monitor execution** \u2014 Check logs for issues\n7. **Combine with tools** \u2014 Use tools for data, skills for processing\n8. **Consider performance** \u2014 Be aware of timeouts and limits\n9. **Use secrets for credentials** \u2014 Declare secrets in frontmatter instead of hardcoding tokens\n10. **Design scripts for stdin input** \u2014 Secure skills receive JSON via stdin, so plan for both input methods if the skill might be used in either mode\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
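The output-redaction behavior summarized in the standard-vs-secure table above (secret values replaced with `[REDACTED]` before stdout/stderr reach the LLM) can be sketched as below. This is an illustrative reimplementation under stated assumptions, not the platform's actual code.

```python
def redact(output: str, secret_values: list[str]) -> str:
    """Replace every configured secret value in the output with [REDACTED].

    Longer values are replaced first so that a secret which contains
    another secret as a substring does not leave fragments behind.
    """
    for value in sorted(secret_values, key=len, reverse=True):
        if value:  # skip empty/unset secrets
            output = output.replace(value, "[REDACTED]")
    return output
```

Value-based replacement like this is why secure skills block `octavus_code_run`: redaction can only scrub known secret values, so arbitrary code that re-encodes a secret (e.g. base64) could otherwise slip it past the scan.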
617 617 | excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
618 618 | order: 9
619 619 | },
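The best-practices list above recommends designing skill scripts for both input modes: CLI arguments for standard skills and a JSON object on stdin for secure skills. A minimal sketch of that dual-mode pattern, using only the standard library (the `--data` argument name and the pair-wise flag parsing are illustrative, not a platform contract):

```python
import json
import sys


def read_input(argv, stdin):
    """Return a dict of inputs from CLI args or from stdin JSON.

    Standard skills pass inputs as CLI arguments; secure skills pass
    a single JSON object on stdin via `octavus_skill_run`'s `input`
    parameter. Supporting both keeps one script usable in either mode.
    """
    if len(argv) > 1:
        # CLI mode: treat arguments as "--key value" pairs (illustrative).
        args = argv[1:]
        return {args[i].lstrip("-"): args[i + 1] for i in range(0, len(args) - 1, 2)}
    # Secure mode: parse the JSON object from stdin.
    return json.load(stdin)


if __name__ == "__main__":
    import io

    # Simulated invocations for illustration only.
    cli = read_input(["generate.py", "--data", "https://example.com"], sys.stdin)
    secure = read_input(["generate.py"], io.StringIO('{"data": "https://example.com"}'))
    assert cli == secure == {"data": "https://example.com"}
```

Either way the script ends up with the same dict, so the rest of its logic is shared between standard and secure mode.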
@@ -1318,7 +1318,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1318 1318 | section: "protocol",
1319 1319 | title: "Skills",
1320 1320 | description: "Using Octavus skills for code execution and specialized capabilities.",
1321 | -
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. 
If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available. 
Skills work in both interactive agents and workers.\n\n### Interactive Agents\n\nReference skills in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code]\n agentic: true\n```\n\n### Workers and Named Threads\n\nReference skills per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n maxSteps: 10\n```\n\nThis also works for named threads in interactive agents, allowing different threads to have different skills.\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose |\n| -------------------- | --------------------------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) |\n| `octavus_skill_list` | List available scripts in a skill |\n| `octavus_skill_run` | Execute a pre-built script from a skill |\n| `octavus_code_run` | Execute arbitrary Python/Bash code |\n| `octavus_file_write` | Create files in the sandbox |\n| `octavus_file_read` | Read files from the sandbox |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. 
Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. 
When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. 
Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. 
You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. Maximum: 1 hour (3,600,000 ms).\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
1321 | +
content: "\n# Skills\n\nSkills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are self-contained packages with documentation and scripts that run in secure sandboxes.\n\n## Overview\n\nOctavus Skills provide **provider-agnostic** code execution. They work with any LLM provider (Anthropic, OpenAI, Google) by using explicit tool calls and system prompt injection.\n\n### How Skills Work\n\n1. **Skill Definition**: Skills are defined in the protocol's `skills:` section\n2. **Skill Resolution**: Skills are resolved from available sources (see below)\n3. **Sandbox Execution**: When a skill is used, code runs in an isolated sandbox environment\n4. **File Generation**: Files saved to `/output/` are automatically captured and made available for download\n\n### Skill Sources\n\nSkills come from two sources, visible in the Skills tab of your organization:\n\n| Source | Badge in UI | Visibility | Example |\n| ----------- | ----------- | ------------------------------ | ------------------ |\n| **Octavus** | `Octavus` | Available to all organizations | `qr-code` |\n| **Custom** | None | Private to your organization | `my-company-skill` |\n\nWhen you reference a skill in your protocol, Octavus resolves it from your available skills. 
If you create a custom skill with the same name as an Octavus skill, your custom skill takes precedence.\n\n## Defining Skills\n\nDefine skills in the protocol's `skills:` section:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n```\n\n### Skill Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------------------------------------- |\n| `display` | No | How to show in UI: `hidden`, `name`, `description`, `stream` (default: `description`) |\n| `description` | No | Custom description shown to users (overrides skill's built-in description) |\n\n### Display Modes\n\n| Mode | Behavior |\n| ------------- | ------------------------------------------- |\n| `hidden` | Skill usage not shown to users |\n| `name` | Shows skill name while executing |\n| `description` | Shows description while executing (default) |\n| `stream` | Streams progress if available |\n\n## Enabling Skills\n\nAfter defining skills in the `skills:` section, specify which skills are available. 
Skills work in both interactive agents and workers.\n\n### Interactive Agents\n\nReference skills in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n tools: [get-user-account]\n skills: [qr-code]\n agentic: true\n```\n\n### Workers and Named Threads\n\nReference skills per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nsteps:\n Start thread:\n block: start-thread\n thread: worker\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n maxSteps: 10\n```\n\nThis also works for named threads in interactive agents, allowing different threads to have different skills.\n\n## Skill Tools\n\nWhen skills are enabled, the LLM has access to these tools:\n\n| Tool | Purpose | Availability |\n| -------------------- | --------------------------------------- | -------------------- |\n| `octavus_skill_read` | Read skill documentation (SKILL.md) | All skills |\n| `octavus_skill_list` | List available scripts in a skill | All skills |\n| `octavus_skill_run` | Execute a pre-built script from a skill | All skills |\n| `octavus_code_run` | Execute arbitrary Python/Bash code | Standard skills only |\n| `octavus_file_write` | Create files in the sandbox | Standard skills only |\n| `octavus_file_read` | Read files from the sandbox | Standard skills only |\n\nThe LLM learns about available skills through system prompt injection and can use these tools to interact with skills.\n\nSkills that have [secrets](#skill-secrets) configured run in **secure mode**, where only `octavus_skill_read`, `octavus_skill_list`, and `octavus_skill_run` are available. 
See [Skill Secrets](#skill-secrets) below.\n\n## Example: QR Code Generation\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n agentic: true\n\nhandlers:\n user-message:\n Add message:\n block: add-message\n role: user\n prompt: user-message\n input: [USER_MESSAGE]\n\n Respond:\n block: next-message\n```\n\nWhen a user asks \"Create a QR code for octavus.ai\", the LLM will:\n\n1. Recognize the task matches the `qr-code` skill\n2. Call `octavus_skill_read` to learn how to use the skill\n3. Execute code (via `octavus_code_run` or `octavus_skill_run`) to generate the QR code\n4. Save the image to `/output/` in the sandbox\n5. The file is automatically captured and made available for download\n\n## File Output\n\nFiles saved to `/output/` in the sandbox are automatically:\n\n1. **Captured** after code execution\n2. **Uploaded** to S3 storage\n3. **Made available** via presigned URLs\n4. **Included** in the message as file parts\n\nFiles persist across page refreshes and are stored in the session's message history.\n\n## Skill Format\n\nSkills follow the [Agent Skills](https://agentskills.io) open standard:\n\n- `SKILL.md` - Required skill documentation with YAML frontmatter\n- `scripts/` - Optional executable code (Python/Bash)\n- `references/` - Optional documentation loaded as needed\n- `assets/` - Optional files used in outputs (templates, images)\n\n### SKILL.md Format\n\n````yaml\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. 
Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\nversion: 1.0.0\nlicense: MIT\nauthor: Octavus Team\n---\n\n# QR Code Generator\n\n## Overview\n\nThis skill creates QR codes from text data using Python...\n\n## Quick Start\n\nGenerate a QR code with Python:\n\n```python\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n# ... code to generate QR code ...\n````\n\n## Scripts Reference\n\n### scripts/generate.py\n\nMain script for generating QR codes...\n\n````\n\n### Frontmatter Fields\n\n| Field | Required | Description |\n| ------------- | -------- | ------------------------------------------------------ |\n| `name` | Yes | Skill slug (lowercase, hyphens) |\n| `description` | Yes | What the skill does (shown to the LLM) |\n| `version` | No | Semantic version string |\n| `license` | No | License identifier |\n| `author` | No | Skill author |\n| `secrets` | No | Array of secret declarations (enables secure mode) |\n\n## Best Practices\n\n### 1. Clear Descriptions\n\nProvide clear, purpose-driven descriptions:\n\n```yaml\nskills:\n # Good - clear purpose\n qr-code:\n description: Generating QR codes for URLs, contact info, or any text data\n\n # Avoid - vague\n utility:\n description: Does stuff\n````\n\n### 2. When to Use Skills vs Tools\n\n| Use Skills When | Use Tools When |\n| ------------------------ | ---------------------------- |\n| Code execution needed | Simple API calls |\n| File generation | Database queries |\n| Complex calculations | External service integration |\n| Data processing | Authentication required |\n| Provider-agnostic needed | Backend-specific logic |\n\n### 3. Skill Selection\n\nDefine all skills available to this agent in the `skills:` section. 
Then specify which skills are available for the chat thread in `agent.skills`:\n\n```yaml\n# All skills available to this agent (defined once at protocol level)\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n pdf-processor:\n display: description\n description: Processing PDFs\n\n# Skills available for this chat thread\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis] # Skills available for this thread\n```\n\n### 4. Display Modes\n\nChoose appropriate display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n## Comparison: Skills vs Tools vs Provider Options\n\n| Feature | Octavus Skills | External Tools | Provider Tools/Skills |\n| ------------------ | ----------------- | ------------------- | --------------------- |\n| **Execution** | Isolated sandbox | Your backend | Provider servers |\n| **Provider** | Any (agnostic) | N/A | Provider-specific |\n| **Code Execution** | Yes | No | Yes (provider tools) |\n| **File Output** | Yes | No | Yes (provider skills) |\n| **Implementation** | Skill packages | Your code | Built-in |\n| **Cost** | Sandbox + LLM API | Your infrastructure | Included in API |\n\n## Uploading Custom Skills\n\nYou can upload custom skills to your organization:\n\n1. Create a skill following the [Agent Skills](https://agentskills.io) format\n2. Package it as a `.skill` bundle (ZIP file)\n3. Upload via the platform UI\n4. 
Reference by slug in your protocol\n\n```yaml\nskills:\n custom-analysis:\n display: description\n description: Custom analysis tool\n\nagent:\n skills: [custom-analysis]\n```\n\n## Sandbox Timeout\n\nThe default sandbox timeout is 5 minutes. You can configure a custom timeout using `sandboxTimeout` in the agent config or on individual `start-thread` blocks:\n\n```yaml\n# Agent-level timeout (applies to main thread)\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes (in milliseconds)\n```\n\n```yaml\n# Thread-level timeout (overrides agent-level for this thread)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour\n```\n\nThread-level `sandboxTimeout` takes priority over agent-level. Maximum: 1 hour (3,600,000 ms).\n\n## Skill Secrets\n\nSkills can declare secrets they need to function. When an organization configures those secrets, the skill runs in **secure mode** with additional isolation.\n\n### Declaring Secrets\n\nAdd a `secrets` array to your SKILL.md frontmatter:\n\n```yaml\n---\nname: github\ndescription: >\n Run GitHub CLI (gh) commands to manage repos, issues, PRs, and more.\nsecrets:\n - name: GITHUB_TOKEN\n description: GitHub personal access token with repo access\n required: true\n - name: GITHUB_ORG\n description: Default GitHub organization\n required: false\n---\n```\n\nEach secret declaration has:\n\n| Field | Required | Description |\n| ------------- | -------- | ----------------------------------------------------------- |\n| `name` | Yes | Environment variable name (uppercase, e.g., `GITHUB_TOKEN`) |\n| `description` | No | Explains what this secret is for (shown in the UI) |\n| `required` | No | Whether the secret is required (defaults to `true`) |\n\nSecret names must match the pattern `^[A-Z_][A-Z0-9_]*$` (uppercase letters, digits, and underscores).\n\n### Configuring 
Secrets\n\nOrganization admins configure secret values through the skill editor in the platform UI. Each organization maintains its own independent set of secrets for each skill.\n\nSecrets are encrypted at rest and only decrypted at execution time.\n\n### Secure Mode\n\nWhen a skill has secrets configured for the organization, it automatically runs in **secure mode**:\n\n- The skill gets its own **isolated sandbox** (separate from other skills)\n- Secrets are injected as **environment variables** available to all scripts\n- Only `octavus_skill_read`, `octavus_skill_list`, and `octavus_skill_run` are available \u2014 `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked\n- Scripts receive input as **JSON via stdin** (using the `input` parameter on `octavus_skill_run`) instead of CLI args\n- All output (stdout/stderr) is **automatically redacted** for secret values before being returned to the LLM\n\n### Writing Scripts for Secure Skills\n\nScripts in secure skills read input from stdin as JSON and access secrets from environment variables:\n\n```python\nimport json\nimport os\nimport sys\n\ninput_data = json.load(sys.stdin)\ntoken = os.environ.get('GITHUB_TOKEN')\n\n# Use the token and input_data to perform the task\n```\n\nFor standard skills (without secrets), scripts receive input as CLI arguments. 
For secure skills, always use stdin JSON.\n\n## Security\n\nSkills run in isolated sandbox environments:\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n- **Secret redaction** \u2014 output from secure skills is automatically scanned for secret values\n\n## Next Steps\n\n- [Agent Config](/docs/protocol/agent-config) \u2014 Configuring skills in agent settings\n- [Provider Options](/docs/protocol/provider-options) \u2014 Anthropic's built-in skills\n- [Skills Advanced Guide](/docs/protocol/skills-advanced) \u2014 Best practices and advanced patterns\n",
1322 1322 | excerpt: "Skills Skills are knowledge packages that enable agents to execute code and generate files in isolated sandbox environments. Unlike external tools (which you implement in your backend), skills are...",
1323 1323 | order: 5
1324 1324 | },
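The Skills page above specifies the secure-mode contract: scripts receive a JSON object on stdin, read declared secrets from environment variables, and print helpful errors the LLM can act on. A minimal sketch of a secure-skill script under that contract — the `GITHUB_TOKEN` name matches the frontmatter example in the docs, while the `repo` input field and the validation details are illustrative:

```python
import json
import os
import sys


def main(stdin=sys.stdin, environ=os.environ):
    # Secure skills receive a single JSON object on stdin.
    try:
        payload = json.load(stdin)
    except json.JSONDecodeError as e:
        print(f"Error: expected JSON on stdin - {e}", file=sys.stderr)
        return 1
    if not isinstance(payload, dict):
        print("Error: expected a JSON object on stdin", file=sys.stderr)
        return 1

    # Secrets declared in SKILL.md frontmatter arrive as environment variables.
    token = environ.get("GITHUB_TOKEN")
    if not token:
        print("Error: GITHUB_TOKEN is not configured", file=sys.stderr)
        return 1

    repo = payload.get("repo")
    if not repo:
        print("Error: 'repo' is required", file=sys.stderr)
        return 1

    # ... use `token` to call the external API here (omitted) ...
    print(json.dumps({"ok": True, "repo": repo}))
    return 0


if __name__ == "__main__":
    import io

    # Simulated invocation for illustration; a real run would use
    # the actual stdin and the sandbox's environment.
    assert main(io.StringIO('{"repo": "octavus/docs"}'), {"GITHUB_TOKEN": "example"}) == 0
```

Printing errors and exiting non-zero (rather than raising) matches the error-handling guidance in the docs: the redacted stdout/stderr is what the LLM sees, so it can retry or explain the failure to the user.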
@@ -1354,7 +1354,7 @@ See [Streaming Events](/docs/server-sdk/streaming#event-types) for the full list
1354 1354 | section: "protocol",
1355 1355 | title: "Skills Advanced Guide",
1356 1356 | description: "Best practices and advanced patterns for using Octavus skills.",
1357 | -
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. 
Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. 
LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. 
**Enable only needed skills** - Don't overwhelm the LLM\n2. **Choose appropriate display modes** - Match user experience needs\n3. **Write clear skill descriptions** - Help LLM understand when to use\n4. **Handle errors gracefully** - Provide helpful error messages\n5. **Test skills locally** - Verify before uploading\n6. **Monitor execution** - Check logs for issues\n7. **Combine with tools** - Use tools for data, skills for processing\n8. **Consider performance** - Be aware of timeouts and limits\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
1357 | +
content: "\n# Skills Advanced Guide\n\nThis guide covers advanced patterns and best practices for using Octavus skills in your agents.\n\n## When to Use Skills\n\nSkills are ideal for:\n\n- **Code execution** - Running Python/Bash scripts\n- **File generation** - Creating images, PDFs, reports\n- **Data processing** - Analyzing, transforming, or visualizing data\n- **Provider-agnostic needs** - Features that should work with any LLM\n\nUse external tools instead when:\n\n- **Simple API calls** - Database queries, external services\n- **Authentication required** - Accessing user-specific resources\n- **Backend integration** - Tight coupling with your infrastructure\n\n## Skill Selection Strategy\n\n### Defining Available Skills\n\nDefine all skills in the `skills:` section, then reference which skills are available where they're used:\n\n**Interactive agents** \u2014 reference in `agent.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n pdf-processor:\n display: description\n description: Processing PDFs\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\n**Workers and named threads** \u2014 reference per-thread in `start-thread.skills`:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data\n\nsteps:\n Start analysis:\n block: start-thread\n thread: analysis\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code, data-analysis]\n maxSteps: 10\n```\n\n### Match Skills to Use Cases\n\nDifferent threads can have different skills. 
Define all skills at the protocol level, then scope them to each thread:\n\n```yaml\nskills:\n qr-code:\n display: description\n description: Generating QR codes\n data-analysis:\n display: description\n description: Analyzing data and generating reports\n visualization:\n display: description\n description: Creating charts and visualizations\n\nagent:\n model: anthropic/claude-sonnet-4-5\n system: system\n skills: [qr-code]\n```\n\nFor a data analysis thread, you would specify `[data-analysis, visualization]` in `agent.skills` or in a `start-thread` block's `skills` field.\n\n## Display Mode Strategy\n\nChoose display modes based on user experience:\n\n```yaml\nskills:\n # Background processing - hide from user\n data-analysis:\n display: hidden\n\n # User-facing generation - show description\n qr-code:\n display: description\n\n # Interactive progress - stream updates\n report-generation:\n display: stream\n```\n\n### Guidelines\n\n- **`hidden`**: Background work that doesn't need user awareness\n- **`description`**: User-facing operations (default)\n- **`name`**: Quick operations where name is sufficient\n- **`stream`**: Long-running operations where progress matters\n\n## System Prompt Integration\n\nSkills are automatically injected into the system prompt. The LLM learns:\n\n1. **Available skills** - List of enabled skills with descriptions\n2. **How to use skills** - Instructions for using skill tools\n3. **Tool reference** - Available skill tools (`octavus_skill_read`, `octavus_code_run`, etc.)\n\nYou don't need to manually document skills in your system prompt. 
However, you can guide the LLM:\n\n```markdown\n<!-- prompts/system.md -->\n\nYou are a helpful assistant that can generate QR codes.\n\n## When to Generate QR Codes\n\nGenerate QR codes when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Share WiFi credentials\n- Create scannable data\n\nUse the qr-code skill for all QR code generation tasks.\n```\n\n## Error Handling\n\nSkills handle errors gracefully:\n\n```yaml\n# Skill execution errors are returned to the LLM\n# The LLM can retry or explain the error to the user\n```\n\nCommon error scenarios:\n\n1. **Invalid skill slug** - Skill not found in organization\n2. **Code execution errors** - Syntax errors, runtime exceptions\n3. **Missing dependencies** - Required packages not installed\n4. **File I/O errors** - Permission issues, invalid paths\n\nThe LLM receives error messages and can:\n\n- Retry with corrected code\n- Explain errors to users\n- Suggest alternatives\n\n## File Output Patterns\n\n### Single File Output\n\n```python\n# Save single file to /output/\nimport qrcode\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\nqr = qrcode.QRCode()\nqr.add_data('https://example.com')\nimg = qr.make_image()\nimg.save(f'{output_dir}/qrcode.png')\n```\n\n### Multiple Files\n\n```python\n# Save multiple files\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Generate multiple outputs\nfor i in range(3):\n filename = f'{output_dir}/output_{i}.png'\n # ... generate file ...\n```\n\n### Structured Output\n\n```python\n# Save structured data + files\nimport json\nimport os\n\noutput_dir = os.environ.get('OUTPUT_DIR', '/output')\n\n# Save metadata\nmetadata = {\n 'files': ['chart.png', 'data.csv'],\n 'summary': 'Analysis complete'\n}\nwith open(f'{output_dir}/metadata.json', 'w') as f:\n json.dump(metadata, f)\n\n# Save actual files\n# ... 
generate chart.png and data.csv ...\n```\n\n## Performance Considerations\n\n### Lazy Initialization\n\nSandboxes are created only when a skill tool is first called:\n\n```yaml\nagent:\n skills: [qr-code] # Sandbox created on first skill tool call\n```\n\nThis means:\n\n- No cost if skills aren't used\n- Fast startup (no sandbox creation delay)\n- Each `next-message` execution gets its own sandbox with only the skills it needs\n\n### Timeout Limits\n\nSandboxes default to a 5-minute timeout. Configure `sandboxTimeout` on the agent config or per thread:\n\n```yaml\n# Agent-level\nagent:\n model: anthropic/claude-sonnet-4-5\n skills: [data-analysis]\n sandboxTimeout: 1800000 # 30 minutes\n```\n\n```yaml\n# Thread-level (overrides agent-level)\nsteps:\n Start thread:\n block: start-thread\n thread: analysis\n skills: [data-analysis]\n sandboxTimeout: 3600000 # 1 hour for long-running analysis\n```\n\nThread-level `sandboxTimeout` takes priority. Maximum: 1 hour (3,600,000 ms).\n\n### Sandbox Lifecycle\n\nEach `next-message` execution gets its own sandbox:\n\n- **Scoped** - Only contains the skills available to that thread\n- **Isolated** - Interactive agents and workers don't share sandboxes\n- **Resilient** - If a sandbox expires, it's transparently recreated\n- **Cleaned up** - Sandbox destroyed when the LLM call completes\n\n## Combining Skills with Tools\n\nSkills and tools can work together:\n\n```yaml\ntools:\n get-user-data:\n description: Fetch user data from database\n parameters:\n userId: { type: string }\n\nskills:\n data-analysis:\n display: description\n description: Analyzing data\n\nagent:\n tools: [get-user-data]\n skills: [data-analysis]\n agentic: true\n\nhandlers:\n analyze-user:\n Get user data:\n block: tool-call\n tool: get-user-data\n input:\n userId: USER_ID\n output: USER_DATA\n\n Analyze:\n block: next-message\n # LLM can use data-analysis skill with USER_DATA\n```\n\nPattern:\n\n1. Fetch data via tool (from your backend)\n2. 
LLM uses skill to analyze/process the data\n3. Generate outputs (files, reports)\n\n## Secure Skills\n\nWhen a skill declares secrets and an organization configures them, the skill runs in secure mode with its own isolated sandbox.\n\n### Standard vs Secure Skills\n\n| Aspect | Standard Skills | Secure Skills |\n| ------------------- | --------------------------------- | --------------------------------------------------- |\n| **Sandbox** | Shared with other standard skills | Isolated (one per skill) |\n| **Available tools** | All 6 skill tools | `skill_read`, `skill_list`, `skill_run` only |\n| **Script input** | CLI arguments via `args` | JSON via stdin (use `input` parameter) |\n| **Environment** | No secrets | Secrets as env vars |\n| **Output** | Raw stdout/stderr | Redacted (secret values replaced with `[REDACTED]`) |\n\n### Writing Scripts for Secure Skills\n\nSecure skill scripts receive structured input via stdin (JSON) and access secrets from environment variables:\n\n```python\n#!/usr/bin/env python3\nimport json\nimport os\nimport sys\nimport subprocess\n\ninput_data = json.load(sys.stdin)\ntoken = os.environ[\"GITHUB_TOKEN\"]\n\nrepo = input_data.get(\"repo\", \"\")\nresult = subprocess.run(\n [\"gh\", \"repo\", \"view\", repo, \"--json\", \"name,description\"],\n capture_output=True, text=True,\n env={**os.environ, \"GH_TOKEN\": token}\n)\n\nprint(result.stdout)\n```\n\nKey patterns:\n\n- **Read stdin**: `json.load(sys.stdin)` to get the `input` object from the `octavus_skill_run` call\n- **Access secrets**: `os.environ[\"SECRET_NAME\"]` \u2014 secrets are injected as env vars\n- **Print output**: Write results to stdout \u2014 the LLM sees the (redacted) stdout\n- **Error handling**: Write errors to stderr and exit with non-zero code\n\n### Declaring Secrets in SKILL.md\n\n```yaml\n---\nname: github\ndescription: >\n Run GitHub CLI (gh) commands to manage repos, issues, PRs, and more.\nsecrets:\n - name: GITHUB_TOKEN\n description: GitHub personal 
access token with repo access\n required: true\n - name: GITHUB_ORG\n description: Default GitHub organization\n required: false\n---\n```\n\n### Testing Secure Skills Locally\n\nYou can test scripts locally by piping JSON to stdin:\n\n```bash\necho '{\"repo\": \"octavus-ai/agent-sdk\"}' | GITHUB_TOKEN=ghp_xxx python scripts/list-issues.py\n```\n\n## Skill Development Tips\n\n### Writing SKILL.md\n\nFocus on **when** and **how** to use the skill:\n\n```markdown\n---\nname: qr-code\ndescription: >\n Generate QR codes from text, URLs, or data. Use when the user needs to create\n a QR code for any purpose - sharing links, contact information, WiFi credentials,\n or any text data that should be scannable.\n---\n\n# QR Code Generator\n\n## When to Use\n\nUse this skill when users want to:\n\n- Share URLs easily\n- Provide contact information\n- Create scannable data\n\n## Quick Start\n\n[Clear examples of how to use the skill]\n```\n\n### Script Organization\n\nOrganize scripts logically:\n\n```\nskill-name/\n\u251C\u2500\u2500 SKILL.md\n\u2514\u2500\u2500 scripts/\n \u251C\u2500\u2500 generate.py # Main script\n \u251C\u2500\u2500 utils.py # Helper functions\n \u2514\u2500\u2500 requirements.txt # Dependencies\n```\n\n### Error Messages\n\nProvide helpful error messages:\n\n```python\ntry:\n # ... 
code ...\nexcept ValueError as e:\n print(f\"Error: Invalid input - {e}\")\n sys.exit(1)\n```\n\nThe LLM sees these errors and can retry or explain to users.\n\n## Security Considerations\n\n### Sandbox Isolation\n\n- **No network access** (unless explicitly configured)\n- **No persistent storage** (sandbox destroyed after each `next-message` execution)\n- **File output only** via `/output/` directory\n- **Time limits** enforced (5-minute default, configurable via `sandboxTimeout`)\n\n### Secret Protection\n\nFor skills with configured secrets:\n\n- **Isolated sandbox** \u2014 each secure skill gets its own sandbox, preventing cross-skill secret leakage\n- **No arbitrary code** \u2014 `octavus_code_run`, `octavus_file_write`, and `octavus_file_read` are blocked for secure skills, so only pre-built scripts can execute\n- **Output redaction** \u2014 all stdout and stderr are scanned for secret values before being returned to the LLM\n- **Encrypted at rest** \u2014 secrets are encrypted using AES-256-GCM and only decrypted at execution time\n\n### Input Validation\n\nSkills should validate inputs:\n\n```python\nimport sys\n\nif not data:\n print(\"Error: Data is required\")\n sys.exit(1)\n\nif len(data) > 1000:\n print(\"Error: Data too long (max 1000 characters)\")\n sys.exit(1)\n```\n\n### Resource Limits\n\nBe aware of:\n\n- **File size limits** - Large files may fail to upload\n- **Execution time** - Sandbox timeout (5-minute default, 1-hour maximum)\n- **Memory limits** - Sandbox environment constraints\n\n## Debugging Skills\n\n### Check Skill Documentation\n\nThe LLM can read skill docs:\n\n```python\n# LLM calls octavus_skill_read to see skill instructions\n```\n\n### Test Locally\n\nTest skills before uploading:\n\n```bash\n# Test skill locally\npython scripts/generate.py --data \"test\"\n```\n\n### Monitor Execution\n\nCheck execution logs in the platform debug view:\n\n- Tool calls and arguments\n- Code execution results\n- File outputs\n- Error 
messages\n\n## Common Patterns\n\n### Pattern 1: Generate and Return\n\n```yaml\n# User asks for QR code\n# LLM generates QR code\n# File automatically available for download\n```\n\n### Pattern 2: Analyze and Report\n\n```yaml\n# User provides data\n# LLM analyzes with skill\n# Generates report file\n# Returns summary + file link\n```\n\n### Pattern 3: Transform and Save\n\n```yaml\n# User uploads file (via tool)\n# LLM processes with skill\n# Generates transformed file\n# Returns new file link\n```\n\n## Best Practices Summary\n\n1. **Enable only needed skills** \u2014 Don't overwhelm the LLM\n2. **Choose appropriate display modes** \u2014 Match user experience needs\n3. **Write clear skill descriptions** \u2014 Help LLM understand when to use\n4. **Handle errors gracefully** \u2014 Provide helpful error messages\n5. **Test skills locally** \u2014 Verify before uploading\n6. **Monitor execution** \u2014 Check logs for issues\n7. **Combine with tools** \u2014 Use tools for data, skills for processing\n8. **Consider performance** \u2014 Be aware of timeouts and limits\n9. **Use secrets for credentials** \u2014 Declare secrets in frontmatter instead of hardcoding tokens\n10. **Design scripts for stdin input** \u2014 Secure skills receive JSON via stdin, so plan for both input methods if the skill might be used in either mode\n\n## Next Steps\n\n- [Skills](/docs/protocol/skills) - Basic skills documentation\n- [Agent Config](/docs/protocol/agent-config) - Configuring skills\n- [Tools](/docs/protocol/tools) - External tools integration\n",
1358 1358 |
excerpt: "Skills Advanced Guide This guide covers advanced patterns and best practices for using Octavus skills in your agents. When to Use Skills Skills are ideal for: - Code execution - Running Python/Bash...",
1359 1359 |
order: 9
1360 1360 |
},
@@ -1506,4 +1506,4 @@ export {
1506 1506 |
getDocSlugs,
1507 1507 |
getSectionBySlug
1508 1508 |
};
1509 | -
//# sourceMappingURL=chunk-
1509 | +
//# sourceMappingURL=chunk-4ABNR2ZK.js.map