@praveencs/agent 0.9.25 → 0.9.26

@@ -0,0 +1,748 @@
1
+ # Agent Runtime — Complete Documentation
2
+
3
+ > Everything you need to know to install, configure, and use Agent Runtime effectively.
4
+
5
+ ---
6
+
7
+ ## Table of Contents
8
+
9
+ 1. [Getting Started](#getting-started)
10
+ 2. [Core Concepts](#core-concepts)
11
+ 3. [Interactive Mode](#interactive-mode)
12
+ 4. [Goals & Tasks](#goals--tasks)
13
+ 5. [The Daemon](#the-daemon)
14
+ 6. [Credential Vault](#credential-vault)
15
+ 7. [Skills](#skills)
16
+ 8. [Commands](#commands)
17
+ 9. [Scripts](#scripts)
18
+ 10. [Plugins](#plugins)
19
+ 11. [Hooks](#hooks)
20
+ 12. [Agent Studio](#agent-studio)
21
+ 13. [Memory System](#memory-system)
22
+ 14. [LLM Configuration](#llm-configuration)
23
+ 15. [Tool Reference](#tool-reference)
24
+ 16. [REST API Reference](#rest-api-reference)
25
+ 17. [Troubleshooting](#troubleshooting)
26
+
27
+ ---
28
+
29
+ ## Getting Started
30
+
31
+ ### Prerequisites
32
+
33
+ - **Node.js 18+** — Required for running the agent
34
+ - **npm** — Comes with Node.js
35
+ - **An LLM API key** — OpenAI, Anthropic, or Azure OpenAI (or Ollama for local)
36
+
37
+ ### Installation
38
+
39
+ ```bash
40
+ npm install -g @praveencs/agent
41
+ ```
42
+
43
+ ### Initialize Your Project
44
+
45
+ Navigate to any project directory and run:
46
+
47
+ ```bash
48
+ cd your-project
49
+ agent init
50
+ ```
51
+
52
+ This creates a `.agent/` directory containing:
53
+ - `config.json` — Agent configuration (LLM provider, model, etc.)
54
+ - `skills/` — Your custom skill definitions
55
+ - `commands/` — Lightweight command templates
56
+ - `scripts/` — Automation scripts
57
+ - `hooks/` — Lifecycle hooks
58
+ - `plugins/` — Installed plugin bundles
59
+
60
+ ### Set Your API Key
61
+
62
+ ```bash
63
+ # Option 1: Environment variable
64
+ export OPENAI_API_KEY=sk-your-key-here
65
+
66
+ # Option 2: .env file in your project
67
+ echo "OPENAI_API_KEY=sk-your-key-here" >> .env
68
+
69
+ # Option 3: Store in the encrypted vault
70
+ agent studio # → Open Credentials page → Add Secret
71
+ ```
72
+
73
+ ### Verify Installation
74
+
75
+ ```bash
76
+ agent doctor
77
+ ```
78
+
79
+ This checks your Node version, LLM connectivity, and project configuration.
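The kind of preflight checks `agent doctor` runs can be sketched as follows. This is illustrative only, not the tool's actual implementation; the list of key variables it inspects is an assumption.

```typescript
// Illustrative sketch of a doctor-style preflight check (not the real one).
const major = Number(process.versions.node.split(".")[0]);
if (major < 18) {
  console.error(`Node ${process.versions.node} found; Node 18+ is required`);
  process.exit(1);
}
// Which env vars are checked is an assumption for this sketch.
const keyVars = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "AZURE_OPENAI_API_KEY"];
const hasKey = keyVars.some((k) => !!process.env[k]);
console.log(`Node ${major} OK; LLM key ${hasKey ? "found" : "missing"}`);
```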
80
+
81
+ ---
82
+
83
+ ## Core Concepts
84
+
85
+ ### What Is an Agent?
86
+
87
+ Unlike a chatbot where you send messages and wait for replies, an **agent** is proactive:
88
+
89
+ 1. **You give it a goal** — "Build a dashboard with system monitoring"
90
+ 2. **It plans** — The LLM decomposes your goal into subtasks with dependencies
91
+ 3. **It executes** — Each task uses tools (file system, shell, HTTP, etc.)
92
+ 4. **It adapts** — If something fails, it retries or re-plans
93
+ 5. **It learns** — Successful patterns are saved to memory
94
+
95
+ ### The Agent Components
96
+
97
+ | Component | Role |
98
+ |-----------|------|
99
+ | **CLI / REPL** | Your interface — type goals, run commands |
100
+ | **LLM Router** | Routes requests to OpenAI/Anthropic/Azure/Ollama |
101
+ | **Goal Store** | SQLite database of goals, tasks, and progress |
102
+ | **Daemon** | Background service that processes the task queue |
103
+ | **Tool Registry** | File system, shell, git, HTTP, secrets, scripts |
104
+ | **Policy Engine** | Controls what tools can execute, with approval gates |
105
+ | **Memory Store** | Persistent facts, learnings, and context |
106
+ | **Credential Vault** | Encrypted storage for API keys and tokens |
107
+ | **Skill/Command/Script Loaders** | Load extensible capabilities |
108
+ | **Plugin Loader** | Bundles of skills + commands + scripts + hooks |
109
+
110
+ ### The Execution Flow
111
+
112
+ ```
113
+ Goal → Decompose → Task Queue → Daemon picks task → Load tools + capabilities
114
+ → Build prompt (inject context, capabilities, dependency outputs)
115
+ → LLM decides which tools to call → Execute tools → Complete/Retry/Re-plan
116
+ ```
117
+
118
+ ---
119
+
120
+ ## Interactive Mode
121
+
122
+ The **recommended** way to use Agent Runtime:
123
+
124
+ ```bash
125
+ agent
126
+ ```
127
+
128
+ You enter a conversational REPL where you can:
129
+ - Type natural language goals
130
+ - Use `/slash` commands
131
+ - Have multi-turn conversations with context memory
132
+
133
+ ```
134
+ 🤖 Agent Runtime v0.9.25
135
+ Project: my-app │ Model: gpt-4o │ 3 skills │ 2 commands
136
+
137
+ > Refactor the auth module to use JWT tokens
138
+
139
+ ⚡ fs.read(src/auth/handler.ts) ✓
140
+ ⚡ fs.write(src/auth/jwt.ts) ✓
141
+ ⚡ fs.write(src/auth/middleware.ts) ✓
142
+ ⚡ cmd.run(npm test) ✓
143
+
144
+ ✓ Done (12.3s)
145
+
146
+ > Now add refresh token support ← Context is preserved!
147
+ ```
148
+
149
+ ### Slash Commands
150
+
151
+ | Command | Action |
152
+ |---------|--------|
153
+ | `/help` | Show all available commands |
154
+ | `/skills` | List installed skills with success metrics |
155
+ | `/commands` | List available command templates |
156
+ | `/scripts` | List available automation scripts |
157
+ | `/hooks` | Show registered lifecycle hooks |
158
+ | `/model` | Display current LLM provider and model |
159
+ | `/compact` | Summarize conversation to free context window |
160
+ | `/clear` | Clear the terminal |
161
+ | `/exit` | Exit the REPL |
162
+ | `/deploy-staging` | Custom commands are auto-available as slash commands |
163
+
164
+ ---
165
+
166
+ ## Goals & Tasks
167
+
168
+ ### Creating Goals
169
+
170
+ ```bash
171
+ # From CLI
172
+ agent goal add "Build a REST API for user management" --priority 1
173
+
174
+ # From interactive mode
175
+ > Create a complete CRUD API for users with authentication
176
+
177
+ # From Studio
178
+ # → Goals & Tasks page → "New Goal" button
179
+ ```
180
+
181
+ ### Auto-Decomposition
182
+
183
+ When you create a goal, the LLM automatically decomposes it into subtasks:
184
+
185
+ ```
186
+ Goal: "Build a REST API for user management"
187
+ ├── Task 1: Set up Express server and project structure
188
+ ├── Task 2: Create User model with database schema (depends: 1)
189
+ ├── Task 3: Implement CRUD endpoints (depends: 2)
190
+ ├── Task 4: Add authentication middleware (depends: 1)
191
+ ├── Task 5: Write integration tests (depends: 3, 4)
192
+ └── Task 6: Create API documentation (depends: 3)
193
+ ```
194
+
195
+ Tasks have:
196
+ - **Dependencies** — Won't start until prerequisites complete
197
+ - **Retries** — Automatically retry up to 3 times on failure
198
+ - **Output chaining** — Each task's output is available to downstream tasks
199
+ - **Re-decomposition** — If a task permanently fails, the LLM suggests alternatives
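The dependency gating described above can be sketched with a hypothetical task record. The field names here are illustrative, not the agent's actual schema:

```typescript
// Hypothetical shape of a task record in the goal store (illustrative names).
interface Task {
  id: number;
  title: string;
  dependsOn: number[]; // prerequisite task ids
  retriesLeft: number; // defaults to 3
  output?: string;     // chained into downstream task prompts
  status: "pending" | "running" | "done" | "failed";
}

// A pending task becomes runnable once every dependency is done.
function isRunnable(task: Task, all: Task[]): boolean {
  return (
    task.status === "pending" &&
    task.dependsOn.every((id) => all.find((t) => t.id === id)?.status === "done")
  );
}
```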
200
+
201
+ ### Monitoring Progress
202
+
203
+ ```bash
204
+ agent goal list # See all goals with progress
205
+ agent goal status 1 # Detailed task breakdown
206
+ agent daemon logs # See what the daemon is doing
207
+ ```
208
+
209
+ ---
210
+
211
+ ## The Daemon
212
+
213
+ The daemon is a background service that autonomously processes your task queue.
214
+
215
+ ### Starting and Stopping
216
+
217
+ ```bash
218
+ agent daemon start # Launch the daemon
219
+ agent daemon stop # Graceful shutdown
220
+ agent daemon status # Health check
221
+ agent daemon logs # View execution log
222
+ ```
223
+
224
+ ### What It Does
225
+
226
+ Every 2 minutes (configurable), the daemon:
227
+
228
+ 1. **Checks for goals** that need decomposition
229
+ 2. **Picks up pending tasks** respecting dependencies
230
+ 3. **Runs up to 3 tasks in parallel** (configurable)
231
+ 4. **For each task:**
232
+ - Loads ALL project capabilities (skills, commands, scripts, plugins, credentials)
233
+ - Builds a rich prompt with context from dependency outputs
234
+ - Lets the LLM decide which tools to call
235
+ - Executes tools and reports results
236
+ 5. **On failure:**
237
+ - Retries up to 3 times
238
+ - If still failing, triggers LLM re-decomposition
239
+ - Creates alternative subtasks to work around the problem
240
+ 6. **On completion:**
241
+ - Saves output for downstream tasks
242
+ - Saves to memory for future context
243
+ - Triggers processing of newly unblocked tasks
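The retry-then-re-decompose policy from step 5 can be sketched like this (illustrative; the real scheduler is internal to the daemon):

```typescript
// Sketch of the retry policy: up to 3 attempts, then hand back to the caller,
// which would trigger LLM re-decomposition.
async function runWithRetries(
  run: () => Promise<string>,
  maxAttempts = 3,
): Promise<{ ok: boolean; output?: string; attempts: number }> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { ok: true, output: await run(), attempts: attempt };
    } catch {
      // swallow the error and retry; the final failure falls through
    }
  }
  return { ok: false, attempts: maxAttempts };
}
```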
244
+
245
+ ### Daemon Prompt Context
246
+
247
+ When executing a task, the daemon tells the LLM about:
248
+
249
+ ```markdown
250
+ ## Available Capabilities
251
+
252
+ ### 🔑 Credentials (use secrets.get)
253
+ Available keys: GITHUB_TOKEN, APIFY_TOKEN, OPENAI_API_KEY
254
+
255
+ ### 📜 Scripts (use script.run)
256
+ - update-dashboard: "Regenerates the dashboard"
257
+ - health-check: "Checks system health"
258
+
259
+ ### 💻 Commands (use command.execute)
260
+ - deploy-staging: "Deploy to staging environment"
261
+
262
+ ### 🔌 Plugins
263
+ - @agent/credentials: vault + capture (built-in)
264
+ ```
265
+
266
+ This means the LLM **knows what's available** and reuses existing scripts/commands instead of recreating them.
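Assembling that capabilities section can be sketched as plain string building. This is illustrative; the daemon's real prompt construction is internal:

```typescript
// Illustrative sketch of building the capabilities block of a task prompt.
function capabilitiesSection(
  secretKeys: string[],
  scripts: Record<string, string>, // name -> description
): string {
  const lines = ["## Available Capabilities", ""];
  if (secretKeys.length > 0) {
    lines.push("### Credentials (use secrets.get)");
    lines.push(`Available keys: ${secretKeys.join(", ")}`, "");
  }
  if (Object.keys(scripts).length > 0) {
    lines.push("### Scripts (use script.run)");
    for (const [name, desc] of Object.entries(scripts)) {
      lines.push(`- ${name}: "${desc}"`);
    }
  }
  return lines.join("\n");
}
```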
267
+
268
+ ---
269
+
270
+ ## Credential Vault
271
+
272
+ ### Overview
273
+
274
+ The agent has a secure credential store for API keys, tokens, and passwords. Credentials are:
275
+
276
+ - **Encrypted at rest** with AES-256-GCM
277
+ - **Machine-specific** — encryption key derived from hostname + project path
278
+ - **Auto-detected** from `.env` files
279
+ - **Never logged** — values are masked in all daemon output
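Machine-specific AES-256-GCM encryption of this kind can be sketched with Node's built-in `crypto` module. The actual key-derivation inputs, salt, and vault record layout are internal to the agent; the ones below are assumptions:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";
import { hostname } from "node:os";

// Assumption: key derived from hostname + project path with a fixed salt.
function deriveKey(projectPath: string): Buffer {
  return scryptSync(`${hostname()}:${projectPath}`, "agent-vault-salt", 32);
}

// Record format (iv.tag.ciphertext, base64) is an assumption for this sketch.
function encrypt(plain: string, projectPath: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", deriveKey(projectPath), iv);
  const data = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), data].map((b) => b.toString("base64")).join(".");
}

function decrypt(record: string, projectPath: string): string {
  const [iv, tag, data] = record.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", deriveKey(projectPath), iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}
```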
280
+
281
+ ### Storage Priority
282
+
283
+ When the LLM calls `secrets.get("GITHUB_TOKEN")`:
284
+
285
+ 1. ✅ **Encrypted vault** (`.agent/vault.json`) — checked first
286
+ 2. ✅ **`.env` file** — automatic fallback
287
+ 3. ✅ **Environment variables** — system-level fallback
288
+ 4. ❌ **Not found** — triggers interactive capture (via Studio)
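The lookup order above amounts to a simple fallback chain, sketched here for illustration:

```typescript
// Sketch of the vault -> .env -> environment lookup order (illustrative).
function lookupSecret(
  key: string,
  vault: Record<string, string>,
  dotenv: Record<string, string>,
): string | undefined {
  return vault[key] ?? dotenv[key] ?? process.env[key];
}
```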
289
+
290
+ ### Managing Credentials
291
+
292
+ **Studio UI (recommended):**
293
+ 1. Open Agent Studio (`agent studio`)
294
+ 2. Click "Credentials" in the sidebar
295
+ 3. Click "Add Secret"
296
+ 4. Enter key name and value
297
+ 5. Stored encrypted on disk
298
+
299
+ **`.env` file:**
300
+ ```env
301
+ GITHUB_TOKEN=ghp_xxxx
302
+ OPENAI_API_KEY=sk-xxxx
303
+ APIFY_TOKEN=apify_api_xxxx
304
+ SMTP_HOST=smtp.gmail.com
305
+ ```
306
+
307
+ **From the LLM (during task execution):**
308
+ ```
309
+ secrets.get({ key: "GITHUB_TOKEN" }) → returns the value
310
+ secrets.list() → returns key names
311
+ secrets.set({ key: "NEW_KEY", value: "xxx" }) → stores encrypted
312
+ ```
313
+
314
+ ---
315
+
316
+ ## Skills
317
+
318
+ Skills are reusable AI capabilities. Each skill has:
319
+ - `skill.json` — Manifest with name, version, tools, permissions
320
+ - `prompt.md` — The LLM prompt that defines behavior
321
+
322
+ ### Structure
323
+
324
+ ```
325
+ .agent/skills/deploy-aws/
326
+ ├── skill.json
327
+ └── prompt.md
328
+ ```
329
+
330
+ **skill.json:**
331
+ ```json
332
+ {
333
+ "name": "deploy-aws",
334
+ "version": "1.0.0",
335
+ "description": "Deploy application to AWS",
336
+ "inputs": {
337
+ "region": { "type": "string", "required": true }
338
+ },
339
+ "tools": ["cmd.run", "fs.read", "secrets.get"],
340
+ "permissions": { "required": ["exec", "secrets"] }
341
+ }
342
+ ```
343
+
344
+ **prompt.md:**
345
+ ```markdown
346
+ # Deploy to AWS
347
+
348
+ Deploy the application to {{region}} using the AWS CLI.
349
+ 1. Check AWS credentials with `secrets.get`
350
+ 2. Build the application
351
+ 3. Deploy using `cmd.run`
352
+ ```
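Filling `{{region}}`-style placeholders from the skill's declared inputs can be sketched with a single substitution pass (illustrative; the loader's real templating may differ):

```typescript
// Replace {{name}} placeholders with input values; unknown placeholders are
// left untouched so missing inputs stay visible.
function renderPrompt(template: string, inputs: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) => inputs[key] ?? match);
}
```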
353
+
354
+ ### CLI Commands
355
+
356
+ ```bash
357
+ agent skills list # List with success metrics
358
+ agent skills create my-skill # Scaffold a new skill
359
+ agent skills stats # View performance data
360
+ agent skills doctor my-skill # Diagnose failures
361
+ agent skills fix my-skill # Auto-repair with LLM
362
+ ```
363
+
364
+ ---
365
+
366
+ ## Commands
367
+
368
+ Commands are **lightweight goal templates** — just a markdown file.
369
+
370
+ ### Create a Command
371
+
372
+ Create `.agent/commands/deploy-staging.md`:
373
+
374
+ ```markdown
375
+ ---
376
+ name: deploy-staging
377
+ description: Deploy current branch to staging
378
+ tools: [cmd.run, git.status]
379
+ ---
380
+ # Deploy to Staging
381
+
382
+ 1. Run `npm test` to verify all tests pass
383
+ 2. Run `npm run build` to create production bundle
384
+ 3. Run `git push origin HEAD:staging` to trigger deploy
385
+ ```
386
+
387
+ ### Use It
388
+
389
+ ```bash
390
+ agent run deploy-staging # From CLI
391
+ > /deploy-staging # From interactive mode
392
+ ```
393
+
394
+ The command's markdown body becomes the LLM prompt, with only the whitelisted tools available.
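Splitting a command file into frontmatter and prompt body can be sketched as below. A real loader would use a YAML parser and handle list values like `tools: [cmd.run, git.status]`; this minimal version keeps every value as a string:

```typescript
// Minimal frontmatter split for a command file (illustrative only).
function parseCommand(src: string): { meta: Record<string, string>; body: string } {
  const m = src.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!m) return { meta: {}, body: src };
  const meta: Record<string, string> = {};
  for (const line of m[1].split("\n")) {
    const i = line.indexOf(":");
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { meta, body: m[2].trim() };
}
```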
395
+
396
+ ---
397
+
398
+ ## Scripts
399
+
400
+ Scripts are **direct automation** — no LLM involved. They can be written in shell, Python, or Node.js.
401
+
402
+ ### Create a Script
403
+
404
+ Create `.agent/scripts/health-check/`:
405
+
406
+ **script.yaml:**
407
+ ```yaml
408
+ name: health-check
409
+ description: Check system health and report
410
+ entrypoint: run.sh
411
+ ```
412
+
413
+ **run.sh:**
414
+ ```bash
415
+ #!/bin/bash
416
+ echo "=== System Health ==="
417
+ echo "Hostname: $(hostname)"
418
+ echo "CPU: $(uptime)"
419
+ echo "Memory: $(free -h | head -2)"
420
+ echo "Disk: $(df -h / | tail -1)"
421
+ ```
422
+
423
+ ### Execute
424
+
425
+ ```bash
426
+ agent scripts run health-check
427
+ ```
428
+
429
+ The daemon can also execute scripts via the `script.run` tool during autonomous task execution.
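Resolving a script's entrypoint and executing it can be sketched like this. The resolution logic is illustrative; the real `script.run` implementation reads `script.yaml` internally:

```typescript
import { execFileSync } from "node:child_process";
import { join } from "node:path";

// Run <scriptsDir>/<name>/<entrypoint> with bash and return its stdout.
// Assumes a POSIX shell environment.
function runScript(scriptsDir: string, name: string, entrypoint: string): string {
  return execFileSync("bash", [join(scriptsDir, name, entrypoint)], {
    encoding: "utf8",
  });
}
```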
430
+
431
+ ---
432
+
433
+ ## Plugins
434
+
435
+ Plugins bundle skills + commands + scripts + hooks into a distributable package.
436
+
437
+ ### Structure
438
+
439
+ ```
440
+ my-plugin/
441
+ ├── plugin.json
442
+ ├── skills/
443
+ │ └── security-scan/
444
+ ├── commands/
445
+ │ └── audit.md
446
+ ├── scripts/
447
+ │ └── check-deps/
448
+ └── hooks/
449
+ └── hooks.json
450
+ ```
451
+
452
+ **plugin.json:**
453
+ ```json
454
+ {
455
+ "name": "enterprise-security",
456
+ "version": "1.0.0",
457
+ "description": "Security scanning and compliance",
458
+ "skills": ["skills/"],
459
+ "commands": ["commands/"],
460
+ "scripts": ["scripts/"],
461
+ "hooks": "hooks/hooks.json"
462
+ }
463
+ ```
464
+
465
+ ### Install and Manage
466
+
467
+ ```bash
468
+ agent plugins install ./my-plugin
469
+ agent plugins list
470
+ agent plugins remove my-plugin
471
+ ```
472
+
473
+ ---
474
+
475
+ ## Hooks
476
+
477
+ Hooks intercept agent execution at lifecycle events:
478
+
479
+ ```json
480
+ {
481
+ "hooks": {
482
+ "after:tool": [{
483
+ "match": "fs.write",
484
+ "command": "npx prettier --write {{path}}",
485
+ "blocking": false
486
+ }],
487
+ "before:plan": [{
488
+ "command": "./scripts/validate-env.sh",
489
+ "blocking": true
490
+ }]
491
+ }
492
+ }
493
+ ```
494
+
495
+ ### Available Events
496
+
497
+ | Event | When |
498
+ |-------|------|
499
+ | `before:tool` / `after:tool` | Before/after any tool executes |
500
+ | `before:plan` / `after:plan` | Before/after a plan runs |
501
+ | `after:step` | After each plan step |
502
+ | `before:skill` / `after:skill` | Around skill execution |
503
+ | `after:decompose` | After goal decomposition |
504
+ | `session:start` / `session:end` | At session boundaries |
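The blocking vs. non-blocking semantics can be sketched as follows, using the hook config shape from the example above (dispatch details are illustrative):

```typescript
import { execSync, spawn } from "node:child_process";

interface Hook {
  match?: string;    // optional tool-name filter
  command: string;
  blocking: boolean;
}

function fireHooks(event: string, toolName: string, hooks: Record<string, Hook[]>): void {
  for (const h of hooks[event] ?? []) {
    if (h.match && h.match !== toolName) continue;
    if (h.blocking) {
      execSync(h.command, { stdio: "inherit" }); // a failure aborts execution
    } else {
      spawn(h.command, { shell: true, detached: true }).unref(); // fire and forget
    }
  }
}
```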
505
+
506
+ ---
507
+
508
+ ## Agent Studio
509
+
510
+ The web-based management dashboard:
511
+
512
+ ```bash
513
+ agent studio
514
+ # → Agent Studio running at http://localhost:3333
515
+ ```
516
+
517
+ ### Available Pages
518
+
519
+ | Page | Description |
520
+ |------|-------------|
521
+ | **Console** | Real-time terminal with live command relay and WebSocket streaming |
522
+ | **Capabilities** | View all loaded tools, permissions, and provider info |
523
+ | **Goals & Tasks** | Create goals, view decomposition, track progress with status badges |
524
+ | **Templates** | 6 pre-built goal templates with variable substitution |
525
+ | **Credentials** | Encrypted vault manager — add/delete/mask secrets |
526
+ | **Skills** | CRUD for skill definitions, view success rates |
527
+ | **Commands** | View and manage command templates |
528
+ | **Scripts** | View script contents, run scripts, see output |
529
+ | **Plugins** | View installed plugins and their capabilities |
530
+ | **Daemon** | Start/stop daemon, view logs, check health |
531
+ | **Memory** | Search, add, and browse persistent agent memories |
532
+
533
+ ---
534
+
535
+ ## Memory System
536
+
537
+ The agent stores facts, learnings, and context in a SQLite database with FTS5 full-text search:
538
+
539
+ ```bash
540
+ agent memory search "database credentials"
541
+ agent memory add "Staging server is at 10.0.0.5" --category fact
542
+ ```
543
+
544
+ ### Memory Categories
545
+
546
+ | Category | When Stored |
547
+ |----------|------------|
548
+ | `learned` | After successful task completion |
549
+ | `fact` | User-provided facts |
550
+ | `error` | Error patterns and their resolutions |
551
+ | `preference` | User preferences and conventions |
552
+
553
+ ---
554
+
555
+ ## LLM Configuration
556
+
557
+ ### Supported Providers
558
+
559
+ | Provider | Env Variable | Example Models |
560
+ |----------|-------------|----------------|
561
+ | OpenAI | `OPENAI_API_KEY` | gpt-4o, gpt-4o-mini |
562
+ | Anthropic | `ANTHROPIC_API_KEY` | claude-3-sonnet, claude-3-opus |
563
+ | Azure OpenAI | `AZURE_OPENAI_API_KEY` + `AZURE_OPENAI_ENDPOINT` | Any deployed model |
564
+ | Ollama | None (local at `http://localhost:11434`) | llama3, codellama, mistral |
565
+
566
+ ### Fallback Chain
567
+
568
+ The LLM Router tries providers in order. If one fails, it falls back:
569
+
570
+ ```
571
+ OpenAI → Anthropic → Azure → Ollama
572
+ ```
573
+
574
+ Configure in `.agent/config.json`:
575
+ ```json
576
+ {
577
+ "llm": {
578
+ "provider": "openai",
579
+ "model": "gpt-4o",
580
+ "fallback": [
581
+ { "provider": "anthropic", "model": "claude-3-sonnet-20240229" },
582
+ { "provider": "ollama", "model": "llama3" }
583
+ ]
584
+ }
585
+ }
586
+ ```
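The fallback chain amounts to trying each configured provider in order until one succeeds. A minimal sketch (the real router also maps each provider's API differences):

```typescript
interface ProviderEntry {
  provider: string;
  model: string;
  call: (prompt: string) => Promise<string>;
}

// First successful provider wins; if every provider fails, rethrow the last error.
async function routeWithFallback(prompt: string, chain: ProviderEntry[]): Promise<string> {
  let lastError: unknown;
  for (const entry of chain) {
    try {
      return await entry.call(prompt);
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw lastError;
}
```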
587
+
588
+ ---
589
+
590
+ ## Tool Reference
591
+
592
+ ### File System
593
+
594
+ | Tool | Arguments | Description |
595
+ |------|-----------|-------------|
596
+ | `fs.read` | `{ path }` | Read file contents |
597
+ | `fs.write` | `{ path, content }` | Write/create file |
598
+ | `fs.mkdir` | `{ path }` | Create directory |
599
+ | `fs.list` | `{ path }` | List directory contents |
600
+ | `fs.stat` | `{ path }` | Get file metadata |
601
+
602
+ ### Shell
603
+
604
+ | Tool | Arguments | Description |
605
+ |------|-----------|-------------|
606
+ | `cmd.run` | `{ command, cwd? }` | Execute shell command |
607
+
608
+ ### Git
609
+
610
+ | Tool | Arguments | Description |
611
+ |------|-----------|-------------|
612
+ | `git.status` | — | Get git status |
613
+ | `git.diff` | `{ staged? }` | Show changes |
614
+ | `git.commit` | `{ message }` | Commit changes |
615
+
616
+ ### Network
617
+
618
+ | Tool | Arguments | Description |
619
+ |------|-----------|-------------|
620
+ | `http.request` | `{ url, method?, headers?, body? }` | HTTP request (GET/POST/PUT/DELETE) |
621
+
622
+ ### Credentials
623
+
624
+ | Tool | Arguments | Description |
625
+ |------|-----------|-------------|
626
+ | `secrets.get` | `{ key, reason? }` | Get a credential value |
627
+ | `secrets.list` | — | List known credential keys |
628
+ | `secrets.set` | `{ key, value }` | Store a credential |
629
+
630
+ ### Automation
631
+
632
+ | Tool | Arguments | Description |
633
+ |------|-----------|-------------|
634
+ | `script.run` | `{ name, args? }` | Execute a project script |
635
+ | `command.execute` | `{ name }` | Run a project command |
636
+
637
+ ---
638
+
639
+ ## REST API Reference
640
+
641
+ Agent Studio exposes a REST API at `http://localhost:3333`:
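For example, creating a goal over HTTP uses the Goals & Tasks endpoint listed in this section. In this hedged sketch, the instance id value and the JSON body field names are assumptions for illustration:

```typescript
// Build a "create goal" request for the Studio API; pass the result to fetch.
function createGoalRequest(base: string, instanceId: string, goalText: string) {
  return {
    url: `${base}/api/instances/${instanceId}/goals`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: goalText }), // field name is an assumption
    },
  };
}

// Usage: const r = createGoalRequest("http://localhost:3333", "my-app", "Build a REST API");
//        await fetch(r.url, r.init);
```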
642
+
643
+ ### Instances
644
+
645
+ | Method | Endpoint | Description |
646
+ |--------|----------|-------------|
647
+ | GET | `/api/instances` | List all agent instances |
648
+ | GET | `/api/instances/:id/capabilities` | Get instance capabilities |
649
+
650
+ ### Goals & Tasks
651
+
652
+ | Method | Endpoint | Description |
653
+ |--------|----------|-------------|
654
+ | GET | `/api/instances/:id/goals` | List goals |
655
+ | POST | `/api/instances/:id/goals` | Create a goal |
656
+ | GET | `/api/instances/:id/tasks` | List tasks |
657
+ | POST | `/api/instances/:id/tasks/:taskId/approve` | Approve a task |
658
+
659
+ ### Credentials
660
+
661
+ | Method | Endpoint | Description |
662
+ |--------|----------|-------------|
663
+ | GET | `/api/instances/:id/credentials` | List credential keys |
664
+ | POST | `/api/instances/:id/credentials` | Add a credential `{ key, value }` |
665
+ | DELETE | `/api/instances/:id/credentials/:key` | Delete a credential |
666
+
667
+ ### Skills, Commands, Scripts, Plugins
668
+
669
+ | Method | Endpoint | Description |
670
+ |--------|----------|-------------|
671
+ | GET | `/api/instances/:id/skills` | List skills |
672
+ | GET | `/api/instances/:id/commands` | List commands |
673
+ | GET | `/api/instances/:id/scripts` | List scripts |
674
+ | GET | `/api/instances/:id/plugins` | List plugins |
675
+
676
+ ### Daemon
677
+
678
+ | Method | Endpoint | Description |
679
+ |--------|----------|-------------|
680
+ | GET | `/api/instances/:id/daemon/status` | Daemon status |
681
+ | POST | `/api/instances/:id/daemon/start` | Start daemon |
682
+ | POST | `/api/instances/:id/daemon/stop` | Stop daemon |
683
+ | GET | `/api/instances/:id/daemon/logs` | Get daemon logs |
684
+
685
+ ### Templates
686
+
687
+ | Method | Endpoint | Description |
688
+ |--------|----------|-------------|
689
+ | GET | `/api/goal-templates` | Get pre-built goal templates |
690
+
691
+ ### WebSocket Events
692
+
693
+ | Event | Direction | Description |
694
+ |-------|-----------|-------------|
695
+ | `subscribe` | Client → Server | Subscribe to instance events |
696
+ | `agent:log` | Both | Real-time log streaming |
697
+ | `agent:command` | Both | Command relay |
698
+ | `agent:approval:request` | Server → Client | Task needs approval |
699
+ | `agent:approval:response` | Client → Server | User approves/rejects |
700
+ | `credential:required` | Server → Client | Daemon needs a credential |
701
+ | `credential:provide` | Client → Server | User provides credential |
702
+ | `task:progress` | Server → Client | Live task execution updates |
703
+
704
+ ---
705
+
706
+ ## Troubleshooting
707
+
708
+ ### "No LLM provider available"
709
+
710
+ Your API keys aren't set. Fix with:
711
+ ```bash
712
+ export OPENAI_API_KEY=sk-your-key
713
+ # or use .env file
714
+ # or add via Studio → Credentials
715
+ ```
716
+
717
+ ### Daemon tasks not starting
718
+
719
+ Check that your goal is `active` and tasks are `pending`:
720
+ ```bash
721
+ agent goal list
722
+ agent goal status <id>
723
+ ```
724
+
725
+ ### "Permission denied" on tool execution
726
+
727
+ The policy engine is blocking the tool. Granting wildcard permissions enables full autonomy, but it also removes every approval gate, so reserve it for trusted projects:
728
+ ```json
729
+ {
730
+ "policy": {
731
+ "permissions": ["*"]
732
+ }
733
+ }
734
+ ```
735
+
736
+ ### Studio won't start
737
+
738
+ ```bash
739
+ agent doctor # Check system health
740
+ agent studio # Try again — port 3333
741
+ ```
742
+
743
+ ### Daemon stuck on a task
744
+
745
+ ```bash
746
+ agent daemon stop
747
+ agent daemon start # Restart picks up where it left off
748
+ ```