robot_lab 0.0.8 → 0.0.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +71 -0
- data/README.md +106 -4
- data/Rakefile +2 -1
- data/docs/api/core/robot.md +336 -1
- data/docs/api/mcp/client.md +1 -0
- data/docs/api/mcp/server.md +27 -8
- data/docs/api/mcp/transports.md +21 -6
- data/docs/architecture/core-concepts.md +1 -1
- data/docs/architecture/robot-execution.md +20 -2
- data/docs/concepts.md +4 -0
- data/docs/guides/building-robots.md +18 -0
- data/docs/guides/creating-networks.md +39 -0
- data/docs/guides/index.md +10 -0
- data/docs/guides/knowledge.md +182 -0
- data/docs/guides/mcp-integration.md +180 -2
- data/docs/guides/memory.md +2 -0
- data/docs/guides/observability.md +486 -0
- data/docs/guides/ractor-parallelism.md +364 -0
- data/docs/superpowers/plans/2026-04-14-ractor-integration.md +1538 -0
- data/docs/superpowers/specs/2026-04-14-ractor-integration-design.md +258 -0
- data/examples/14_rusty_circuit/.gitignore +1 -0
- data/examples/14_rusty_circuit/open_mic.rb +1 -1
- data/examples/19_token_tracking.rb +128 -0
- data/examples/20_circuit_breaker.rb +153 -0
- data/examples/21_learning_loop.rb +164 -0
- data/examples/22_context_compression.rb +179 -0
- data/examples/23_convergence.rb +137 -0
- data/examples/24_structured_delegation.rb +150 -0
- data/examples/25_history_search/conversation.jsonl +30 -0
- data/examples/25_history_search.rb +136 -0
- data/examples/26_document_store/api_versioning_adr.md +52 -0
- data/examples/26_document_store/incident_postmortem.md +46 -0
- data/examples/26_document_store/postgres_runbook.md +49 -0
- data/examples/26_document_store/redis_caching_guide.md +48 -0
- data/examples/26_document_store/sidekiq_guide.md +51 -0
- data/examples/26_document_store.rb +147 -0
- data/examples/27_incident_response/incident_response.rb +244 -0
- data/examples/28_mcp_discovery.rb +112 -0
- data/examples/29_ractor_tools.rb +243 -0
- data/examples/30_ractor_network.rb +256 -0
- data/examples/README.md +136 -0
- data/examples/prompts/skill_with_mcp_test.md +9 -0
- data/examples/prompts/skill_with_robot_name_test.md +5 -0
- data/examples/prompts/skill_with_tools_test.md +6 -0
- data/lib/robot_lab/bus_poller.rb +149 -0
- data/lib/robot_lab/convergence.rb +69 -0
- data/lib/robot_lab/delegation_future.rb +93 -0
- data/lib/robot_lab/document_store.rb +155 -0
- data/lib/robot_lab/error.rb +25 -0
- data/lib/robot_lab/history_compressor.rb +205 -0
- data/lib/robot_lab/mcp/client.rb +23 -9
- data/lib/robot_lab/mcp/connection_poller.rb +187 -0
- data/lib/robot_lab/mcp/server.rb +26 -3
- data/lib/robot_lab/mcp/server_discovery.rb +110 -0
- data/lib/robot_lab/mcp/transports/base.rb +10 -2
- data/lib/robot_lab/mcp/transports/stdio.rb +58 -26
- data/lib/robot_lab/memory.rb +103 -6
- data/lib/robot_lab/network.rb +44 -9
- data/lib/robot_lab/ractor_boundary.rb +42 -0
- data/lib/robot_lab/ractor_job.rb +37 -0
- data/lib/robot_lab/ractor_memory_proxy.rb +85 -0
- data/lib/robot_lab/ractor_network_scheduler.rb +154 -0
- data/lib/robot_lab/ractor_worker_pool.rb +117 -0
- data/lib/robot_lab/robot/bus_messaging.rb +43 -65
- data/lib/robot_lab/robot/history_search.rb +69 -0
- data/lib/robot_lab/robot/mcp_management.rb +61 -4
- data/lib/robot_lab/robot.rb +351 -11
- data/lib/robot_lab/robot_result.rb +26 -5
- data/lib/robot_lab/run_config.rb +1 -1
- data/lib/robot_lab/text_analysis.rb +103 -0
- data/lib/robot_lab/tool.rb +42 -3
- data/lib/robot_lab/tool_config.rb +1 -1
- data/lib/robot_lab/version.rb +1 -1
- data/lib/robot_lab/waiter.rb +49 -29
- data/lib/robot_lab.rb +25 -0
- data/mkdocs.yml +1 -0
- metadata +71 -2
checksums.yaml CHANGED

````diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f4b2a3fafbdf3a54de3044b57597b42d86c68bd2afdad6ce866ac82483e61091
+  data.tar.gz: 5137cff56485a26fabe5ab6606b144c4c3c21c1673ecec1d2254a392e015c25c
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 33045f27ec803094a020caee4133c1d6c65887446330c294d9a9babd56a0fe7e71979fe0d032c2421488d1dcae886ad784a78dae579ba01b2786b5f9f91c0172
+  data.tar.gz: 5554296590bfb3dea031c95090ef8a47e946ac1c7a92b3efc6924439f55df7267e0b04d385e2332ea827099c37688c4988aa908caf5bb9d2f21eabc2c50c3167
````
data/CHANGELOG.md CHANGED

````diff
@@ -8,6 +8,77 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.0.11] - 2026-04-14
+
+### Added
+
+- **Ractor parallelism — Track 1: CPU-bound tools** (`RactorWorkerPool`)
+  - `ractor_safe true` class macro on `Tool` — opts a tool class into Ractor execution; subclasses inherit automatically
+  - `RobotLab.ractor_pool` — global `RactorWorkerPool` singleton, one Ractor worker per CPU core by default
+  - `ractor_pool_size` field on `RunConfig` for configuring pool capacity
+  - `RactorWorkerPool#submit(tool_name, args)` — submits a job and blocks for the frozen result; raises `ToolError` on failure
+  - Tool dispatch routes `ractor_safe` tools through the pool automatically, bypassing the GVL for CPU-intensive work
+  - `RactorBoundary.freeze_deep(obj)` — deep-freezes nested hashes/arrays/strings to make them Ractor-shareable; raises `RactorBoundaryError` for non-shareable objects (Procs, IOs, etc.)
+- **Ractor parallelism — Track 2: parallel robot pipelines** (`RactorNetworkScheduler`)
+  - `parallel_mode: :ractor` on `Network.new` — routes `network.run` through `RactorNetworkScheduler` instead of `SimpleFlow::Pipeline`
+  - `RactorNetworkScheduler` dispatches dependency waves: independent tasks run concurrently (one Thread per task); dependent tasks wait for their wave to complete
+  - `RobotSpec` — frozen `Data.define` descriptor carrying robot name, template, system prompt, and config; safely crosses Ractor boundaries
+  - `RactorNetworkScheduler#run_pipeline` returns `Hash { robot_name => result_string }` for the full pipeline
+  - `RactorNetworkScheduler#run_spec` for single-spec dispatch
+  - `RactorNetworkScheduler#shutdown` for graceful poison-pill cleanup
+  - `network.parallel_mode` reader exposes the configured mode (default `:async`)
+- **Ractor memory proxy** — `RactorMemoryProxy` wraps `Memory` via `ractor-wrapper` for safe cross-Ractor memory access
+- **Infrastructure data classes** — `RactorJob`, `RactorJobError` (`Data.define` structs) for job submission and error propagation across Ractor boundaries
+- **`RactorBoundaryError`** — raised by `freeze_deep` when a non-shareable value (Proc, IO, etc.) would cross a Ractor boundary
+- **`ToolError`** — raised by `RactorWorkerPool#submit` when a tool raises inside a Ractor; propagates message and frozen backtrace
+- **Dependencies** — `ractor_queue` (~> 0.1) and `ractor-wrapper` (~> 0.4) added to gemspec
+- **Ractor Parallelism guide** (`docs/guides/ractor-parallelism.md`) — covers architecture, two-track design, configuration, error handling, constraints, and best practices
+- **Example 29: Ractor-Safe CPU Tools** (`examples/29_ractor_tools.rb`) — demonstrates `ractor_safe` flag, inheritance, `freeze_deep`, pool submissions, `ToolError` propagation, and parallel batch timing; no API key required
+- **Example 30: Ractor Network Scheduler** (`examples/30_ractor_network.rb`) — demonstrates `RactorNetworkScheduler` wave ordering with simulated latencies, `Network.new(parallel_mode: :ractor)` API, and dependency graph inspection; no API key required for Parts 1 & 2
+
+### Fixed
+
+- `ToolConfig::NONE_VALUES` constant was not Ractor-shareable because its inner empty array `[]` was mutable; fixed by replacing `[]` with `[].freeze` so the entire constant is deeply frozen and safe to read from any Ractor
+
+## [0.0.9] - 2026-03-02
+
+### Added
+
+- **Provider passthrough** — `provider:` parameter on Robot constructor for local LLM providers (Ollama, GPUStack, etc.)
+  - Automatically sets `assume_model_exists: true` when provider is specified
+  - Exposed via `robot.provider` accessor
+- **MCP request timeouts** — configurable timeout for all MCP transports
+  - `MCP::Server` accepts `timeout:` parameter (default 15s); auto-converts millisecond values
+  - `MCP::Transports::Base` extracts and exposes `timeout` from config
+  - `MCP::Transports::Stdio` wraps all blocking I/O with `Timeout.timeout` — hung servers no longer block the caller forever
+  - Timeout propagated from `MCP::Server` through `MCP::Client` to transport layer
+- **MCP connection resilience** — improved error handling and retry logic
+  - `ensure_mcp_clients` retries previously failed servers on subsequent calls
+  - `@failed_mcp_configs` tracks servers that failed to connect
+  - `robot.failed_mcp_server_names` — query which MCP servers are down
+  - `robot.connect_mcp!` — eagerly connect to MCP servers (normally lazy)
+  - `init_mcp_client` rescues `StandardError` so one bad server doesn't prevent others from connecting
+  - `cleanup_process` in Stdio transport for reliable resource cleanup
+  - Better error messages for command-not-found (`Errno::ENOENT`), broken pipe (`Errno::EPIPE`), and EOF conditions
+- **`robot.inject_mcp!`** — inject pre-connected MCP clients and tools from an external host application
+- **Conversation management APIs** on Robot
+  - `robot.chat` — access the underlying `RubyLLM::Chat` instance
+  - `robot.messages` — return conversation messages
+  - `robot.clear_messages(keep_system:)` — clear history, optionally preserving the system prompt
+  - `robot.replace_messages(messages)` — restore a saved conversation (checkpoint/restore)
+  - `robot.chat_provider` — query the provider name without reaching into chat internals
+  - `robot.mcp_client(server_name)` — find an MCP client by server name
+- **`RobotResult#duration`** — elapsed seconds for a robot run, set automatically during pipeline execution
+- **`RobotResult#raw`** — raw LLM response stored on every result (previously only settable via accessor)
+- **Pipeline error resilience** — `Robot#call` (pipeline step) rescues all exceptions so one failing robot doesn't crash the entire network; error is captured in a `RobotResult` with the elapsed duration
+
+### Changed
+
+- Bumped version to 0.0.9
+- Display `scout_path` in Rusty Circuit example updated to use `output/` subdirectory
+- Updated `onnxruntime` dependency to 0.11.0
+- Updated Gemfile.lock dependencies (erb, minitest, rails-html-sanitizer, json_schemer)
+
 ## [0.0.8] - 2026-02-22
 
 ### Added
````
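The `RactorBoundary.freeze_deep` entry above deep-freezes nested hashes, arrays, and strings so they become Ractor-shareable, and rejects values that can never be shared. A minimal pure-Ruby sketch of that technique (an illustration of the idea, not the gem's code; `BoundaryError` here is a hypothetical stand-in for `RactorBoundaryError`):

```ruby
# Illustrative sketch of a deep-freeze boundary check (not robot_lab's
# implementation). Containers are frozen bottom-up so the whole value
# becomes Ractor-shareable; Procs and IOs can never be shared, so they
# raise instead of crossing the boundary.
class BoundaryError < StandardError; end

def freeze_deep(obj)
  case obj
  when Proc, IO
    raise BoundaryError, "#{obj.class} cannot cross a Ractor boundary"
  when Hash
    obj.each { |k, v| freeze_deep(k); freeze_deep(v) }
    obj.freeze
  when Array
    obj.each { |v| freeze_deep(v) }
    obj.freeze
  else
    obj.freeze
  end
end

args = freeze_deep({ "path" => "in.txt", "limits" => [10, 20] })
puts Ractor.shareable?(args) # => true
```

Freezing bottom-up matters: `Ractor.shareable?` is only true when every reachable object is frozen, which is why the fix to `ToolConfig::NONE_VALUES` needed the inner `[].freeze` as well.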
data/README.md CHANGED

````diff
@@ -20,12 +20,19 @@
 - <strong>Extensible Tools</strong> - Custom capabilities with graceful error handling<br>
 - <strong>Human-in-the-Loop</strong> - AskUser tool for interactive prompting<br>
 - <strong>Content Streaming</strong> - Stored callbacks, per-call blocks, or both<br>
-- <strong>MCP Integration</strong> - Connect to external tool servers<br>
+- <strong>MCP Integration</strong> - Connect to external tool servers with timeouts and retry<br>
+- <strong>Local LLM Providers</strong> - Ollama, GPUStack, LM Studio via provider passthrough<br>
 - <strong>Shared Memory</strong> - Reactive key-value store with subscriptions<br>
 - <strong>Message Bus</strong> - Bidirectional robot communication via TypedBus<br>
 - <strong>Dynamic Spawning</strong> - Robots create new robots at runtime<br>
 - <strong>Layered Configuration</strong> - Cascading YAML, env vars, and RunConfig<br>
-- <strong>Rails Integration</strong> - Generators, background jobs, Turbo Stream broadcasting
+- <strong>Rails Integration</strong> - Generators, background jobs, Turbo Stream broadcasting<br>
+- <strong>Token & Cost Tracking</strong> - Per-run and cumulative token counts on every robot<br>
+- <strong>Tool Loop Circuit Breaker</strong> - <code>max_tool_rounds:</code> guards against runaway tool call loops<br>
+- <strong>Learning Accumulation</strong> - <code>robot.learn()</code> builds up cross-run observations with deduplication<br>
+- <strong>Context Window Compression</strong> - <code>robot.compress_history()</code> prunes irrelevant old turns via TF cosine scoring<br>
+- <strong>Convergence Detection</strong> - <code>RobotLab::Convergence</code> detects when independent agents agree, enabling reconciler fast-path<br>
+- <strong>Structured Delegation</strong> - <code>robot.delegate(to:, task:)</code> sync or async inter-robot calls with duration and token metadata; async fan-out via <code>DelegationFuture</code>
 </td>
 </tr>
 </table>
@@ -71,6 +78,19 @@ puts result.last_text_content
 # => "The capital of France is Paris."
 ```
 
+### Local LLM Providers
+
+For local LLM providers (Ollama, GPUStack, LM Studio, etc.), use the `provider:` parameter:
+
+```ruby
+robot = RobotLab.build(
+  name: "local_bot",
+  model: "llama3.2",
+  provider: :ollama,
+  system_prompt: "You are a helpful assistant."
+)
+```
+
 ### Configuration
 
 RobotLab uses [MywayConfig](https://github.com/MadBomber/myway_config) for layered configuration. There is no `configure` block. Configuration is loaded automatically from multiple sources in priority order:
@@ -443,14 +463,15 @@ puts result.value.last_text_content
 Connect to external tool servers via Model Context Protocol:
 
 ```ruby
-# Configure MCP server
+# Configure MCP server (with optional timeout)
 filesystem_server = {
   name: "filesystem",
   transport: {
     type: "stdio",
     command: "mcp-server-filesystem",
     args: ["/path/to/allowed/directory"]
-  }
+  },
+  timeout: 30 # seconds (default: 15)
 }
 
 # Create robot with MCP server - tools are auto-discovered
@@ -460,10 +481,18 @@ robot = RobotLab.build(
   mcp: [filesystem_server]
 )
 
+# Optionally connect eagerly (default is lazy on first run)
+robot.connect_mcp!
+
+# Check connection status
+puts "Failed: #{robot.failed_mcp_server_names}" if robot.failed_mcp_server_names.any?
+
 # Robot can now use filesystem tools
 result = robot.run("List the files in the current directory")
 ```
 
+MCP connections are resilient: failed servers are automatically retried on subsequent `run()` calls, and one failing server does not prevent others from connecting.
+
 ## Message Bus
 
 Robots can communicate bidirectionally via an optional message bus, independent of the Network pipeline. This enables negotiation loops, convergence patterns, and cyclic workflows.
@@ -598,6 +627,79 @@ robot.run("Tell me a story") { |chunk| stream_to_client(chunk.content) }
 
 The `on_content:` callback participates in the RunConfig cascade, so it can be set at the network or config level and inherited by robots.
 
+## Token & Cost Tracking
+
+Every `robot.run()` returns a `RobotResult` that carries token usage for that call. The robot itself accumulates running totals across all runs.
+
+```ruby
+robot = RobotLab.build(name: "analyst", system_prompt: "You are helpful.")
+
+result = robot.run("What is a stack?")
+puts result.input_tokens # tokens sent to the LLM this run
+puts result.output_tokens # tokens generated this run
+
+puts robot.total_input_tokens # cumulative across all runs
+puts robot.total_output_tokens
+```
+
+To start a fresh cost batch without rebuilding the robot, call `reset_token_totals`. This resets the **accounting counter only** — the chat history keeps accumulating, so subsequent `input_tokens` will reflect the full context window sent to the API:
+
+```ruby
+robot.reset_token_totals
+puts robot.total_input_tokens # => 0
+```
+
+Token counts are zero for providers that do not return usage data.
+
+## Tool Loop Circuit Breaker
+
+Set `max_tool_rounds:` to prevent a robot from looping indefinitely through tool calls. When the limit is exceeded, `RobotLab::ToolLoopError` is raised.
+
+```ruby
+robot = RobotLab.build(
+  name: "runner",
+  system_prompt: "Execute every step.",
+  local_tools: [StepTool],
+  max_tool_rounds: 10
+)
+
+begin
+  robot.run("Run all steps.")
+rescue RobotLab::ToolLoopError => e
+  puts e.message # "Tool call limit of 10 exceeded"
+end
+```
+
+After a `ToolLoopError` the chat contains a dangling `tool_use` block with no matching `tool_result`. Most providers (including Anthropic) will reject any subsequent request with that history. Call `clear_messages` before reusing the robot:
+
+```ruby
+robot.clear_messages # flushes broken history; system prompt is kept
+result = robot.run("Something new.") # robot is healthy again
+```
+
+## Learning Accumulation
+
+`robot.learn(text)` records a cross-run observation. On each subsequent `run()`, active learnings are automatically prepended to the user message as a `LEARNINGS FROM PREVIOUS RUNS:` block so the LLM can incorporate prior context without needing a persistent chat:
+
+```ruby
+reviewer = RobotLab.build(
+  name: "reviewer",
+  system_prompt: "You are a Ruby code reviewer."
+)
+
+reviewer.run("Review snippet A")
+reviewer.learn("This codebase prefers map/collect over manual array accumulation")
+
+reviewer.run("Review snippet B") # learning is injected automatically
+```
+
+Learnings deduplicate bidirectionally: if a broader learning is added that contains an existing narrower one, the narrower one is dropped. Learnings are persisted to the robot's `Memory` and survive a robot rebuild when the same `Memory` object is reused.
+
+```ruby
+reviewer.learnings # => ["This codebase prefers map/collect..."]
+reviewer.learn("new fact") # deduplicates before storing
+```
+
 ## Rails Integration
 
 ```bash
````
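The bidirectional deduplication that the README section above describes for learnings (a broader learning absorbs an existing narrower one) can be sketched as a substring check. This is a plain-Ruby illustration of the stated rule, not robot_lab's implementation; the standalone `learn` helper is hypothetical:

```ruby
# Illustrative sketch of bidirectional learning deduplication (not
# robot_lab's code). A new learning is dropped if an existing learning
# already contains it; existing learnings contained in the new one are
# removed before the new one is appended.
def learn(learnings, text)
  # Narrower than something we already know: keep the list unchanged.
  return learnings if learnings.any? { |l| l.include?(text) }

  # Broader than existing entries: absorb them, then record the new one.
  learnings.reject { |l| text.include?(l) } << text
end

learnings = []
learnings = learn(learnings, "prefers map")
learnings = learn(learnings, "prefers map/collect over manual accumulation")
learnings = learn(learnings, "prefers map") # already covered; dropped
p learnings # => ["prefers map/collect over manual accumulation"]
```

The real feature also persists learnings to `Memory`; this sketch only covers the in-memory dedup rule.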
data/Rakefile CHANGED

````diff
@@ -49,7 +49,8 @@ namespace :examples do
   SUBDIR_ENTRY_POINTS = {
     "14_rusty_circuit" => "open_mic.rb",
     "15_memory_network_and_bus" => "editorial_pipeline.rb",
-    "16_writers_room" => "writers_room.rb"
+    "16_writers_room" => "writers_room.rb",
+    "27_incident_response" => "incident_response.rb"
   }.freeze
 
   # Subdirectory demos that are standalone apps (not run via `ruby`)
````
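The wave dispatch that the 0.0.11 changelog describes for `RactorNetworkScheduler` (independent tasks run concurrently, one Thread per task; dependents wait for the previous wave) can be sketched with plain Threads. An illustrative approximation of the scheduling idea, not the gem's scheduler; `run_in_waves` is a hypothetical helper:

```ruby
# Illustrative sketch of wave-based dependency dispatch (not robot_lab's
# RactorNetworkScheduler). Tasks whose dependencies are all satisfied form
# a wave; each wave runs one Thread per task, and the next wave starts
# only after the current wave finishes.
def run_in_waves(deps)
  done    = {}
  pending = deps.keys
  until pending.empty?
    wave = pending.select { |t| deps[t].all? { |d| done.key?(d) } }
    raise "dependency cycle among #{pending}" if wave.empty?

    wave.map { |t| Thread.new { [t, yield(t)] } }
        .map(&:value)
        .each { |t, result| done[t] = result }
    pending -= wave
  end
  done
end

# "a" and "b" are independent (wave 1); "c" depends on both (wave 2)
results = run_in_waves("a" => [], "b" => [], "c" => %w[a b]) { |t| t.upcase }
puts results["c"] # => "C"
```

The gem's Track 2 runs each task against a frozen `RobotSpec` inside the scheduler rather than a plain block, but the wave ordering is the same.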
data/docs/api/core/robot.md
CHANGED
|
@@ -23,6 +23,7 @@ Robot.new(
|
|
|
23
23
|
description: nil,
|
|
24
24
|
local_tools: [],
|
|
25
25
|
model: nil,
|
|
26
|
+
provider: nil,
|
|
26
27
|
mcp_servers: [],
|
|
27
28
|
mcp: :none,
|
|
28
29
|
tools: :none,
|
|
@@ -32,6 +33,8 @@ Robot.new(
|
|
|
32
33
|
enable_cache: true,
|
|
33
34
|
bus: nil,
|
|
34
35
|
skills: nil,
|
|
36
|
+
max_tool_rounds: nil,
|
|
37
|
+
token_budget: nil,
|
|
35
38
|
temperature: nil,
|
|
36
39
|
top_p: nil,
|
|
37
40
|
top_k: nil,
|
|
@@ -54,6 +57,7 @@ Robot.new(
|
|
|
54
57
|
| `description` | `String`, `nil` | `nil` | Human-readable description of what the robot does |
|
|
55
58
|
| `local_tools` | `Array` | `[]` | Tools defined locally (`RubyLLM::Tool` subclasses or `RobotLab::Tool` instances) |
|
|
56
59
|
| `model` | `String`, `nil` | `nil` | LLM model ID (falls back to `RobotLab.config.ruby_llm.model`) |
|
|
60
|
+
| `provider` | `String`, `Symbol`, `nil` | `nil` | LLM provider for local providers (e.g., `:ollama`, `:gpustack`). Automatically sets `assume_model_exists: true` |
|
|
57
61
|
| `mcp_servers` | `Array` | `[]` | Legacy MCP server configurations |
|
|
58
62
|
| `mcp` | `Symbol`, `Array` | `:none` | Hierarchical MCP config (`:none`, `:inherit`, or server array) |
|
|
59
63
|
| `tools` | `Symbol`, `Array` | `:none` | Hierarchical tools config (`:none`, `:inherit`, or tool name array) |
|
|
@@ -63,6 +67,8 @@ Robot.new(
|
|
|
63
67
|
| `enable_cache` | `Boolean` | `true` | Whether to enable semantic caching |
|
|
64
68
|
| `bus` | `TypedBus::MessageBus`, `nil` | `nil` | Optional message bus for inter-robot communication |
|
|
65
69
|
| `skills` | `Symbol`, `Array<Symbol>`, `nil` | `nil` | Skill templates to prepend (see [Skills](#skills)) |
|
|
70
|
+
| `max_tool_rounds` | `Integer`, `nil` | `nil` | Circuit breaker: raise `ToolLoopError` after this many tool calls in one `run()` (see [Tool Loop Circuit Breaker](#tool-loop-circuit-breaker)) |
|
|
71
|
+
| `token_budget` | `Integer`, `nil` | `nil` | Raise `InferenceError` if cumulative input tokens exceed this limit |
|
|
66
72
|
| `config` | `RunConfig`, `nil` | `nil` | Shared config merged with explicit kwargs (see [RunConfig](#runconfig)) |
|
|
67
73
|
| `temperature` | `Float`, `nil` | `nil` | Controls randomness (0.0-1.0) |
|
|
68
74
|
| `top_p` | `Float`, `nil` | `nil` | Nucleus sampling threshold |
|
|
@@ -101,6 +107,7 @@ If `name` is omitted, it defaults to `"robot"`.
|
|
|
101
107
|
| `template` | `Symbol`, `nil` | Prompt template identifier |
|
|
102
108
|
| `system_prompt` | `String`, `nil` | Inline system prompt |
|
|
103
109
|
| `skills` | `Array<Symbol>`, `nil` | Constructor-provided skill template IDs (nil if none) |
|
|
110
|
+
| `provider` | `String`, `nil` | LLM provider name (e.g., `"ollama"`) — set when using local providers |
|
|
104
111
|
| `local_tools` | `Array` | Locally defined tools |
|
|
105
112
|
| `mcp_clients` | `Hash<String, MCP::Client>` | Connected MCP clients, keyed by server name |
|
|
106
113
|
| `mcp_tools` | `Array<Tool>` | Tools discovered from MCP servers |
|
|
@@ -110,6 +117,9 @@ If `name` is omitted, it defaults to `"robot"`.
|
|
|
110
117
|
| `config` | `RunConfig` | Effective RunConfig (merged from constructor kwargs and passed-in config) |
|
|
111
118
|
| `mcp_config` | `Symbol`, `Array` | Build-time MCP configuration (raw, unresolved) |
|
|
112
119
|
| `tools_config` | `Symbol`, `Array` | Build-time tools configuration (raw, unresolved) |
|
|
120
|
+
| `total_input_tokens` | `Integer` | Cumulative input tokens sent across all `run()` calls |
|
|
121
|
+
| `total_output_tokens` | `Integer` | Cumulative output tokens received across all `run()` calls |
|
|
122
|
+
| `learnings` | `Array<String>` | Accumulated cross-run observations (see [Learning Accumulation](#learning-accumulation)) |
|
|
113
123
|
|
|
114
124
|
## Attributes (Read-Write)
|
|
115
125
|
|
|
@@ -239,7 +249,9 @@ robot.call(result)
|
|
|
239
249
|
# => SimpleFlow::Result
|
|
240
250
|
```
|
|
241
251
|
|
|
242
|
-
SimpleFlow step interface. Extracts the message from `result.context[:run_params]`, calls `run`, and wraps the output in a continued `SimpleFlow::Result`.
|
|
252
|
+
SimpleFlow step interface. Extracts the message from `result.context[:run_params]`, calls `run`, and wraps the output in a continued `SimpleFlow::Result`. Automatically records `RobotResult#duration` (elapsed seconds).
|
|
253
|
+
|
|
254
|
+
If the robot raises any exception during execution, the error is caught and wrapped in a `RobotResult` with the error message as content. This ensures one failing robot does not crash the entire network pipeline.
|
|
243
255
|
|
|
244
256
|
Override this method in subclasses for custom routing logic (e.g., classifiers).
|
|
245
257
|
|
|
@@ -401,6 +413,142 @@ bot.with_bus(bus1) # joins bus1
|
|
|
401
413
|
bot.with_bus(bus2) # leaves bus1, joins bus2
|
|
402
414
|
```
|
|
403
415
|
|
|
416
|
+
### connect_mcp!
|
|
417
|
+
|
|
418
|
+
```ruby
|
|
419
|
+
robot.connect_mcp!
|
|
420
|
+
# => self
|
|
421
|
+
```
|
|
422
|
+
|
|
423
|
+
Eagerly connect to configured MCP servers and discover tools. Normally MCP connections are lazy (established on first `run`). Call this to connect early, e.g., to display connection status at startup.
|
|
424
|
+
|
|
425
|
+
**Returns:** `self`
|
|
426
|
+
|
|
427
|
+
### failed_mcp_server_names
|
|
428
|
+
|
|
429
|
+
```ruby
|
|
430
|
+
robot.failed_mcp_server_names
|
|
431
|
+
# => Array<String>
|
|
432
|
+
```
|
|
433
|
+
|
|
434
|
+
Returns server names that failed to connect. Useful for displaying connection status or deciding whether to retry.
|
|
435
|
+
|
|
436
|
+
### inject_mcp!
|
|
437
|
+
|
|
438
|
+
```ruby
|
|
439
|
+
robot.inject_mcp!(clients: mcp_clients, tools: mcp_tools)
|
|
440
|
+
# => self
|
|
441
|
+
```
|
|
442
|
+
|
|
443
|
+
Inject pre-connected MCP clients and their tools into this robot. Used by host applications that manage MCP connections externally and need to pass them to robots without re-connecting.
|
|
444
|
+
|
|
445
|
+
**Parameters:**
|
|
446
|
+
|
|
447
|
+
| Name | Type | Description |
|
|
448
|
+
|------|------|-------------|
|
|
449
|
+
| `clients` | `Hash<String, MCP::Client>` | Connected MCP clients keyed by server name |
|
|
450
|
+
| `tools` | `Array<Tool>` | Tools discovered from the MCP servers |
|
|
451
|
+
|
|
452
|
+
**Returns:** `self`
|
|
453
|
+
|
|
454
|
+
**Example:**
|
|
455
|
+
|
|
456
|
+
```ruby
|
|
457
|
+
# Host app manages MCP connections
|
|
458
|
+
clients = { "github" => github_client }
|
|
459
|
+
tools = github_client.list_tools.map { |t| RobotLab::Tool.from_mcp(t) }
|
|
460
|
+
|
|
461
|
+
robot.inject_mcp!(clients: clients, tools: tools)
|
|
462
|
+
```
|
|
463
|
+
|
|
464
|
+
### chat
|
|
465
|
+
|
|
466
|
+
```ruby
|
|
467
|
+
robot.chat
|
|
468
|
+
# => RubyLLM::Chat
|
|
469
|
+
```
|
|
470
|
+
|
|
471
|
+
Access the underlying `RubyLLM::Chat` instance. Useful for checkpoint/restore operations that need direct access to conversation state.
|
|
472
|
+
|
|
473
|
+
### messages
|
|
474
|
+
|
|
475
|
+
```ruby
|
|
476
|
+
robot.messages
|
|
477
|
+
# => Array<RubyLLM::Message>
|
|
478
|
+
```
|
|
479
|
+
|
|
480
|
+
Return the conversation messages from the underlying chat.
|
|
481
|
+
|
|
482
|
+
### clear_messages
|
|
483
|
+
|
|
484
|
+
```ruby
|
|
485
|
+
robot.clear_messages(keep_system: true)
|
|
486
|
+
# => self
|
|
487
|
+
```
|
|
488
|
+
|
|
489
|
+
Clear conversation messages, optionally keeping the system prompt.
|
|
490
|
+
|
|
491
|
+
**Parameters:**
|
|
492
|
+
|
|
493
|
+
| Name | Type | Default | Description |
|
|
494
|
+
|------|------|---------|-------------|
|
|
495
|
+
| `keep_system` | `Boolean` | `true` | Whether to preserve the system message |
|
|
496
|
+
|
|
497
|
+
**Returns:** `self`
|
|
498
|
+
|
|
499
|
+
### replace_messages
|
|
500
|
+
|
|
501
|
+
```ruby
|
|
502
|
+
robot.replace_messages(messages)
|
|
503
|
+
# => self
|
|
504
|
+
```
|
|
505
|
+
|
|
506
|
+
Replace conversation messages with a saved set. Useful for checkpoint/restore workflows.
|
|
507
|
+
|
|
508
|
+
**Parameters:**
|
|
509
|
+
|
|
510
|
+
| Name | Type | Description |
|
|
511
|
+
|------|------|-------------|
|
|
512
|
+
| `messages` | `Array<RubyLLM::Message>` | The messages to restore |
|
|
513
|
+
|
|
514
|
+
**Returns:** `self`
|
|
515
|
+
|
|
516
|
+
**Example:**
|
|
517
|
+
|
|
518
|
+
```ruby
|
|
519
|
+
# Save a checkpoint
|
|
520
|
+
saved = robot.messages.dup
|
|
521
|
+
|
|
522
|
+
# ... later, restore it
|
|
523
|
+
robot.replace_messages(saved)
|
|
524
|
+
```
|
|
525
|
+
|
|
526
|
+
### chat_provider
|
|
527
|
+
|
|
528
|
+
```ruby
|
|
529
|
+
robot.chat_provider
|
|
530
|
+
# => String or nil
|
|
531
|
+
```
|
|
532
|
+
|
|
533
|
+
Return the provider for this robot's chat. Useful for displaying model/provider info without reaching into chat internals.
|
|
534
|
+
|
|
535
|
+
### mcp_client
|
|
536
|
+
|
|
537
|
+
```ruby
|
|
538
|
+
robot.mcp_client("github")
|
|
539
|
+
# => MCP::Client or nil
|
|
540
|
+
```
|
|
541
|
+
|
|
542
|
+
Find an MCP client by server name.
|
|
543
|
+
|
|
544
|
+
**Parameters:**
|
|
545
|
+
|
|
546
|
+
| Name | Type | Description |
|
|
547
|
+
|------|------|-------------|
|
|
548
|
+
| `server_name` | `String` | The MCP server name |
|
|
549
|
+
|
|
550
|
+
**Returns:** `MCP::Client` or `nil`
|
|
551
|
+
|
|
404
552
|
### disconnect
|
|
405
553
|
|
|
406
554
|
```ruby
|
|
@@ -653,6 +801,18 @@ robot = RobotLab.build(
|
|
|
653
801
|
result = robot.run("What is 15 * 7?")
|
|
654
802
|
```
|
|
655
803
|
|
|
804
|
+
### Robot with Local Provider
|
|
805
|
+
|
|
806
|
+
```ruby
|
|
807
|
+
robot = RobotLab.build(
|
|
808
|
+
name: "local_bot",
|
|
809
|
+
model: "llama3.2",
|
|
810
|
+
provider: :ollama,
|
|
811
|
+
system_prompt: "You are helpful."
|
|
812
|
+
)
|
|
813
|
+
result = robot.run("Hello!")
|
|
814
|
+
```
|
|
815
|
+
|
|
656
816
|
### Robot with MCP
|
|
657
817
|
|
|
658
818
|
```ruby
|
|
@@ -749,6 +909,181 @@ bot.with_bus(bus)
|
|
|
749
909
|
bot.send_message(to: :someone, content: "Hello!")
|
|
750
910
|
```
|
|
751
911
|
|
|
912
|
+
## Token & Cost Tracking
|
|
913
|
+
|
|
914
|
+
Every `robot.run()` returns a `RobotResult` with token counts for that call. The robot accumulates running totals across all runs.
|
|
915
|
+
|
|
916
|
+
### RobotResult Token Fields
|
|
917
|
+
|
|
918
|
+
| Field | Type | Description |
|
|
919
|
+
|-------|------|-------------|
|
|
920
|
+
| `input_tokens` | `Integer` | Input tokens sent to the LLM in this run (0 if provider doesn't report usage) |
|
|
921
|
+
| `output_tokens` | `Integer` | Output tokens received from the LLM in this run (0 if not reported) |
|
|
922
|
+
|
|
923
|
+
### Robot Cumulative Totals
|
|
924
|
+
|
|
925
|
+
| Attribute | Type | Description |
|
|
926
|
+
|-----------|------|-------------|
|
|
927
|
+
| `total_input_tokens` | `Integer` | Cumulative input tokens across all `run()` calls |
|
|
928
|
+
| `total_output_tokens` | `Integer` | Cumulative output tokens across all `run()` calls |
|
|
929
|
+
|
|
930
|
+
### reset_token_totals

```ruby
robot.reset_token_totals
# => nil
```

Resets the cumulative accounting counters to zero. Useful when you want to measure cost for a specific task batch while keeping the robot alive for the next batch.

> **Note:** This resets the *accounting counters only* — the underlying chat history keeps growing. The next run's `input_tokens` will reflect the full accumulated chat context sent to the API.

**Example:**

```ruby
robot = RobotLab.build(name: "analyst", system_prompt: "You are helpful.")

result = robot.run("What is a stack?")
puts result.input_tokens  # e.g. 120
puts result.output_tokens # e.g. 45

result2 = robot.run("And a queue?")
puts result2.input_tokens # larger — full chat history sent

puts robot.total_input_tokens  # 120 + result2.input_tokens
puts robot.total_output_tokens

# Start a fresh accounting batch
robot.reset_token_totals
puts robot.total_input_tokens # => 0
```

## Tool Loop Circuit Breaker

Set `max_tool_rounds:` to guard against a robot looping indefinitely through tool calls. After the limit is reached, `RobotLab::ToolLoopError` is raised.

### max_tool_rounds Parameter

```ruby
robot = RobotLab.build(
  name: "runner",
  system_prompt: "Execute every step.",
  local_tools: [StepTool],
  max_tool_rounds: 10
)
```

`max_tool_rounds` can also be set via `RunConfig`:

```ruby
config = RobotLab::RunConfig.new(max_tool_rounds: 10)
robot = RobotLab.build(name: "runner", system_prompt: "...", config: config)
```

### ToolLoopError

`RobotLab::ToolLoopError < RobotLab::InferenceError`

Raised when the number of tool calls in a single `run()` exceeds `max_tool_rounds`. The error message includes the limit that was exceeded.

### Recovery after ToolLoopError

After a `ToolLoopError`, the chat contains a dangling `tool_use` block with no matching `tool_result`. Anthropic and most providers will reject any subsequent request with that broken history.

**You must call `clear_messages` before reusing the robot:**

```ruby
begin
  robot.run("Execute all steps.")
rescue RobotLab::ToolLoopError => e
  puts "Circuit breaker fired: #{e.message}"
end

# Flush the corrupted chat (system prompt is kept)
robot.clear_messages
puts robot.config.max_tool_rounds # still set — config unchanged

# Robot is healthy again
result = robot.run("Something new.")
```

## Learning Accumulation

`robot.learn(text)` records a cross-run observation. On each subsequent `run()`, active learnings are automatically prepended to the user message as a `LEARNINGS FROM PREVIOUS RUNS:` block.

### learn

```ruby
robot.learn(text)
# => self
```

Adds a learning to the robot's accumulated observations. Learnings are automatically deduplicated:

- If the new text is a substring of an existing learning, it is dropped (the existing, broader learning already covers it).
- If an existing learning is a substring of the new text, the narrower one is replaced.

Learnings are persisted to `memory[:learnings]` and survive a robot rebuild when the same `Memory` object is reused.

**Parameters:**

| Name | Type | Description |
|------|------|-------------|
| `text` | `String` | The observation or insight to record |

**Returns:** `self`

### learnings

```ruby
robot.learnings
# => Array<String>
```

Returns the list of accumulated learning strings in insertion order.

### How Learnings Are Injected

When learnings are present, each `run(message)` prepends them to the message before sending to the LLM:

```
LEARNINGS FROM PREVIOUS RUNS:
- This codebase prefers map/collect over manual array accumulation
- Explicit nil comparisons appear frequently here

<original user message>
```

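The prepend step can be sketched in plain Ruby. This is illustrative only, not the gem's internal code:

```ruby
# Illustrative sketch of the injection step described above.
def inject_learnings(learnings, message)
  return message if learnings.empty?

  header  = "LEARNINGS FROM PREVIOUS RUNS:\n"
  bullets = learnings.map { |l| "- #{l}" }.join("\n")
  "#{header}#{bullets}\n\n#{message}"
end

puts inject_learnings(["Prefer map/collect"], "Review snippet B")
# LEARNINGS FROM PREVIOUS RUNS:
# - Prefer map/collect
#
# Review snippet B
```

With no learnings recorded, the message passes through unchanged.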
**Example:**

```ruby
reviewer = RobotLab.build(
  name: "reviewer",
  system_prompt: "You are a Ruby code reviewer."
)

# Run 1 — no learnings yet
reviewer.run("Review snippet A")
reviewer.learn("Prefer map/collect over manual accumulation")

# Run 2 — learning injected automatically
reviewer.run("Review snippet B")
reviewer.learn("Avoid explicit nil comparisons")

# Run 3 — both learnings injected
reviewer.run("Review snippet C")

puts reviewer.learnings.size # => 2
```

### Deduplication Example

```ruby
robot.learn("avoid using puts")
robot.learn("avoid using puts and p in production code")
# => broader learning replaces narrower; robot.learnings.size == 1
```
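The substring rule can be sketched in a few lines of plain Ruby. This is a sketch of the documented behavior, not RobotLab's actual implementation:

```ruby
# Illustrative sketch of the substring-based dedup rule.
def add_learning(learnings, text)
  # Drop the new text if an existing, broader learning already covers it.
  return learnings if learnings.any? { |l| l.include?(text) }

  # Otherwise replace any narrower learnings the new text subsumes.
  learnings.reject { |l| text.include?(l) } << text
end

ls = add_learning([], "avoid using puts")
ls = add_learning(ls, "avoid using puts and p in production code")
ls # => ["avoid using puts and p in production code"]
```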
## See Also

- [Building Robots Guide](../../guides/building-robots.md) (includes [Composable Skills](../../guides/building-robots.md#composable-skills))
data/docs/api/mcp/client.md
CHANGED

@@ -36,6 +36,7 @@ Accepts either a `Server` instance or a Hash configuration. When a Hash is provided:

|-----|------|----------|-------------|
| `name` | `String` | Yes | Server identifier |
| `transport` | `Hash` | Yes | Transport configuration (must include `type`) |
| `timeout` | `Numeric` | No | Request timeout in seconds (default: 15). Propagated to the transport layer |

**Raises:** `ArgumentError` if the config is neither a `Server` nor a `Hash`.
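A minimal Hash configuration using the keys from the table above, including the new `timeout` field. The server name and transport values here are hypothetical placeholders:

```ruby
# Hypothetical Hash-style server config; key names follow the table above.
config = {
  name: "docs_server",                              # required: server identifier
  transport: { type: :stdio, command: "docs-mcp" }, # required: must include :type
  timeout: 30                                       # optional: seconds; defaults to 15
}
```

The `timeout` value is propagated down to whichever transport the `type` key selects.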