robot_lab 0.0.9 → 0.0.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +32 -0
- data/README.md +80 -1
- data/Rakefile +2 -1
- data/docs/api/core/robot.md +182 -0
- data/docs/guides/creating-networks.md +21 -0
- data/docs/guides/index.md +10 -0
- data/docs/guides/knowledge.md +182 -0
- data/docs/guides/mcp-integration.md +106 -0
- data/docs/guides/memory.md +2 -0
- data/docs/guides/observability.md +486 -0
- data/docs/guides/ractor-parallelism.md +364 -0
- data/docs/superpowers/plans/2026-04-14-ractor-integration.md +1538 -0
- data/docs/superpowers/specs/2026-04-14-ractor-integration-design.md +258 -0
- data/examples/19_token_tracking.rb +128 -0
- data/examples/20_circuit_breaker.rb +153 -0
- data/examples/21_learning_loop.rb +164 -0
- data/examples/22_context_compression.rb +179 -0
- data/examples/23_convergence.rb +137 -0
- data/examples/24_structured_delegation.rb +150 -0
- data/examples/25_history_search/conversation.jsonl +30 -0
- data/examples/25_history_search.rb +136 -0
- data/examples/26_document_store/api_versioning_adr.md +52 -0
- data/examples/26_document_store/incident_postmortem.md +46 -0
- data/examples/26_document_store/postgres_runbook.md +49 -0
- data/examples/26_document_store/redis_caching_guide.md +48 -0
- data/examples/26_document_store/sidekiq_guide.md +51 -0
- data/examples/26_document_store.rb +147 -0
- data/examples/27_incident_response/incident_response.rb +244 -0
- data/examples/28_mcp_discovery.rb +112 -0
- data/examples/29_ractor_tools.rb +243 -0
- data/examples/30_ractor_network.rb +256 -0
- data/examples/README.md +136 -0
- data/examples/prompts/skill_with_mcp_test.md +9 -0
- data/examples/prompts/skill_with_robot_name_test.md +5 -0
- data/examples/prompts/skill_with_tools_test.md +6 -0
- data/lib/robot_lab/bus_poller.rb +149 -0
- data/lib/robot_lab/convergence.rb +69 -0
- data/lib/robot_lab/delegation_future.rb +93 -0
- data/lib/robot_lab/document_store.rb +155 -0
- data/lib/robot_lab/error.rb +25 -0
- data/lib/robot_lab/history_compressor.rb +205 -0
- data/lib/robot_lab/mcp/client.rb +17 -5
- data/lib/robot_lab/mcp/connection_poller.rb +187 -0
- data/lib/robot_lab/mcp/server.rb +7 -2
- data/lib/robot_lab/mcp/server_discovery.rb +110 -0
- data/lib/robot_lab/mcp/transports/stdio.rb +6 -0
- data/lib/robot_lab/memory.rb +103 -6
- data/lib/robot_lab/network.rb +44 -9
- data/lib/robot_lab/ractor_boundary.rb +42 -0
- data/lib/robot_lab/ractor_job.rb +37 -0
- data/lib/robot_lab/ractor_memory_proxy.rb +85 -0
- data/lib/robot_lab/ractor_network_scheduler.rb +154 -0
- data/lib/robot_lab/ractor_worker_pool.rb +117 -0
- data/lib/robot_lab/robot/bus_messaging.rb +43 -65
- data/lib/robot_lab/robot/history_search.rb +69 -0
- data/lib/robot_lab/robot.rb +228 -11
- data/lib/robot_lab/robot_result.rb +24 -5
- data/lib/robot_lab/run_config.rb +1 -1
- data/lib/robot_lab/text_analysis.rb +103 -0
- data/lib/robot_lab/tool.rb +42 -3
- data/lib/robot_lab/tool_config.rb +1 -1
- data/lib/robot_lab/version.rb +1 -1
- data/lib/robot_lab/waiter.rb +49 -29
- data/lib/robot_lab.rb +25 -0
- data/mkdocs.yml +1 -0
- metadata +70 -2
checksums.yaml
CHANGED

```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f4b2a3fafbdf3a54de3044b57597b42d86c68bd2afdad6ce866ac82483e61091
+  data.tar.gz: 5137cff56485a26fabe5ab6606b144c4c3c21c1673ecec1d2254a392e015c25c
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 33045f27ec803094a020caee4133c1d6c65887446330c294d9a9babd56a0fe7e71979fe0d032c2421488d1dcae886ad784a78dae579ba01b2786b5f9f91c0172
+  data.tar.gz: 5554296590bfb3dea031c95090ef8a47e946ac1c7a92b3efc6924439f55df7267e0b04d385e2332ea827099c37688c4988aa908caf5bb9d2f21eabc2c50c3167
```
data/CHANGELOG.md
CHANGED

```diff
@@ -8,6 +8,38 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.0.11] - 2026-04-14
+
+### Added
+
+- **Ractor parallelism — Track 1: CPU-bound tools** (`RactorWorkerPool`)
+  - `ractor_safe true` class macro on `Tool` — opts a tool class into Ractor execution; subclasses inherit automatically
+  - `RobotLab.ractor_pool` — global `RactorWorkerPool` singleton, one Ractor worker per CPU core by default
+  - `ractor_pool_size` field on `RunConfig` for configuring pool capacity
+  - `RactorWorkerPool#submit(tool_name, args)` — submits a job and blocks for the frozen result; raises `ToolError` on failure
+  - Tool dispatch routes `ractor_safe` tools through the pool automatically, bypassing the GVL for CPU-intensive work
+  - `RactorBoundary.freeze_deep(obj)` — deep-freezes nested hashes/arrays/strings to make them Ractor-shareable; raises `RactorBoundaryError` for non-shareable objects (Procs, IOs, etc.)
+- **Ractor parallelism — Track 2: parallel robot pipelines** (`RactorNetworkScheduler`)
+  - `parallel_mode: :ractor` on `Network.new` — routes `network.run` through `RactorNetworkScheduler` instead of `SimpleFlow::Pipeline`
+  - `RactorNetworkScheduler` dispatches dependency waves: independent tasks run concurrently (one Thread per task); dependent tasks wait for their wave to complete
+  - `RobotSpec` — frozen `Data.define` descriptor carrying robot name, template, system prompt, and config; safely crosses Ractor boundaries
+  - `RactorNetworkScheduler#run_pipeline` returns `Hash { robot_name => result_string }` for the full pipeline
+  - `RactorNetworkScheduler#run_spec` for single-spec dispatch
+  - `RactorNetworkScheduler#shutdown` for graceful poison-pill cleanup
+  - `network.parallel_mode` reader exposes the configured mode (default `:async`)
+- **Ractor memory proxy** — `RactorMemoryProxy` wraps `Memory` via `ractor-wrapper` for safe cross-Ractor memory access
+- **Infrastructure data classes** — `RactorJob`, `RactorJobError` (`Data.define` structs) for job submission and error propagation across Ractor boundaries
+- **`RactorBoundaryError`** — raised by `freeze_deep` when a non-shareable value (Proc, IO, etc.) would cross a Ractor boundary
+- **`ToolError`** — raised by `RactorWorkerPool#submit` when a tool raises inside a Ractor; propagates message and frozen backtrace
+- **Dependencies** — `ractor_queue` (~> 0.1) and `ractor-wrapper` (~> 0.4) added to gemspec
+- **Ractor Parallelism guide** (`docs/guides/ractor-parallelism.md`) — covers architecture, two-track design, configuration, error handling, constraints, and best practices
+- **Example 29: Ractor-Safe CPU Tools** (`examples/29_ractor_tools.rb`) — demonstrates `ractor_safe` flag, inheritance, `freeze_deep`, pool submissions, `ToolError` propagation, and parallel batch timing; no API key required
+- **Example 30: Ractor Network Scheduler** (`examples/30_ractor_network.rb`) — demonstrates `RactorNetworkScheduler` wave ordering with simulated latencies, `Network.new(parallel_mode: :ractor)` API, and dependency graph inspection; no API key required for Parts 1 & 2
+
+### Fixed
+
+- `ToolConfig::NONE_VALUES` constant was not Ractor-shareable because its inner empty array `[]` was mutable; fixed by replacing `[]` with `[].freeze` so the entire constant is deeply frozen and safe to read from any Ractor
+
 ## [0.0.9] - 2026-03-02
 
 ### Added
```
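The `freeze_deep` boundary check described in the changelog can be sketched in plain Ruby. This is a minimal illustrative approximation, not the gem's implementation: the module name `FreezeDeep` is hypothetical, and it raises a plain `ArgumentError` where the real method is documented to raise `RactorBoundaryError`.

```ruby
# Hypothetical sketch of deep-freezing nested data so it becomes
# Ractor-shareable. Hashes and arrays are frozen recursively; values
# that cannot be made shareable (Procs, IOs, ...) raise.
module FreezeDeep
  def self.call(obj)
    case obj
    when Hash
      obj.each { |k, v| call(k); call(v) }
      obj.freeze
    when Array
      obj.each { |e| call(e) }
      obj.freeze
    when String, Symbol, Numeric, TrueClass, FalseClass, NilClass
      obj.freeze
    else
      raise ArgumentError, "non-shareable object: #{obj.class}"
    end
  end
end

data = { results: ["a", "b"], meta: { count: 2 } }
FreezeDeep.call(data)
data.frozen?              # => true
data[:results][0].frozen? # => true
```

Once every nested hash, array, and string is frozen, the whole structure satisfies Ruby's shareability rules and can be passed between Ractors without copying.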
data/README.md
CHANGED

````diff
@@ -26,7 +26,13 @@
 <strong>Message Bus</strong> - Bidirectional robot communication via TypedBus<br>
 <strong>Dynamic Spawning</strong> - Robots create new robots at runtime<br>
 <strong>Layered Configuration</strong> - Cascading YAML, env vars, and RunConfig<br>
-<strong>Rails Integration</strong> - Generators, background jobs, Turbo Stream broadcasting
+<strong>Rails Integration</strong> - Generators, background jobs, Turbo Stream broadcasting<br>
+<strong>Token & Cost Tracking</strong> - Per-run and cumulative token counts on every robot<br>
+<strong>Tool Loop Circuit Breaker</strong> - <code>max_tool_rounds:</code> guards against runaway tool call loops<br>
+<strong>Learning Accumulation</strong> - <code>robot.learn()</code> builds up cross-run observations with deduplication<br>
+<strong>Context Window Compression</strong> - <code>robot.compress_history()</code> prunes irrelevant old turns via TF cosine scoring<br>
+<strong>Convergence Detection</strong> - <code>RobotLab::Convergence</code> detects when independent agents agree, enabling reconciler fast-path<br>
+<strong>Structured Delegation</strong> - <code>robot.delegate(to:, task:)</code> sync or async inter-robot calls with duration and token metadata; async fan-out via <code>DelegationFuture</code>
 </td>
 </tr>
 </table>
@@ -621,6 +627,79 @@ robot.run("Tell me a story") { |chunk| stream_to_client(chunk.content) }
 
 The `on_content:` callback participates in the RunConfig cascade, so it can be set at the network or config level and inherited by robots.
 
+## Token & Cost Tracking
+
+Every `robot.run()` returns a `RobotResult` that carries token usage for that call. The robot itself accumulates running totals across all runs.
+
+```ruby
+robot = RobotLab.build(name: "analyst", system_prompt: "You are helpful.")
+
+result = robot.run("What is a stack?")
+puts result.input_tokens  # tokens sent to the LLM this run
+puts result.output_tokens # tokens generated this run
+
+puts robot.total_input_tokens  # cumulative across all runs
+puts robot.total_output_tokens
+```
+
+To start a fresh cost batch without rebuilding the robot, call `reset_token_totals`. This resets the **accounting counter only** — the chat history keeps accumulating, so subsequent `input_tokens` will reflect the full context window sent to the API:
+
+```ruby
+robot.reset_token_totals
+puts robot.total_input_tokens # => 0
+```
+
+Token counts are zero for providers that do not return usage data.
+
+## Tool Loop Circuit Breaker
+
+Set `max_tool_rounds:` to prevent a robot from looping indefinitely through tool calls. When the limit is exceeded, `RobotLab::ToolLoopError` is raised.
+
+```ruby
+robot = RobotLab.build(
+  name: "runner",
+  system_prompt: "Execute every step.",
+  local_tools: [StepTool],
+  max_tool_rounds: 10
+)
+
+begin
+  robot.run("Run all steps.")
+rescue RobotLab::ToolLoopError => e
+  puts e.message # "Tool call limit of 10 exceeded"
+end
+```
+
+After a `ToolLoopError` the chat contains a dangling `tool_use` block with no matching `tool_result`. Most providers (including Anthropic) will reject any subsequent request with that history. Call `clear_messages` before reusing the robot:
+
+```ruby
+robot.clear_messages # flushes broken history; system prompt is kept
+result = robot.run("Something new.") # robot is healthy again
+```
+
+## Learning Accumulation
+
+`robot.learn(text)` records a cross-run observation. On each subsequent `run()`, active learnings are automatically prepended to the user message as a `LEARNINGS FROM PREVIOUS RUNS:` block so the LLM can incorporate prior context without needing a persistent chat:
+
+```ruby
+reviewer = RobotLab.build(
+  name: "reviewer",
+  system_prompt: "You are a Ruby code reviewer."
+)
+
+reviewer.run("Review snippet A")
+reviewer.learn("This codebase prefers map/collect over manual array accumulation")
+
+reviewer.run("Review snippet B") # learning is injected automatically
+```
+
+Learnings deduplicate bidirectionally: if a broader learning is added that contains an existing narrower one, the narrower one is dropped. Learnings are persisted to the robot's `Memory` and survive a robot rebuild when the same `Memory` object is reused.
+
+```ruby
+reviewer.learnings # => ["This codebase prefers map/collect..."]
+reviewer.learn("new fact") # deduplicates before storing
+```
+
 ## Rails Integration
 
 ```bash
````
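The bidirectional substring deduplication the README describes for learnings can be approximated in a few lines of plain Ruby. The helper name `add_learning` is hypothetical; it only illustrates the dedup rules, not the gem's internals.

```ruby
# Hypothetical sketch of bidirectional learning deduplication:
# keep a new learning only if no broader one already exists, and
# drop any narrower learnings the new text subsumes.
def add_learning(learnings, text)
  return learnings if learnings.any? { |l| l.include?(text) } # broader already stored
  learnings.reject { |l| text.include?(l) } << text           # replace narrower ones
end

list = []
list = add_learning(list, "avoid using puts")
list = add_learning(list, "avoid using puts and p in production code")
list.size # => 1, the broader learning replaced the narrower one
```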
data/Rakefile
CHANGED

```diff
@@ -49,7 +49,8 @@ namespace :examples do
   SUBDIR_ENTRY_POINTS = {
     "14_rusty_circuit" => "open_mic.rb",
     "15_memory_network_and_bus" => "editorial_pipeline.rb",
-    "16_writers_room" => "writers_room.rb"
+    "16_writers_room" => "writers_room.rb",
+    "27_incident_response" => "incident_response.rb"
   }.freeze
 
   # Subdirectory demos that are standalone apps (not run via `ruby`)
```
data/docs/api/core/robot.md
CHANGED

````diff
@@ -33,6 +33,8 @@ Robot.new(
   enable_cache: true,
   bus: nil,
   skills: nil,
+  max_tool_rounds: nil,
+  token_budget: nil,
   temperature: nil,
   top_p: nil,
   top_k: nil,
@@ -65,6 +67,8 @@ Robot.new(
 | `enable_cache` | `Boolean` | `true` | Whether to enable semantic caching |
 | `bus` | `TypedBus::MessageBus`, `nil` | `nil` | Optional message bus for inter-robot communication |
 | `skills` | `Symbol`, `Array<Symbol>`, `nil` | `nil` | Skill templates to prepend (see [Skills](#skills)) |
+| `max_tool_rounds` | `Integer`, `nil` | `nil` | Circuit breaker: raise `ToolLoopError` after this many tool calls in one `run()` (see [Tool Loop Circuit Breaker](#tool-loop-circuit-breaker)) |
+| `token_budget` | `Integer`, `nil` | `nil` | Raise `InferenceError` if cumulative input tokens exceed this limit |
 | `config` | `RunConfig`, `nil` | `nil` | Shared config merged with explicit kwargs (see [RunConfig](#runconfig)) |
 | `temperature` | `Float`, `nil` | `nil` | Controls randomness (0.0-1.0) |
 | `top_p` | `Float`, `nil` | `nil` | Nucleus sampling threshold |
@@ -113,6 +117,9 @@ If `name` is omitted, it defaults to `"robot"`.
 | `config` | `RunConfig` | Effective RunConfig (merged from constructor kwargs and passed-in config) |
 | `mcp_config` | `Symbol`, `Array` | Build-time MCP configuration (raw, unresolved) |
 | `tools_config` | `Symbol`, `Array` | Build-time tools configuration (raw, unresolved) |
+| `total_input_tokens` | `Integer` | Cumulative input tokens sent across all `run()` calls |
+| `total_output_tokens` | `Integer` | Cumulative output tokens received across all `run()` calls |
+| `learnings` | `Array<String>` | Accumulated cross-run observations (see [Learning Accumulation](#learning-accumulation)) |
 
 ## Attributes (Read-Write)
 
@@ -902,6 +909,181 @@ bot.with_bus(bus)
 bot.send_message(to: :someone, content: "Hello!")
 ```
 
+## Token & Cost Tracking
+
+Every `robot.run()` returns a `RobotResult` with token counts for that call. The robot accumulates running totals across all runs.
+
+### RobotResult Token Fields
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `input_tokens` | `Integer` | Input tokens sent to the LLM in this run (0 if provider doesn't report usage) |
+| `output_tokens` | `Integer` | Output tokens received from the LLM in this run (0 if not reported) |
+
+### Robot Cumulative Totals
+
+| Attribute | Type | Description |
+|-----------|------|-------------|
+| `total_input_tokens` | `Integer` | Cumulative input tokens across all `run()` calls |
+| `total_output_tokens` | `Integer` | Cumulative output tokens across all `run()` calls |
+
+### reset_token_totals
+
+```ruby
+robot.reset_token_totals
+# => nil
+```
+
+Reset the cumulative accounting counters to zero. Useful when you want to measure cost for a specific task batch while keeping the robot alive for the next batch.
+
+> **Note:** This resets the *accounting counter only* — the underlying chat history keeps growing. The next run's `input_tokens` will reflect the full accumulated chat context sent to the API.
+
+**Example:**
+
+```ruby
+robot = RobotLab.build(name: "analyst", system_prompt: "You are helpful.")
+
+result = robot.run("What is a stack?")
+puts result.input_tokens   # e.g. 120
+puts result.output_tokens  # e.g. 45
+
+result2 = robot.run("And a queue?")
+puts result2.input_tokens  # larger — full chat history sent
+
+puts robot.total_input_tokens  # 120 + result2.input_tokens
+puts robot.total_output_tokens
+
+# Start a fresh accounting batch
+robot.reset_token_totals
+puts robot.total_input_tokens # => 0
+```
+
+## Tool Loop Circuit Breaker
+
+Set `max_tool_rounds:` to guard against a robot looping indefinitely through tool calls. After the limit is reached, `RobotLab::ToolLoopError` is raised.
+
+### max_tool_rounds Parameter
+
+```ruby
+robot = RobotLab.build(
+  name: "runner",
+  system_prompt: "Execute every step.",
+  local_tools: [StepTool],
+  max_tool_rounds: 10
+)
+```
+
+`max_tool_rounds` can also be set via `RunConfig`:
+
+```ruby
+config = RobotLab::RunConfig.new(max_tool_rounds: 10)
+robot = RobotLab.build(name: "runner", system_prompt: "...", config: config)
+```
+
+### ToolLoopError
+
+`RobotLab::ToolLoopError < RobotLab::InferenceError`
+
+Raised when the number of tool calls in a single `run()` exceeds `max_tool_rounds`. The error message includes the limit that was exceeded.
+
+### Recovery after ToolLoopError
+
+After a `ToolLoopError`, the chat contains a dangling `tool_use` block with no matching `tool_result`. Anthropic and most providers will reject any subsequent request with that broken history.
+
+**You must call `clear_messages` before reusing the robot:**
+
+```ruby
+begin
+  robot.run("Execute all steps.")
+rescue RobotLab::ToolLoopError => e
+  puts "Circuit breaker fired: #{e.message}"
+end
+
+# Flush the corrupted chat (system prompt is kept)
+robot.clear_messages
+puts robot.config.max_tool_rounds # still set — config unchanged
+
+# Robot is healthy again
+result = robot.run("Something new.")
+```
+
+## Learning Accumulation
+
+`robot.learn(text)` records a cross-run observation. On each subsequent `run()`, active learnings are automatically prepended to the user message as a `LEARNINGS FROM PREVIOUS RUNS:` block.
+
+### learn
+
+```ruby
+robot.learn(text)
+# => self
+```
+
+Add a learning to the robot's accumulated observations. Learnings are automatically deduplicated:
+
+- If the new text is a substring of an existing learning, it is dropped (the existing broader learning already covers it).
+- If an existing learning is a substring of the new text, the narrower one is replaced.
+
+Learnings are persisted to `memory[:learnings]` and survive a robot rebuild when the same `Memory` object is reused.
+
+**Parameters:**
+
+| Name | Type | Description |
+|------|------|-------------|
+| `text` | `String` | The observation or insight to record |
+
+**Returns:** `self`
+
+### learnings
+
+```ruby
+robot.learnings
+# => Array<String>
+```
+
+Returns the list of accumulated learning strings in insertion order.
+
+### How Learnings Are Injected
+
+When learnings are present, each `run(message)` prepends them to the message before sending to the LLM:
+
+```
+LEARNINGS FROM PREVIOUS RUNS:
+- This codebase prefers map/collect over manual array accumulation
+- Explicit nil comparisons appear frequently here
+
+<original user message>
+```
+
+**Example:**
+
+```ruby
+reviewer = RobotLab.build(
+  name: "reviewer",
+  system_prompt: "You are a Ruby code reviewer."
+)
+
+# Run 1 — no learnings yet
+reviewer.run("Review snippet A")
+reviewer.learn("Prefer map/collect over manual accumulation")
+
+# Run 2 — learning injected automatically
+reviewer.run("Review snippet B")
+reviewer.learn("Avoid explicit nil comparisons")
+
+# Run 3 — both learnings injected
+reviewer.run("Review snippet C")
+
+puts reviewer.learnings.size # => 2
+```
+
+### Deduplication Example
+
+```ruby
+robot.learn("avoid using puts")
+robot.learn("avoid using puts and p in production code")
+# => broader learning replaces narrower; robot.learnings.size == 1
+```
+
 ## See Also
 
 - [Building Robots Guide](../../guides/building-robots.md) (includes [Composable Skills](../../guides/building-robots.md#composable-skills))
````
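The `max_tool_rounds` accounting described above can be sketched without the gem. The class `ToolLoop` and method `call_tool` below are hypothetical names; they only illustrate the counter-and-raise pattern, not where the real counter lives inside the run loop.

```ruby
# Hypothetical sketch of a tool-loop circuit breaker: count tool
# rounds within one run and raise once the configured limit is exceeded.
class ToolLoopError < StandardError; end

class ToolLoop
  def initialize(max_rounds:)
    @max_rounds = max_rounds
    @rounds = 0
  end

  def call_tool
    @rounds += 1
    if @rounds > @max_rounds
      raise ToolLoopError, "Tool call limit of #{@max_rounds} exceeded"
    end
    :tool_result
  end
end

guard = ToolLoop.new(max_rounds: 3)
3.times { guard.call_tool }                 # within budget
guard.call_tool rescue puts "breaker fired" # the 4th call raises ToolLoopError
```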
data/docs/guides/creating-networks.md
CHANGED

````diff
@@ -124,6 +124,7 @@ end
 | `memory` | Task-specific memory |
 | `config` | Per-task `RunConfig` (merged on top of network's config) |
 | `depends_on` | `:none`, `[:task1]`, or `:optional` |
+| `poller_group` | Bus delivery group label (`:default`, `:slow`, etc.) |
 
 ## Conditional Routing
 
@@ -164,6 +165,26 @@ network = RobotLab.create_network(name: "support") do
 end
 ```
 
+## Poller Groups
+
+Each network maintains a shared `BusPoller` that serializes TypedBus deliveries on a per-robot basis: if a robot is already processing a message, new deliveries are queued and drained after the current one completes. This prevents re-entrancy without blocking other robots.
+
+Named **poller groups** let you label tasks so slow robots are identifiable in logs and monitoring without needing separate infrastructure:
+
+```ruby
+network = RobotLab.create_network(name: "mixed_speed") do
+  # Fast robots on the default group
+  task :fetcher, fetcher_robot, depends_on: :none
+  task :summarize, summarizer, depends_on: [:fetcher]
+
+  # Slow robots with expensive LLM calls — label them :slow
+  task :analyst, analyst_robot, depends_on: [:fetcher], poller_group: :slow
+  task :writer, writer_robot, depends_on: [:analyst], poller_group: :slow
+end
+```
+
+Group labels are informational — there is no separate queue per group. In Async execution, robots naturally yield during LLM HTTP calls, so fast and slow robots interleave without explicit isolation.
+
 ## Running Networks
 
 ### Basic Run
````
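Dispatch over a `depends_on` graph, as the changelog describes for `RactorNetworkScheduler`, amounts to grouping tasks into dependency waves. A minimal plain-Ruby sketch with a hypothetical `waves` helper, not the gem's scheduler:

```ruby
# Hypothetical sketch: group tasks into dependency waves. A task joins
# a wave once all of its dependencies completed in an earlier wave.
def waves(deps)
  done = []
  result = []
  until done.size == deps.size
    wave = deps.keys.select { |t| !done.include?(t) && (deps[t] - done).empty? }
    raise "dependency cycle" if wave.empty?
    result << wave
    done.concat(wave)
  end
  result
end

graph = { fetcher: [], summarize: [:fetcher], analyst: [:fetcher], writer: [:analyst] }
waves(graph)
# => [[:fetcher], [:summarize, :analyst], [:writer]]
```

Tasks within one wave are mutually independent, so they can run concurrently (one Thread per task, per the changelog); each wave waits for the previous one to complete.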
data/docs/guides/index.md
CHANGED

```diff
@@ -38,6 +38,14 @@ If you're new to RobotLab, start here:
 
     Share data between robots with the memory system
 
+- [:octicons-pulse-24: **Observability & Safety**](observability.md)
+
+    Token tracking, circuit breakers, and learning accumulation
+
+- [:material-cpu-64-bit: **Ractor Parallelism**](ractor-parallelism.md)
+
+    True CPU parallelism for tools and robot pipelines via Ruby Ractors
+
 </div>
 
 ## Framework Integration
@@ -61,3 +69,5 @@ If you're new to RobotLab, start here:
 | [Streaming](streaming.md) | Real-time responses | 5 min |
 | [Memory](memory.md) | Shared data store | 5 min |
 | [Rails Integration](rails-integration.md) | Rails application setup | 15 min |
+| [Observability & Safety](observability.md) | Token tracking, circuit breaker, learning loop | 10 min |
+| [Ractor Parallelism](ractor-parallelism.md) | CPU-parallel tools and robot pipelines | 15 min |
```
data/docs/guides/knowledge.md
ADDED

````diff
@@ -0,0 +1,182 @@
+# Knowledge & Retrieval
+
+Facilities for searching and retrieving knowledge from a robot's history and from external documents:
+
+- **Chat History Search** — semantic search over accumulated conversation turns
+- **Embedding-Based Document Store** — lightweight RAG: store arbitrary text, search by meaning
+
+---
+
+## Chat History Search
+
+### The Problem
+
+Long-running robots accumulate many conversation turns. When you need to recall what was discussed earlier on a specific topic, re-sending the full history wastes tokens. `search_history` gives you a focused slice of the most relevant past messages without touching the LLM.
+
+### robot.search_history
+
+```ruby
+results = robot.search_history(query, limit: 5)
+```
+
+Scores every message in the robot's conversation history against `query` using stemmed term-frequency cosine similarity (via the `classifier` gem). Returns up to `limit` `HistoryResult` objects sorted by score descending.
+
+```ruby
+results = robot.search_history("quarterly revenue", limit: 3)
+
+results.each do |r|
+  puts "[#{r.role}] score=#{r.score.round(3)} idx=#{r.index}"
+  puts "  #{r.text}"
+end
+```
+
+### HistoryResult Fields
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `text` | String | The message text |
+| `role` | Symbol | `:user`, `:assistant`, or `:system` |
+| `score` | Float (0.0–1.0) | Cosine similarity with the query |
+| `index` | Integer | Position in `@chat.messages` |
+
+### Typical Scores
+
+| Relationship | Typical Score |
+|---|---|
+| Direct answer to the query | 0.50 – 0.80 |
+| Same topic, different phrasing | 0.20 – 0.50 |
+| Unrelated | < 0.10 |
+
+### Short Messages
+
+Messages shorter than 20 characters are skipped — they produce no meaningful term vector.
+
+### Full Example
+
+```ruby
+robot = RobotLab.build(name: "analyst", system_prompt: "You are a financial analyst.")
+
+# … after several robot.run() calls …
+
+hits = robot.search_history("customer acquisition cost")
+hits.each { |r| puts "#{r.role} (#{r.score.round(2)}): #{r.text}" }
+```
+
+### RAG Pattern — Retrieve Then Generate
+
+Use `search_history` to inject only the relevant past context into the next call:
+
+```ruby
+hits = robot.search_history(user_query, limit: 3)
+context = hits.map(&:text).join("\n")
+
+robot.run("Recall context:\n#{context}\n\nNew question: #{user_query}")
+```
+
+### Optional Dependency
+
+`search_history` requires the `classifier` gem:
+
+```ruby
+gem "classifier", "~> 2.3"
+```
+
+Without it, calling `search_history` raises `RobotLab::DependencyError` with an install hint.
+
+---
+
+## Embedding-Based Document Store
+
+### The Problem
+
+Sometimes the knowledge you need isn't in the conversation history — it's in a README, a product spec, a changelog. `store_document` / `search_documents` embed arbitrary text with `fastembed` and retrieve the most relevant chunk at query time.
+
+### memory.store_document / memory.search_documents
+
+```ruby
+memory.store_document(:readme, File.read("README.md"))
+memory.store_document(:changelog, File.read("CHANGELOG.md"))
+
+hits = memory.search_documents("how to configure redis", limit: 3)
+hits.each { |h| puts "#{h[:key]} (#{h[:score].round(3)}): #{h[:text][0..80]}" }
+```
+
+Each result hash contains:
+
+| Key | Type | Description |
+|-----|------|-------------|
+| `:key` | Symbol | The key the document was stored under |
+| `:text` | String | The full stored text |
+| `:score` | Float (0.0–1.0) | Cosine similarity with the query |
+
+### Standalone DocumentStore
+
+The `Memory` methods delegate to `RobotLab::DocumentStore`, which can also be used directly:
+
+```ruby
+store = RobotLab::DocumentStore.new
+store.store(:doc_a, "Ruby on Rails is a full-stack web framework.")
+store.store(:doc_b, "Postgres is an advanced relational database.")
+
+results = store.search("relational database SQL", limit: 2)
+puts results.first[:key] # => :doc_b
+```
+
+Management methods:
+
+```ruby
+store.size    # => 2
+store.keys    # => [:doc_a, :doc_b]
+store.empty?  # => false
+store.delete(:doc_a)
+store.clear
+```
+
+### Embedding Model
+
+Default: `BAAI/bge-small-en-v1.5` (~23 MB, downloaded on first use, cached in `~/.cache/fastembed/`).
+
+Documents are embedded with a `"passage: "` prefix and queries with a `"query: "` prefix — the standard retrieval convention for BGE models.
+
+Custom model:
+
+```ruby
+store = RobotLab::DocumentStore.new(model_name: "BAAI/bge-base-en-v1.5")
+```
+
+### RAG Pattern
+
+```ruby
+# 1. Index your knowledge base at startup
+memory.store_document(:readme, File.read("README.md"))
+memory.store_document(:changelog, File.read("CHANGELOG.md"))
+memory.store_document(:api_docs, File.read("docs/api.md"))
+
+# 2. At query time, retrieve the most relevant chunks
+hits = memory.search_documents(user_query, limit: 3)
+context = hits.map { |h| h[:text] }.join("\n\n")
+
+# 3. Pass context to your robot
+result = robot.run("Use the following context:\n#{context}\n\nQuestion: #{user_query}")
+```
+
+### Memory API Summary
+
+| Method | Description |
+|--------|-------------|
+| `memory.store_document(key, text)` | Embed and store a document |
+| `memory.search_documents(query, limit: 5)` | Search by semantic similarity |
+| `memory.document_keys` | List stored keys |
+| `memory.delete_document(key)` | Remove a document |
+
+### Dependency
+
+`fastembed` is a core RobotLab dependency — no optional gem required. The ONNX model is downloaded on first use.
+
+---
+
+## See Also
+
+- [Observability Guide](observability.md)
+- [Example 25 — Chat History Search](../../examples/25_history_search.rb)
+- [Example 26 — Embedding-Based Document Store](../../examples/26_document_store.rb)
````