robot_lab 0.0.12 → 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +36 -0
- data/README.md +50 -0
- data/docs/api/messages/index.md +21 -0
- data/docs/getting-started/configuration.md +1 -1
- data/docs/guides/creating-networks.md +23 -0
- data/docs/guides/rails-integration.md +52 -13
- data/examples/18_rails/app/jobs/robot_run_job.rb +15 -75
- data/examples/31_launch_assessment.rb +248 -0
- data/examples/README.md +9 -0
- data/lib/generators/robot_lab/job_generator.rb +40 -0
- data/lib/generators/robot_lab/templates/job.rb.tt +10 -81
- data/lib/generators/robot_lab/templates/robot_job.rb.tt +18 -0
- data/lib/robot_lab/message.rb +1 -1
- data/lib/robot_lab/network.rb +1 -1
- data/lib/robot_lab/rails_integration/job.rb +158 -0
- data/lib/robot_lab/rails_integration/railtie.rb +9 -0
- data/lib/robot_lab/run_config.rb +1 -1
- data/lib/robot_lab/version.rb +1 -1
- data/lib/robot_lab.rb +4 -0
- metadata +7 -4
- data/.github/workflows/deploy-yard-docs.yml +0 -52
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 9a1898a9909e7fdd795350b7f10cf80c57d69c1cec1b2533e56b9652cc2b3930
+  data.tar.gz: 641d4bd4022788a4ce53965a1ff8413ccf1420f71a1981f975618f825c29b80c
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: d51060ef5a929f5d9b895ba7e11220d47765fb92a342e9c3c76d5c223043747d53e7b924c9ecc53fccc90b90a0b08c180e75d5955527cc8b3cd99411a9c64026
+  data.tar.gz: c59f404518a69d97c65d960982367c582b8410d3754d3cf238ab8117f400eb229a02891bb449ab311bceaa16f0382c6cfd355c8f976bcc8f49b8191f8035e221
data/CHANGELOG.md CHANGED

@@ -8,6 +8,42 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ## [Unreleased]

+## [0.1.0] - 2026-04-29
+
+### Added
+
+- **`RobotLab::Job` base class** (`lib/robot_lab/rails_integration/job.rb`) — `ActiveJob::Base` subclass encapsulating the full robot-run lifecycle for Rails background jobs
+  - `robot_class` DSL — bind a job subclass to a specific robot class at the class level; per-subclass, not inherited
+  - `perform(message:, robot_class: nil, thread_id: nil, **context)` — resolves robot class, wires Turbo Stream callbacks, runs robot, persists `RobotResult`, broadcasts completion/error
+  - `thread_id` omitted → fire-and-forget mode (no persistence, no broadcasting)
+  - Turbo Stream wiring is a graceful no-op when `turbo-rails` is absent
+  - `retry_on StandardError, wait: 5.seconds, attempts: 3` and `discard_on ActiveJob::DeserializationError` configured by default
+  - `RobotLab::Job` top-level alias registered in `robot_lab.rb` when Rails is present so job subclasses can write `< RobotLab::Job`
+- **`rails generate robot_lab:job NAME`** (`lib/generators/robot_lab/job_generator.rb`) — generates a dedicated job subclass pre-wired to `<NAME>Robot` via the `robot_class` DSL
+  - `--queue` option (default `"default"`)
+  - Template: `lib/generators/robot_lab/templates/robot_job.rb.tt`
+- **`max_concurrent_robots` field on `RunConfig`** — caps the number of fiber-concurrent robots in a parallel network execution; passed to `SimpleFlow::Pipeline#call_parallel` as `max_concurrent:`
+- **Example 31: Launch Assessment** (`examples/31_launch_assessment.rb`) — six `AnalystRobot` instances run in parallel (market, competitive, tech, risk, financial, legal) with a cap of 4 concurrent robots; a `LaunchDirector` synthesizes findings into a GO/NO-GO decision
+- **20 unit tests for `RobotLab::RailsIntegration::Job`** (`test/robot_lab/rails_integration/job_test.rb`) covering the `robot_class` DSL, `resolve_robot_class`, `setup_thread`, `build_robot`, `broadcast_completion`, `broadcast_error`, and `turbo_available?`
+
+### Changed
+
+- Bumped version to 0.1.0
+- **`RobotRunJob` (generated job)** is now a thin two-line subclass of `RobotLab::Job` — all lifecycle logic lives in the base class
+- **`job.rb.tt` install generator template** updated to the thin-subclass pattern
+- **`examples/18_rails` `RobotRunJob`** updated to the thin-subclass pattern
+- **`Network#call_parallel`** now forwards `max_concurrent: @config.max_concurrent_robots` to `SimpleFlow::Pipeline`, enabling the concurrency cap introduced in `RunConfig`
+
+### Fixed
+
+- **`Message.from_hash`** — records persisted without a `type` key (e.g. legacy user-message rows) previously raised `ArgumentError: missing keyword: :type`; `from_hash` now defaults a nil or absent `type` to `"text"` so old rows deserialize as `TextMessage` without error
+
+### Documentation
+
+- **`docs/guides/rails-integration.md`** — rewrote the Background Jobs section to document the `RobotLab::Job` base class, lifecycle steps, `robot_class` DSL, dedicated job generator, fire-and-forget mode, and when to use a custom `ApplicationJob` instead
+- **`docs/api/messages/index.md`** — added a "Deserializing from Hash" section documenting `Message.from_hash` dispatch logic and the missing-type fallback
+- **README.md** — expanded the Rails Integration section with full Background Jobs documentation covering both the generic and dedicated job patterns
+
 ## [0.0.12] - 2026-04-18

 ### Added
data/README.md CHANGED

@@ -842,6 +842,56 @@ This creates:
 - `app/robots/` - Directory for your robots
 - Database tables for conversation history

+### Background Jobs
+
+RobotLab ships with `RobotLab::Job`, an `ActiveJob::Base` subclass that handles the full robot-run lifecycle: robot class resolution, Turbo Stream wiring, thread-record persistence, and completion/error broadcasting.
+
+**Generic job** (robot class supplied at enqueue time):
+
+```bash
+rails generate robot_lab:install   # creates app/jobs/robot_run_job.rb
+```
+
+```ruby
+# app/jobs/robot_run_job.rb (generated)
+class RobotRunJob < RobotLab::Job
+  queue_as :default
+end
+
+# Enqueue from a controller:
+RobotRunJob.perform_later(
+  robot_class: "SupportRobot",
+  message: params[:message],
+  thread_id: session_id
+)
+```
+
+**Dedicated job** (robot class bound at the class level via DSL):
+
+```bash
+rails generate robot_lab:job Support             # binds to SupportRobot, queue: default
+rails generate robot_lab:job Support --queue ai  # custom queue
+```
+
+```ruby
+# app/jobs/support_job.rb (generated)
+class SupportJob < RobotLab::Job
+  queue_as :default
+  robot_class SupportRobot
+end
+
+# Enqueue (no robot_class: needed):
+SupportJob.perform_later(message: params[:message], thread_id: session_id)
+```
+
+When `thread_id` is provided and [turbo-rails](https://github.com/hotwired/turbo-rails) is installed, `RobotLab::Job` automatically:
+
+- Wires `on_content` / `on_tool_call` Turbo Stream callbacks so the UI updates in real time
+- Broadcasts a **completion** event to `"robot_lab_thread_#{thread_id}"` when the run finishes
+- Broadcasts an **error** event (HTML-escaped) if the job raises
+
+Omitting `thread_id` runs the robot in fire-and-forget mode — no persistence, no broadcasting.
+
 ## Documentation

 Full documentation is available at **[https://madbomber.github.io/robot_lab](https://madbomber.github.io/robot_lab)**
data/docs/api/messages/index.md CHANGED

@@ -76,6 +76,27 @@ memory.messages # => Array<Message>
 memory.format_history # => Array<Message>
 ```

+## Deserializing from Hash
+
+`Message.from_hash` reconstructs the correct subclass from a stored hash:
+
+```ruby
+RobotLab::Message.from_hash({ type: "text", role: "user", content: "Hello" })
+# => #<RobotLab::TextMessage ...>
+
+RobotLab::Message.from_hash({ type: "tool_call", role: "assistant", tools: [...] })
+# => #<RobotLab::ToolCallMessage ...>
+```
+
+When the `type` key is absent or `nil` (e.g. records persisted before the field was introduced), `from_hash` defaults to `TextMessage`:
+
+```ruby
+RobotLab::Message.from_hash({ role: "user", content: "legacy row" })
+# => #<RobotLab::TextMessage ...>
+```
+
+String keys are normalised automatically via `transform_keys(&:to_sym)`.
+
 ## See Also

 - [Memory](../core/memory.md)
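The missing-`type` fallback described in this section can be modeled with a small, dependency-free sketch. `TextMessage` and `ToolCallMessage` below are stand-in structs and `message_from_hash` is a hypothetical re-implementation for illustration; it is not the gem's actual code.

```ruby
# Stand-in message classes; the real gem defines richer subclasses.
TextMessage     = Struct.new(:role, :content, keyword_init: true)
ToolCallMessage = Struct.new(:role, :tools, keyword_init: true)

# Hypothetical sketch of the from_hash dispatch described above.
def message_from_hash(hash)
  attrs = hash.transform_keys(&:to_sym)  # normalise string keys
  type  = attrs.delete(:type) || "text"  # absent or nil type => "text"

  case type.to_s
  when "text"      then TextMessage.new(**attrs)
  when "tool_call" then ToolCallMessage.new(**attrs)
  else raise ArgumentError, "unknown message type: #{type.inspect}"
  end
end

legacy = message_from_hash("role" => "user", "content" => "legacy row")
puts legacy.class  # TextMessage
```

Because the `type` key is deleted before the keyword splat, the remaining attributes map directly onto the target struct's keywords, mirroring why the real fix avoids `ArgumentError: missing keyword: :type`.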
data/docs/getting-started/configuration.md CHANGED

@@ -361,7 +361,7 @@ effective.temperature #=> 0.9 (overridden)
 | **LLM** | `model`, `temperature`, `top_p`, `top_k`, `max_tokens`, `presence_penalty`, `frequency_penalty`, `stop` |
 | **Tools** | `mcp`, `tools` |
 | **Callbacks** | `on_tool_call`, `on_tool_result` |
-| **Infrastructure** | `bus`, `enable_cache` |
+| **Infrastructure** | `bus`, `enable_cache`, `max_tool_rounds`, `token_budget`, `ractor_pool_size`, `max_concurrent_robots` |

 ### RunConfig vs RobotLab.config

data/docs/guides/creating-networks.md CHANGED

@@ -84,6 +84,29 @@ network = RobotLab.create_network(name: "parallel_analysis") do
 end
 ```

+### Concurrency Cap
+
+When a network fans out to many parallel robots, each makes a simultaneous LLM API call. With no limit this can exhaust API rate-limit quotas or database connection pools under load. Set `max_concurrent_robots:` on a `RunConfig` to cap how many robot tasks run at once — the rest queue behind an `Async::Semaphore` and start as slots open:
+
+```ruby
+config = RobotLab::RunConfig.new(max_concurrent_robots: 4)
+
+network = RobotLab.create_network(name: "launch_assessment", config: config) do
+  # All six declared parallel, but at most 4 LLM calls in-flight simultaneously
+  task :market,      market_robot,    depends_on: :none
+  task :competitive, comp_robot,      depends_on: :none
+  task :tech,        tech_robot,      depends_on: :none
+  task :risk,        risk_robot,      depends_on: :none
+  task :financial,   financial_robot, depends_on: :none  # queues until a slot opens
+  task :legal,       legal_robot,     depends_on: :none  # queues until a slot opens
+
+  task :director, director_robot, depends_on: [:market, :competitive, :tech,
+                                               :risk, :financial, :legal]
+end
+```
+
+`nil` (the default) means unlimited — identical to pre-existing behavior. For Rails deployments, size the cap to match your database connection pool and API rate tier. See [Example 31](../../examples/31_launch_assessment.rb) for a working demo.
+
 ### Optional Tasks

 Optional tasks only run when explicitly activated by a preceding robot:
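In RobotLab the cap is enforced by an `Async::Semaphore` inside `simple_flow`; the back-pressure behavior itself can be demonstrated with a dependency-free plain-Thread sketch, where a `SizedQueue` stands in for the semaphore (illustrative only, not the gem's implementation):

```ruby
# 6 simulated "robots", at most 4 running at once.
# SizedQueue#push blocks when the queue is full, giving counting-semaphore
# semantics: pushing acquires a slot, popping releases it.
slots  = SizedQueue.new(4)
lock   = Mutex.new
active = 0
peak   = 0

threads = 6.times.map do
  Thread.new do
    slots.push(true)                  # acquire a slot (blocks once 4 are in use)
    lock.synchronize { active += 1; peak = [peak, active].max }
    sleep 0.05                        # simulate the LLM call
    lock.synchronize { active -= 1 }
    slots.pop                         # release the slot
  end
end
threads.each(&:join)

puts "peak concurrency: #{peak}"
```

The observed peak never exceeds the slot count, which is exactly the guarantee `max_concurrent_robots: 4` provides for in-flight LLM calls.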
data/docs/guides/rails-integration.md CHANGED

@@ -362,34 +362,73 @@ channel.send({ message: "Hello!", session_id: sessionId });

 ## Background Jobs

+### RobotLab::Job Base Class
+
+All RobotLab background jobs inherit from `RobotLab::Job` (`RobotLab::RailsIntegration::Job`), which handles the full robot-run lifecycle automatically:
+
+1. Resolves the robot class (from the `robot_class` DSL or a `robot_class:` kwarg at enqueue time)
+2. Finds or creates a `RobotLabThread` record and stamps the incoming message
+3. Wires Turbo Stream callbacks when `turbo-rails` is available (graceful no-op otherwise)
+4. Runs the robot and persists the `RobotResult` to `RobotLabResult`
+5. Broadcasts a completion or error event via Turbo Streams
+
+`retry_on StandardError` (3 attempts, 5 s wait) and `discard_on ActiveJob::DeserializationError` are configured by default.
+
 ### RobotRunJob (Generated)

-The install generator creates
+The install generator creates a thin subclass you can enqueue with any robot class at runtime:
+
+```ruby title="app/jobs/robot_run_job.rb"
+class RobotRunJob < RobotLab::Job
+  queue_as :default
+end
+```

 ```ruby
-# Enqueue from a controller
+# Enqueue from a controller — pass robot_class: as a string
 RobotRunJob.perform_later(
   robot_class: "SupportRobot",
-  message:
-  thread_id:
+  message: params[:message],
+  thread_id: session_id
 )

 render json: { status: "processing" }
 ```

-
+### Dedicated Job (robot_class DSL)

-
-
-
-
-
+Generate a job pre-bound to a specific robot class so callers never need to pass `robot_class:`:
+
+```bash
+rails generate robot_lab:job Support             # binds to SupportRobot, queue: default
+rails generate robot_lab:job Support --queue ai  # custom queue name
+```

-
+```ruby title="app/jobs/support_job.rb"
+class SupportJob < RobotLab::Job
+  queue_as :default
+  robot_class SupportRobot
+end
+```
+
+```ruby
+# No robot_class: needed at enqueue time
+SupportJob.perform_later(message: params[:message], thread_id: session_id)
+```
+
+The `robot_class` DSL is per-subclass and does not affect sibling job classes.
+
+### Omitting thread_id (fire-and-forget)
+
+When `thread_id` is omitted the job runs the robot and returns the result without any persistence or broadcasting:
+
+```ruby
+RobotRunJob.perform_later(robot_class: "ChatRobot", message: "ping")
+```

 ### Turbo Stream Token Streaming

-When `turbo-rails` is installed, `
+When `turbo-rails` is installed, `RobotLab::Job` automatically streams content tokens and tool call badges to the browser in real time.

 #### View Setup


@@ -435,7 +474,7 @@ The stream name convention is `"robot_lab_thread_#{thread_id}"`, matching the `R

 ### Custom Background Job

-For full control
+For full control outside of the `RobotLab::Job` lifecycle (e.g. custom persistence or a different broadcasting strategy), inherit from `ApplicationJob` directly:

 ```ruby title="app/jobs/process_message_job.rb"
 class ProcessMessageJob < ApplicationJob
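The "per-subclass, not inherited" behavior of a `robot_class`-style DSL falls out naturally when the binding is stored in a class-level instance variable. Here is a minimal, hypothetical sketch of that pattern (stand-in class names, not the gem's source):

```ruby
# Class-level instance variables live on each class object individually,
# so one subclass's binding is invisible to its siblings and is not
# inherited by further subclasses.
class SketchJob
  def self.robot_class(klass = nil)
    @bound_robot_class = klass if klass
    @bound_robot_class
  end
end

class SupportSketchJob < SketchJob
  robot_class :SupportRobot   # symbol stand-in for a real robot class
end

class BillingSketchJob < SketchJob; end  # no binding of its own

puts SupportSketchJob.robot_class.inspect  # :SupportRobot
puts BillingSketchJob.robot_class.inspect  # nil
```

A job's `perform` can then fall back to a `robot_class:` argument whenever the class-level binding returns `nil`, which is the resolution order the guide describes.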
data/examples/18_rails/app/jobs/robot_run_job.rb CHANGED

@@ -1,79 +1,19 @@
 # frozen_string_literal: true

-
+# Generic background job for executing any robot asynchronously.
+#
+# Inherits from RobotLab::Job — Turbo Stream wiring, thread persistence,
+# and completion/error broadcasting are all handled by the base class.
+#
+# Pass robot_class: at enqueue time to select which robot to run.
+#
+# @example Enqueue from a controller
+#   RobotRunJob.perform_later(
+#     robot_class: "ChatRobot",
+#     message: params[:message],
+#     thread_id: session_id
+#   )
+#
+class RobotRunJob < RobotLab::Job
   queue_as :default
-
-  retry_on StandardError, wait: 5.seconds, attempts: 3
-  discard_on ActiveJob::DeserializationError
-
-  def perform(robot_class:, message:, thread_id:, **context)
-    thread = RobotLabThread.find_or_create_by_session_id(thread_id)
-    thread.update!(last_user_message: message, last_user_message_at: Time.current)
-
-    robot = resolve_robot(robot_class, thread_id)
-    result = robot.run(message, **context)
-
-    persist_result(thread, result)
-    broadcast_completion(thread_id)
-  rescue StandardError => e
-    broadcast_error(thread_id, e)
-    raise
-  end
-
-  private
-
-  def resolve_robot(robot_class, thread_id)
-    klass = robot_class.to_s.constantize
-    stream_name = "robot_lab_thread_#{thread_id}"
-
-    if turbo_available?
-      on_content = RobotLab::RailsIntegration::TurboStreamCallbacks.build_content_callback(
-        stream_name: stream_name
-      )
-      on_tool_call = RobotLab::RailsIntegration::TurboStreamCallbacks.build_tool_call_callback(
-        stream_name: stream_name
-      )
-      klass.build(on_content: on_content, on_tool_call: on_tool_call)
-    else
-      klass.build
-    end
-  end
-
-  def persist_result(thread, result)
-    sequence = thread.results.maximum(:sequence_number).to_i + 1
-    exported = result.export
-
-    thread.results.create!(
-      robot_name: result.robot_name,
-      sequence_number: sequence,
-      output_data: exported[:output],
-      tool_calls_data: exported[:tool_calls],
-      stop_reason: result.stop_reason,
-      checksum: result.checksum
-    )
-  end
-
-  def broadcast_completion(thread_id)
-    return unless turbo_available?
-
-    Turbo::StreamsChannel.broadcast_replace_to(
-      "robot_lab_thread_#{thread_id}",
-      target: "robot_status",
-      html: "<div id=\"robot_status\"><span class=\"complete\">Complete</span></div>"
-    )
-  end
-
-  def broadcast_error(thread_id, error)
-    return unless turbo_available?
-
-    Turbo::StreamsChannel.broadcast_append_to(
-      "robot_lab_thread_#{thread_id}",
-      target: "robot_errors",
-      html: "<div class=\"error\">#{ERB::Util.html_escape(error.message)}</div>"
-    )
-  end
-
-  def turbo_available?
-    defined?(Turbo::StreamsChannel)
-  end
 end
data/examples/31_launch_assessment.rb ADDED

@@ -0,0 +1,248 @@
+#!/usr/bin/env ruby
+# frozen_string_literal: true
+
+# Example 31: Product Launch Assessment — 6 Parallel Analysts, Cap of 4
+#
+# Six specialist robots evaluate a product launch simultaneously.
+# max_concurrent_robots: 4 ensures at most 4 LLM API calls are in-flight
+# at once. Robots 5 and 6 queue behind the Async::Semaphore and start as
+# soon as any of the first 4 finishes — providing natural back-pressure
+# without slowing the pipeline more than necessary.
+#
+# Architecture:
+#
+# ┌──────────────────────────────────────────────────────────────────────┐
+# │ PARALLEL ANALYSIS PHASE (max_concurrent_robots: 4)                   │
+# │                                                                      │
+# │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐                  │
+# │ │ Market   │ │ Compet.  │ │ Tech     │ │ Risk     │  slots 1-4       │
+# │ │ Analyst  │ │ Analyst  │ │ Reviewer │ │ Assessor │                  │
+# │ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘                  │
+# │      │            │            │            │                        │
+# │    start        start        start        start                      │
+# │                                                                      │
+# │ ┌──────────┐ ┌──────────┐                                            │
+# │ │Financial │ │ Legal    │  queued — start when a slot opens          │
+# │ │ Reviewer │ │ Reviewer │                                            │
+# │ └────┬─────┘ └────┬─────┘                                            │
+# │      │            │                                                  │
+# │  (deferred)   (deferred)                                             │
+# │                                                                      │
+# │ ┌─────────────────────────────────────────────────────────────┐      │
+# │ │                       SHARED MEMORY                         │      │
+# │ │  :market :competitive :tech :risk :financial :legal         │      │
+# │ └─────────────────────────────────────────────────────────────┘      │
+# │                              │                                       │
+# │                              ▼                                       │
+# │ ┌─────────────────────────────────────────────────────────────┐      │
+# │ │                      Launch Director                        │      │
+# │ │  Blocks on reactive memory until all 6 findings arrive,     │      │
+# │ │  then issues a GO / NO-GO recommendation.                   │      │
+# │ └─────────────────────────────────────────────────────────────┘      │
+# └──────────────────────────────────────────────────────────────────────┘
+#
+# Key config:
+#   RunConfig.new(max_concurrent_robots: 4)
+#
+# Usage:
+#   ANTHROPIC_API_KEY=your_key ruby examples/31_launch_assessment.rb
+
+ENV["ROBOT_LAB_TEMPLATE_PATH"] ||= File.join(__dir__, "prompts")
+
+require_relative "../lib/robot_lab"
+
+RubyLLM.configure { |c| c.logger = Logger.new(File::NULL) }
+
+# ── AnalystRobot ────────────────────────────────────────────────────────────────
+#
+# Runs its LLM call, writes the verdict to shared memory, and logs a timing line
+# so you can see when the semaphore releases robots 5 and 6.
+
+class AnalystRobot < RobotLab::Robot
+  attr_reader :memory_key
+  attr_writer :shared_memory
+
+  def initialize(name:, memory_key:, role:)
+    super(
+      name: name,
+      system_prompt: "You are a #{role}. " \
+                     "Review the product brief in 2-3 crisp sentences from your area of expertise. " \
+                     "Close with a one-word verdict: READY or NOT-READY."
+    )
+    @memory_key = memory_key
+  end
+
+  def call(result)
+    brief = extract_brief(result)
+    started = Time.now
+    puts "  [#{name}] started at +#{"%.1f" % (started - $run_start)}s"
+
+    verdict = run(brief).reply.strip
+
+    elapsed = "%.1f" % (Time.now - started)
+    puts "  [#{name}] finished in #{elapsed}s — #{verdict.split.last(2).join(" ")}"
+
+    if @shared_memory
+      @shared_memory.current_writer = name
+      @shared_memory.set(@memory_key, verdict)
+    end
+
+    result.with_context(name.to_sym, verdict).continue(verdict)
+  end
+
+  private
+
+  def extract_brief(result)
+    case result.value
+    when Hash then result.value[:message].to_s
+    when RobotLab::RobotResult then result.value.reply.to_s
+    else result.value.to_s
+    end
+  end
+end
+
+# ── LaunchDirector ──────────────────────────────────────────────────────────────
+#
+# Waits for all 6 findings via reactive memory, then issues the final call.
+# SimpleFlow guarantees the six analyst tasks are done before this task runs,
+# so the memory.get is effectively a non-blocking read by the time we arrive here.
+
+class LaunchDirector < RobotLab::Robot
+  attr_writer :shared_memory
+
+  FINDING_KEYS = %i[market competitive tech risk financial legal].freeze
+
+  def call(result)
+    puts "  [#{name}] reading all findings from shared memory..."
+    findings = @shared_memory.get(*FINDING_KEYS, wait: 120)
+
+    if findings.values.any? { |v| v == :timeout }
+      timed_out = findings.select { |_, v| v == :timeout }.keys
+      puts "  [#{name}] WARNING: timed out waiting for: #{timed_out.join(", ")}"
+    end
+
+    prompt = <<~PROMPT
+      Six specialist analysts have reviewed our product launch readiness.
+      Based on their findings, issue a final GO or NO-GO recommendation
+      in 3-5 sentences. Be direct and specific about the key deciding factors.
+
+      Market analysis: #{findings[:market] || "(not received)"}
+      Competitive analysis: #{findings[:competitive] || "(not received)"}
+      Technical review: #{findings[:tech] || "(not received)"}
+      Risk assessment: #{findings[:risk] || "(not received)"}
+      Financial review: #{findings[:financial] || "(not received)"}
+      Legal review: #{findings[:legal] || "(not received)"}
+
+      Begin your response with "GO -" or "NO-GO -".
+    PROMPT
+
+    recommendation = run(prompt).reply.strip
+    @shared_memory.set(:recommendation, recommendation)
+    puts "  [#{name}] recommendation ready"
+
+    result.with_context(:recommendation, recommendation).continue(recommendation)
+  end
+end
+
+# ── Product Brief ───────────────────────────────────────────────────────────────
+
+PRODUCT_BRIEF = <<~BRIEF
+  Product: "Orion" — an AI-powered project management tool that auto-generates
+  sprint plans from Jira backlogs, detects scope creep in real-time, and integrates
+  with GitHub and Slack via webhooks. SaaS pricing: $25/seat/month, 14-day free trial.
+  Target: mid-size engineering teams (20-200 developers). Launch date: 6 weeks out.
+  Beta: 12 paying customers, 94% satisfaction, 0 critical bugs open. SOC 2 Type I
+  certification in progress, expected within 30 days.
+BRIEF
+
+# ── Build the six analysts ───────────────────────────────────────────────────────
+
+ANALYSTS = [
+  { name: "market_analyst",      key: :market,      role: "market opportunity analyst" },
+  { name: "competitive_analyst", key: :competitive, role: "competitive intelligence analyst" },
+  { name: "tech_reviewer",       key: :tech,        role: "technical readiness and quality reviewer" },
+  { name: "risk_assessor",       key: :risk,        role: "product risk assessment specialist" },
+  { name: "financial_reviewer",  key: :financial,   role: "financial viability and pricing analyst" },
+  { name: "legal_reviewer",      key: :legal,       role: "legal, compliance, and IP reviewer" },
+].freeze
+
+analyst_robots = ANALYSTS.map do |spec|
+  AnalystRobot.new(name: spec[:name], memory_key: spec[:key], role: spec[:role])
+end
+
+director = LaunchDirector.new(
+  name: "launch_director",
+  system_prompt: "You are the VP of Product making the final launch call."
+)
+
+# ── Network — note the concurrency cap ──────────────────────────────────────────
+
+config = RobotLab::RunConfig.new(max_concurrent_robots: 4)
+
+analyst_names = analyst_robots.map { |r| r.name.to_sym }
+
+network = RobotLab.create_network(name: "launch_assessment", config: config) do
+  analyst_robots.each do |robot|
+    task robot.name.to_sym, robot, depends_on: :none
+  end
+
+  task :launch_director, director, depends_on: analyst_names
+end
+
+# Assign shared memory so each robot can write to it directly
+shared_memory = network.memory
+(analyst_robots + [director]).each { |r| r.shared_memory = shared_memory }
+
+# Subscribe for a memory-level audit trail
+analyst_robots.each do |robot|
+  network.memory.subscribe(robot.memory_key) do |change|
+    puts "  [memory] :#{change.key} written by #{change.writer}"
+  end
+end
+
+# ── Run ─────────────────────────────────────────────────────────────────────────
+
+puts "=" * 68
+puts "Example 31: Product Launch Assessment"
+puts "  6 specialist analysts in parallel, max_concurrent_robots: 4"
+puts "=" * 68
+puts
+puts "Pipeline:"
+puts network.visualize
+puts
+puts "Concurrency config: #{config.inspect}"
+puts
+puts "Product brief:"
+puts PRODUCT_BRIEF.strip.gsub(/^/, "  ")
+puts
+puts "-" * 68
+puts "Running — analysts 5 and 6 queue until a semaphore slot opens..."
+puts "-" * 68
+puts
+
+$run_start = Time.now
+result = network.run(message: PRODUCT_BRIEF)
+elapsed = "%.1f" % (Time.now - $run_start)
+
+puts
+puts "-" * 68
+puts "All analysts complete. Total wall time: #{elapsed}s"
+puts "-" * 68
+puts
+puts "=" * 68
+puts "LAUNCH DIRECTOR RECOMMENDATION"
+puts "=" * 68
+puts
+puts network.memory[:recommendation]
+puts
+puts "=" * 68
+puts "INDIVIDUAL ANALYST VERDICTS"
+puts "=" * 68
+puts
+analyst_robots.each do |robot|
+  label = robot.name.gsub("_", " ").upcase
+  finding = network.memory.get(robot.memory_key).to_s
+  puts "#{label}"
+  puts finding
+  puts
+end
data/examples/README.md
CHANGED
@@ -57,6 +57,7 @@ examples/
   26_document_store.rb       # Embedding-based document store (RAG) via fastembed
   29_ractor_tools.rb         # Ractor-safe tools: worker pool, freeze_deep, parallel batch
   30_ractor_network.rb       # Ractor network scheduler: dependency waves, parallel_mode
+  31_launch_assessment.rb    # 6 parallel analysts, max_concurrent_robots: 4 semaphore cap
   18_rails/                  # Minimal Rails 8 demo app (full integration)
     app/robots/chat_robot.rb # Robot factory with system prompt + TimeTool
     app/tools/time_tool.rb   # Custom RobotLab::Tool subclass
@@ -297,6 +298,14 @@ and the `pipeline.step_dependencies` dependency graph inspection.
 
 **Requires:** None for Parts 1 & 2. LLM API key for Part 3.
 
+### 31 — Product Launch Assessment (Concurrency Cap)
+
+Six specialist robots evaluate a product launch simultaneously: market, competitive, technical, risk, financial, and legal analysts. `RunConfig.new(max_concurrent_robots: 4)` caps the `Async::Semaphore` at 4 in-flight LLM calls — robots 5 and 6 queue until a slot opens. A `LaunchDirector` reads all six findings from shared reactive memory and issues a GO / NO-GO recommendation. Start timestamps in the output make the semaphore behavior visible.
+
+Demonstrates: `max_concurrent_robots:` on `RunConfig`, `Async::Semaphore` back-pressure via `simple_flow`, six parallel `depends_on: :none` tasks, shared memory writes and blocking reads.
+
+**Requires:** LLM API key
+
 ### 18 — Rails Integration Demo
 
 A minimal, hand-built Rails 8 app that exercises every piece of RobotLab's Rails integration end-to-end. No `rails new` — every file is hand-crafted for minimum size.
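The `max_concurrent_robots` cap that example 31 demonstrates can be illustrated with plain Ruby: a `SizedQueue` used as a counting semaphore limits how many of six "analyst" threads run at once. This is a thread-based sketch of the concept only, not the gem's `Async::Semaphore` implementation, and the analyst names are just labels.

```ruby
# A SizedQueue acts as a counting semaphore: pushing a token blocks
# once MAX_CONCURRENT tokens are in flight, so at most 4 "analysts"
# run simultaneously even though 6 threads are started.
MAX_CONCURRENT = 4
slots = SizedQueue.new(MAX_CONCURRENT)

in_flight = []
peak = 0
lock = Mutex.new

analysts = %w[market competitive technical risk financial legal]
threads = analysts.map do |name|
  Thread.new do
    slots.push(name)              # acquire a slot (blocks when 4 are busy)
    lock.synchronize do
      in_flight << name
      peak = [peak, in_flight.size].max
    end
    sleep 0.05                    # stand-in for the LLM call
    lock.synchronize { in_flight.delete(name) }
    slots.pop                     # release the slot
  end
end
threads.each(&:join)

puts "peak concurrency: #{peak}"  # never exceeds MAX_CONCURRENT
```

Because the queue holds at most four tokens, the fifth and sixth pushes block until an earlier worker pops — the same back-pressure that the example's start timestamps make visible.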
data/lib/generators/robot_lab/job_generator.rb
ADDED
@@ -0,0 +1,40 @@
+# frozen_string_literal: true
+
+require "rails/generators"
+
+module RobotLab
+  module Generators
+    # Generates a RobotLab job subclass pre-wired to a specific robot class.
+    #
+    # Usage:
+    #   rails generate robot_lab:job NAME [options]
+    #
+    # Examples:
+    #   rails generate robot_lab:job Support
+    #   # => app/jobs/support_job.rb (robot_class SupportRobot)
+    #
+    class JobGenerator < ::Rails::Generators::NamedBase
+      source_root File.expand_path("templates", __dir__)
+
+      class_option :queue, type: :string, default: "default",
+                           desc: "ActiveJob queue name"
+
+      # Creates the job file.
+      #
+      # @return [void]
+      def create_job_file
+        template "robot_job.rb.tt", "app/jobs/#{file_name}_job.rb"
+      end
+
+      private
+
+      def queue_name
+        options[:queue]
+      end
+
+      def robot_class_name
+        "#{class_name}Robot"
+      end
+    end
+  end
+end
data/examples/18_rails/app/jobs/robot_run_job.rb
CHANGED
@@ -1,92 +1,21 @@
 # frozen_string_literal: true
 
-#
+# Generic background job for executing any robot asynchronously.
 #
-#
-#
-#
+# Inherits from RobotLab::Job — Turbo Stream wiring, thread persistence,
+# and completion/error broadcasting are all handled by the base class.
+#
+# Pass robot_class: at enqueue time to select which robot to run, or
+# generate a dedicated job with `rails generate robot_lab:job NAME` to
+# bind a job class to a specific robot via the robot_class DSL.
 #
 # @example Enqueue from a controller
 #   RobotRunJob.perform_later(
 #     robot_class: "SupportRobot",
-#     message:
-#     thread_id:
+#     message: params[:message],
+#     thread_id: session_id
 #   )
 #
-class RobotRunJob <
+class RobotRunJob < RobotLab::Job
   queue_as :default
-
-  retry_on StandardError, wait: 5.seconds, attempts: 3
-  discard_on ActiveJob::DeserializationError
-
-  def perform(robot_class:, message:, thread_id:, **context)
-    thread = RobotLabThread.find_or_create_by_session_id(thread_id)
-    thread.update!(last_user_message: message, last_user_message_at: Time.current)
-
-    robot = resolve_robot(robot_class, thread_id)
-    result = robot.run(message, **context)
-
-    persist_result(thread, result)
-    broadcast_completion(thread_id)
-  rescue StandardError => e
-    broadcast_error(thread_id, e)
-    raise
-  end
-
-  private
-
-  def resolve_robot(robot_class, thread_id)
-    klass = robot_class.to_s.constantize
-    stream_name = "robot_lab_thread_#{thread_id}"
-
-    if turbo_available?
-      on_content = RobotLab::RailsIntegration::TurboStreamCallbacks.build_content_callback(
-        stream_name: stream_name
-      )
-      on_tool_call = RobotLab::RailsIntegration::TurboStreamCallbacks.build_tool_call_callback(
-        stream_name: stream_name
-      )
-      klass.build(on_content: on_content, on_tool_call: on_tool_call)
-    else
-      klass.build
-    end
-  end
-
-  def persist_result(thread, result)
-    sequence = thread.results.maximum(:sequence_number).to_i + 1
-    exported = result.export
-
-    thread.results.create!(
-      robot_name: result.robot_name,
-      sequence_number: sequence,
-      output_data: exported[:output],
-      tool_calls_data: exported[:tool_calls],
-      stop_reason: result.stop_reason,
-      checksum: result.checksum
-    )
-  end
-
-  def broadcast_completion(thread_id)
-    return unless turbo_available?
-
-    Turbo::StreamsChannel.broadcast_replace_to(
-      "robot_lab_thread_#{thread_id}",
-      target: "robot_status",
-      html: "<span class=\"robot-status-complete\">Complete</span>"
-    )
-  end
-
-  def broadcast_error(thread_id, error)
-    return unless turbo_available?
-
-    Turbo::StreamsChannel.broadcast_append_to(
-      "robot_lab_thread_#{thread_id}",
-      target: "robot_errors",
-      html: "<div class=\"robot-error\">#{ERB::Util.html_escape(error.message)}</div>"
-    )
-  end
-
-  def turbo_available?
-    defined?(Turbo::StreamsChannel)
-  end
 end
data/lib/generators/robot_lab/templates/robot_job.rb.tt
ADDED
@@ -0,0 +1,18 @@
+# frozen_string_literal: true
+
+# Background job that runs <%= robot_class_name %> asynchronously.
+#
+# Inherits from RobotLab::Job — Turbo Stream wiring, thread persistence,
+# and completion/error broadcasting are all handled by the base class.
+#
+# @example Enqueue from a controller
+#   <%= class_name %>Job.perform_later(
+#     message: params[:message],
+#     thread_id: session_id
+#   )
+#
+class <%= class_name %>Job < RobotLab::Job
+  queue_as :<%= queue_name %>
+
+  robot_class <%= robot_class_name %>
+end
data/lib/robot_lab/message.rb
CHANGED
data/lib/robot_lab/network.rb
CHANGED
data/lib/robot_lab/rails_integration/job.rb
ADDED
@@ -0,0 +1,158 @@
+# frozen_string_literal: true
+
+module RobotLab
+  module RailsIntegration
+    # Base class for RobotLab background jobs.
+    #
+    # Encapsulates the full robot-run lifecycle: robot class resolution,
+    # Turbo Stream callback wiring, thread-record persistence, and
+    # completion/error broadcasting. Suitable for fiber-safe execution
+    # under Solid Queue's fiber mode (see issue #20 for SQ version details).
+    #
+    # @example Minimal subclass using the robot_class DSL
+    #   class SupportRobotJob < RobotLab::Job
+    #     queue_as :default
+    #     robot_class SupportRobot
+    #   end
+    #
+    #   # Enqueue (no robot_class: needed — taken from DSL):
+    #   SupportRobotJob.perform_later(message: "Hello", thread_id: session_id)
+    #
+    # @example Generic job accepting robot_class at enqueue time
+    #   class RobotRunJob < RobotLab::Job
+    #     queue_as :default
+    #   end
+    #
+    #   RobotRunJob.perform_later(
+    #     robot_class: "SupportRobot",
+    #     message: params[:message],
+    #     thread_id: session_id
+    #   )
+    #
+    class Job < ActiveJob::Base
+      # Set or get the default robot class for this job subclass.
+      #
+      # @overload robot_class
+      #   @return [Class, nil] the configured robot class
+      # @overload robot_class(klass)
+      #   @param klass [Class] the robot class (must respond to .build)
+      #   @return [Class]
+      def self.robot_class(klass = nil)
+        klass ? @robot_class = klass : @robot_class
+      end
+
+      retry_on StandardError, wait: 5.seconds, attempts: 3
+      discard_on ActiveJob::DeserializationError
+
+      # Run a robot as a background job.
+      #
+      # When +thread_id+ is provided the job:
+      # - Finds or creates a +RobotLabThread+ record and updates its last-message fields
+      # - Wires +TurboStreamCallbacks+ onto the robot (when turbo-rails is present)
+      # - Persists the +RobotResult+ to +RobotLabResult+
+      # - Broadcasts a completion or error event via Turbo Streams
+      #
+      # When +thread_id+ is omitted the robot runs in a fire-and-forget mode —
+      # no persistence, no broadcasting, result returned directly.
+      #
+      # @param message [String] the user message forwarded to robot.run
+      # @param robot_class [String, Class, nil] override; falls back to the class-level DSL
+      # @param thread_id [String, nil] session key for persistence and Turbo broadcasting
+      # @param context [Hash] additional keyword args forwarded to robot.run
+      # @return [RobotResult]
+      def perform(message:, robot_class: nil, thread_id: nil, **context)
+        klass = resolve_robot_class(robot_class)
+        thread = setup_thread(thread_id, message)
+        robot = build_robot(klass, thread_id)
+        result = robot.run(message, **context)
+
+        if thread
+          persist_result(thread, result)
+          broadcast_completion(thread_id)
+        end
+
+        result
+      rescue StandardError => e
+        broadcast_error(thread_id, e) if thread_id
+        raise
+      end
+
+      private
+
+      # Resolve the robot class from the runtime arg or the class-level DSL.
+      def resolve_robot_class(runtime_class)
+        klass = runtime_class || self.class.robot_class
+        raise ArgumentError,
+              "No robot class specified. Pass robot_class: to perform or set robot_class on the job class." \
+          unless klass
+
+        return klass if klass.is_a?(Class)
+
+        klass.to_s.constantize
+      end
+
+      # Find or create the thread record and stamp the incoming message.
+      # Returns nil when thread_id is absent (fire-and-forget mode).
+      def setup_thread(thread_id, message)
+        return nil unless thread_id
+
+        thread = "RobotLabThread".constantize.find_or_create_by_session_id(thread_id)
+        thread.update!(last_user_message: message, last_user_message_at: Time.current)
+        thread
+      end
+
+      # Build the robot, wiring Turbo callbacks when thread_id + turbo-rails are present.
+      def build_robot(klass, thread_id)
+        if thread_id && turbo_available?
+          stream_name = "robot_lab_thread_#{thread_id}"
+          on_content = TurboStreamCallbacks.build_content_callback(stream_name: stream_name)
+          on_tool_call = TurboStreamCallbacks.build_tool_call_callback(stream_name: stream_name)
+          klass.build(on_content: on_content, on_tool_call: on_tool_call)
+        else
+          klass.build
+        end
+      end
+
+      # Append a RobotLabResult record to the thread.
+      def persist_result(thread, result)
+        sequence = thread.results.maximum(:sequence_number).to_i + 1
+        exported = result.export
+
+        thread.results.create!(
+          robot_name: result.robot_name,
+          sequence_number: sequence,
+          output_data: exported[:output],
+          tool_calls_data: exported[:tool_calls],
+          stop_reason: result.stop_reason,
+          checksum: result.checksum
+        )
+      end
+
+      # Broadcast a "Complete" badge to the Turbo Stream.
+      def broadcast_completion(thread_id)
+        return unless turbo_available?
+
+        Turbo::StreamsChannel.broadcast_replace_to(
+          "robot_lab_thread_#{thread_id}",
+          target: "robot_status",
+          html: "<div id=\"robot_status\"><span class=\"complete\">Complete</span></div>"
+        )
+      end
+
+      # Broadcast an HTML-escaped error message to the Turbo Stream.
+      def broadcast_error(thread_id, error)
+        return unless turbo_available?
+
+        Turbo::StreamsChannel.broadcast_append_to(
+          "robot_lab_thread_#{thread_id}",
+          target: "robot_errors",
+          html: "<div class=\"error\">#{ERB::Util.html_escape(error.message)}</div>"
+        )
+      end
+
+      def turbo_available?
+        defined?(Turbo::StreamsChannel)
+      end
+    end
+  end
+end
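The `robot_class` DSL in `RobotLab::RailsIntegration::Job` stores its value in a class-level instance variable, so each job subclass keeps its own robot independently. A self-contained sketch of the same one-method getter/setter pattern, using toy stand-in classes (the `SupportRobot`/`BillingRobot` names here are just placeholders):

```ruby
class BaseJob
  # One method acts as both setter (with an argument) and getter
  # (without). The value lives in a class-level instance variable,
  # so each subclass carries its own setting rather than sharing one.
  def self.robot_class(klass = nil)
    klass ? @robot_class = klass : @robot_class
  end
end

SupportRobot = Class.new
BillingRobot = Class.new

class SupportJob < BaseJob
  robot_class SupportRobot
end

class BillingJob < BaseJob
  robot_class BillingRobot
end

p SupportJob.robot_class  # SupportRobot
p BillingJob.robot_class  # BillingRobot
p BaseJob.robot_class     # nil — base class left unconfigured
```

Class-level instance variables are not inherited, which is why `BaseJob.robot_class` stays `nil` even after its subclasses are configured; a generic job built this way instead accepts `robot_class:` at enqueue time.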
data/lib/robot_lab/rails_integration/railtie.rb
CHANGED
@@ -33,9 +33,18 @@ module RobotLab
       Dir.glob("#{path}/**/*.rake").each { |f| load f }
     end
 
+    # TODO: Add fiber isolation warning once Solid Queue fiber mode lands in a
+    # mainline release. PR rails/solid_queue#728 (branch crmne/solid_queue
+    # async-worker-execution-mode) introduces the `fibers:` worker key but has
+    # not yet been merged or released. When a released version is detectable,
+    # add an initializer here that warns when:
+    #   defined?(SolidQueue) &&
+    #     app.config.active_support.isolation_level != :fiber
+
     generators do
       require "generators/robot_lab/install_generator"
       require "generators/robot_lab/robot_generator"
+      require "generators/robot_lab/job_generator"
     end
   end
 end
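The TODO in that Railtie hunk only sketches the check in a comment. One hypothetical shape for it is below — the module name, method signature, and injectable logger are all invented for illustration; a real initializer would hook into the Railtie and read `app.config` directly:

```ruby
# Hypothetical helper matching the TODO above: warn when Solid Queue
# is loaded but ActiveSupport's isolation level is not fiber-based.
module SolidQueueFiberCheck
  def self.warn_if_misconfigured(config, solid_queue_loaded:, logger:)
    return unless solid_queue_loaded
    return if config[:isolation_level] == :fiber

    logger.call("RobotLab: set config.active_support.isolation_level = :fiber " \
                "when running robots under Solid Queue's fiber mode.")
  end
end

messages = []
log = ->(msg) { messages << msg }

# Thread isolation + Solid Queue loaded → one warning emitted.
SolidQueueFiberCheck.warn_if_misconfigured(
  { isolation_level: :thread }, solid_queue_loaded: true, logger: log
)

# Fiber isolation → no additional warning.
SolidQueueFiberCheck.warn_if_misconfigured(
  { isolation_level: :fiber }, solid_queue_loaded: true, logger: log
)

p messages.size  # 1
```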
data/lib/robot_lab/run_config.rb
CHANGED
@@ -41,7 +41,7 @@ module RobotLab
     CALLBACK_FIELDS = %i[on_tool_call on_tool_result on_content].freeze
 
     # Infrastructure fields
-    INFRA_FIELDS = %i[bus enable_cache max_tool_rounds token_budget ractor_pool_size].freeze
+    INFRA_FIELDS = %i[bus enable_cache max_tool_rounds token_budget ractor_pool_size max_concurrent_robots].freeze
 
     # All recognized fields
     FIELDS = (LLM_FIELDS + TOOL_FIELDS + CALLBACK_FIELDS + INFRA_FIELDS).freeze
data/lib/robot_lab/version.rb
CHANGED
data/lib/robot_lab.rb
CHANGED
@@ -251,4 +251,8 @@ if defined?(Rails::Engine)
   require 'robot_lab/rails_integration/engine'
   require 'robot_lab/rails_integration/railtie'
   require 'robot_lab/rails_integration/turbo_stream_callbacks'
+  require 'robot_lab/rails_integration/job'
+
+  # Convenience alias so job subclasses can inherit from RobotLab::Job
+  RobotLab::Job = RobotLab::RailsIntegration::Job
 end
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: robot_lab
 version: !ruby/object:Gem::Version
-  version: 0.0
+  version: 0.1.0
 platform: ruby
 authors:
 - Dewayne VanHoozer
@@ -155,14 +155,14 @@ dependencies:
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: 0.
+        version: 0.4.0
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
    - - "~>"
      - !ruby/object:Gem::Version
-        version: 0.
+        version: 0.4.0
 - !ruby/object:Gem::Dependency
   name: ractor_queue
   requirement: !ruby/object:Gem::Requirement
@@ -234,7 +234,6 @@ extra_rdoc_files: []
 files:
 - ".envrc"
 - ".github/workflows/deploy-github-pages.yml"
-- ".github/workflows/deploy-yard-docs.yml"
 - ".irbrc"
 - CHANGELOG.md
 - COMMITS.md
@@ -396,6 +395,7 @@ files:
 - examples/28_mcp_discovery.rb
 - examples/29_ractor_tools.rb
 - examples/30_ractor_network.rb
+- examples/31_launch_assessment.rb
 - examples/README.md
 - examples/prompts/assistant.md
 - examples/prompts/audit_trail.md
@@ -446,12 +446,14 @@ files:
 - examples/prompts/template_with_skills_test.md
 - examples/prompts/triage.md
 - lib/generators/robot_lab/install_generator.rb
+- lib/generators/robot_lab/job_generator.rb
 - lib/generators/robot_lab/robot_generator.rb
 - lib/generators/robot_lab/templates/initializer.rb.tt
 - lib/generators/robot_lab/templates/job.rb.tt
 - lib/generators/robot_lab/templates/migration.rb.tt
 - lib/generators/robot_lab/templates/result_model.rb.tt
 - lib/generators/robot_lab/templates/robot.rb.tt
+- lib/generators/robot_lab/templates/robot_job.rb.tt
 - lib/generators/robot_lab/templates/robot_test.rb.tt
 - lib/generators/robot_lab/templates/routing_robot.rb.tt
 - lib/generators/robot_lab/templates/thread_model.rb.tt
@@ -484,6 +486,7 @@ files:
 - lib/robot_lab/ractor_network_scheduler.rb
 - lib/robot_lab/ractor_worker_pool.rb
 - lib/robot_lab/rails_integration/engine.rb
+- lib/robot_lab/rails_integration/job.rb
 - lib/robot_lab/rails_integration/railtie.rb
 - lib/robot_lab/rails_integration/turbo_stream_callbacks.rb
 - lib/robot_lab/robot.rb
data/.github/workflows/deploy-yard-docs.yml
DELETED
@@ -1,52 +0,0 @@
-name: Deploy YARD Documentation to GitHub Pages
-on:
-  push:
-    branches:
-      - main
-      - develop
-    paths:
-      - "lib/**"
-      - "docs/assets/**"
-      - ".yardopts"
-      - "*.gemspec"
-      - ".github/workflows/deploy-yard-docs.yml"
-  workflow_dispatch:
-
-permissions:
-  contents: write
-  pages: write
-  id-token: write
-
-jobs:
-  deploy:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout code
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-
-      - name: Setup Ruby
-        uses: ruby/setup-ruby@v1
-        with:
-          ruby-version: "3.3"
-          bundler-cache: true
-
-      - name: Install YARD
-        run: gem install yard
-
-      - name: Build YARD documentation
-        run: yard doc
-
-      - name: Configure Git
-        run: |
-          git config --local user.email "action@github.com"
-          git config --local user.name "GitHub Action"
-
-      - name: Deploy to GitHub Pages
-        uses: peaceiris/actions-gh-pages@v4
-        with:
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          publish_dir: ./doc
-          destination_dir: yard
-          keep_files: true