ruby-pi 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e996e18be62d604fb9950b3f1ffa3951be1cf89f9bcc56bdedee790a7fc6c042
- data.tar.gz: 48ecbe8c92e41bfeeaa723d05230ad8d8d58eaf13da2019f87bc19614342de98
+ metadata.gz: 45a50058497c3e040f81e977ffdbe0030b901ea053e91002e4ddddba06c23a6f
+ data.tar.gz: 6066bc184d7f8eb951ae137595a8ed082ec41c3abc8ca84cbfe5b4206fde6f66
  SHA512:
- metadata.gz: 923f10b140fdf2fa78948d515c97ee2f5ed48ba29d7bb7c20cd3cbb6b2b253690972a40a1e2edf3a1ec5aa460c09aabf03ce074a2713a6b841be9000101941c0
- data.tar.gz: 261ab3f511a1f8abef34f438a842be7648c8a327df72540523c7c0538b127743cb03ec7a80992856d26addeef7c1c654b5e649fe0aec818c544cdc1acc6085d4
+ metadata.gz: d486b3a171dca8c59bf442ba5c03f135ee450a083c47c05f075af6b23d87eb24285c72c734a9579456d71d6ac247ed4357f7bd16b813219e77655a98f44b0cc2
+ data.tar.gz: 602ed6dc493731203c9fedb8d69f118c81ce292c653ca42ac4ca0d9f8d793dc281697fa87f9df9f7601022812dd791d4420d0f22ba89ad6051119791e87f33ca
data/CHANGELOG.md CHANGED
@@ -5,6 +5,57 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [0.1.5] - 2026-04-30
+
+ ### Fixed (adversarial review round 3)
+
+ - **Streaming error body recovery — Gemini and OpenAI**: The 0.1.4 fix accumulated `error_body` in the `on_data` callback for all three providers but only Anthropic actually passed it to `handle_error_response`. Gemini (`gemini.rb`) and OpenAI (`openai.rb`) now also pass `override_body: error_body` so `ApiError#response_body` carries the real server error message on streaming HTTP failures, matching Anthropic's behavior
+ - **Compaction consecutive same-role messages**: The 0.1.4 fix changed the summary role from `:user` to `:assistant` to avoid consecutive user messages, but produced consecutive assistants when the first preserved message was already `:assistant`. The summary role is now chosen based on the first preserved message's role: `:user` when the next message is `:assistant`, `:assistant` otherwise. New compaction spec exercises both orderings
+ - **`:fallback_start` not surfaced to agent users**: `RubyPi::LLM::Fallback` emits a `:fallback_start` `StreamEvent` when the primary fails mid-stream, but the agent loop's stream block only handled `:text_delta` and `:tool_call_delta`, dropping the signal. The loop now translates `:fallback_start` into a new agent-level `:provider_fallback` event, and clears the streamed-content accumulator so the recorded response reflects only the fallback's output. Subscribers register with `agent.on(:provider_fallback) { |e| ... }`
+ - **`StreamEvent#fallback_start?` predicate**: Added to match `text_delta?`, `tool_call_delta?`, and `done?` so consumers can branch on it without comparing `event.type` directly
+ - **Tool result JSON serialization crash**: `Loop#act` called `JSON.generate(result.value)` unconditionally; tools returning Time, Date, or other non-JSON-serializable objects raised `JSON::GeneratorError` and aborted the agent run. The serialization is now wrapped in a rescue that falls back to `result.value.to_s`
+ - **`tool_calls_made` argument shape inconsistency**: The arguments recorded in `Result#tool_calls_made` were the raw string-keyed `JSON.parse` output, while the tool block itself received the symbol-keyed copy `Tools::Executor` produces. Both now use the symbolized form, and `Tools::Executor.deep_symbolize_keys` is exposed as a public class method so the loop can apply the same transformation up front
+ - **`Agent::Core` `config:` kwarg honesty**: The 0.1.4 changelog claimed the kwarg "flows through to provider construction." It does not — the model is constructed before the agent. The kwarg is informational only; users who want per-agent provider config must pass `config:` to the model factory: `RubyPi::LLM.model(:openai, "gpt-4o", config: cfg)`. The Agent::Core docstring and CHANGELOG now reflect this
+ - **CHANGELOG references to nonexistent `BufferedStreamProxy`**: The 0.1.4 entry referenced a `BufferedStreamProxy` class that does not exist in the codebase; the buffering logic was inline in `Fallback`. Removed those references
+ - **CLAUDE.md provider/extension guides**: "Adding a New LLM Provider" still referenced the deleted `parse_sse_events` helper; updated to describe the `on_data` streaming pattern. The `:agent_end` extension example used `event[:iterations]` (no such key) — replaced with `event[:result].turns`. Added `:tool_call_delta` and `:provider_fallback` to the available events list
+ - **README streaming docs**: Documented the `:fallback_start` stream event and the agent-level `:provider_fallback` event with payload schema
+
+ ## [0.1.4] - 2026-04-30
+
+ ### Fixed (adversarial review round 2)
+
+ - **Anthropic ProviderError string interpolation**: Removed backslash-escaping on `#{}` interpolation in the ProviderError message for malformed tool call JSON, so actual tool name and parser error appear instead of literal `\#{...}` text
+ - **Thread-unsafe streaming instance variables**: Replaced `@_stream_*` instance variables in Anthropic provider with method-local variables via a `process_anthropic_stream_event` helper that returns updated state as a hash, making streaming safe for concurrent requests
+ - **Streaming error body recovery (Anthropic)**: Anthropic's streaming path now detects HTTP error status in the `on_data` callback, accumulates the error response body separately, and passes it to `handle_error_response` via the new `override_body:` kwarg so `ApiError#response_body` carries the server's error message even though `on_data` consumed the response. (Note: 0.1.5 extends this to Gemini and OpenAI, which were missed in 0.1.4)
+ - **Compaction `:system` poisoning**: Changed compaction summary role from `:system` to a non-system role to prevent overwriting the real system prompt on Anthropic. (Note: 0.1.5 refines the role choice to also avoid consecutive same-role messages)
+ - **OpenAI missing tool_call_id**: OpenAI provider now raises `RubyPi::ProviderError` on nil/blank `tool_call_id` in tool result messages and assistant tool calls (same fail-fast pattern as Anthropic), instead of silently sending `"unknown"`
+ - **Gemini streaming finish_reason**: Streaming responses now parse the actual `finishReason` from the Gemini candidate object instead of hardcoding `"stop"`
+ - **README incorrect event keys**: Fixed `e[:iteration]` to `e[:turn]` and `event[:iterations]` to `event[:result].turns` throughout README examples
+ - **Dead `parse_sse_events` method**: Removed unused `parse_sse_events` from `BaseProvider` (all providers now use real incremental streaming via `on_data`)
+ - **`faraday-net_http` version cap**: Removed arbitrary `< 3.4` upper bound from both Gemfile and gemspec
+ - **`Fallback` no longer buffers happy-path streams**: `Fallback#perform_complete_with_streaming_fallback` previously buffered all primary events and flushed them after completion, destroying the streaming UX even when nothing went wrong. Events now flow through to the consumer in real time. On primary failure, a `:fallback_start` `StreamEvent` is emitted before the fallback streams, signaling consumers to clear partial output
+
+ #### Previously Addressed (adversarial review round 1, 35 items)
+
+ - API key exposure in Gemini URL query strings (moved to header)
+ - Provider `format_tool` accepting `Definition` objects
+ - Retry-after header parsing for `RateLimitError`
+ - Max iterations boundary condition (off-by-one)
+ - Token usage accumulation across turns
+ - Agent result `success?` semantics for max-iteration stops
+ - Context compaction system prompt poisoning
+ - `nil` tool guard in executor
+ - Tool call ID validation (Anthropic fail-fast)
+ - Streaming event types (`:text_delta` for text, `:tool_call_delta` for tools)
+ - Concurrent tool execution thread safety
+ - `before_tool_call` / `after_tool_call` lifecycle hooks
+ - `transform_context` pipeline support
+ - Extension base class DSL (`on_event`, `before_tool`, `after_tool`)
+ - `Agent::Core#config:` kwarg accepted (informational only — pass `config:` to the model factory to actually override provider config)
+ - Typed error hierarchy with `response_body` on `ApiError`
+ - `ostruct` runtime dependency declaration
+ - Comprehensive test coverage (440+ examples)
+
  ## [0.1.3] - 2026-04-29
 
  ### Added
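The "Tool result JSON serialization crash" entry above can be illustrated with a standalone sketch (the helper name is hypothetical, not the gem's code): `JSON.generate` raises `JSON::GeneratorError` for values it cannot represent, so the fix rescues and falls back to `#to_s` instead of aborting the agent run.

```ruby
require "json"

# Hypothetical helper mirroring the 0.1.5 rescue pattern: try strict JSON
# generation first, fall back to a plain string representation on failure.
def serialize_tool_result(value)
  JSON.generate(value)
rescue JSON::GeneratorError
  value.to_s
end

serialize_tool_result({ ok: true }) # a normal JSON string
serialize_tool_result(Float::NAN)   # not representable in JSON; falls back to #to_s
```

Whether a given value (such as Time or Date) raises depends on the generator configuration; `Float::NAN` is a guaranteed-raising case in the stock `json` gem.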
data/README.md CHANGED
@@ -73,15 +73,19 @@ registry = RubyPi::Tools::Registry.new
  registry.register(weather)
 
  model = RubyPi::LLM.model(:gemini, "gemini-2.0-flash")
- agent = RubyPi::Agent.new(model: model, tools: registry, stream: true)
+ agent = RubyPi::Agent.new(
+   system_prompt: "You are a helpful weather assistant.",
+   model: model,
+   tools: registry
+ )
 
  # 4. Subscribe to events
- agent.on(:text_delta) { |e| print e[:data] }
- agent.on(:tool_execution_end) { |e| puts "\n[Tool] #{e[:name]} => #{e[:result]}" }
+ agent.on(:text_delta) { |e| print e[:content] }
+ agent.on(:tool_execution_end) { |e| puts "\n[Tool] #{e[:tool_name]} => #{e[:result].value}" }
 
  # 5. Run
  result = agent.run("What's the weather in San Francisco?")
- puts "\nDone: #{result.output}"
+ puts "\nDone: #{result.content}"
  ```
 
  ---
@@ -126,12 +130,24 @@ model.complete(messages: messages, stream: true) do |event|
    print event.data            # incremental text chunk
  when :tool_call_delta
    handle_fragment(event.data) # partial tool call JSON
+ when :fallback_start
+   # Only emitted by RubyPi::LLM::Fallback when the primary provider
+   # fails mid-stream. Discard any partial output rendered from the
+   # primary; the fallback provider is about to stream its full reply.
+   # Payload: { failed_provider:, error:, fallback_provider: }
+   clear_partial_output
  when :done
    puts "\nStream finished"
  end
  end
  ```
 
+ When using `RubyPi::Agent`, the loop translates a `:fallback_start` stream
+ event into an agent-level `:provider_fallback` event you can subscribe to
+ with `agent.on(:provider_fallback) { |e| ... }`. The agent also discards
+ any partial text it accumulated from the failed primary so the recorded
+ response reflects only the fallback's output.
+
  #### Response & ToolCall
 
  | Class | Attributes |
@@ -161,6 +177,12 @@ Authentication errors (401/403) are **not** retried with the fallback -- they in
 
  A lightweight DSL for defining tools (functions) that LLMs can call, plus a registry and executor for dispatching them.
 
+ > **`RubyPi::Tool` vs `RubyPi::Tools`:** `RubyPi::Tool.define(...)` is the convenience API
+ > for creating tool definitions with a clean syntax. `RubyPi::Tools` is the internal namespace
+ > containing the supporting classes (`Definition`, `Registry`, `Executor`, `Result`, `Schema`).
+ > Use `RubyPi::Tool.define` in application code; reference `RubyPi::Tools::*` when you need
+ > direct access to the underlying components.
+
  #### Defining Tools
 
  ```ruby
@@ -247,14 +269,19 @@ The Agent implements a **think-act-observe** loop: send messages to the LLM, exe
 
  ```ruby
  agent = RubyPi::Agent.new(
+   system_prompt: "You are a helpful assistant.", # required: system-level instruction
    model: model,                   # required: an LLM provider instance
    tools: registry,                # optional: a Tools::Registry
-   stream: false,                  # optional: enable streaming
-   max_iterations: 10,             # optional: loop safety limit
-   context_compaction: compaction, # optional: Context::Compaction instance
-   context_transform: transform,   # optional: Context::Transform instance
-   extensions: [my_extension]      # optional: Extension instances
+   max_iterations: 10,             # optional: loop safety limit (default: 10)
+   compaction: compaction,         # optional: Context::Compaction instance
+   transform_context: transform,   # optional: context transform callable
+   before_tool_call: ->(tc) { },   # optional: pre-tool-execution hook
+   after_tool_call: ->(tc, r) { }, # optional: post-tool-execution hook
+   user_data: {}                   # optional: arbitrary data for transforms/extensions
  )
+
+ # Extensions are registered after construction via `use` (takes a CLASS, not an instance):
+ agent.use(MetricsExtension)
  ```
 
  #### Running the Agent
@@ -262,11 +289,10 @@ agent = RubyPi::Agent.new(
  ```ruby
  # Single run
  result = agent.run("What is the weather in Tokyo?")
- result.output # => "The weather in Tokyo is..."
+ result.content # => "The weather in Tokyo is..."
  result.messages # => full conversation history
  result.tool_calls_made # => [{ name: "get_weather", ... }, ...]
- result.iterations # => 2
- result.stop_reason # => :complete or :max_iterations
+ result.turns # => 2
 
  # Continue the conversation
  result2 = agent.continue("And in London?")
@@ -277,13 +303,15 @@ result2 = agent.continue("And in London?")
  Subscribe to lifecycle events for logging, monitoring, or custom behavior:
 
  ```ruby
- agent.on(:turn_start) { |e| puts "Turn #{e[:iteration]} starting" }
- agent.on(:turn_end) { |e| puts "Turn #{e[:iteration]} ended" }
- agent.on(:text_delta) { |e| print e[:data] }
+ agent.on(:turn_start) { |e| puts "Turn #{e[:turn]} starting" }
+ agent.on(:turn_end) { |e| puts "Turn #{e[:turn]} ended" }
+ agent.on(:text_delta) { |e| print e[:content] }
  agent.on(:tool_execution_start){ |e| puts "Calling #{e[:tool_name]}" }
- agent.on(:tool_execution_end) { |e| puts "#{e[:name]} => #{e[:result]}" }
- agent.on(:before_tool_call) { |e| puts "About to call #{e[:tool_name]}" }
- agent.on(:after_tool_call) { |e| puts "Finished #{e[:tool_name]}" }
+ agent.on(:tool_execution_end) { |e| puts "#{e[:tool_name]} => #{e[:result].value}" }
+ # Note: before_tool_call and after_tool_call are constructor hooks (Procs),
+ # not subscribable events. Use the constructor kwargs instead:
+ #   before_tool_call: ->(tc) { puts "About to call #{tc.name}" }
+ #   after_tool_call: ->(tc, result) { puts "Finished #{tc.name}" }
  agent.on(:agent_end) { |e| puts "Agent finished" }
  agent.on(:error) { |e| warn "Error: #{e[:error].message}" }
  ```
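The agent-level `:provider_fallback` event mentioned earlier can be consumed like any other subscription. A standalone sketch (using a stand-in emitter, not the gem's classes) of the recommended pattern: accumulate `:text_delta` chunks and discard the partial output when the primary provider fails mid-stream, mirroring what the agent loop itself does internally.

```ruby
# Stand-in emitter with the same on/emit shape as the agent's event API.
class TinyEmitter
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }
  end

  def on(event, &block) = @handlers[event] << block
  def emit(event, data) = @handlers[event].each { |h| h.call(data) }
end

agent  = TinyEmitter.new
buffer = +""
agent.on(:text_delta)        { |e| buffer << e[:content] }
agent.on(:provider_fallback) { |_e| buffer.clear } # drop the primary's partial text

agent.emit(:text_delta, content: "partial answer from the pri")
agent.emit(:provider_fallback, failed_provider: :primary, fallback_provider: :backup)
agent.emit(:text_delta, content: "complete answer from the fallback")
buffer # => "complete answer from the fallback"
```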
@@ -300,11 +328,16 @@ Prevent unbounded context growth by compacting older messages:
 
  ```ruby
  compaction = RubyPi::Context::Compaction.new(
-   max_tokens: 4000, # trigger compaction above this threshold
-   strategy: :truncate # :truncate removes oldest messages
+   summary_model: summary_model, # required: LLM provider for generating summaries
+   max_tokens: 4000,    # optional: trigger compaction above this threshold (default: 8000)
+   preserve_last_n: 4   # optional: always keep the last N messages (default: 4)
  )
 
- agent = RubyPi::Agent.new(model: model, context_compaction: compaction)
+ agent = RubyPi::Agent.new(
+   system_prompt: "You are helpful.",
+   model: model,
+   compaction: compaction
+ )
  ```
 
  #### Transform
@@ -312,12 +345,23 @@ agent = RubyPi::Agent.new(model: model, context_compaction: compaction)
  Apply arbitrary transformations to the message list before each LLM call:
 
  ```ruby
- transform = RubyPi::Context::Transform.new do |messages|
-   # Inject a system prompt at the beginning
-   [{ role: "system", content: "You are a helpful Ruby assistant." }] + messages
+ # Transform is a module with factory methods, not a class.
+ # Use the built-in transforms or compose your own:
+ transform = RubyPi::Context::Transform.compose(
+   RubyPi::Context::Transform.inject_datetime,
+   RubyPi::Context::Transform.inject_user_preferences { |state| state.user_data[:prefs] }
+ )
+
+ # Or write a custom transform as a lambda that receives the Agent::State:
+ custom_transform = ->(state) do
+   state.system_prompt += "\n\nAdditional context here."
  end
 
- agent = RubyPi::Agent.new(model: model, context_transform: transform)
+ agent = RubyPi::Agent.new(
+   system_prompt: "You are a helpful Ruby assistant.",
+   model: model,
+   transform_context: transform
+ )
  ```
 
  ---
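Related to the context machinery above: the 0.1.5 changelog's compaction fix chooses the summary message's role from the first preserved message. A standalone sketch of that rule (the helper name is hypothetical): the summary takes `:user` when the next message is an `:assistant` turn, and `:assistant` otherwise, so compaction never produces two consecutive messages with the same role.

```ruby
# Hypothetical helper capturing the role-selection rule from the changelog.
def summary_role(first_preserved_role)
  first_preserved_role == :assistant ? :user : :assistant
end

summary_role(:assistant) # => :user
summary_role(:user)      # => :assistant
```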
@@ -336,11 +380,11 @@ class MetricsExtension < RubyPi::Extensions::Base
 
  on_event :turn_end do |event|
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - @turn_timer
-   puts "Turn #{event[:iteration]} took #{elapsed.round(2)}s"
+   puts "Turn #{event[:turn]} took #{elapsed.round(2)}s"
  end
 
  on_event :agent_end do |event|
-   puts "Agent completed in #{event[:iterations]} iterations"
+   puts "Agent completed in #{event[:result].turns} turns"
  end
  end
  ```
@@ -349,9 +393,13 @@ end
 
  ```ruby
  agent = RubyPi::Agent.new(
-   model: model,
-   extensions: [MetricsExtension.new, AnotherExtension.new]
+   system_prompt: "You are helpful.",
+   model: model
  )
+
+ # Register extensions using `use` — pass the CLASS, not an instance:
+ agent.use(MetricsExtension)
+ agent.use(AnotherExtension)
  ```
 
  Extensions receive the same event payloads as `agent.on(...)` callbacks. Use them when you want reusable, self-contained behavior modules.
@@ -45,6 +45,24 @@ module RubyPi
  # @return [RubyPi::Agent::State] the agent's mutable state
  attr_reader :state
 
+ # @return [Array<Class>] registered extension classes for introspection
+ attr_reader :extensions
+
+ # @return [RubyPi::Configuration, nil] per-agent configuration handle
+ #   exposed for inspection only — see the {#config} caveats below. The
+ #   actual provider config is resolved at model construction time:
+ #
+ #     custom = RubyPi::Configuration.new
+ #     custom.openai_api_key = "sk-..."
+ #     model = RubyPi::LLM.model(:openai, "gpt-4o", config: custom)
+ #     agent = RubyPi::Agent.new(model: model, config: custom, ...)
+ #
+ #   Passing `config:` to `Agent.new` does NOT retroactively change the
+ #   model's behavior — it is informational, intended for inspection by
+ #   transforms and extensions. To override provider config you must
+ #   pass `config:` to the model factory as shown above.
+ attr_reader :config
+
  # Creates a new Agent instance.
  #
  # @param system_prompt [String] the system-level instruction prompt
@@ -57,6 +75,17 @@ module RubyPi
  # @param after_tool_call [Proc, nil] post-tool-execution hook
  # @param compaction [RubyPi::Context::Compaction, nil] compaction strategy
  # @param user_data [Hash] arbitrary data bag for transforms/extensions
+ # @param config [RubyPi::Configuration, nil] informational handle to
+ #   the per-agent configuration. Stored for inspection by transforms
+ #   and extensions but does NOT override the model's provider config —
+ #   the model is already constructed by the time it reaches the agent.
+ #   To use a per-agent config for actual API calls, pass it to the
+ #   model factory:
+ #     RubyPi::LLM.model(:openai, "gpt-4o", config: custom_config)
+ # @param execution_mode [Symbol] tool execution mode (:parallel or :sequential,
+ #   default: :parallel)
+ # @param tool_timeout [Numeric] per-tool execution timeout in seconds
+ #   (default: 30)
  def initialize(
    system_prompt:,
    model:,
@@ -67,7 +96,10 @@
    before_tool_call: nil,
    after_tool_call: nil,
    compaction: nil,
-   user_data: {}
+   user_data: {},
+   config: nil,
+   execution_mode: :parallel,
+   tool_timeout: 30
  )
    @state = State.new(
      system_prompt: system_prompt,
@@ -82,15 +114,24 @@
    )
    @compaction = compaction
    @extensions = []
+   @config = config
+   @execution_mode = execution_mode
+   @tool_timeout = tool_timeout
  end
 
  # Runs the agent with an initial user prompt. Adds the prompt to the
  # conversation history, executes the think-act-observe loop, emits
  # :agent_end when done, and returns the result.
  #
+ # Issue #16: Resets the iteration counter at the start of each run()
+ # call using the encapsulated reset_iteration! method. Previously,
+ # the counter was never reset on run(), so a second call to run()
+ # on the same agent instance could immediately trip max_iterations_reached?.
+ #
  # @param prompt [String] the user's initial message
  # @return [RubyPi::Agent::Result] the outcome of the agent run
  def run(prompt)
+   @state.reset_iteration!
    @state.add_message(role: :user, content: prompt)
    execute_loop
  end
@@ -99,11 +140,14 @@
  # the existing conversation history and appends the new prompt before
  # resuming the loop.
  #
+ # Issue #16: Uses the encapsulated reset_iteration! method instead of
+ # the old approach that bypassed encapsulation and was fragile.
+ #
  # @param prompt [String] the follow-up user message
  # @return [RubyPi::Agent::Result] the outcome of the continued run
  def continue(prompt)
-   # Reset the iteration counter for the new run while keeping history
-   @state.instance_variable_set(:@iteration, 0)
+   @state.reset_iteration!
    @state.add_message(role: :user, content: prompt)
    execute_loop
  end
@@ -132,6 +176,15 @@
    @extensions << extension_class
  end
 
+ # Returns the effective configuration for this agent. If a per-agent
+ # config was provided, returns that; otherwise falls back to the
+ # global RubyPi.configuration.
+ #
+ # @return [RubyPi::Configuration] the active configuration
+ def effective_config
+   @config || RubyPi.configuration
+ end
+
  private
 
  # Creates a Loop instance and executes it, emitting :agent_end when
@@ -142,7 +195,9 @@
    loop_runner = Loop.new(
      state: @state,
      emitter: self,
-     compaction: @compaction
+     compaction: @compaction,
+     execution_mode: @execution_mode,
+     tool_timeout: @tool_timeout
    )
 
    result = loop_runner.run
@@ -14,6 +14,7 @@ module RubyPi
  # represents a specific moment or occurrence:
  #
  # - :text_delta — An incremental text chunk from the LLM stream.
+ # - :tool_call_delta — An incremental tool call chunk from the LLM stream.
  # - :tool_execution_start — A tool is about to be executed.
  # - :tool_execution_end — A tool has finished executing.
  # - :turn_start — A new think-act-observe cycle is beginning.
@@ -21,8 +22,13 @@ module RubyPi
  # - :agent_end — The agent has finished its run (final event).
  # - :error — A recoverable or fatal error occurred.
  # - :compaction — Context compaction was triggered.
+ # - :provider_fallback — A Fallback provider switched from primary
+ #                        to backup mid-stream. Subscribers should
+ #                        discard any partial text_delta output that
+ #                        arrived before this event.
  EVENTS = %i[
    text_delta
+   tool_call_delta
    tool_execution_start
    tool_execution_end
    turn_start
@@ -30,6 +36,7 @@
    agent_end
    error
    compaction
+   provider_fallback
  ].freeze
 
  # Mixin that adds event subscription and emission to any class. Include
@@ -65,6 +72,12 @@ module RubyPi
  # rescued individually — one failing handler does not prevent others
  # from executing.
  #
+ # If a handler raises during a non-error event, the error is re-emitted
+ # as an :error event so subscribers can observe it. To prevent infinite
+ # recursion, errors raised inside :error event handlers are silently
+ # swallowed — they are not re-emitted. This ensures that a broken error
+ # handler cannot crash the agent loop.
+ #
  # @param event [Symbol] the event type to fire
  # @param data [Hash] arbitrary payload passed to each handler
  # @return [void]
@@ -73,9 +86,10 @@
    event_handlers[event].each do |handler|
      handler.call(data)
    rescue StandardError => e
-     # Log but do not propagate handler errors; they should not break
-     # the agent loop. Emit an :error event if this is not already an
-     # error event (to prevent infinite recursion).
+     # Guard against infinite recursion: if we are already emitting an
+     # :error event and the error handler itself raises, we must not
+     # re-emit; that would cause unbounded recursion. Silently discard
+     # the secondary error instead.
      if event != :error
        emit(:error, error: e, source: :event_handler, event: event)
      end
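The emit contract documented in the hunk above can be demonstrated with a standalone sketch (a stand-in class, not the gem's Events mixin): a raising handler is re-emitted as an `:error` event, while an error raised inside an `:error` handler is swallowed rather than re-emitted, so it cannot recurse.

```ruby
class SketchEmitter
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }
  end

  def on(event, &block) = @handlers[event] << block

  def emit(event, data = {})
    @handlers[event].each do |handler|
      handler.call(data)
    rescue StandardError => e
      # Re-emit as :error unless we are already in :error (recursion guard).
      emit(:error, error: e, source: :event_handler, event: event) if event != :error
    end
  end
end

seen = []
em = SketchEmitter.new
em.on(:turn_end) { raise "handler bug" }
em.on(:error)    { |e| seen << e[:error].message; raise "broken error handler" }

em.emit(:turn_end) # does not raise; the broken :error handler is contained
seen # => ["handler bug"]
```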