ai-agents 0.5.0 → 0.7.0

This diff shows the changes between publicly released versions of this package as they appear in its public registry. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: d07ac97ca06177ee4504601099af6bc6ba3e4d8557b28d34cd3222240b27575d
4
- data.tar.gz: cd0c7121918c8a28c3760325c9acd319e86d349f34fb715d6b953912a99f98a8
3
+ metadata.gz: c143345c7d0dd3a91ff483e0db22a7d23f6c85393275727eddcaf288a1341901
4
+ data.tar.gz: 0f1abfe692571706be41b85a1da924a717f980b8721b21ece7dd62ad4e995fbf
5
5
  SHA512:
6
- metadata.gz: 63db003d8bf43b2ba8d52d48dbb24d2f8bf86ee4b4e8ab704316cf89d0b80156110a8438a7ca86a6fc37182ff9f5a723ace7c474e53a078de050cb22eff4b268
7
- data.tar.gz: 05606d58c663ca0b9b062ed537cba33a4b987e5fd720a1188c003150f880a7c953cc28096ca4024854dcdec0a9896a7da35653ebb4f95c2e04a0f5b97e3e1c2e
6
+ metadata.gz: 74c96799a138a5e725b3e889eba22c45e9581e7606a0cd235cecd50b8bdde1c6f10281ec1516fde77f48373c39022d2f72172be8c29489260df79fb575d92de8
7
+ data.tar.gz: 257fd42a178107182a53eb737848ae1a023c28712ac450c4603b12cb60e2ad39dbe7b957a324bd6c24a3a175af8b7c373ec30e25e36425e81642a8f160da7ceb
data/.rubocop.yml CHANGED
@@ -10,20 +10,18 @@ Style/StringLiterals:
10
10
  Style/StringLiteralsInInterpolation:
11
11
  EnforcedStyle: double_quotes
12
12
 
13
+ Metrics/MethodLength:
14
+ Max: 20
15
+ Metrics/ClassLength:
16
+ Enabled: false
17
+
18
+ RSpec/MultipleDescribes:
19
+ Enabled: false
13
20
  RSpec/MultipleExpectations:
14
21
  Max: 10
15
-
16
22
  RSpec/ExampleLength:
17
23
  Max: 20
18
-
19
24
  RSpec/MultipleMemoizedHelpers:
20
25
  Max: 15
21
-
22
26
  RSpec/SpecFilePathFormat:
23
27
  Enabled: false
24
-
25
- Metrics/MethodLength:
26
- Max: 20
27
-
28
- RSpec/MultipleDescribes:
29
- Enabled: false
data/CHANGELOG.md CHANGED
@@ -5,6 +5,41 @@ All notable changes to this project will be documented in this file.
5
5
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
6
6
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
7
7
 
8
+ ## [0.7.0] - 2025-10-16
9
+
10
+ ### Added
11
+ - **Lifecycle Callback Hooks**: New callbacks for complete execution visibility and observability integration
12
+ - Added `on_run_start` callback, triggered before agent execution begins; receives the agent name, input, and run context
13
+ - Added `on_run_complete` callback, triggered after execution ends (success or failure); receives the agent name, result, and run context
14
+ - Added `on_agent_complete` callback, triggered after each agent turn; receives the agent name, result, error (if any), and run context
15
+ - Run context parameter enables storing and retrieving custom data (e.g., span context, trace IDs) throughout execution (see the usage sketch after this entry)
16
+ - Designed for integration with observability platforms (OpenTelemetry, Datadog, New Relic, etc.)
17
+ - All callbacks are thread-safe and non-blocking with proper error handling
18
+ - Updated callback documentation with integration patterns for UI feedback, logging, and metrics
19
+
20
+ ### Changed
21
+ - CallbackManager now supports 7 event types (previously 4)
22
+ - Enhanced callback system to provide complete lifecycle coverage for monitoring and tracing
23
+
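To ground the 0.7.0 entries above, here is a minimal sketch of registering the three lifecycle hooks and using the run context to carry custom data (a request ID and a start timestamp) from `on_run_start` to `on_run_complete`. It assumes, as the runner code later in this diff suggests, that the context argument passed to these callbacks is the run-context wrapper exposing a mutable `#context` hash; the logger and agent configuration are illustrative only.

```ruby
require "securerandom"
require "logger"
require "agents"

# Assumes LLM provider credentials are already configured for the underlying RubyLLM client.
logger = Logger.new($stdout)

agent = Agents::Agent.new(
  name: "Assistant",
  instructions: "You are a helpful assistant"
)

runner = Agents::Runner.with_agents(agent)
  .on_run_start do |agent_name, _input, ctx|
    # Stash custom data (e.g. a trace or request ID) in the run context.
    ctx.context[:request_id] = SecureRandom.uuid
    ctx.context[:started_at] = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    logger.info("run started agent=#{agent_name} request_id=#{ctx.context[:request_id]}")
  end
  .on_agent_complete do |agent_name, _result, error, _ctx|
    # error is nil on success and an exception object on failure.
    logger.warn("agent turn failed agent=#{agent_name}: #{error.message}") if error
  end
  .on_run_complete do |agent_name, _result, ctx|
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - ctx.context[:started_at]
    logger.info("run finished agent=#{agent_name} request_id=#{ctx.context[:request_id]} elapsed=#{elapsed.round(2)}s")
  end

result = runner.run("Hello!")
puts result.output
```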
24
+ ## [0.6.0] - 2025-10-16
25
+
26
+ ### Added
27
+ - **Custom HTTP Headers Support**: Agents can now specify custom HTTP headers for LLM requests
28
+ - Added `headers` parameter to `Agent#initialize` for setting agent-level default headers
29
+ - Runtime headers can be passed via `headers` parameter in `AgentRunner#run` method
30
+ - Runtime headers take precedence over agent-level headers when keys overlap
31
+ - Headers are automatically normalized (symbolized keys) and validated
32
+ - Full support for headers across agent handoffs with proper merging logic
33
+ - New `Agents::Helpers::Headers` module for header normalization and merging (see the usage sketch after this entry)
34
+ - Comprehensive test coverage for header functionality
35
+
36
+ ### Changed
37
+ - **Code Organization**: Refactored internal helpers into a dedicated module structure
38
+ - Moved `MessageExtractor` to `Agents::Helpers::MessageExtractor` module
39
+ - Converted `MessageExtractor` from class-based to module-function pattern
40
+ - Created `lib/agents/helpers/` directory for helper modules
41
+ - All helper modules now use a consistent flat naming convention (`Agents::Helpers::ModuleName`)
42
+
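The header plumbing described in the 0.6.0 entry above is implemented by the new `Agents::Helpers::Headers` module shown later in this diff. A minimal sketch of its `normalize`/`merge` semantics follows; note these are internal helpers, and in normal use you simply pass `headers:` to `Agent#initialize` and `AgentRunner#run` as documented in the new request-headers guide. Header names and values here are illustrative.

```ruby
require "agents"

# Agent-level defaults, normalized the same way Agent#initialize does (keys become symbols).
agent_headers = Agents::Helpers::Headers.normalize(
  { "X-Environment" => "staging", "X-Agent-ID" => "agent-001" }
)
# => { :"X-Environment" => "staging", :"X-Agent-ID" => "agent-001" }

# Headers supplied at runtime for a single AgentRunner#run call.
runtime_headers = Agents::Helpers::Headers.normalize(
  { "X-Environment" => "production", "X-Request-ID" => "req-123" }
)

# On key overlap the runtime value wins; agent-only keys are preserved.
merged = Agents::Helpers::Headers.merge(agent_headers, runtime_headers)
# => { :"X-Environment" => "production", :"X-Agent-ID" => "agent-001", :"X-Request-ID" => "req-123" }
puts merged
```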
8
43
  ## [0.5.0] - 2025-08-20
9
44
 
10
45
  ### Added
@@ -11,7 +11,13 @@ The AI Agents SDK provides real-time callbacks that allow you to monitor agent e
11
11
 
12
12
  ## Available Callbacks
13
13
 
14
- The SDK provides four types of callbacks that give you visibility into different stages of agent execution:
14
+ The SDK provides seven types of callbacks that give you visibility into different stages of agent execution:
15
+
16
+ **Run Start** - Triggered before agent execution begins. Receives the agent name, input message, and run context.
17
+
18
+ **Run Complete** - Called after agent execution ends (whether successful or failed). Receives the agent name, result object, and run context.
19
+
20
+ **Agent Complete** - Triggered after each agent turn finishes. Receives the agent name, result, error (if any), and run context.
15
21
 
16
22
  **Agent Thinking** - Triggered when an agent is about to make an LLM call. Useful for showing "thinking" indicators in UIs.
17
23
 
@@ -27,6 +33,9 @@ Callbacks are registered on the AgentRunner using chainable methods:
27
33
 
28
34
  ```ruby
29
35
  runner = Agents::Runner.with_agents(triage, support)
36
+ .on_run_start { |agent, input, ctx| puts "Starting: #{agent}" }
37
+ .on_run_complete { |agent, result, ctx| puts "Completed: #{agent}" }
38
+ .on_agent_complete { |agent, result, error, ctx| puts "Agent done: #{agent}" }
30
39
  .on_agent_thinking { |agent, input| puts "#{agent} thinking..." }
31
40
  .on_tool_start { |tool, args| puts "Using #{tool}" }
32
41
  .on_tool_complete { |tool, result| puts "#{tool} completed" }
@@ -35,7 +44,32 @@ runner = Agents::Runner.with_agents(triage, support)
35
44
 
36
45
  ## Integration Patterns
37
46
 
38
- Callbacks work well with real-time web frameworks like Rails ActionCable, allowing you to stream agent status updates directly to browser clients. They're also useful for logging, metrics collection, and building debug interfaces.
47
+ ### UI Feedback
48
+
49
+ Callbacks work well with real-time web frameworks like Rails ActionCable, allowing you to stream agent status updates directly to browser clients:
50
+
51
+ ```ruby
52
+ runner = Agents::Runner.with_agents(agent)
53
+ .on_agent_thinking { |agent, input|
54
+ ActionCable.server.broadcast("agent_#{user_id}", { type: 'thinking', agent: agent })
55
+ }
56
+ .on_tool_start { |tool, args|
57
+ ActionCable.server.broadcast("agent_#{user_id}", { type: 'tool', name: tool })
58
+ }
59
+ ```
60
+
61
+ ### Logging & Metrics
62
+
63
+ Callbacks are also useful for structured logging and metrics collection:
64
+
65
+ ```ruby
66
+ runner = Agents::Runner.with_agents(agent)
67
+ .on_run_start { |agent, input, ctx| logger.info("Run started", agent: agent) }
68
+ .on_tool_start { |tool, args| metrics.increment("tool.calls", tags: ["tool:#{tool}"]) }
69
+ .on_agent_complete do |agent, result, error, ctx|
70
+ logger.error("Agent failed", agent: agent, error: error) if error
71
+ end
72
+ ```
39
73
 
40
74
  ## Thread Safety
41
75
 
data/docs/guides/request-headers.md ADDED
@@ -0,0 +1,91 @@
1
+ ---
2
+ layout: default
3
+ title: Custom Request Headers
4
+ parent: Guides
5
+ nav_order: 6
6
+ ---
7
+
8
+ # Custom Request Headers
9
+
10
+ Custom HTTP headers allow you to pass additional metadata with your LLM API requests. This is useful for authentication, request tracking, A/B testing, and provider-specific features.
11
+
12
+ ## Basic Usage
13
+
14
+ ### Agent-Level Headers
15
+
16
+ Set default headers when creating an agent; they will be applied to all of that agent's requests:
17
+
18
+ ```ruby
19
+ agent = Agents::Agent.new(
20
+ name: "Assistant",
21
+ instructions: "You are a helpful assistant",
22
+ headers: {
23
+ "X-Custom-ID" => "agent-123",
24
+ "X-Environment" => "production"
25
+ }
26
+ )
27
+
28
+ runner = Agents::Runner.with_agents(agent)
29
+ result = runner.run("Hello!")
30
+ # All requests will include the custom headers
31
+ ```
32
+
33
+ ### Runtime Headers
34
+
35
+ Override or add headers for specific requests:
36
+
37
+ ```ruby
38
+ agent = Agents::Agent.new(
39
+ name: "Assistant",
40
+ instructions: "You are a helpful assistant"
41
+ )
42
+
43
+ runner = Agents::Runner.with_agents(agent)
44
+
45
+ # Pass headers at runtime
46
+ result = runner.run(
47
+ "What's the weather?",
48
+ headers: {
49
+ "X-Request-ID" => "req-456",
50
+ "X-User-ID" => "user-789"
51
+ }
52
+ )
53
+ ```
54
+
55
+ ### Header Precedence
56
+
57
+ When both agent-level and runtime headers are provided, **runtime headers take precedence**:
58
+
59
+ ```ruby
60
+ agent = Agents::Agent.new(
61
+ name: "Assistant",
62
+ instructions: "You are a helpful assistant",
63
+ headers: {
64
+ "X-Environment" => "staging",
65
+ "X-Agent-ID" => "agent-001"
66
+ }
67
+ )
68
+
69
+ runner = Agents::Runner.with_agents(agent)
70
+
71
+ result = runner.run(
72
+ "Hello!",
73
+ headers: {
74
+ "X-Environment" => "production", # Overrides agent's staging value
75
+ "X-Request-ID" => "req-123" # Additional header
76
+ }
77
+ )
78
+
79
+ # Final headers sent to LLM API:
80
+ # {
81
+ # "X-Environment" => "production", # Runtime value wins
82
+ # "X-Agent-ID" => "agent-001", # From agent
83
+ # "X-Request-ID" => "req-123" # From runtime
84
+ # }
85
+ ```
86
+
87
+ ## See Also
88
+
89
+ - [Multi-Agent Systems](multi-agent-systems.html) - Using headers across agent handoffs
90
+ - [Rails Integration](rails-integration.html) - Request tracking in Rails applications
91
+ - [State Persistence](state-persistence.html) - Combining headers with conversation state
data/docs/guides.md CHANGED
@@ -17,3 +17,4 @@ Practical guides for building real-world applications with the AI Agents library
17
17
  - **[Rails Integration](guides/rails-integration.html)** - Integrating agents with Ruby on Rails applications and ActiveRecord persistence
18
18
  - **[State Persistence](guides/state-persistence.html)** - Managing conversation state and context across sessions and processes
19
19
  - **[Structured Output](guides/structured-output.html)** - Enforcing JSON schema validation for reliable agent responses
20
+ - **[Custom Request Headers](guides/request-headers.html)** - Adding custom HTTP headers for authentication, tracking, and provider-specific features
@@ -6,6 +6,7 @@ require_relative "tools/create_lead_tool"
6
6
  require_relative "tools/create_checkout_tool"
7
7
  require_relative "tools/search_docs_tool"
8
8
  require_relative "tools/escalate_to_human_tool"
9
+ require "ruby_llm/schema"
9
10
 
10
11
  module ISPSupport
11
12
  # Factory for creating all ISP support agents with proper handoff relationships.
@@ -56,7 +57,8 @@ module ISPSupport
56
57
  instructions: sales_instructions_with_state,
57
58
  model: "gpt-4.1-mini",
58
59
  tools: [ISPSupport::CreateLeadTool.new, ISPSupport::CreateCheckoutTool.new],
59
- temperature: 0.8 # Higher temperature for more persuasive, varied sales language
60
+ temperature: 0.8, # Higher temperature for more persuasive, varied sales language
61
+ response_schema: sales_response_schema
60
62
  )
61
63
  end
62
64
 
@@ -70,7 +72,8 @@ module ISPSupport
70
72
  ISPSupport::SearchDocsTool.new,
71
73
  ISPSupport::EscalateToHumanTool.new
72
74
  ],
73
- temperature: 0.5 # Balanced temperature for helpful but consistent technical support
75
+ temperature: 0.5, # Balanced temperature for helpful but consistent technical support
76
+ response_schema: triage_response_schema
74
77
  )
75
78
  end
76
79
 
@@ -95,22 +98,33 @@ module ISPSupport
95
98
  end
96
99
 
97
100
  def triage_response_schema
98
- {
99
- type: "object",
100
- properties: {
101
- response: {
102
- type: "string",
103
- description: "Your response to the customer"
104
- },
105
- intent: {
106
- type: "string",
107
- enum: %w[sales support unclear],
108
- description: "The detected intent category"
109
- }
110
- },
111
- required: %w[response intent],
112
- additionalProperties: false
113
- }
101
+ RubyLLM::Schema.create do
102
+ string :response, description: "Your response to the customer"
103
+ string :intent, enum: %w[sales support unclear], description: "The detected intent category"
104
+ array :sentiment, description: "Customer sentiment indicators" do
105
+ string enum: %w[positive neutral negative frustrated urgent confused satisfied]
106
+ end
107
+ end
108
+ end
109
+
110
+ def support_response_schema
111
+ RubyLLM::Schema.create do
112
+ string :response, description: "Your response to the customer"
113
+ string :intent, enum: %w[support], description: "The intent category (always support)"
114
+ array :sentiment, description: "Customer sentiment indicators" do
115
+ string enum: %w[positive neutral negative frustrated urgent confused satisfied]
116
+ end
117
+ end
118
+ end
119
+
120
+ def sales_response_schema
121
+ RubyLLM::Schema.create do
122
+ string :response, description: "Your response to the customer"
123
+ string :intent, enum: %w[sales], description: "The intent category (always sales)"
124
+ array :sentiment, description: "Customer sentiment indicators" do
125
+ string enum: %w[positive neutral negative frustrated urgent confused satisfied]
126
+ end
127
+ end
114
128
  end
115
129
 
116
130
  def sales_instructions
@@ -2,6 +2,7 @@
2
2
  # frozen_string_literal: true
3
3
 
4
4
  require "json"
5
+ require "readline"
5
6
  require_relative "../../lib/agents"
6
7
  require_relative "agents_factory"
7
8
 
@@ -29,41 +30,59 @@ class ISPSupportDemo
29
30
  @context = {}
30
31
  @current_status = ""
31
32
 
32
- puts "🏢 Welcome to ISP Customer Support!"
33
- puts "Type '/help' for commands or 'exit' to quit."
33
+ puts green("🏢 Welcome to ISP Customer Support!")
34
+ puts dim_text("Type '/help' for commands or 'exit' to quit.")
34
35
  puts
35
36
  end
36
37
 
37
38
  def start
38
39
  loop do
39
- print "💬 You: "
40
- user_input = gets.chomp.strip
40
+ user_input = Readline.readline(cyan("\u{1F4AC} You: "), true)
41
+ next unless user_input # Handle Ctrl+D
41
42
 
43
+ user_input = user_input.strip
42
44
  command_result = handle_command(user_input)
43
45
  break if command_result == :exit
44
46
  next if command_result == :handled || user_input.empty?
45
47
 
46
48
  # Clear any previous status and show agent is working
47
49
  clear_status_line
48
- print "🤖 Processing..."
50
+ print yellow("🤖 Processing...")
49
51
 
50
- # Use the runner - it automatically determines the right agent from context
51
- result = @runner.run(user_input, context: @context)
52
+ begin
53
+ # Use the runner - it automatically determines the right agent from context
54
+ result = @runner.run(user_input, context: @context)
52
55
 
53
- # Update our context with the returned context from Runner
54
- @context = result.context if result.respond_to?(:context) && result.context
56
+ # Update our context with the returned context from Runner
57
+ @context = result.context if result.respond_to?(:context) && result.context
55
58
 
56
- # Clear status and show response
57
- clear_status_line
59
+ # Clear status and show response with callback history
60
+ clear_status_line
58
61
 
59
- # Handle structured output from triage agent
60
- output = result.output || "[No output]"
61
- if @context[:current_agent] == "Triage Agent" && output.is_a?(Hash)
62
- # Display the response from structured response
63
- puts "🤖 #{output["response"]}"
64
- puts "\e[2m [Intent]: #{output["intent"]}\e[0m" if output["intent"]
65
- else
66
- puts "🤖 #{output}"
62
+ # Display callback messages if any
63
+ if @callback_messages.any?
64
+ puts dim_text(@callback_messages.join("\n"))
65
+ @callback_messages.clear
66
+ end
67
+
68
+ # Handle structured output from agents
69
+ output = result.output || "[No output]"
70
+
71
+ if output.is_a?(Hash) && output.key?("response")
72
+ # Display the response from structured response
73
+ puts "🤖 #{output["response"]}"
74
+ puts dim_text(" [Intent]: #{output["intent"]}") if output["intent"]
75
+ puts dim_text(" [Sentiment]: #{output["sentiment"].join(", ")}") if output["sentiment"]&.any?
76
+ else
77
+ puts "🤖 #{output}"
78
+ end
79
+
80
+ puts # Add blank line after agent response
81
+ rescue StandardError => e
82
+ clear_status_line
83
+ puts red("❌ Error: #{e.message}")
84
+ puts dim_text("Please try again or type '/help' for assistance.")
85
+ puts # Add blank line after error message
67
86
  end
68
87
  end
69
88
  end
@@ -71,26 +90,36 @@ class ISPSupportDemo
71
90
  private
72
91
 
73
92
  def setup_callbacks
93
+ @callback_messages = []
94
+
74
95
  @runner.on_agent_thinking do |agent_name, _input|
75
- update_status("🧠 #{agent_name} is thinking...")
96
+ message = "🧠 #{agent_name} is thinking..."
97
+ update_status(message)
98
+ @callback_messages << message
76
99
  end
77
100
 
78
101
  @runner.on_tool_start do |tool_name, _args|
79
- update_status("🔧 Using #{tool_name}...")
102
+ message = "🔧 Using #{tool_name}..."
103
+ update_status(message)
104
+ @callback_messages << message
80
105
  end
81
106
 
82
107
  @runner.on_tool_complete do |tool_name, _result|
83
- update_status("✅ #{tool_name} completed")
108
+ message = "✅ #{tool_name} completed"
109
+ update_status(message)
110
+ @callback_messages << message
84
111
  end
85
112
 
86
113
  @runner.on_agent_handoff do |from_agent, to_agent, _reason|
87
- update_status("🔄 Handoff: #{from_agent} → #{to_agent}")
114
+ message = "🔄 Handoff: #{from_agent} → #{to_agent}"
115
+ update_status(message)
116
+ @callback_messages << message
88
117
  end
89
118
  end
90
119
 
91
120
  def update_status(message)
92
121
  clear_status_line
93
- print message
122
+ print dim_text(message)
94
123
  $stdout.flush
95
124
  end
96
125
 
@@ -197,6 +226,27 @@ class ISPSupportDemo
197
226
  else "Unknown agent"
198
227
  end
199
228
  end
229
+
230
+ # ANSI color helper methods
231
+ def dim_text(text)
232
+ "\e[90m#{text}\e[0m"
233
+ end
234
+
235
+ def green(text)
236
+ "\e[32m#{text}\e[0m"
237
+ end
238
+
239
+ def yellow(text)
240
+ "\e[33m#{text}\e[0m"
241
+ end
242
+
243
+ def red(text)
244
+ "\e[31m#{text}\e[0m"
245
+ end
246
+
247
+ def cyan(text)
248
+ "\e[36m#{text}\e[0m"
249
+ end
200
250
  end
201
251
 
202
252
  # Run the demo
data/lib/agents/agent.rb CHANGED
@@ -4,7 +4,7 @@
4
4
  # Agents are immutable, thread-safe objects that can be cloned with modifications.
5
5
  # They encapsulate the configuration needed to interact with an LLM including
6
6
  # instructions, tools, and potential handoff targets.
7
- #
7
+ require_relative "helpers/headers"
8
8
  # @example Creating a basic agent
9
9
  # agent = Agents::Agent.new(
10
10
  # name: "Assistant",
@@ -50,7 +50,7 @@
50
50
  # )
51
51
  module Agents
52
52
  class Agent
53
- attr_reader :name, :instructions, :model, :tools, :handoff_agents, :temperature, :response_schema
53
+ attr_reader :name, :instructions, :model, :tools, :handoff_agents, :temperature, :response_schema, :headers
54
54
 
55
55
  # Initialize a new Agent instance
56
56
  #
@@ -61,8 +61,9 @@ module Agents
61
61
  # @param handoff_agents [Array<Agents::Agent>] Array of agents this agent can hand off to
62
62
  # @param temperature [Float] Controls randomness in responses (0.0 = deterministic, 1.0 = very random, default: 0.7)
63
63
  # @param response_schema [Hash, nil] JSON schema for structured output responses
64
+ # @param headers [Hash, nil] Default HTTP headers applied to LLM requests
64
65
  def initialize(name:, instructions: nil, model: "gpt-4.1-mini", tools: [], handoff_agents: [], temperature: 0.7,
65
- response_schema: nil)
66
+ response_schema: nil, headers: nil)
66
67
  @name = name
67
68
  @instructions = instructions
68
69
  @model = model
@@ -70,6 +71,7 @@ module Agents
70
71
  @handoff_agents = []
71
72
  @temperature = temperature
72
73
  @response_schema = response_schema
74
+ @headers = Helpers::Headers.normalize(headers, freeze_result: true)
73
75
 
74
76
  # Mutex for thread-safe handoff registration
75
77
  # While agents are typically configured at startup, we want to ensure
@@ -164,7 +166,8 @@ module Agents
164
166
  tools: changes.fetch(:tools, @tools.dup),
165
167
  handoff_agents: changes.fetch(:handoff_agents, @handoff_agents),
166
168
  temperature: changes.fetch(:temperature, @temperature),
167
- response_schema: changes.fetch(:response_schema, @response_schema)
169
+ response_schema: changes.fetch(:response_schema, @response_schema),
170
+ headers: changes.fetch(:headers, @headers)
168
171
  )
169
172
  end
170
173
 
data/lib/agents/agent_runner.rb CHANGED
@@ -44,6 +44,9 @@ module Agents
44
44
 
45
45
  # Initialize callback storage - use thread-safe arrays
46
46
  @callbacks = {
47
+ run_start: [],
48
+ run_complete: [],
49
+ agent_complete: [],
47
50
  tool_start: [],
48
51
  tool_complete: [],
49
52
  agent_thinking: [],
@@ -58,12 +61,12 @@ module Agents
58
61
  # @param input [String] User's message
59
62
  # @param context [Hash] Conversation context (will be restored if continuing conversation)
60
63
  # @param max_turns [Integer] Maximum turns before stopping (default: 10)
64
+ # @param headers [Hash, nil] Custom HTTP headers to pass through to the underlying LLM provider
61
65
  # @return [RunResult] Execution result with output, messages, and updated context
62
- def run(input, context: {}, max_turns: Runner::DEFAULT_MAX_TURNS)
66
+ def run(input, context: {}, max_turns: Runner::DEFAULT_MAX_TURNS, headers: nil)
63
67
  # Determine which agent should handle this conversation
64
68
  # Uses conversation history to maintain continuity across handoffs
65
69
  current_agent = determine_conversation_agent(context)
66
-
67
70
  # Execute using stateless Runner - each execution is independent and thread-safe
68
71
  # Pass callbacks to enable real-time event notifications
69
72
  Runner.new.run(
@@ -72,6 +75,7 @@ module Agents
72
75
  context: context,
73
76
  registry: @registry,
74
77
  max_turns: max_turns,
78
+ headers: headers,
75
79
  callbacks: @callbacks
76
80
  )
77
81
  end
@@ -124,6 +128,42 @@ module Agents
124
128
  self
125
129
  end
126
130
 
131
+ # Register a callback for run start events.
132
+ # Called before agent execution begins.
133
+ #
134
+ # @param block [Proc] Callback block that receives (agent, input, run_context)
135
+ # @return [self] For method chaining
136
+ def on_run_start(&block)
137
+ return self unless block
138
+
139
+ @callbacks_mutex.synchronize { @callbacks[:run_start] << block }
140
+ self
141
+ end
142
+
143
+ # Register a callback for run complete events.
144
+ # Called after agent execution ends (success or error).
145
+ #
146
+ # @param block [Proc] Callback block that receives (agent, result, run_context)
147
+ # @return [self] For method chaining
148
+ def on_run_complete(&block)
149
+ return self unless block
150
+
151
+ @callbacks_mutex.synchronize { @callbacks[:run_complete] << block }
152
+ self
153
+ end
154
+
155
+ # Register a callback for agent complete events.
156
+ # Called after each agent turn finishes.
157
+ #
158
+ # @param block [Proc] Callback block that receives (agent_name, result, error, run_context)
159
+ # @return [self] For method chaining
160
+ def on_agent_complete(&block)
161
+ return self unless block
162
+
163
+ @callbacks_mutex.synchronize { @callbacks[:agent_complete] << block }
164
+ self
165
+ end
166
+
127
167
  private
128
168
 
129
169
  # Build agent registry from provided agents only.
data/lib/agents/callback_manager.rb CHANGED
@@ -13,6 +13,9 @@ module Agents
13
13
  class CallbackManager
14
14
  # Supported callback event types
15
15
  EVENT_TYPES = %i[
16
+ run_start
17
+ run_complete
18
+ agent_complete
16
19
  tool_start
17
20
  tool_complete
18
21
  agent_thinking
data/lib/agents/helpers/headers.rb ADDED
@@ -0,0 +1,33 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Agents
4
+ module Helpers
5
+ module Headers
6
+ module_function
7
+
8
+ def normalize(headers, freeze_result: false)
9
+ return freeze_result ? {}.freeze : {} if headers.nil? || (headers.respond_to?(:empty?) && headers.empty?)
10
+
11
+ hash = headers.respond_to?(:to_h) ? headers.to_h : headers
12
+ raise ArgumentError, "headers must be a Hash or respond to #to_h" unless hash.is_a?(Hash)
13
+
14
+ result = symbolize_keys(hash)
15
+ freeze_result ? result.freeze : result
16
+ end
17
+
18
+ def merge(agent_headers, runtime_headers)
19
+ return runtime_headers if agent_headers.empty?
20
+ return agent_headers if runtime_headers.empty?
21
+
22
+ agent_headers.merge(runtime_headers) { |_key, _agent_value, runtime_value| runtime_value }
23
+ end
24
+
25
+ def symbolize_keys(hash)
26
+ hash.transform_keys do |key|
27
+ key.is_a?(Symbol) ? key : key.to_sym
28
+ end
29
+ end
30
+ private_class_method :symbolize_keys
31
+ end
32
+ end
33
+ end
data/lib/agents/helpers/message_extractor.rb ADDED
@@ -0,0 +1,92 @@
1
+ # frozen_string_literal: true
2
+
3
+ # Service object responsible for extracting and formatting conversation messages
4
+ # from RubyLLM chat objects into a format suitable for persistence and context restoration.
5
+ #
6
+ # Handles different message types:
7
+ # - User messages: Basic content preservation
8
+ # - Assistant messages: Includes agent attribution and tool calls
9
+ # - Tool result messages: Links back to original tool calls
10
+ #
11
+ # @example Extract messages from a chat
12
+ # messages = Agents::Helpers::MessageExtractor.extract_messages(chat, current_agent)
13
+ # #=> [
14
+ # { role: :user, content: "Hello" },
15
+ # { role: :assistant, content: "Hi!", agent_name: "Support", tool_calls: [...] },
16
+ # { role: :tool, content: "Result", tool_call_id: "call_123" }
17
+ # ]
18
+ module Agents
19
+ module Helpers
20
+ module MessageExtractor
21
+ module_function
22
+
23
+ # Check if content is considered empty (handles both String and Hash content)
24
+ #
25
+ # @param content [String, Hash, nil] The content to check
26
+ # @return [Boolean] true if content is empty, false otherwise
27
+ def content_empty?(content)
28
+ case content
29
+ when String
30
+ content.strip.empty?
31
+ when Hash
32
+ content.empty?
33
+ else
34
+ content.nil?
35
+ end
36
+ end
37
+
38
+ # Extract messages from a chat object for conversation history persistence
39
+ #
40
+ # @param chat [Object] Chat object that responds to :messages
41
+ # @param current_agent [Agent] The agent currently handling the conversation
42
+ # @return [Array<Hash>] Array of message hashes suitable for persistence
43
+ def extract_messages(chat, current_agent)
44
+ return [] unless chat.respond_to?(:messages)
45
+
46
+ chat.messages.filter_map do |msg|
47
+ case msg.role
48
+ when :user, :assistant
49
+ extract_user_or_assistant_message(msg, current_agent)
50
+ when :tool
51
+ extract_tool_message(msg)
52
+ end
53
+ end
54
+ end
55
+
56
+ def extract_user_or_assistant_message(msg, current_agent)
57
+ return nil unless msg.content && !content_empty?(msg.content)
58
+
59
+ message = {
60
+ role: msg.role,
61
+ content: msg.content
62
+ }
63
+
64
+ if msg.role == :assistant
65
+ # Add agent attribution for conversation continuity
66
+ message[:agent_name] = current_agent.name if current_agent
67
+
68
+ # Add tool calls if present
69
+ if msg.tool_call? && msg.tool_calls
70
+ # RubyLLM stores tool_calls as Hash with call_id => ToolCall object
71
+ # Reference: RubyLLM::StreamAccumulator#tool_calls_from_stream
72
+ message[:tool_calls] = msg.tool_calls.values.map(&:to_h)
73
+ end
74
+ end
75
+
76
+ message
77
+ end
78
+ private_class_method :extract_user_or_assistant_message
79
+
80
+ def extract_tool_message(msg)
81
+ return nil unless msg.tool_result?
82
+
83
+ {
84
+ role: msg.role,
85
+ content: msg.content,
86
+ tool_call_id: msg.tool_call_id
87
+ }
88
+ end
89
+ private_class_method :extract_tool_message
90
+ end
91
+ end
92
+ end
data/lib/agents/helpers.rb ADDED
@@ -0,0 +1,9 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Agents
4
+ module Helpers
5
+ end
6
+ end
7
+
8
+ require_relative "helpers/headers"
9
+ require_relative "helpers/message_extractor"
data/lib/agents/runner.rb CHANGED
@@ -1,7 +1,5 @@
1
1
  # frozen_string_literal: true
2
2
 
3
- require_relative "message_extractor"
4
-
5
3
  module Agents
6
4
  # The execution engine that orchestrates conversations between users and agents.
7
5
  # Runner manages the conversation flow, handles tool execution through RubyLLM,
@@ -55,6 +53,7 @@ module Agents
55
53
  DEFAULT_MAX_TURNS = 10
56
54
 
57
55
  class MaxTurnsExceeded < StandardError; end
56
+ class AgentNotFoundError < StandardError; end
58
57
 
59
58
  # Create a thread-safe agent runner for multi-agent conversations.
60
59
  # The first agent becomes the default entry point for new conversations.
@@ -79,9 +78,10 @@ module Agents
79
78
  # @param context [Hash] Shared context data accessible to all tools
80
79
  # @param registry [Hash] Registry of agents for handoff resolution
81
80
  # @param max_turns [Integer] Maximum conversation turns before stopping
81
+ # @param headers [Hash, nil] Custom HTTP headers passed to the underlying LLM provider
82
82
  # @param callbacks [Hash] Optional callbacks for real-time event notifications
83
83
  # @return [RunResult] The result containing output, messages, and usage
84
- def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, callbacks: {})
84
+ def run(starting_agent, input, context: {}, registry: {}, max_turns: DEFAULT_MAX_TURNS, headers: nil, callbacks: {})
85
85
  # The starting_agent is already determined by AgentRunner based on conversation history
86
86
  current_agent = starting_agent
87
87
 
@@ -90,15 +90,24 @@ module Agents
90
90
  context_wrapper = RunContext.new(context_copy, callbacks: callbacks)
91
91
  current_turn = 0
92
92
 
93
+ # Emit run start event
94
+ context_wrapper.callback_manager.emit_run_start(current_agent.name, input, context_wrapper)
95
+
96
+ runtime_headers = Helpers::Headers.normalize(headers)
97
+ agent_headers = Helpers::Headers.normalize(current_agent.headers)
98
+
93
99
  # Create chat and restore conversation history
94
- chat = create_chat(current_agent, context_wrapper)
100
+ chat = RubyLLM::Chat.new(model: current_agent.model)
101
+ current_headers = Helpers::Headers.merge(agent_headers, runtime_headers)
102
+ apply_headers(chat, current_headers)
103
+ configure_chat_for_agent(chat, current_agent, context_wrapper, replace: false)
95
104
  restore_conversation_history(chat, context_wrapper)
96
105
 
97
106
  loop do
98
107
  current_turn += 1
99
108
  raise MaxTurnsExceeded, "Exceeded maximum turns: #{max_turns}" if current_turn > max_turns
100
109
 
101
- # Get response from LLM (Extended Chat handles tool execution with handoff detection)
110
+ # Get response from LLM (RubyLLM handles tool execution with halting-based handoff detection)
102
111
  result = if current_turn == 1
103
112
  # Emit agent thinking event for initial message
104
113
  context_wrapper.callback_manager.emit_agent_thinking(current_agent.name, input)
@@ -118,20 +127,30 @@ module Agents
118
127
  # Validate that the target agent is in our registry
119
128
  # This prevents handoffs to agents that weren't explicitly provided
120
129
  unless registry[next_agent.name]
121
- puts "[Agents] Warning: Handoff to unregistered agent '#{next_agent.name}', continuing with current agent"
122
- # Return the halt content as the final response
123
130
  save_conversation_state(chat, context_wrapper, current_agent)
124
- return RunResult.new(
125
- output: response.content,
126
- messages: MessageExtractor.extract_messages(chat, current_agent),
131
+ error = AgentNotFoundError.new("Handoff failed: Agent '#{next_agent.name}' not found in registry")
132
+
133
+ result = RunResult.new(
134
+ output: nil,
135
+ messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
127
136
  usage: context_wrapper.usage,
128
- context: context_wrapper.context
137
+ context: context_wrapper.context,
138
+ error: error
129
139
  )
140
+
141
+ # Emit agent complete and run complete events with error
142
+ context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, error, context_wrapper)
143
+ context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
144
+
145
+ return result
130
146
  end
131
147
 
132
148
  # Save current conversation state before switching
133
149
  save_conversation_state(chat, context_wrapper, current_agent)
134
150
 
151
+ # Emit agent complete event before handoff
152
+ context_wrapper.callback_manager.emit_agent_complete(current_agent.name, nil, nil, context_wrapper)
153
+
135
154
  # Emit agent handoff event
136
155
  context_wrapper.callback_manager.emit_agent_handoff(current_agent.name, next_agent.name, "handoff")
137
156
 
@@ -139,9 +158,11 @@ module Agents
139
158
  current_agent = next_agent
140
159
  context_wrapper.context[:current_agent] = next_agent.name
141
160
 
142
- # Create new chat for new agent with restored history
143
- chat = create_chat(current_agent, context_wrapper)
144
- restore_conversation_history(chat, context_wrapper)
161
+ # Reconfigure existing chat for new agent - preserves conversation history automatically
162
+ configure_chat_for_agent(chat, current_agent, context_wrapper, replace: true)
163
+ agent_headers = Helpers::Headers.normalize(current_agent.headers)
164
+ current_headers = Helpers::Headers.merge(agent_headers, runtime_headers)
165
+ apply_headers(chat, current_headers)
145
166
 
146
167
  # Force the new agent to respond to the conversation context
147
168
  # This ensures the user gets a response from the new agent
@@ -152,12 +173,19 @@ module Agents
152
173
  # Handle non-handoff halts - return the halt content as final response
153
174
  if response.is_a?(RubyLLM::Tool::Halt)
154
175
  save_conversation_state(chat, context_wrapper, current_agent)
155
- return RunResult.new(
176
+
177
+ result = RunResult.new(
156
178
  output: response.content,
157
- messages: MessageExtractor.extract_messages(chat, current_agent),
179
+ messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
158
180
  usage: context_wrapper.usage,
159
181
  context: context_wrapper.context
160
182
  )
183
+
184
+ # Emit agent complete and run complete events
185
+ context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, nil, context_wrapper)
186
+ context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
187
+
188
+ return result
161
189
  end
162
190
 
163
191
  # If tools were called, continue the loop to let them execute
@@ -168,39 +196,62 @@ module Agents
168
196
  # Save final state before returning
169
197
  save_conversation_state(chat, context_wrapper, current_agent)
170
198
 
171
- return RunResult.new(
199
+ result = RunResult.new(
172
200
  output: response.content,
173
- messages: MessageExtractor.extract_messages(chat, current_agent),
201
+ messages: Helpers::MessageExtractor.extract_messages(chat, current_agent),
174
202
  usage: context_wrapper.usage,
175
203
  context: context_wrapper.context
176
204
  )
205
+
206
+ # Emit agent complete and run complete events
207
+ context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, nil, context_wrapper)
208
+ context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
209
+
210
+ return result
177
211
  end
178
212
  rescue MaxTurnsExceeded => e
179
213
  # Save state even on error
180
214
  save_conversation_state(chat, context_wrapper, current_agent) if chat
181
215
 
182
- RunResult.new(
216
+ result = RunResult.new(
183
217
  output: "Conversation ended: #{e.message}",
184
- messages: chat ? MessageExtractor.extract_messages(chat, current_agent) : [],
218
+ messages: chat ? Helpers::MessageExtractor.extract_messages(chat, current_agent) : [],
185
219
  usage: context_wrapper.usage,
186
220
  error: e,
187
221
  context: context_wrapper.context
188
222
  )
223
+
224
+ # Emit agent complete and run complete events with error
225
+ context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, e, context_wrapper)
226
+ context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
227
+
228
+ result
189
229
  rescue StandardError => e
190
230
  # Save state even on error
191
231
  save_conversation_state(chat, context_wrapper, current_agent) if chat
192
232
 
193
- RunResult.new(
233
+ result = RunResult.new(
194
234
  output: nil,
195
- messages: chat ? MessageExtractor.extract_messages(chat, current_agent) : [],
235
+ messages: chat ? Helpers::MessageExtractor.extract_messages(chat, current_agent) : [],
196
236
  usage: context_wrapper.usage,
197
237
  error: e,
198
238
  context: context_wrapper.context
199
239
  )
240
+
241
+ # Emit agent complete and run complete events with error
242
+ context_wrapper.callback_manager.emit_agent_complete(current_agent.name, result, e, context_wrapper)
243
+ context_wrapper.callback_manager.emit_run_complete(current_agent.name, result, context_wrapper)
244
+
245
+ result
200
246
  end
201
247
 
202
248
  private
203
249
 
250
+ # Creates a deep copy of context data for thread safety.
251
+ # Preserves conversation history array structure while avoiding agent mutation.
252
+ #
253
+ # @param context [Hash] The context to copy
254
+ # @return [Hash] Thread-safe deep copy of the context
204
255
  def deep_copy_context(context)
205
256
  # Handle deep copying for thread safety
206
257
  context.dup.tap do |copied|
@@ -211,13 +262,18 @@ module Agents
211
262
  end
212
263
  end
213
264
 
265
+ # Restores conversation history from context into RubyLLM chat.
266
+ # Converts stored message hashes back into RubyLLM::Message objects with proper content handling.
267
+ #
268
+ # @param chat [RubyLLM::Chat] The chat instance to restore history into
269
+ # @param context_wrapper [RunContext] Context containing conversation history
214
270
  def restore_conversation_history(chat, context_wrapper)
215
271
  history = context_wrapper.context[:conversation_history] || []
216
272
 
217
273
  history.each do |msg|
218
274
  # Only restore user and assistant messages with content
219
275
  next unless %i[user assistant].include?(msg[:role].to_sym)
220
- next unless msg[:content] && !MessageExtractor.content_empty?(msg[:content])
276
+ next unless msg[:content] && !Helpers::MessageExtractor.content_empty?(msg[:content])
221
277
 
222
278
  # Extract text content safely - handle both string and hash content
223
279
  content = RubyLLM::Content.new(msg[:content])
@@ -228,21 +284,18 @@ module Agents
228
284
  content: content
229
285
  )
230
286
  chat.add_message(message)
231
- rescue StandardError => e
232
- # Continue with partial history on error
233
- # TODO: Remove this, and let the error propagate up the call stack
234
- puts "[Agents] Failed to restore message: #{e.message}\n#{e.backtrace.join("\n")}"
235
287
  end
236
- rescue StandardError => e
237
- # If history restoration completely fails, continue with empty history
238
- # TODO: Remove this, and let the error propagate up the call stack
239
- puts "[Agents] Failed to restore conversation history: #{e.message}"
240
- context_wrapper.context[:conversation_history] = []
241
288
  end
242
289
 
290
+ # Saves current conversation state from RubyLLM chat back to context for persistence.
291
+ # Maintains conversation continuity across agent handoffs and process boundaries.
292
+ #
293
+ # @param chat [RubyLLM::Chat] The chat instance to extract state from
294
+ # @param context_wrapper [RunContext] Context to save state into
295
+ # @param current_agent [Agents::Agent] The currently active agent
243
296
  def save_conversation_state(chat, context_wrapper, current_agent)
244
297
  # Extract messages from chat
245
- messages = MessageExtractor.extract_messages(chat, current_agent)
298
+ messages = Helpers::MessageExtractor.extract_messages(chat, current_agent)
246
299
 
247
300
  # Update context with latest state
248
301
  context_wrapper.context[:conversation_history] = messages
@@ -254,14 +307,45 @@ module Agents
254
307
  context_wrapper.context.delete(:pending_handoff)
255
308
  end
256
309
 
257
- def create_chat(agent, context_wrapper)
310
+ # Configures a RubyLLM chat instance with agent-specific settings.
311
+ # Uses RubyLLM's replace option to swap agent context while preserving conversation history during handoffs.
312
+ #
313
+ # @param chat [RubyLLM::Chat] The chat instance to configure
314
+ # @param agent [Agents::Agent] The agent whose configuration to apply
315
+ # @param context_wrapper [RunContext] Thread-safe context wrapper
316
+ # @param replace [Boolean] Whether to replace existing configuration (true for handoffs, false for initial setup)
317
+ # @return [RubyLLM::Chat] The configured chat instance
318
+ def configure_chat_for_agent(chat, agent, context_wrapper, replace: false)
258
319
  # Get system prompt (may be dynamic)
259
320
  system_prompt = agent.get_system_prompt(context_wrapper)
260
321
 
261
- # Create standard RubyLLM chat
262
- chat = RubyLLM::Chat.new(model: agent.model)
263
-
264
322
  # Combine all tools - both handoff and regular tools need wrapping
323
+ all_tools = build_agent_tools(agent, context_wrapper)
324
+
325
+ # Switch model if different (important for handoffs between agents using different models)
326
+ chat.with_model(agent.model) if replace
327
+
328
+ # Configure chat with instructions, temperature, tools, and schema
329
+ chat.with_instructions(system_prompt, replace: replace) if system_prompt
330
+ chat.with_temperature(agent.temperature) if agent.temperature
331
+ chat.with_tools(*all_tools, replace: replace)
332
+ chat.with_schema(agent.response_schema) if agent.response_schema
333
+
334
+ chat
335
+ end
336
+
337
+ def apply_headers(chat, headers)
338
+ return if headers.empty?
339
+
340
+ chat.with_headers(**headers)
341
+ end
342
+
343
+ # Builds thread-safe tool wrappers for an agent's tools and handoff tools.
344
+ #
345
+ # @param agent [Agents::Agent] The agent whose tools to wrap
346
+ # @param context_wrapper [RunContext] Thread-safe context wrapper for tool execution
347
+ # @return [Array<ToolWrapper>] Array of wrapped tools ready for RubyLLM
348
+ def build_agent_tools(agent, context_wrapper)
265
349
  all_tools = []
266
350
 
267
351
  # Add handoff tools
@@ -275,13 +359,7 @@ module Agents
275
359
  all_tools << ToolWrapper.new(tool, context_wrapper)
276
360
  end
277
361
 
278
- # Configure chat with instructions, temperature, tools, and schema
279
- chat.with_instructions(system_prompt) if system_prompt
280
- chat.with_temperature(agent.temperature) if agent.temperature
281
- chat.with_tools(*all_tools) if all_tools.any?
282
- chat.with_schema(agent.response_schema) if agent.response_schema
283
-
284
- chat
362
+ all_tools
285
363
  end
286
364
  end
287
365
  end
data/lib/agents/version.rb CHANGED
@@ -1,5 +1,5 @@
1
1
  # frozen_string_literal: true
2
2
 
3
3
  module Agents
4
- VERSION = "0.5.0"
4
+ VERSION = "0.7.0"
5
5
  end
data/lib/agents.rb CHANGED
@@ -111,11 +111,11 @@ require_relative "agents/run_context"
111
111
  require_relative "agents/tool_context"
112
112
  require_relative "agents/tool"
113
113
  require_relative "agents/handoff"
114
+ require_relative "agents/helpers"
114
115
  require_relative "agents/agent"
115
116
 
116
117
  # Execution components
117
118
  require_relative "agents/tool_wrapper"
118
- require_relative "agents/message_extractor"
119
119
  require_relative "agents/callback_manager"
120
120
  require_relative "agents/agent_runner"
121
121
  require_relative "agents/runner"
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: ai-agents
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.5.0
4
+ version: 0.7.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Shivam Mishra
@@ -15,14 +15,14 @@ dependencies:
15
15
  requirements:
16
16
  - - "~>"
17
17
  - !ruby/object:Gem::Version
18
- version: 1.6.0
18
+ version: 1.8.2
19
19
  type: :runtime
20
20
  prerelease: false
21
21
  version_requirements: !ruby/object:Gem::Requirement
22
22
  requirements:
23
23
  - - "~>"
24
24
  - !ruby/object:Gem::Version
25
- version: 1.6.0
25
+ version: 1.8.2
26
26
  description: Ruby AI Agents SDK enables creating complex AI workflows with multi-agent
27
27
  orchestration, tool execution, safety guardrails, and provider-agnostic LLM integration.
28
28
  email:
@@ -60,6 +60,7 @@ files:
60
60
  - docs/guides/agent-as-tool-pattern.md
61
61
  - docs/guides/multi-agent-systems.md
62
62
  - docs/guides/rails-integration.md
63
+ - docs/guides/request-headers.md
63
64
  - docs/guides/state-persistence.md
64
65
  - docs/guides/structured-output.md
65
66
  - docs/index.md
@@ -102,7 +103,9 @@ files:
102
103
  - lib/agents/agent_tool.rb
103
104
  - lib/agents/callback_manager.rb
104
105
  - lib/agents/handoff.rb
105
- - lib/agents/message_extractor.rb
106
+ - lib/agents/helpers.rb
107
+ - lib/agents/helpers/headers.rb
108
+ - lib/agents/helpers/message_extractor.rb
106
109
  - lib/agents/result.rb
107
110
  - lib/agents/run_context.rb
108
111
  - lib/agents/runner.rb
data/lib/agents/message_extractor.rb DELETED
@@ -1,97 +0,0 @@
1
- # frozen_string_literal: true
2
-
3
- module Agents
4
- # Service object responsible for extracting and formatting conversation messages
5
- # from RubyLLM chat objects into a format suitable for persistence and context restoration.
6
- #
7
- # Handles different message types:
8
- # - User messages: Basic content preservation
9
- # - Assistant messages: Includes agent attribution and tool calls
10
- # - Tool result messages: Links back to original tool calls
11
- #
12
- # @example Extract messages from a chat
13
- # messages = MessageExtractor.extract_messages(chat, current_agent)
14
- # #=> [
15
- # { role: :user, content: "Hello" },
16
- # { role: :assistant, content: "Hi!", agent_name: "Support", tool_calls: [...] },
17
- # { role: :tool, content: "Result", tool_call_id: "call_123" }
18
- # ]
19
- class MessageExtractor
20
- # Check if content is considered empty (handles both String and Hash content)
21
- #
22
- # @param content [String, Hash, nil] The content to check
23
- # @return [Boolean] true if content is empty, false otherwise
24
- def self.content_empty?(content)
25
- case content
26
- when String
27
- content.strip.empty?
28
- when Hash
29
- content.empty?
30
- else
31
- content.nil?
32
- end
33
- end
34
-
35
- # Extract messages from a chat object for conversation history persistence
36
- #
37
- # @param chat [Object] Chat object that responds to :messages
38
- # @param current_agent [Agent] The agent currently handling the conversation
39
- # @return [Array<Hash>] Array of message hashes suitable for persistence
40
- def self.extract_messages(chat, current_agent)
41
- new(chat, current_agent).extract
42
- end
43
-
44
- def initialize(chat, current_agent)
45
- @chat = chat
46
- @current_agent = current_agent
47
- end
48
-
49
- def extract
50
- return [] unless @chat.respond_to?(:messages)
51
-
52
- @chat.messages.filter_map do |msg|
53
- case msg.role
54
- when :user, :assistant
55
- extract_user_or_assistant_message(msg)
56
- when :tool
57
- extract_tool_message(msg)
58
- end
59
- end
60
- end
61
-
62
- private
63
-
64
- def extract_user_or_assistant_message(msg)
65
- return nil unless msg.content && !self.class.content_empty?(msg.content)
66
-
67
- message = {
68
- role: msg.role,
69
- content: msg.content
70
- }
71
-
72
- if msg.role == :assistant
73
- # Add agent attribution for conversation continuity
74
- message[:agent_name] = @current_agent.name if @current_agent
75
-
76
- # Add tool calls if present
77
- if msg.tool_call? && msg.tool_calls
78
- # RubyLLM stores tool_calls as Hash with call_id => ToolCall object
79
- # Reference: RubyLLM::StreamAccumulator#tool_calls_from_stream
80
- message[:tool_calls] = msg.tool_calls.values.map(&:to_h)
81
- end
82
- end
83
-
84
- message
85
- end
86
-
87
- def extract_tool_message(msg)
88
- return nil unless msg.tool_result?
89
-
90
- {
91
- role: msg.role,
92
- content: msg.content,
93
- tool_call_id: msg.tool_call_id
94
- }
95
- end
96
- end
97
- end